AVAudioNode


Use the AVAudioNode abstract class for an audio generation, processing, or I/O block.

Posts under AVAudioNode tag

23 Posts

Posts are listed below with their reply, boost, and view counts and the time of last activity.

Why is AVAudioEngine input giving all zero samples?
I am trying to get access to raw audio samples from the mic. I've written a simple example application that writes the values to a text file. Below is my sample application. All the input samples in the buffers delivered to the input tap are zero. What am I doing wrong? I did add the Privacy - Microphone Usage Description key to my application target properties, and I am allowing microphone access when the application launches. I do find it strange that I have to grant permission every time, even though in Settings > Privacy my application is listed as one of the applications allowed to access the microphone.

class AudioRecorder {
    private let audioEngine = AVAudioEngine()
    private var fileHandle: FileHandle?

    func startRecording() {
        let inputNode = audioEngine.inputNode
        let audioFormat: AVAudioFormat
        #if os(iOS)
        let hardwareSampleRate = AVAudioSession.sharedInstance().sampleRate
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: 1)!
        #elseif os(macOS)
        audioFormat = inputNode.inputFormat(forBus: 0) // Use input node's current format
        #endif

        setupTextFile()

        inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { [weak self] buffer, _ in
            self?.processAudioBuffer(buffer: buffer)
        }

        do {
            try audioEngine.start()
            print("Recording started with format: \(audioFormat)")
        } catch {
            print("Failed to start audio engine: \(error.localizedDescription)")
        }
    }

    func stopRecording() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        print("Recording stopped.")
    }

    private func setupTextFile() {
        let tempDir = FileManager.default.temporaryDirectory
        let textFileURL = tempDir.appendingPathComponent("audioData.txt")
        FileManager.default.createFile(atPath: textFileURL.path, contents: nil, attributes: nil)
        fileHandle = try? FileHandle(forWritingTo: textFileURL)
    }

    private func processAudioBuffer(buffer: AVAudioPCMBuffer) {
        guard let channelData = buffer.floatChannelData else { return }
        let channelSamples = channelData[0]
        let frameLength = Int(buffer.frameLength)
        var textData = ""
        var allZero = true
        for i in 0..<frameLength {
            let sample = channelSamples[i]
            if sample != 0 {
                allZero = false
            }
            textData += "\(sample)\n"
        }
        if allZero {
            print("Got \(frameLength) frames of audio data on \(buffer.stride) channels. All data is zero.")
        } else {
            print("Got \(frameLength) frames of audio data on \(buffer.stride) channels.")
        }
        // Write to file
        if let data = textData.data(using: .utf8) {
            fileHandle?.write(data)
        }
    }
}
2 replies · 0 boosts · 111 views · 1d
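For the all-zero input question above, a common cause on iOS is starting the engine before an AVAudioSession with a record-capable category has been activated, and on macOS a sandboxed app missing the com.apple.security.device.audio-input entitlement. A minimal sketch of the iOS session setup and permission check follows; prepareForRecording is a hypothetical helper name, and requestRecordPermission is the pre-iOS 17 spelling of the permission API.

import AVFoundation

func prepareForRecording(completion: @escaping (Bool) -> Void) {
    let session = AVAudioSession.sharedInstance()
    session.requestRecordPermission { granted in
        guard granted else { completion(false); return }
        do {
            // A record-capable category should be active before the engine starts;
            // otherwise the input node can deliver silent buffers.
            try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
            try session.setActive(true)
            completion(true)
        } catch {
            print("Session setup failed: \(error)")
            completion(false)
        }
    }
}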
aumi AUv3 with AvAudioEngine ConnectMIDI multiple
Hi! I am creating an aumi AUv3 extension and I am trying to achieve simultaneous connections to multiple other AVAudioNodes. I would like to know if it is possible to route the MIDI to different outputs inside the render process in the AUv3. I am using connectMIDI(_:to:format:eventListBlock:) to connect the output of the AUv3 to multiple AVAudioNodes. However, when I send MIDI out of the AUv3, it gets sent to all the audio nodes connected to it. I can't seem to find any documentation on how to route the MIDI to only one of the connected nodes. Is this possible?
3 replies · 0 boosts · 202 views · 1w
AVAudioEngineConfigurationChange Clearing AVPlayerNode
Hi all, I am working on an app where I have live prompts playing, in addition to a voice channel that sometimes becomes active. Right now I am using two different AVAudioSession configurations, so that we only switch to a mic-enabled mode when we actually need input from the mic. These are defined below. When just using the device hardware, everything works as expected: the modes change and playback continues as needed. However, when using Bluetooth devices such as AirPods, where the switch from A2DP to HFP is needed, I am getting an AVAudioEngineConfigurationChange notification. In response I am tearing down the engine and creating a new one with the same 2 player nodes. This does work fine and there are no crashes, except all the audio I have scheduled on a player node has now been cleared. All the completion blocks marked with ".dataPlayedBack" return the second this event happens, which leaves me in a state where I now have a valid engine setup again but have no idea what actually played, or was errantly marked as such. Is this the expected behavior when getting a configuration change notification?

Adding some information below about my audio graph for context. These are all the parts of the graph I disconnect when getting this event, and I do the same with the new engine:

private var inputEngine: AVAudioEngine
private var audioEngine: AVAudioEngine
private let voicePlayerNode: AVAudioPlayerNode
private let promptPlayerNode: AVAudioPlayerNode

audioEngine.attach(voicePlayerNode)
audioEngine.attach(promptPlayerNode)

audioEngine.connect(
    voicePlayerNode,
    to: audioEngine.mainMixerNode,
    format: voiceNodeFormat
)

audioEngine.connect(
    promptPlayerNode,
    to: audioEngine.mainMixerNode,
    format: nil
)

An example of how I am scheduling playback, and where that completion is firing even if it didn't actually play:

private func scheduleVoicePlayback(_ id: AudioPlaybackSample.Id, buffer: AVAudioPCMBuffer) async throws {
    guard !voicePlayerQueue.samples.contains(where: { $0 == id }) else {
        return
    }

    seprateQueue.append(buffer)

    if !isVoicePlaying {
        activateAudioSession()
    }

    voicePlayerQueue.samples.append(id)

    if !voicePlayerNode.isPlaying {
        voicePlayerNode.play()
    }

    if let convertedBuffer = buffer.convert(to: voiceNodeFormat) {
        await voicePlayerNode.scheduleBuffer(convertedBuffer, completionCallbackType: .dataPlayedBack)
    } else {
        throw AudioPlaybackError.failedToConvert
    }

    voiceSampleHasBeenPlayed(id)
}

And lastly my audio session configuration, if it's useful:

extension AVAudioSession {
    static func setDefaultCategory() {
        do {
            try sharedInstance().setCategory(
                .playback,
                options: [
                    .duckOthers,
                    .interruptSpokenAudioAndMixWithOthers
                ]
            )
        } catch {
            print("Failed to set default category? \(error.localizedDescription)")
        }
    }

    static func setVoiceChatCategory() {
        do {
            try sharedInstance().setCategory(
                .playAndRecord,
                options: [
                    .defaultToSpeaker,
                    .allowBluetooth,
                    .allowBluetoothA2DP,
                    .duckOthers,
                    .interruptSpokenAudioAndMixWithOthers
                ]
            )
        } catch {
            print("Failed to set category? \(error.localizedDescription)")
        }
    }
}
1 reply · 0 boosts · 235 views · 2w
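For the configuration-change question above, one pattern (a hedged sketch, not a confirmed fix) is to keep the same engine and player nodes alive and simply re-prepare and restart the engine when the notification fires. Whether buffers already scheduled on the player nodes survive the route change is not something this sketch settles, so it assumes the app keeps its own pending queue and reschedules anything unplayed. EngineRestarter is a hypothetical wrapper name.

import AVFoundation

final class EngineRestarter {
    private let engine: AVAudioEngine
    private var observer: NSObjectProtocol?

    init(engine: AVAudioEngine) {
        self.engine = engine
        // Fired when the engine's input/output hardware configuration changes,
        // for example an A2DP/HFP route switch on AirPods.
        observer = NotificationCenter.default.addObserver(
            forName: .AVAudioEngineConfigurationChange,
            object: engine,
            queue: .main
        ) { [weak self] _ in
            self?.restart()
        }
    }

    private func restart() {
        // Anything still scheduled on player nodes may be flushed by the change;
        // resubmit from your own pending-sample queue here if needed.
        engine.prepare()
        do {
            try engine.start()
        } catch {
            print("Engine restart failed: \(error)")
        }
    }
}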
iOS audio crackling issue when sending audio data to a UDP server and playing it back
I am experiencing an issue while recording audio using AVAudioEngine with the installTap method. I convert the AVAudioPCMBuffer to Data and send it to a UDP server. However, when I receive the Data and play it back, there is continuous crackling noise during playback. I am creating and sending the packets with the library https://github.com/mindAndroid/swift-rtp. Please help me resolve this issue. I have attached the code reference that I am currently using (ViewController.swift). Thank you.
0 replies · 0 boosts · 221 views · Nov ’24
Connect 2 mono nodes as L/R input for a stereo node
Hello, I'm fairly new to AVAudioEngine and I'm trying to connect 2 mono nodes as left/right input to a stereo node. I was successful in splitting the input audio into 2 mono nodes using AVAudioConnectionPoint and channelMap, but I can't figure out how to connect them back to a stereo node. I'll post the code I have so far. The use case for this is that I'm trying to process the left/right channels with separate audio units. Any ideas?

let monoFormat = AVAudioFormat(standardFormatWithSampleRate: nativeFormat.sampleRate, channels: 1)!

let leftInputMixer = AVAudioMixerNode()
let rightInputMixer = AVAudioMixerNode()
let leftOutputMixer = AVAudioMixerNode()
let rightOutputMixer = AVAudioMixerNode()
let channelMixer = AVAudioMixerNode()

[leftInputMixer, rightInputMixer, leftOutputMixer, rightOutputMixer, channelMixer].forEach { engine.attach($0) }

let leftConnectionR = AVAudioConnectionPoint(node: leftInputMixer, bus: 0)
let rightConnectionR = AVAudioConnectionPoint(node: rightInputMixer, bus: 0)

plugin.leftInputMixer = leftInputMixer
plugin.rightInputMixer = rightInputMixer
plugin.leftOutputMixer = leftOutputMixer
plugin.rightOutputMixer = rightOutputMixer
plugin.channelMixer = channelMixer

leftInputMixer.auAudioUnit.channelMap = [0]
rightInputMixer.auAudioUnit.channelMap = [1]

engine.connect(previousNode, to: [leftConnectionR, rightConnectionR], fromBus: 0, format: monoFormat)

// Process right channel, pass through left channel
engine.connect(rightInputMixer, to: plugin.audioUnit, format: monoFormat)
engine.connect(plugin.audioUnit, to: rightOutputMixer, format: monoFormat)
engine.connect(leftInputMixer, to: leftOutputMixer, format: monoFormat)

// Mix back to stereo?
engine.connect(leftOutputMixer, to: channelMixer, format: stereoFormat)
engine.connect(rightOutputMixer, to: channelMixer, format: stereoFormat)
1 reply · 0 boosts · 223 views · Nov ’24
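For the mono-to-stereo question above, one possible approach (a sketch that reuses the node names from the post and assumes the two processed paths end in leftOutputMixer and rightOutputMixer) is to rely on the pan property that AVAudioMixerNode adopts from AVAudioMixing: a mono source panned hard left or hard right keeps its own side of the downstream stereo mix. Explicit destination buses are used here only to make the routing obvious.

// Give each mono path its own input bus on the stereo mixer and pan it to one side.
engine.connect(leftOutputMixer, to: channelMixer, fromBus: 0, toBus: 0, format: monoFormat)
engine.connect(rightOutputMixer, to: channelMixer, fromBus: 0, toBus: 1, format: monoFormat)

leftOutputMixer.pan = -1.0   // mono source placed hard left in channelMixer's stereo output
rightOutputMixer.pan = 1.0   // mono source placed hard right

// channelMixer now produces a stereo signal that can feed the main mixer.
engine.connect(channelMixer, to: engine.mainMixerNode, format: stereoFormat)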
Issues with Downsampling Live Audio from Mic with AVAudioNodeMixer
I’m working on a memo app that records audio from the iPhone’s microphone (and other devices like MacBook or iPad) and processes it in 10-second chunks at a target sample rate of 16 kHz. However, I’ve encountered limitations with installTap in AVAudioEngine, which doesn’t natively support configuring a target sample rate on the mic input (the default being 44.1 kHz).

To address this, I tried using AVAudioMixerNode to downsample the mic input directly. Although everything seems correctly configured, no audio is recorded, just a flat signal with zero levels. There are no errors, and all permissions are granted, so it seems like an issue with downsampling rather than the mic setup itself.

To make progress, I implemented a workaround by resampling each chunk tapped with installTap (every 50 ms in my case) using AVAudioConverter. While this works, it can introduce artifacts at the beginning and end of each chunk, likely due to separate processing instead of continuous downsampling.

Here are the key issues and questions I have:
1. Can we change the mic input sample rate directly using AVAudioSession or another native API in AVAudio? Setting up the desired sample rate initially would be ideal for my use case.
2. Are there alternatives to installTap for recording audio at a different sample rate, or for continuously downsampling the live input without chunk-based artifacts?

This issue seems longstanding, as noted in a 2018 forum post: https://forums.developer.apple.com/forums/thread/111726

Any guidance on configuring or processing mic input at a lower sample rate in real time would be greatly appreciated. Thank you!
0 replies · 0 boosts · 195 views · Nov ’24
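On the chunk-boundary artifacts mentioned above, one approach (a sketch under the assumption that chunked conversion is acceptable as long as resampler state is preserved) is to create a single AVAudioConverter up front and reuse it for every tap callback, so the converter keeps its internal filter state across chunks instead of being reset each time. MicDownsampler is a hypothetical name.

import AVFoundation

final class MicDownsampler {
    private let engine = AVAudioEngine()
    private var converter: AVAudioConverter?
    private let outputFormat = AVAudioFormat(standardFormatWithSampleRate: 16_000, channels: 1)!

    func start() throws {
        let input = engine.inputNode
        let inputFormat = input.outputFormat(forBus: 0)
        // One converter instance, reused across callbacks.
        converter = AVAudioConverter(from: inputFormat, to: outputFormat)

        input.installTap(onBus: 0, bufferSize: 2048, format: inputFormat) { [weak self] buffer, _ in
            guard let self = self, let converter = self.converter else { return }
            let ratio = self.outputFormat.sampleRate / inputFormat.sampleRate
            let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 16
            guard let out = AVAudioPCMBuffer(pcmFormat: self.outputFormat, frameCapacity: capacity) else { return }

            var fed = false
            let status = converter.convert(to: out, error: nil) { _, outStatus in
                if fed {
                    outStatus.pointee = .noDataNow   // ask again on the next tap callback
                    return nil
                }
                fed = true
                outStatus.pointee = .haveData
                return buffer
            }
            if status == .haveData || status == .inputRanDry {
                // `out` now holds 16 kHz samples; append them to the current 10-second chunk.
            }
        }
        try engine.start()
    }
}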
AVAudioPlayerNode scheduleBuffer leaks memory
I'm building a streaming app on visionOS that can play sound from audio buffers each frame. The audio format has a sample rate of 48000 Hz, and each buffer has 480 samples. I noticed that when calling

audioPlayerNode.scheduleBuffer(audioBuffer)

the memory keeps increasing at a rate of about 0.1 MB per second, and at around 4 minutes the node seems to be full of buffers and has a hard reset, at which point the audio stops temporarily with a memory change (see attached screenshot). However, if I call

audioPlayerNode.scheduleBuffer(audioBuffer, at: nil, options: .interrupts)

the memory leak issue is gone, but the audio is broken (it sounds as if it has been shortened). Below is the full code snippet; does anyone know how to fix it?

@Observable
final class MyAudioPlayer {
    private var audioEngine: AVAudioEngine = .init()
    private var audioPlayerNode: AVAudioPlayerNode = .init()
    private var audioFormat: AVAudioFormat?

    init() {
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try? AVAudioSession.sharedInstance().setActive(true)
        audioEngine.prepare()
        try? audioEngine.start()
        audioPlayerNode.play()
    }

    // more code...

    /// callback every frame
    private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48000, channels: 2, interleaved: false),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)

        if let data = audioBuffer.floatChannelData {
            for channel in 0 ..< Int(format.channelCount) {
                for frame in 0 ..< Int(audioBuffer.frameLength) {
                    data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                }
            }
        }

        // memory leak here
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }
}
1 reply · 0 boosts · 273 views · Nov ’24
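For the memory-growth question above, one common pattern (a sketch, assuming the producer can tolerate brief back-pressure and does not call this from the render callback itself) is to bound how many buffers are queued on the player node, freeing a slot from the schedule completion handler instead of letting the queue grow without limit. BoundedScheduler is a hypothetical wrapper name and the slot count of 8 is arbitrary.

import AVFoundation

final class BoundedScheduler {
    private let playerNode: AVAudioPlayerNode
    // Allow at most 8 buffers in flight; each schedule call waits for a free slot.
    private let slots = DispatchSemaphore(value: 8)

    init(playerNode: AVAudioPlayerNode) {
        self.playerNode = playerNode
    }

    /// Call from a background queue, not the per-frame callback, because wait() can block.
    func schedule(_ buffer: AVAudioPCMBuffer) {
        slots.wait()
        let slots = self.slots
        playerNode.scheduleBuffer(buffer) {
            // The node has consumed the buffer, so it can be released and a slot freed.
            slots.signal()
        }
    }
}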
AVAudioPlayerNode can't play interleaved AVAudioPCMBuffer
I'm building a streaming app on visionOS that can play sound from audio buffers each frame. The source audio buffer has 2 channels and is in a Float32 interleaved format. However, when setting up the AVAudioFormat with interleaved set to true, the app will crash with a memory issue:

AURemoteIO::IOThread (35): EXC_BAD_ACCESS (code=1, address=0x3)

But if I set AVAudioFormat with interleaved set to false, and manually set up the AVAudioPCMBuffer, it can play audio as expected. Could you please help me fix it? Below is the code snippet.

@Observable
final class MyAudioPlayer {
    private var audioEngine: AVAudioEngine = .init()
    private var audioPlayerNode: AVAudioPlayerNode = .init()
    private var audioFormat: AVAudioFormat?

    init() {
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try? AVAudioSession.sharedInstance().setActive(true)
        audioEngine.prepare()
        try? audioEngine.start()
        audioPlayerNode.play()
    }

    // more code...

    /// This crashes
    private func audioFrameCallback_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 480000, channels: 2, interleaved: true),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)

        if let data = audioBuffer.floatChannelData?[0] {
            data.update(from: buf, count: samples * Int(format.channelCount))
        }

        audioPlayerNode.scheduleBuffer(audioBuffer)
    }

    /// This works
    private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 480000, channels: 2, interleaved: false),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)

        if let data = audioBuffer.floatChannelData {
            for channel in 0 ..< Int(format.channelCount) {
                for frame in 0 ..< Int(audioBuffer.frameLength) {
                    data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                }
            }
        }

        audioPlayerNode.scheduleBuffer(audioBuffer)
    }
}
1 reply · 0 boosts · 262 views · Nov ’24
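Related to the crash above: AVAudioPlayerNode generally expects scheduled buffers to match the format of its output connection, and the standard format used when connecting with format: nil is deinterleaved Float32. A small hedged check (reusing the audioBuffer and audioPlayerNode names from the post) can surface the mismatch in a log instead of letting it surface in the I/O thread; it is illustrative, not a full fix.

let nodeFormat = audioPlayerNode.outputFormat(forBus: 0)
if audioBuffer.format != nodeFormat {
    // e.g. interleaved Float32 vs. the deinterleaved standard format of the connection;
    // convert with AVAudioConverter (or build a non-interleaved buffer) before scheduling.
    print("Format mismatch: buffer \(audioBuffer.format) vs node \(nodeFormat)")
} else {
    audioPlayerNode.scheduleBuffer(audioBuffer)
}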
Get audio volume from microphone
Hello. We are trying to get the audio volume from the microphone. We have 2 questions.

1. Can anyone tell me about AVAudioEngine.InputNode.volume? We expected it to return 0 in silence and a float value up to 1.0 depending on the volume, but it looks like 1.0 (the default value) is returned at all times. In which cases does it return 0.5 or 0? Sample code is below. The microphone works correctly.

// instance members
private var engine: AVAudioEngine!
private var node: AVAudioInputNode!

// start method
self.engine = .init()
self.node = engine.inputNode
engine.prepare()
try! engine.start()

// volume getter
print("\(self.node.volume)")

2. What is the best practice to get the audio volume from the microphone? Requirements:
- Without AVAudioRecorder. We use it for streaming audio.
- It should withstand high-frequency access.

Testing info:
- Device: iPhone XR
- OS version: iOS 18

Best Regards.
2 replies · 0 boosts · 397 views · Oct ’24
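For the mic-level question above: the volume property on the input node appears to be a gain you can set, not a measured level, which is why it stays at 1.0. A sketch of measuring the incoming level instead, using a tap and a root-mean-square calculation over each chunk (the buffer size and the RMS loop are illustrative choices; Accelerate's vDSP_rmsqv could replace the loop):

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let frameLength = Int(buffer.frameLength)

    // RMS of the chunk: 0 in silence, larger as the input gets louder.
    var sum: Float = 0
    for i in 0..<frameLength {
        let s = samples[i]
        sum += s * s
    }
    let rms = sqrt(sum / Float(max(frameLength, 1)))
    print("Input level (RMS): \(rms)")
}

engine.prepare()
try engine.start()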
watchOS 11: My recording playback gets paused after a while
On watchOS 11, playback of my recording gets paused when I turn my wrist down. This was not happening in previous versions. In my app I made a recording, and when I play it back in my app, it plays fine in debug mode (when Xcode is connected), so I could not debug the issue. Otherwise, playback is automatically paused (when my wrist is down or the inactivity time has elapsed). I want it to keep playing.
1 reply · 0 boosts · 370 views · Sep ’24
How do I output different sounds to headphones and speakers while simultaneously recording, all without using AVAudioSession.Category.multiroute?
I need to find a way to allow recording from the mic while outputting two different sound streams to two different devices (speaker and headphones). I've done a fair bit of reading around using AVAudioSession.Category.multiroute but haven't found any modern examples. @theanalogkid posted a nice example using Obj-C nine years ago, but others have noted that the code isn't readily translatable to Swift.

To make matters worse, this is one of the very few examples on how to properly use multirouting. The official documentation is lacking, to say the least, and the WWDC 2012 session is, well, old enough to attend middle school and be a Taylor Swift fan, but definitely not in Swift. The few relevant forum posts here are spread over this middle schooler's life span and are likely outdated, with most having no responses other than the poster's own plightful echo. They don't paint a pretty picture of .multiroute's health, with a recent poster noting that volume buttons don't work in this mode, contacting DTS and finding that there's no fix; another finding that it just doesn't work for certain devices, etc. Audio is giving me enough of a headache, so I'd like to avoid slogging through this if possible. .multiroute feels like the developer mode of AVAudioSession, but without documentation.

tl;dr - Without using .multiroute, is there a way to allow an app to output to two different devices while simultaneously recording audio? If .multiroute is the only way to achieve this, can someone give me a quick rundown of how this category works?
1 reply · 0 boosts · 512 views · Aug ’24
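On the "quick rundown" part of the question above, a minimal hedged sketch of the .multiRoute category follows; the category and route inspection APIs are real, but the comments describe the general idea rather than a verified recipe for the poster's exact setup.

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    // .multiRoute exposes each attached output (e.g. wired headphones and the
    // built-in speaker) as separate addressable channels instead of picking one route.
    try session.setCategory(.multiRoute)
    try session.setActive(true)
} catch {
    print("Could not activate multiroute session: \(error)")
}

// Each output port appears with its own channel range; an AVAudioEngine graph can
// then target those channels (for example via output channel maps) to send
// different audio to headphones versus the speaker.
for output in session.currentRoute.outputs {
    print(output.portType.rawValue, output.channels?.map(\.channelNumber) ?? [])
}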
Issue with AVAudioEngine and AVAudioSession after Interruption and Background Transition - 561145187 error code
Description: I am developing a recording-only application that supports background recording using AVAudioEngine. The app segments the recording into 60-second files for further processing. For example, a 10-minute recording results in ten 60-second files.

Problem: The application functions as expected in the background. However, after the app receives an interruption (such as a phone call) and the interruption ends, I can successfully restart the recording. The problem arises when the app then transitions to the background; it fails to restart the recording. Specifically, after ending the call and transitioning the app to the background, the app encounters an error and is unable to restart AVAudioSession and AVAudioEngine. The only resolution is to close and restart the app, which is not ideal for user experience.

Steps to Reproduce:
1. Start recording using AVAudioEngine.
2. The app records and saves 60-second segments.
3. Receive an interruption (e.g., an incoming phone call).
4. End the call.
5. Transition the app to the background.
6. Transition the app to the foreground; the session will be activated again.
7. Attempt to restart the recording.

Expected Behavior: The app should resume recording seamlessly after the interruption and background transition.

Actual Behavior: The app fails to restart AVAudioSession and AVAudioEngine, resulting in a continuous error. The recording cannot be resumed without closing and reopening the app.

How I’m Starting the Recording:

Configuration:

internal func setAudioSessionCategory() {
    do {
        try audioSession.setCategory(
            .playAndRecord,
            mode: .default,
            options: [.defaultToSpeaker, .mixWithOthers, .allowBluetooth]
        )
    } catch {
        debugPrint(error)
    }
}

internal func setAudioSessionActivation() {
    if UIApplication.shared.applicationState == .active {
        do {
            try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
            if audioSession.isInputGainSettable {
                try audioSession.setInputGain(1.0)
            }
            try audioSession.setPreferredIOBufferDuration(0.01)
            try setBuiltInPreferredInput()
        } catch {
            debugPrint(error)
        }
    }
}

Starting AVAudioEngine:

internal func setupEngine() {
    if callObserver.onCall() {
        return
    }
    inputNode = audioEngine.inputNode
    audioEngine.attach(audioMixer)
    audioEngine.connect(inputNode, to: audioMixer, format: AVAudioFormat.validInputAudioFormat(inputNode))
}

internal func beginRecordingEngine() {
    audioMixer.removeTap(onBus: 0)
    audioMixer.installTap(onBus: 0, bufferSize: 1024, format: AVAudioFormat.validInputAudioFormat(inputNode)) { [weak self] buffer, _ in
        guard let self = self, let file = self.audioFile else { return }
        write(file, buffer: buffer)
    }
    audioEngine.prepare()
    do {
        try audioEngine.start()
        recordingTimer = Timer.scheduledTimer(withTimeInterval: recordingInterval, repeats: true) { [weak self] _ in
            self?.handleRecordingInterval()
        }
    } catch {
        debugPrint(error)
    }
}

On the try audioEngine.start() call, I receive error code 561145187 in the catch block.

Logs/Error Messages:
• Error code: 561145187

Request: I would appreciate any guidance or solutions to ensure the app can resume recording after interruptions and background transitions without requiring a restart. Thank you for your assistance.
2 replies · 0 boosts · 729 views · Aug ’24
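One detail worth noting in the post above: setAudioSessionActivation only calls setActive(true) while the app is in the .active state, which can never be true once the app has moved to the background. A common pattern (a hedged sketch, not a guaranteed fix for error 561145187) is to reactivate the session and restart the engine from the interruption notification itself, when the interruption ends with the .shouldResume option. InterruptionHandler is a hypothetical wrapper name.

import AVFoundation

final class InterruptionHandler {
    private var observer: NSObjectProtocol?
    var restartRecording: (() -> Void)?

    init() {
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { [weak self] note in
            guard let info = note.userInfo,
                  let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

            if type == .ended {
                let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
                let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
                if options.contains(.shouldResume) {
                    // Reactivate and restart here, rather than waiting for a later
                    // foreground/background transition to do it.
                    try? AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
                    self?.restartRecording?()
                }
            }
        }
    }
}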
Implementing Multi-Channel Audio Recording on iOS with Built-In and External Mics
Hi there community, First and foremost, a big thank you to everyone who takes the time to read this.

TL;DR: How, if even possible, can I record multiple audio streams simultaneously in an iOS application (iPad/iPhone)?

I'm working on a recorder for the iPad to gather data for a machine learning project focused on speech recognition. Our goal is to capture extensive speech data, which requires recording from multiple microphones. Specifically, I need to record from all mics connected to our Scarlett 4i4 audio interface and, most importantly, also record from the built-in mic on the iPad or iPhone at the same time.

As a newcomer to Swift development, I initially explored AVAudioRecorder. However, I quickly realized that it only supports one active audio node at a time, making multi-channel recording impossible (perhaps you can prove me wrong, that would make my day). Next, I transitioned to using AVAudioEngine, but encountered the same limitation: I couldn't manage to get input nodes for both the built-in mic and the Scarlett interface channels simultaneously. The application started behaving oddly, often resulting in identical audio data being recorded across all files.

Determined to find a solution, I delved deeper into the Core Audio framework, specifically using Audio Toolbox. My approach involved creating and configuring multiple Audio Units, each corresponding to a different audio input device. Here's a brief overview of my current implementation:

- Listing Available Input Devices: I used AVAudioSession to enumerate all available input devices.
- Creating Audio Units: For each device, I created an Audio Unit and attempted to configure it for recording.
- Setting Up Callbacks: I set up input and output callbacks to handle the audio processing.

Despite my efforts over the last few days, I haven't had much success. The callbacks for the Audio Units don't seem to be invoked correctly, and I'm struggling to achieve simultaneous multi-channel recording. Below is a snippet of my latest attempt:

let audioUnitCallback: AURenderCallback = { (
    inRefCon: UnsafeMutableRawPointer,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBusNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>?
) -> OSStatus in
    guard let ioData = ioData else {
        return noErr
    }

    print("Input callback invoked")

    let audioUnit = inRefCon.assumingMemoryBound(to: AudioUnit.self).pointee

    var bufferList = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(
            mNumberChannels: 1,
            mDataByteSize: 0,
            mData: nil
        )
    )

    let status = AudioUnitRender(audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList)
    if status != noErr {
        print("AudioUnitRender failed: \(status)")
        return status
    }

    // Copy rendered data to output buffer
    let buffer = UnsafeMutableAudioBufferListPointer(ioData)[0]
    buffer.mData?.copyMemory(from: bufferList.mBuffers.mData!, byteCount: Int(bufferList.mBuffers.mDataByteSize))
    buffer.mDataByteSize = bufferList.mBuffers.mDataByteSize

    print("Rendered audio data")
    return noErr
}

let outputCallback: AURenderCallback = { (
    inRefCon: UnsafeMutableRawPointer,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBusNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>?
) -> OSStatus in
    guard let ioData = ioData else {
        return noErr
    }

    print("Output callback invoked")
    // Process the output data if needed
    return noErr
}

In essence, I'm stuck and in need of guidance. Has anyone here successfully implemented multi-channel recording on iOS, especially involving both built-in microphones and external audio interfaces? Any shared experiences, insights, or suggestions on how to proceed would be immensely appreciated. Thank you once again for your time and assistance!
0 replies · 0 boosts · 567 views · Jul ’24
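The multi-device capture problem above is beyond what a short example can settle, but for the first step the poster lists (enumerating input devices), a sketch with AVAudioSession follows. Note that the session routes one active input at a time, so this alone does not capture the built-in mic and a USB interface simultaneously; the category options are illustrative.

import AVFoundation

// List every input port the session currently exposes (built-in mic, USB interface, etc.)
// and, as an illustration, prefer the USB audio interface when one is attached.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, options: [.allowBluetooth])
try session.setActive(true)

for port in session.availableInputs ?? [] {
    print(port.portName, port.portType.rawValue, "channels:", port.channels?.count ?? 0)
}

if let usb = session.availableInputs?.first(where: { $0.portType == .usbAudio }) {
    // Routing follows the preferred input; the engine's inputNode then exposes that
    // device's channels (e.g. all four Scarlett inputs appear on one input node).
    try session.setPreferredInput(usb)
}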
Help Needed with AVAudioSession in Unity for Consistent Sound Output on iOS Devices
Hello, I hope this message finds you well. I am currently working on a Unity-based iOS application that requires continuous microphone input while also producing sound outputs. For this we need to use iOS echo cancellation, so some sounds need to be played via the iOS layer with echo cancellation. I am manually setting up the audio session after the app starts, using the .playAndRecord category of AVAudioSession. However, I am facing an issue where the volume of the sound output is inconsistent across different iOS devices and scenarios.

The process is quite simple: for each AudioClip we are about to play via Unity, we copy the buffer data to our iOS Swift layer, which then does all the processing and plays the audio via the native layer. Here are the specific issues I am encountering:

- The volume level of the game sound effects fluctuates between a normal audible volume and a very low volume.
- The sound output behaves differently depending on whether the app is launched with the device at full volume or on mute, and whether the app is put into the background and brought back to the foreground afterwards.
- The volume inconsistency affects my game negatively, as it is very hard to hear some audio, regardless of the device or its initial volume state.

I have followed the basic setup for AVAudioSession as per the documentation, but the inconsistencies persist. I'm also aware that Unity uses FMOD to set up the audio routing on iOS; we configure our custom routing after that. We tried tweaking the output volume prior to playing an audio clip so there isn't much discrepancy. This seems to align the output volume, but there are still some places where the volume is super low. I've looked into the waveforms in Unity and they all seem consistent; there is no reason why the volume would take a dip.

private var audioPlayer = AVAudioPlayerNode()

@objc public func Play() {
    audioPlayer.volume = AVAudioSession.sharedInstance().outputVolume * 0.25
    audioPlayer.play()
}

We also explored changing the audio session options to see if we had any luck, but unfortunately nothing has changed.

private func ConfigAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()

    do {
        try audioSession.setCategory(.playAndRecord, options: [.mixWithOthers, .allowBluetooth, .defaultToSpeaker])
        try audioSession.setMode(.spokenAudio)
        try audioSession.setActive(true)
    } catch {
        // Treat error
    }
}

Could anyone provide guidance or suggest best practices to ensure a stable and consistent volume output in this scenario? Any advice on this issue would be greatly appreciated. Thank you in advance for your help!
0 replies · 0 boosts · 642 views · Jul ’24
Device Volume Changes After Setting AVAudioSession Category
Hi there, I am encountering an issue in my project, which utilizes a speech recognizer and occasionally plays audio files. The problem arises when I configure the AVAudioSession and enable voice processing. The system volume changes unexpectedly and becomes uncontrollable. Specifically, the volume is excessively loud on iPhone but quite low on iPad.

let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth, .interruptSpokenAudioAndMixWithOthers])
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
try audioEngine.inputNode.setVoiceProcessingEnabled(true)
try audioEngine.outputNode.setVoiceProcessingEnabled(true)

I have provided a sample project here: Sample Project. To reproduce the issue, please follow these steps on a real device:
1. Click on "Play recording" to hear the sound at normal volume.
2. Click on "Start recording" to set up the category and speech recognizer.
3. Click on "Stop recording" to stop the recording.
4. Click on "Play recording" again and observe that the sound volume has changed.

Thank you for your assistance.
0 replies · 0 boosts · 644 views · Jun ’24
Understanding AVAudioTime in AVAudioNodeTapBlock? Is there a way to get time relative to a scheduled Buffer?
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback. For example, if the audio's frame position is >= some point and less than some other point, trigger some code. So I'm looking at:

- (void)installTapOnBus:(AVAudioNodeBus)bus
             bufferSize:(AVAudioFrameCount)bufferSize
                 format:(AVAudioFormat * __nullable)format
                  block:(AVAudioNodeTapBlock)tapBlock;

Now I have the frame positions calculated (predetermined before the audio is scheduled; I already made all the necessary computations). So I just need to fire code at certain points during playback:

[playerNode installTapOnBus:bus
                 bufferSize:bufferSize
                     format:format
                      block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    // Inspect current audio here and fire...
}];

[playerNode scheduleBuffer:fullbuffer
                    atTime:startTime
                   options:0
    completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
         completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // some code is here, not important to this question.
}];

The problem I'm having is figuring out where in the full buffer I am within the tap block. The tap block passes chunks (not the full audio buffer). I tried using the when parameter of the block to calculate the frame position relative to the entire audio but have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block (not the entire audio buffer I scheduled). Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results, but I'd rather avoid using a timer if possible and use sample time.
3 replies · 0 boosts · 882 views · 1w
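For the tap-timing question above, one approach (a sketch in Swift, using the same APIs, and assuming the full buffer is scheduled to play from the start of playback) is to convert the tap's when node time into the player node's own timeline with playerTime(forNodeTime:); the resulting sampleTime counts frames since the node started playing and can be compared against precomputed trigger positions. playerNode is the poster's node; triggerFrame stands in for one of the precomputed frame positions.

playerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, when in
    // `when` is in node (engine) time; convert it to the player's own timeline.
    guard let playerTime = playerNode.playerTime(forNodeTime: when) else { return }

    // Frames elapsed since play() for the first frame of this tap chunk.
    let startFrame = playerTime.sampleTime
    let endFrame = startFrame + AVAudioFramePosition(buffer.frameLength)

    // triggerFrame is assumed to be precomputed relative to the scheduled buffer's start.
    if (startFrame..<endFrame).contains(triggerFrame) {
        // fire the event
    }
}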
How to Add .sf2 Instruments to Multiple Channels Using AVFAudio?
Hello everyone, I'm relatively new to iOS development, and I'm currently working on a Flutter plugin package. I want to use the AVFAudio package to load instrument sounds from an SF2 file into different channels. Specifically, I'd like to load individual instruments from the SF2 file onto separate channels. However, I've been struggling to find a way to achieve this. Could someone guide me on how to load SF2 instrument sounds into different channels using AVFAudio? I've tried various combinations of parameters (program number, soundbank MSB, and soundbank LSB), but none seem to work. If anyone has experience with AVFAudio and SF2 files, I'd greatly appreciate your help. Perhaps there's a proven approach or a way to determine the correct values for these parameters? Should I use a soundfont editor to inspect specific values within the SF2 file? Thank you in advance for any assistance! Best regards, Melih
1 reply · 0 boosts · 918 views · Mar ’24
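For the SF2 question above, the usual AVFAudio pattern (a sketch, not a definitive answer to per-channel routing) is one AVAudioUnitSampler per instrument, each loading a different program from the same soundfont with loadSoundBankInstrument. The bank constants are the General MIDI defaults from AudioToolbox; the file name MySoundFont.sf2 and the program numbers 0 and 48 are illustrative placeholders that need to match the presets listed in the actual SF2 (a soundfont editor shows them).

import AVFoundation
import AudioToolbox

let engine = AVAudioEngine()
let sf2URL = Bundle.main.url(forResource: "MySoundFont", withExtension: "sf2")! // hypothetical file name

// One sampler per "channel": each loads a different instrument (program) from the same SF2.
let piano = AVAudioUnitSampler()
let strings = AVAudioUnitSampler()

for sampler in [piano, strings] {
    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)
}

try engine.start()

// Program numbers follow the soundfont's own preset list (General MIDI: 0 = piano, 48 = strings).
try piano.loadSoundBankInstrument(at: sf2URL, program: 0,
                                  bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                  bankLSB: UInt8(kAUSampler_DefaultBankLSB))
try strings.loadSoundBankInstrument(at: sf2URL, program: 48,
                                    bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                    bankLSB: UInt8(kAUSampler_DefaultBankLSB))

// Each sampler is then driven independently.
piano.startNote(60, withVelocity: 90, onChannel: 0)
strings.startNote(64, withVelocity: 90, onChannel: 0)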
USB audio multi route input - AVAudioEngine inputNode
Hi everybody, I'm trying to use the multiple inputs of a USB device with AVAudioEngine. My aim is to connect different inputNode channels to 2 or more different audio nodes (e.g. mixers). I'm able to get a specific input channel from the engine's inputNode with

OSStatus err = AudioUnitSetProperty(avEngine.inputNode.audioUnit,
                                    kAudioOutputUnitProperty_ChannelMap,
                                    kAudioUnitScope_Output, 1,
                                    outputChannelMap, propSize);

but this changes the routing for the whole input node and for all the destination mixer nodes. How do I send channel 1 of the inputNode to mixerNode1 and channel 2 to another mixerNode2?
0 replies · 0 boosts · 743 views · Feb ’24
Is AVAudioPCMFormatFloat32 required for playing a buffer with AVAudioEngine / AVAudioPlayerNode
I have a PCM audio buffer (AVAudioPCMFormatInt16). When I try to play it using AVAudioPlayerNode / AVAudioEngine, an exception is thrown:

"[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868"

(related thread: https://forums.developer.apple.com/forums/thread/700497?answerId=780530022#780530022)

If I convert the buffer to AVAudioPCMFormatFloat32, playback works. My questions are:
1. Does AVAudioEngine / AVAudioPlayerNode require AVAudioPCMBuffer to be in the Float32 format? Is there a way I can configure it to accept another format instead for my application?
2. If 1 is YES, is this documented anywhere?
3. If 1 is YES, is this required format subject to change at any point?

Thanks! I was looking to watch the "AVAudioEngine in Practice" session video from WWDC 2014 but I can't find it anywhere (https://forums.developer.apple.com/forums/thread/747008).
0 replies · 0 boosts · 750 views · Feb ’24
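For the Int16 question above, a common workaround (a sketch; whether the engine can be configured to accept Int16 buffers directly is not settled here) is to convert the Int16 buffer into a Float32 buffer with AVAudioConverter before scheduling it on the player node. makeFloat32Buffer is a hypothetical helper name.

import AVFoundation

/// Convert an Int16 PCM buffer to a deinterleaved Float32 buffer with the same
/// sample rate and channel count, suitable for scheduling on an AVAudioPlayerNode.
func makeFloat32Buffer(from int16Buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    let srcFormat = int16Buffer.format
    guard let dstFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                        sampleRate: srcFormat.sampleRate,
                                        channels: srcFormat.channelCount,
                                        interleaved: false),
          let converter = AVAudioConverter(from: srcFormat, to: dstFormat),
          let outBuffer = AVAudioPCMBuffer(pcmFormat: dstFormat,
                                           frameCapacity: int16Buffer.frameLength)
    else { return nil }

    // Feed the single source buffer once, then signal end of stream.
    var fed = false
    let status = converter.convert(to: outBuffer, error: nil) { _, outStatus in
        if fed {
            outStatus.pointee = .endOfStream
            return nil
        }
        fed = true
        outStatus.pointee = .haveData
        return int16Buffer
    }
    return status == .error ? nil : outBuffer
}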