I'm trying to implement Ambisonic B-Format audio playback on Vision Pro with head tracking. So far audio plays, head tracking works, and the sound appears to be plain stereo. The problem is that it is not proper binaural playback when compared to playing back the audio file in a DAW. Has anyone successfully implemented B-Format playback on Vision Pro? Any suggestions on my current implementation?
func playAmbiAudioForum() async {
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback)
        try AVAudioSession.sharedInstance().setActive(true)

        // Audio file loading/preparation
        guard let testFileURL = Bundle.main.url(forResource: "audiofile", withExtension: "wav") else {
            print("Test file not found")
            return
        }
        let audioFile = try AVAudioFile(forReading: testFileURL)
        let audioFileFormat = audioFile.fileFormat

        // Create an AVAudioFormat with the Ambisonics B-Format channel layout
        guard let layout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Ambisonic_B_Format) else {
            print("layout failed")
            return
        }
        let format = AVAudioFormat(
            commonFormat: audioFile.processingFormat.commonFormat,
            sampleRate: audioFile.fileFormat.sampleRate,
            interleaved: false,
            channelLayout: layout
        )

        // Read the audio file into a buffer
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: UInt32(audioFile.length)) else {
            print("buffer failed")
            return
        }
        try audioFile.read(into: buffer)

        playerNode.renderingAlgorithm = .HRTF

        // Connect the nodes
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: audioEngine.outputNode, format: format)
        audioEngine.prepare()

        playerNode.scheduleBuffer(buffer, at: nil) {
            print("File finished playing")
        }
        try audioEngine.start()
        playerNode.play()
    } catch {
        print("Setup error:", error)
    }
}
Hi all,
I have been quite stumped on this behavior for a little bit now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in.
Right now I have an AVAudioEngine that I am using for voice chat and for playing buffers that I hand it. This works perfectly until route changes start to occur, which causes the AVAudioEngine to reset itself, which then causes all players attached to this engine to be stopped.
Once an AVAudioPlayerNode gets stopped because of this (but also at any other time), all samples that were scheduled to be played get purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played.
Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset?
I am not quite sure what my observer of AVAudioEngineConfigurationChange needs to be doing in my case, as this engine only handles output. All input goes through a separate engine, for simplicity.
Currently I am storing a queue of samples as they get sent to the AVAudioPlayerNode for playback, and after the completion fires I check whether the player isPlaying or not. If it's playing, I assume the sound actually was played; if not, I leave it in the queue and assume that an observer of the route change or the configuration change will see there are samples in the queue and reschedule them.
Thanks for any feedback!
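A minimal sketch of one way to structure this, assuming a hypothetical pendingBuffers array that holds everything scheduled but not yet confirmed played; the AVAudioEngineConfigurationChange handler reconnects, restarts, and reschedules whatever is left. This only illustrates the pattern and does not fully resolve the completion-handler ambiguity:

import AVFAudio

final class PlaybackManager {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    // Hypothetical bookkeeping: buffers scheduled but not yet confirmed as played.
    private var pendingBuffers: [AVAudioPCMBuffer] = []

    init() {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: nil)
        // Fires when the engine resets itself, e.g. on a route change.
        NotificationCenter.default.addObserver(forName: .AVAudioEngineConfigurationChange,
                                               object: engine, queue: .main) { [weak self] _ in
            self?.handleConfigurationChange()
        }
    }

    func schedule(_ buffer: AVAudioPCMBuffer) {
        pendingBuffers.append(buffer)
        player.scheduleBuffer(buffer) { [weak self] in
            DispatchQueue.main.async {
                guard let self else { return }
                // The completion also fires when buffers are purged by a reset,
                // so only count it as "played" while the player is still playing.
                if self.player.isPlaying, !self.pendingBuffers.isEmpty {
                    self.pendingBuffers.removeFirst()
                }
            }
        }
    }

    private func handleConfigurationChange() {
        // Re-establish the connection (the format may have changed) and reschedule.
        engine.connect(player, to: engine.mainMixerNode, format: nil)
        try? engine.start()
        let unplayed = pendingBuffers
        pendingBuffers.removeAll()
        unplayed.forEach { schedule($0) }
        player.play()
    }
}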
I followed this guide, and added com.apple.developer.spatial-audio.profile-access as an entitlement to the app (via the + Capability button – Spatial Audio Profile). I have an audio graph that outputs to AVAudioEngine.
However, the Xcode Cloud build ended up with this error:
Invalid Code Signing Entitlements. Your application bundle's signature contains code signing entitlements that are not supported on iOS. Specifically, key 'com.apple.developer.spatial-audio.profile-access' in 'Payload/…' is not supported.
The guide says it's available on iOS. Does that mean it isn't available on iOS 17? If so, how can I provide a fallback for iOS 17?
I’m currently developing an iOS metronome app using DispatchSourceTimer as the timer. The interval is set very small, around 50 milliseconds, and I’m using CFAbsoluteTimeGetCurrent to calculate the elapsed time to ensure the beat is played within a ±0.003-second margin.
The problem is that once the app goes to the background, the timing becomes unstable—it slows down noticeably, then recovers after 1–2 seconds.
When coming back to the foreground, it suddenly speeds up, and again, it takes 1–2 seconds to return to normal. It feels like the app is randomly “powering off” and then “overclocking.” It’s super frustrating.
I’ve noticed that some metronome apps in the App Store have similar issues, but there’s one called “Professional Metronome” that’s rock solid with no such problems. What kind of magic are they using? Any experts out there who can help? Thanks in advance!
P.S. I’ve already enabled background audio permissions.
The professional metronome that has no issues: https://link.zhihu.com/?target=https%3A//apps.apple.com/cn/app/pro-metronome-%25E4%25B8%2593%25E4%25B8%259A%25E8%258A%2582%25E6%258B%258D%25E5%2599%25A8/id477960671
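I can't say what that particular app does internally, but one common alternative to a CPU-side timer is to pre-schedule the clicks on the audio clock with an AVAudioPlayerNode, so timing follows the audio hardware rather than the (throttled) run loop. A rough sketch, with a hypothetical pre-rendered clickBuffer:

import AVFAudio

/// Schedules `beats` clicks at `bpm` against the player's sample-time clock.
/// `clickBuffer` is assumed to be a short pre-rendered click sound (not shown here).
func scheduleClicks(on player: AVAudioPlayerNode, clickBuffer: AVAudioPCMBuffer,
                    bpm: Double, beats: Int) {
    let sampleRate = clickBuffer.format.sampleRate
    let framesPerBeat = AVAudioFramePosition(sampleRate * 60.0 / bpm)
    for beat in 0..<beats {
        // Sample times are interpreted on the player's timeline, so playback stays
        // locked to the audio clock even when the app is in the background.
        let when = AVAudioTime(sampleTime: AVAudioFramePosition(beat) * framesPerBeat,
                               atRate: sampleRate)
        player.scheduleBuffer(clickBuffer, at: when, options: [], completionHandler: nil)
    }
    player.play()
}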
Hi community,
I'm wondering how I can request the "System Audio Recording Only" permission under Privacy & Security -> Screen & System Audio Recording via Swift.
I did a bunch of searching but didn't find good documentation on it.
I tried another approach, https://github.com/insidegui/AudioCap/blob/main/AudioCap/ProcessTap/AudioRecordingPermission.swift, but it doesn't work very reliably.
I'm experiencing audio issues while developing for visionOS when playing PCM data through AVAudioPlayerNode.
Issue Description:
Occasionally, the speaker produces loud popping sounds or distorted noise
This occurs during PCM audio playback using AVAudioPlayerNode
The issue is intermittent and doesn't happen every time
Technical Details:
Platform: visionOS
Device: Vision Pro / Simulator
Audio Framework: AVFoundation
Audio Node: AVAudioPlayerNode
Audio Format: PCM
I would appreciate any insights on:
Common causes of audio distortion with AVAudioPlayerNode
Recommended best practices for handling PCM playback in visionOS
Potential configuration issues that might cause this behavior
Has anyone encountered similar issues or found solutions? Any guidance would be greatly appreciated.
Thank you in advance!
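Common suspects for pops and crackles with AVAudioPlayerNode are a sample-rate or channel-count mismatch between the buffers and the node's connection format, buffers being scheduled faster than they can be rendered, and hard amplitude discontinuities at buffer boundaries. A small sketch for the last case, under the assumption that the pops come from edge discontinuities: apply a short linear fade at both ends of each buffer before scheduling it (the 64-frame ramp is an arbitrary example value).

import AVFAudio

func applyEdgeRamps(to buffer: AVAudioPCMBuffer, rampFrames: Int = 64) {
    guard let channels = buffer.floatChannelData else { return }
    let frames = Int(buffer.frameLength)
    let ramp = min(rampFrames, frames / 2)
    guard ramp > 0 else { return }
    for ch in 0..<Int(buffer.format.channelCount) {
        for i in 0..<ramp {
            let gain = Float(i) / Float(ramp)
            channels[ch][i] *= gain                 // fade in
            channels[ch][frames - 1 - i] *= gain    // fade out
        }
    }
}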
Hi all, I have spent a lot of time reading the tech note and watching the WWDC video that introduces the PTTFramework on iOS. I currently have a custom setup where I am using AVAudioEngine to schedule and play buffers that are being streamed through a call.
I am looking to use the PTTFramework to allow a user to trigger this push to talk behavior from the lock screen and the various places with the system UI it provides.
However, I am unsure what the correct behavior is regarding handling of the audio session. Right now I am using .playback when there is no active voice transmission so that devices such as AirPods can be in A2DP mode where applicable, and then transitioning to the .playAndRecord category only when the mic input should become active. Following this change, my AVAudioEngine manager then manually activates and deactivates the audio session when the engine is either playing/recording or idle.
In the documentation it states that you should not attempt to activate or deactivate your audio session directly, but allow the framework to handle it.
Does that mean that I need to either call the request to transmit delegate function or set an active participant on the channel manager first, and then wait for the didBecomeActive delegate method to trigger before I actually attempt to play or record any audio? (I am using the fullDuplex mode currently.) I noticed that that delegate method will only trigger if the audio session wasn't active before doing one of the above (setting active participant, requesting transmit).
Lastly, when using the PTTFramework it also mentions that we get support for PTT devices and I notice on the didBeginTransmittingFrom property we have a handsfreeButton case. Is there any documentation or resources for what is actually supported out of the box for this? I am currently working on handling a lot of the push to talk through bluetooth LE, and wanted to make sure there wasn't overlap with what the system provides.
Thank you!
Hi!
I am creating an aumi AUv3 extension and I am trying to achieve simultaneous connections to multiple other AVAudioNodes. I would like to know if it is possible to route the MIDI to different outputs inside the render process in the AUv3.
I am using connectMIDI(_:to:format:eventListBlock:) to connect the output of the AUv3 to multiple AVAudioNodes. However, when I send MIDI out of the AUv3, it gets sent to all the AudioNodes connected to it. I can't seem to find any documentation on how to route the MIDI to only one of the connected nodes. Is this possible?
Hi all,
I am working on an app where I have live prompts playing, in addition to a voice channel that sometimes becomes active. Right now I am using two different AVAudioSession configurations so that we only switch to a mic-enabled mode when we actually need input from the mic. These are defined below.
When just using the device hardware, everything works as expected: the modes change and the playback continues as needed. However, when using Bluetooth devices such as AirPods, where the switch from A2DP to HFP is needed, I am getting an AVAudioEngineConfigurationChange notification. In response I am tearing down the engine and creating a new one with the same two player nodes. This does work fine and there are no crashes, except that all the audio I had scheduled on a player node has now been cleared. All the completion blocks marked with ".dataPlayedBack" return the second this event happens, leaving me in a state where I have a valid engine setup again but no idea what actually played, or was errantly marked as such.
Is this the expected behavior when getting a configuration change notification?
Adding some information below to my audio graph for context:
I disconnect all parts of the graph when getting this event and set them up the same way on the new engine.
private var inputEngine: AVAudioEngine
private var audioEngine: AVAudioEngine
private let voicePlayerNode: AVAudioPlayerNode
private let promptPlayerNode: AVAudioPlayerNode
audioEngine.attach(voicePlayerNode)
audioEngine.attach(promptPlayerNode)
audioEngine.connect(
voicePlayerNode,
to: audioEngine.mainMixerNode,
format: voiceNodeFormat
)
audioEngine.connect(
promptPlayerNode,
to: audioEngine.mainMixerNode,
format: nil
)
An example of how I am scheduling playback, and where that completion is firing even if it didn't actually play.
private func scheduleVoicePlayback(_ id: AudioPlaybackSample.Id, buffer: AVAudioPCMBuffer) async throws {
    guard !voicePlayerQueue.samples.contains(where: { $0 == id }) else {
        return
    }
    seprateQueue.append(buffer)
    if !isVoicePlaying {
        activateAudioSession()
    }
    voicePlayerQueue.samples.append(id)
    if !voicePlayerNode.isPlaying {
        voicePlayerNode.play()
    }
    if let convertedBuffer = buffer.convert(to: voiceNodeFormat) {
        await voicePlayerNode.scheduleBuffer(convertedBuffer, completionCallbackType: .dataPlayedBack)
    } else {
        throw AudioPlaybackError.failedToConvert
    }
    voiceSampleHasBeenPlayed(id)
}
And lastly my audio session configuration if its useful.
extension AVAudioSession {
    static func setDefaultCategory() {
        do {
            try sharedInstance().setCategory(
                .playback,
                options: [
                    .duckOthers, .interruptSpokenAudioAndMixWithOthers
                ]
            )
        } catch {
            print("Failed to set default category? \(error.localizedDescription)")
        }
    }

    static func setVoiceChatCategory() {
        do {
            try sharedInstance().setCategory(
                .playAndRecord,
                options: [
                    .defaultToSpeaker,
                    .allowBluetooth,
                    .allowBluetoothA2DP,
                    .duckOthers,
                    .interruptSpokenAudioAndMixWithOthers
                ]
            )
        } catch {
            print("Failed to set category? \(error.localizedDescription)")
        }
    }
}
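One sketch of a way to tell "really played" apart from "purged by a reset", reusing the names from the code above; rebuildEngine() and reschedule(_:) stand in for the existing teardown/reschedule code and are hypothetical here. Since completions arrive asynchronously this narrows the window rather than closing it completely:

// Flip a flag around the reconfiguration and ignore completions that arrive while it is set.
private var isReconfiguring = false

func observeConfigurationChanges() {
    NotificationCenter.default.addObserver(forName: .AVAudioEngineConfigurationChange,
                                           object: nil,   // nil so the observer survives engine recreation
                                           queue: .main) { [weak self] _ in
        guard let self else { return }
        self.isReconfiguring = true
        self.rebuildEngine()                               // existing teardown / rebuild
        self.reschedule(self.voicePlayerQueue.samples)     // re-enqueue anything unconfirmed
        self.isReconfiguring = false
    }
}

func completionFired(for id: AudioPlaybackSample.Id) {
    // Completions delivered during a reset are for purged buffers, not played ones.
    guard !isReconfiguring, voicePlayerNode.isPlaying else { return }
    voiceSampleHasBeenPlayed(id)
}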
I am experiencing an issue while recording audio using AVAudioEngine with the installTap method. I convert the AVAudioPCMBuffer to Data and send it to a UDP server. However, when I receive the Data and play it back, there is a continuous crackling noise during playback.
I am sending the audio data using the library https://github.com/mindAndroid/swift-rtp, by creating packets and sending them.
Please help me resolve this issue. I have attached the code reference that I am currently using.
Thank you.
ViewController.swift
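Two things are worth ruling out here: packet loss or reordering on the UDP path, and the buffer-to-Data conversion itself. A sketch of the conversion side, under the assumption that the tap delivers non-interleaved Float32 buffers; it copies exactly frameLength frames (not frameCapacity) and interleaves manually so the receiver has one agreed-upon layout. The function name is illustrative only:

import AVFAudio

func data(from buffer: AVAudioPCMBuffer) -> Data {
    let channelCount = Int(buffer.format.channelCount)
    let frames = Int(buffer.frameLength)                       // not frameCapacity
    var out = Data(capacity: frames * channelCount * MemoryLayout<Float>.size)
    guard let channels = buffer.floatChannelData else { return out }
    for frame in 0..<frames {
        for ch in 0..<channelCount {
            var sample = channels[ch][frame]
            withUnsafeBytes(of: &sample) { out.append(contentsOf: $0) }
        }
    }
    return out
}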
Hello, I'm fairly new to AVAudioEngine and I'm trying to connect 2 mono nodes as left/right input to a stereo node. I was successful in splitting the input audio to 2 mono nodes using AVAudioConnectionPoint and channelMap.
But I can't figure out how to connect them back to a stereo node.
I'll post the code I have so far. The use case for this is that I'm trying to process the left/right channels with separate audio units.
Any ideas?
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: nativeFormat.sampleRate, channels: 1)!
let leftInputMixer = AVAudioMixerNode()
let rightInputMixer = AVAudioMixerNode()
let leftOutputMixer = AVAudioMixerNode()
let rightOutputMixer = AVAudioMixerNode()
let channelMixer = AVAudioMixerNode()
[leftInputMixer, rightInputMixer, leftOutputMixer,
rightOutputMixer, channelMixer].forEach { engine.attach($0) }
let leftConnectionR = AVAudioConnectionPoint(node: leftInputMixer, bus: 0)
let rightConnectionR = AVAudioConnectionPoint(node: rightInputMixer, bus: 0)
plugin.leftInputMixer = leftInputMixer
plugin.rightInputMixer = rightInputMixer
plugin.leftOutputMixer = leftOutputMixer
plugin.rightOutputMixer = rightOutputMixer
plugin.channelMixer = channelMixer
leftInputMixer.auAudioUnit.channelMap = [0]
rightInputMixer.auAudioUnit.channelMap = [1]
engine.connect(previousNode, to: [leftConnectionR, rightConnectionR], fromBus: 0, format: monoFormat)
// Process right channel, pass through left channel
engine.connect(rightInputMixer, to: plugin.audioUnit, format: monoFormat)
engine.connect(plugin.audioUnit, to: rightOutputMixer, format: monoFormat)
engine.connect(leftInputMixer, to: leftOutputMixer, format: monoFormat)
// Mix back to stereo?
engine.connect(leftOutputMixer, to: channelMixer, format: stereoFormat)
engine.connect(rightOutputMixer, to: channelMixer, format: stereoFormat)
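One variant that might be worth trying for the last two connections, using the same node names as above: keep each path mono into the final mixer and hard-pan the two output mixers, letting channelMixer sum them into a stereo image. stereoFormat is defined here since it isn't shown above:

let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: nativeFormat.sampleRate,
                                 channels: 2)!

// Hard-pan the two mono paths; pan is applied where each node feeds the next mixer.
leftOutputMixer.pan = -1.0
rightOutputMixer.pan = 1.0

engine.connect(leftOutputMixer, to: channelMixer, format: monoFormat)
engine.connect(rightOutputMixer, to: channelMixer, format: monoFormat)
engine.connect(channelMixer, to: engine.mainMixerNode, format: stereoFormat)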
I'm building a streaming app on visionOS that plays sound from audio buffers generated each frame. The audio format has a sample rate of 48,000 Hz, and each buffer has 480 samples.
I noticed when calling
audioPlayerNode.scheduleBuffer(audioBuffer)
The memory keeps increasing at a rate of about 0.1 MB per second, and at around 4 minutes the node seems to be full of buffers and does a hard reset, at which point the audio stops temporarily and the memory usage drops (see attached screenshot).
However, if I call
audioPlayerNode.scheduleBuffer(audioBuffer, at: nil, options: .interrupts)
the memory issue is gone, but the audio is broken (it sounds like it has been shortened).
Below is the full code snippet. Does anyone know how to fix it?
@Observable
final class MyAudioPlayer {
    private var audioEngine: AVAudioEngine = .init()
    private var audioPlayerNode: AVAudioPlayerNode = .init()
    private var audioFormat: AVAudioFormat?

    init() {
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try? AVAudioSession.sharedInstance().setActive(true)
        audioEngine.prepare()
        try? audioEngine.start()
        audioPlayerNode.play()
    }

    // more code...

    /// callback every frame
    private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48000, channels: 2, interleaved: false),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }
        audioBuffer.frameLength = AVAudioFrameCount(samples)
        if let data = audioBuffer.floatChannelData {
            for channel in 0 ..< Int(format.channelCount) {
                for frame in 0 ..< Int(audioBuffer.frameLength) {
                    data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                }
            }
        }
        // memory leak here
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }
}
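A sketch of one way to bound the growth, assuming these members live inside MyAudioPlayer and the per-frame callback can tolerate a brief wait: cap the number of buffers in flight with a semaphore so scheduling can never outrun playback (the limit of 8 is an arbitrary example):

private let inFlight = DispatchSemaphore(value: 8)

private func schedule(_ audioBuffer: AVAudioPCMBuffer) {
    inFlight.wait()                                    // blocks while 8 buffers are already queued
    audioPlayerNode.scheduleBuffer(audioBuffer) { [weak self] in
        self?.inFlight.signal()                        // a buffer finished; free a slot
    }
}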
I'm building a streaming app on visionOS that can play sound from audio buffers each frame. The source audio buffer has 2 channels and is in a Float32 interleaved format.
However, when setting up the AVAudioFormat with interleaved to true, the app will crash with a memory issue:
AURemoteIO::IOThread (35): EXC_BAD_ACCESS (code=1, address=0x3)
But if I set AVAudioFormat with interleaved to false, and manually set up the AVAudioPCMBuffer, it can play audio as expected.
Could you please help me fix it? Below is the code snippet.
@Observable
final class MyAudioPlayer {
    private var audioEngine: AVAudioEngine = .init()
    private var audioPlayerNode: AVAudioPlayerNode = .init()
    private var audioFormat: AVAudioFormat?

    init() {
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try? AVAudioSession.sharedInstance().setActive(true)
        audioEngine.prepare()
        try? audioEngine.start()
        audioPlayerNode.play()
    }

    // more code...

    /// This crashes
    private func audioFrameCallback_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 480000, channels: 2, interleaved: true),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }
        audioBuffer.frameLength = AVAudioFrameCount(samples)
        if let data = audioBuffer.floatChannelData?[0] {
            data.update(from: buf, count: samples * Int(format.channelCount))
        }
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }

    /// This works
    private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 480000, channels: 2, interleaved: false),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }
        audioBuffer.frameLength = AVAudioFrameCount(samples)
        if let data = audioBuffer.floatChannelData {
            for channel in 0 ..< Int(format.channelCount) {
                for frame in 0 ..< Int(audioBuffer.frameLength) {
                    data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                }
            }
        }
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }
}
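A sketch built on the assumption that the crash comes from handing the player node a buffer whose interleaved layout doesn't match the de-interleaved format of its connection: keep the source format interleaved, but let AVAudioConverter produce a buffer in the node's connection format before scheduling. The function name is illustrative, and the 48_000 sample rate is assumed to be the intended rate (the 480000 in the snippet above looks like a typo):

private func audioFrameCallback_Interleaved_Converted(buf: UnsafeMutablePointer<Float>?, samples: Int) {
    let dstFormat = audioPlayerNode.outputFormat(forBus: 0)   // the connection's (de-interleaved) format
    guard let buf,
          let srcFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48_000,
                                        channels: 2, interleaved: true),
          let converter = AVAudioConverter(from: srcFormat, to: dstFormat),
          let srcBuffer = AVAudioPCMBuffer(pcmFormat: srcFormat, frameCapacity: AVAudioFrameCount(samples)),
          let dstBuffer = AVAudioPCMBuffer(pcmFormat: dstFormat, frameCapacity: AVAudioFrameCount(samples))
    else { return }

    srcBuffer.frameLength = AVAudioFrameCount(samples)
    // For an interleaved Float32 buffer, channel 0 points at the whole interleaved stream.
    srcBuffer.floatChannelData?[0].update(from: buf, count: samples * Int(srcFormat.channelCount))

    var error: NSError?
    var delivered = false
    let status = converter.convert(to: dstBuffer, error: &error) { _, outStatus in
        if delivered {
            outStatus.pointee = .noDataNow
            return nil
        }
        delivered = true
        outStatus.pointee = .haveData
        return srcBuffer
    }
    if status == .haveData, error == nil {
        audioPlayerNode.scheduleBuffer(dstBuffer)
    }
}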
Hello.
We are trying to get audio volume from microphone.
We have 2 questions.
1. Can anyone tell me about AVAudioEngine.InputNode.volume?
AVAudioEngine.InputNode.volume
I expected it to return 0 in silence and a float value up to 1.0 depending on the
volume, but it looks like 1.0 (the default value) is returned at all times.
In which cases does it return 0.5 or 0?
Sample code is below. The microphone works correctly.
// instance member
private var engine: AVAudioEngine!
private var node: AVAudioInputNode!
// start method
self.engine = .init()
self.node = engine.inputNode
engine.prepare()
try! engine.start()
// volume getter
print("\(self.node.volume)")
2. What is the best practice to get audio volume from microphone?
Requirements are:
Without AVAudioRecorder. We use it for streaming audio.
it should withstand high frequency access.
Testing info
device: iPhone XR
OS version: iOS 18
Best Regards.
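As far as I can tell, inputNode.volume is a mixing parameter that you set rather than a live level meter, which would explain the constant 1.0. A common approach for metering (a sketch, not an official "volume" API) is to install a tap on the input node and compute an RMS level per buffer:

import AVFAudio

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let channel = buffer.floatChannelData?[0] else { return }
    let frames = Int(buffer.frameLength)
    guard frames > 0 else { return }
    var sumOfSquares: Float = 0
    for i in 0..<frames {
        sumOfSquares += channel[i] * channel[i]
    }
    let rms = (sumOfSquares / Float(frames)).squareRoot()              // ~0 in silence, up to ~1.0
    let decibels = 20 * log10(Double(max(rms, .leastNonzeroMagnitude)))   // or express it in dBFS
    print("level:", rms, "dBFS:", decibels)
}

engine.prepare()
try? engine.start()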
The new iPhone 16 supports spatial audio recordings in the camera app when recording videos. Is it possible to also record spatial audio without video, and is it possible for 3rd party developers to do so? If so, how do I need to configure AVAudioSession and/or AVAudioEngine to record spatial audio in my audio recording app on iPhone 16?
Hi community,
I'm trying to set up an AVAudioFormat with AVAudioPCMFormatInt16, but I get an error:
AVAEInternal.h:125 [AUInterface.mm:539:SetFormat: ([[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr])] returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 "(null)"
If I understand error code -10868 correctly, the format is not supported. But how can I use the PCM Int16 format? Here is my method:
- (void)setupAudioDecoder:(double)sampleRate audioChannels:(double)audioChannels {
    if (self.isRunning) {
        return;
    }

    self.audioEngine = [[AVAudioEngine alloc] init];
    self.audioPlayerNode = [[AVAudioPlayerNode alloc] init];
    [self.audioEngine attachNode:self.audioPlayerNode];

    AVAudioChannelCount channelCount = (AVAudioChannelCount)audioChannels;
    self.audioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatInt16
                                                        sampleRate:sampleRate
                                                          channels:channelCount
                                                       interleaved:YES];

    NSLog(@"Audio Format: %@", self.audioFormat);
    NSLog(@"Audio Player Node: %@", self.audioPlayerNode);
    NSLog(@"Audio Engine: %@", self.audioEngine);

    // Error on this line
    [self.audioEngine connect:self.audioPlayerNode to:self.audioEngine.mainMixerNode format:self.audioFormat];

    /**NSError *error = nil;
    if (![self.audioEngine startAndReturnError:&error]) {
        NSLog(@"Error initializing the audio engine: %@", error);
        return;
    }
    [self.audioPlayerNode play];
    self.isRunning = YES;*/
}
Also, I see that the audioEngine doesn't seem to be running:
Audio Engine:
________ GraphDescription ________
AVAudioEngineGraph 0x600003d55fe0: initialized = 0, running = 0, number of nodes = 1
Has anyone already used this format with AVAudioFormat?
Thank you!
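One workaround, under the assumption that -10868 (kAudioUnitErr_FormatNotSupported) comes from connecting an Int16 format to the mixer: connect the player node using a standard Float32 format and convert the incoming Int16 samples yourself before scheduling. Sketched in Swift rather than Objective-C, with illustrative names:

import AVFAudio

func scheduleInt16(_ samples: [Int16], channels: AVAudioChannelCount, sampleRate: Double,
                   on player: AVAudioPlayerNode) {
    // Assumes the player node was connected to the mixer with this Float32 format (or format: nil).
    guard let floatFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: sampleRate,
                                          channels: channels, interleaved: false),
          samples.count % Int(channels) == 0 else { return }
    let frames = AVAudioFrameCount(samples.count / Int(channels))
    guard let buffer = AVAudioPCMBuffer(pcmFormat: floatFormat, frameCapacity: frames),
          let out = buffer.floatChannelData else { return }
    buffer.frameLength = frames
    // De-interleave and rescale Int16 -> Float32 in [-1.0, 1.0).
    for frame in 0..<Int(frames) {
        for ch in 0..<Int(channels) {
            out[ch][frame] = Float(samples[frame * Int(channels) + ch]) / 32768.0
        }
    }
    player.scheduleBuffer(buffer)
}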
Here is the demo from Apple's site
This issue is specific to iOS 18.
When running this demo, after a gap in speaking, recognitionTask(with:resultHandler:) provides only the newly spoken text, not the concatenation of the old text and the text spoken after the gap.
Hi!
I have an AVAudioSequencer with some AVMusicTracks that are filled with AVParameterEvents.
If I toggle the isMuted property of a track, it mutes instantly when changed to true. However, after setting it back to false, the events only trigger on the next round of the loop and not instantly. Is this intended behaviour, and is there some way to get the events to trigger immediately after toggling isMuted back to false?
AVAudioEngine and AVAudioSession
Welcome! I will start off with the term AVAudioEngineImpl::Initialize(NSError**).
Why? I want those who run into this issue to have a chance of finding this post through search engines!
This is a short breakdown based on what I observed while trying to use these two components. It's not a guide that goes into all the details.
If you're trying to figure out how to fix a crash, you may find a common way to fix it in this post!
Is it possible to use AVAudioEngine and AVAudioSession together?
The answer is yes.
But you will face challenges, mostly with AVAudioEngine. Whatever you're trying to do, it will take a lot of testing. I don't know how it is with an IDE, but with just a .app and an iPhone it will take some testing. Or a lot of testing.
Something that helped me fix a crash was this: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing
This example Project by Apple, uses both AVAudioEngine and AVAudioSession.
How can I fix AVAudioEngineImpl::Initialize(NSError**) ?
I think this depends. If you're lucky and have a crash log, you may find clues, but the stack trace sometimes doesn't really help either.
I will mention common cases that I encountered though.
inputNode
https://developer.apple.com/documentation/avfaudio/avaudioengine/1386063-inputnode
You need an inputNode apparently. You need to access it or else I think there won't be one. And if there isn't one, AVAudioEngine.start will most likely crash.
The audio engine creates a singleton on demand when first accessing this variable.
Doing this has prevented this common issue for me.
.prepare deallocates and can cause a crash if you restart your AudioEngine
Another issue I faced was handling .prepare wrong. You don't need .prepare. But if you use installTap or other things, I think you need it.
Here is a common thing to note.
If you had previously initialized the inputNode, it could be gone after using .prepare.
You have to ensure you're accessing AVAudioEngine.inputNode (or whatever node you need) again before calling .start().
The Voice Processing Project, does this by creating a Managing Controller for AVAudioEngine with a sort of "setup" function, which ensures that everything is ready, before .prepare and .start get called.
AVAudioSession's setCategory
You have to experiment with it. The crashes can be very weird. Sometimes your App will only crash once, and then only after you install it again, or if you start it up.
You are actually able to use .setActive and .setCategory with AVAudioEngine. Just do not try to do .setActive(false) before you've stopped the AudioEngine, as it will fail.
Sometimes I'd run into an issue with .setActive(true), so you really have to experiment to see whether leaving that part out resolves the issue or not.
try session.setCategory(.multiRoute, mode: .default, options: [.defaultToSpeaker, .mixWithOthers])
Experiment with it. But these .multiRoute and .mixWithOthers have allowed me to use AVAudioEngine to make a test recording. And I can even switch the Data Sources and Polar Patterns without any issues.
Sometimes you can get away without setting .setActive at all. Not sure if AVAudioEngine does it automatically.
Short Summary
If you use .prepare and then .stop, make sure to initialize things like .inputNode before calling .prepare and .start again. (THIS CAN BE DIFFERENT)
Only call .setActive(false) after you used .stop. Otherwise I believe it has no chance to stop it.
AVAudioSession's setCategory is important. Ensure you use .multiRoute or experiment with all the categories and modes.
If you manage to solve your crash, you'll be able to indeed change the Data Sources and Polar Patterns and more!
Use isRunning before using .start, this will save you from another crash. If you use .start while it's already running, I think try and catch won't save you here, you have to ensure you're not starting it twice.
I hope that this short breakdown will help you resolve your crash. If you get deeper into AVAudioEngine and AVAudioSession, you'll probably face more crashes; I have yet to figure out how to solve them. I have a lot of trouble putting my testing app on my iPhone, so I am sorry if this guide doesn't cover every detail.
A HUGE tip from me is to check the documentation. As an example, when I read the documentation for inputNode I learned why my app crashed: it's because I never accessed and initialized one.
The Developer Documentation can be a bit of a labyrinth, and I strongly recommend reading up on every property you try to access if you believe it causes issues. I also recommend finding example projects like the Voice Processing one, as there aren't any code examples in the documentation.
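A compact sketch that condenses the ordering points above into one start/stop path; it's only an illustration of the sequence described in this post, not a drop-in manager:

import AVFAudio

final class EngineController {
    private let engine = AVAudioEngine()

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.multiRoute, mode: .default,
                                options: [.defaultToSpeaker, .mixWithOthers])
        try session.setActive(true)

        _ = engine.inputNode        // touch inputNode so the singleton exists before starting
        engine.prepare()

        guard !engine.isRunning else { return }   // never call .start() twice
        try engine.start()
    }

    func stop() {
        engine.stop()                                             // stop the engine first...
        try? AVAudioSession.sharedInstance().setActive(false)     // ...then deactivate the session
    }
}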
If I have a Bluetooth speaker connected and installTap called on the input node, the callback fires for 1–2 seconds and then stops. I don't see any route change or other notification handler called in between.
engine.inputNode.removeTap(onBus: 0)
engine.inputNode.installTap(
    onBus: 0,
    bufferSize: 4096,
    format: format
) { buffer, _ in
    // 3
    guard let channelData = buffer.floatChannelData else {
        return
    }
    // This callback fails after some time.
}
Not sure if this is expected, but I noticed some other applications that seem to work fine.
If I disconnect the Bluetooth device, my input works fine.
Also, I have no issues with output on the speaker.
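Not a confirmed fix, but one thing worth trying: the input hardware format can change when a Bluetooth route comes and goes, and a tap installed with a stale format can go quiet. A sketch (reusing the same engine as above) that reinstalls the tap with the input node's current format on every route change:

NotificationCenter.default.addObserver(forName: AVAudioSession.routeChangeNotification,
                                       object: nil, queue: .main) { _ in
    engine.inputNode.removeTap(onBus: 0)
    let currentFormat = engine.inputNode.outputFormat(forBus: 0)   // format after the route change
    engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: currentFormat) { buffer, _ in
        guard let channelData = buffer.floatChannelData else { return }
        _ = channelData   // process the samples as before
    }
    if !engine.isRunning {
        try? engine.start()
    }
}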