If it can help you help me, here is my question in a simpler way:
How can I use my own audio processing/generating code within an AVAudioEngine?
All the examples I found online explain how to use AVAudioPlayer to play a file or a pre-computed buffer, or how to use the built-in AVAudioUnitEffect subclasses (Delay, Distortion, EQ, Reverb) to apply effects to the mic input (and then send it to the headphones or store it in a file). But I can't find how to integrate my own process, with its own render callback, into this architecture...
Please help!
Not sure if this is what you're asking, but you can add a render callback on the last node that's pulled in the AVAudioEngine chain. The "correctness" of doing this is unclear, but it does work. Just set the kAudioUnitProperty_SetRenderCallback property on the underlying audio unit of the node you want to feed your audio into; that way you can generate your audio into the buffers in the callback. The documentation is all far too sparse, but from my experience with it I think AVAudioEngine is really just a higher-level, simpler way to build AUGraphs; underneath, it still all runs at the audio component level. In AVAudioEngine most, but not all, nodes have an audioUnit property; it depends on what the node is. The mixer node, for example, doesn't, but you (or at least I) generally don't need a mixer, and I think it's not even instantiated until you actually try to use it (it's created lazily when you access the mainMixerNode property).
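To illustrate, here's a rough sketch in Swift of what I mean, using the engine's output node (the I/O nodes do expose their underlying AudioUnit). Treat it as a starting point under those assumptions, not a blessed pattern:

```swift
import AVFoundation
import AudioToolbox

let engine = AVAudioEngine()

// C-style render callback that fills the output buffers with generated samples.
// Swift closures used as C function pointers cannot capture context, so any
// state would have to be passed through inputProcRefCon instead.
let renderCallback: AURenderCallback = { _, _, _, _, inNumberFrames, ioData in
    guard let ioData = ioData else { return noErr }
    for buffer in UnsafeMutableAudioBufferListPointer(ioData) {
        guard let samples = buffer.mData?.assumingMemoryBound(to: Float32.self) else { continue }
        for frame in 0..<Int(inNumberFrames) {
            samples[frame] = 0 // generate your own samples here
        }
    }
    return noErr
}

// Install the callback on the input scope of the output unit so it pulls
// audio from the callback instead of from the rest of the graph.
if let outputUnit = engine.outputNode.audioUnit {
    var callbackStruct = AURenderCallbackStruct(inputProc: renderCallback,
                                                inputProcRefCon: nil)
    let status = AudioUnitSetProperty(outputUnit,
                                      kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Input,
                                      0, // the output unit's input bus
                                      &callbackStruct,
                                      UInt32(MemoryLayout<AURenderCallbackStruct>.size))
    assert(status == noErr)
}

try? engine.start() // error handling omitted for brevity
```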
Hope this helps. Anyone feel free to correct me because I'm still getting my head around all the latest changes too.
Thanks for this trick. As you say, the "correctness" of this seems rather unclear.
Moreover, since I asked this question, I've finally understood something important: for audio processing apps on iOS, AVAudioEngine in its iOS 8.0 version seems rather limited. However, it makes a lot more sense with the AudioUnit v3 API introduced in iOS 9.0, which lets you use custom AudioUnits on iOS and therefore take full practical advantage of the AVAudioEngine architecture even if you have a lot of custom audio processing code.
As I always keep my apps backward-compatible with the previous iOS version, I'll explore that next year.
Two years later, I still have the same problem.
I've recently heard that AUGraph will be deprecated in 2018, so I'm looking for a way to use my own code to manually generate/process audio data within the newer AVAudioEngine framework.
Does anybody have a clue how we are supposed to do that?
Thank you all for your help.
I'm not 100% sure what your goal is, but you should be able to create AVAudioUnits that are backed by AUAudioUnits, and in an AUAudioUnit you can have your own render block. Additionally, as of iOS 11 you have manual rendering options in AVAudioEngine (https://developer.apple.com/videos/play/wwdc2017/501/)
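For example, a minimal (untested) sketch of such a subclass could look like this; the class name and the fixed sample rate are assumptions for illustration only:

```swift
import AVFoundation

// Sketch of an AUAudioUnit (v3) subclass whose internalRenderBlock generates
// a 440 Hz sine wave. A real unit would negotiate formats properly and avoid
// capturing self in the render block, for real-time safety.
class ToneGeneratorAU: AUAudioUnit {
    private var outputBusArray: AUAudioUnitBusArray!
    private var phase = 0.0

    override init(componentDescription: AudioComponentDescription,
                  options: AudioComponentInstantiationOptions = []) throws {
        try super.init(componentDescription: componentDescription, options: options)
        let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)!
        outputBusArray = AUAudioUnitBusArray(audioUnit: self,
                                             busType: .output,
                                             busses: [try AUAudioUnitBus(format: format)])
    }

    override var outputBusses: AUAudioUnitBusArray { return outputBusArray }

    override var internalRenderBlock: AUInternalRenderBlock {
        let increment = 2.0 * Double.pi * 440.0 / 44_100.0
        return { [unowned self] _, _, frameCount, _, outputData, _, _ in
            for frame in 0..<Int(frameCount) {
                let sample = Float(sin(self.phase))
                self.phase += increment
                // Write the same sample into every channel's buffer.
                for buffer in UnsafeMutableAudioBufferListPointer(outputData) {
                    buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = sample
                }
            }
            return noErr
        }
    }
}
```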
I also heard the bit in a WWDC session about AUGraph being deprecated, and I wanted a future-proof solution using AVAudioEngine instead. So I wrote a test app that instantiates an AUAudioUnit subclass with a callback block; the unit is then connected to AVAudioEngine to play the generated sound samples. It seems to work under iOS (device and Simulator), with fairly low latency (small callback buffers). The source code for my test app is posted on GitHub (search for hotpaw2), and there's a hotpaw2 GitHub gist on recording audio as well. Let me know if any of that helps, or if there is a better way to do this.
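Roughly, the approach is: register the subclass, instantiate it as an AVAudioUnit, and connect it to the engine. A simplified sketch (the component codes and names here are made up for illustration; the real code is in the repo):

```swift
import AVFoundation
import AudioToolbox

// Helper to build four-char codes; the codes below are purely illustrative.
func fourCC(_ string: String) -> FourCharCode {
    var code: FourCharCode = 0
    for byte in string.utf8 { code = (code << 8) | FourCharCode(byte) }
    return code
}

var description = AudioComponentDescription()
description.componentType = kAudioUnitType_Generator
description.componentSubType = fourCC("tone")
description.componentManufacturer = fourCC("demo")

// Register the AUAudioUnit subclass (e.g. the ToneGeneratorAU sketched above),
// then wrap it in an AVAudioUnit and connect it to the engine.
AUAudioUnit.registerSubclass(ToneGeneratorAU.self,
                             as: description,
                             name: "Demo: ToneGenerator",
                             version: 1)

let engine = AVAudioEngine()
AVAudioUnit.instantiate(with: description, options: []) { avUnit, _ in
    guard let avUnit = avUnit else { return }
    engine.attach(avUnit)
    engine.connect(avUnit, to: engine.mainMixerNode, format: nil)
    try? engine.start() // error handling omitted for brevity
}
```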