9 Replies
      Latest reply: Sep 14, 2017 4:14 PM by hotpaw2
      thomas.hezard Level 1 (0 points)

        Hello all,


        I'm currently trying to integrate AVAudioEngine into my apps, but I'm stuck on what seems to be the most basic question!


        Here is my problem:


        I have a large sound processing and generation program with a mixed C/Objective-C core. Until now I've been using an AUGraph (set up with a mixer unit, preferred and maximum frame counts, etc.) alongside AVAudioSession, and I set a render callback with AUGraphSetNodeInputCallback. The callback links to my core code, feeds it audio input when needed, and fills the AudioBufferList for the hardware output.
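        For context, the existing setup described above looks roughly like this in Swift (a sketch only; the graph, node, and bus values are assumed to exist, and the callback body is a placeholder for the call into the core code):

```swift
import AudioToolbox

// Sketch of the AUGraph-era setup described above. A captureless closure
// can be used as the C render callback; the body is a placeholder.
let renderCallback: AURenderCallback = { _, _, _, _, inNumberFrames, ioData in
    // Here the C/Objective-C core code would fill ioData with
    // inNumberFrames frames of generated audio.
    return noErr
}

func attachRenderCallback(to graph: AUGraph, node: AUNode, bus: UInt32) {
    var callbackStruct = AURenderCallbackStruct(inputProc: renderCallback,
                                                inputProcRefCon: nil)
    AUGraphSetNodeInputCallback(graph, node, bus, &callbackStruct)
}
```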


        I now want to move all the graph management from the AUGraph to AVAudioEngine, in order to simplify audio conversion and implement new features with the new AVAudioNode system. But for this, I need to plug my render callback in somewhere. If I understood correctly what was presented in the WWDC videos about Core Audio and AVAudioEngine (WWDC 2014 sessions 501 & 502, WWDC 2015 sessions 507 & 508), I need to output my generated audio into an AVAudioPCMBuffer and read it with an AVAudioPlayerNode. My question is now:

        Where and how do I plug in my render callback function so that my core code can fill the AVAudioPCMBuffer?


        I've been searching the web about this for hours, and all I found is how to use AVAudioPlayerNode to play a file, or how to read an AVAudioPCMBuffer generated in advance. I can't find how to fill an AVAudioPCMBuffer with a custom render callback and read it with an AVAudioPlayerNode synchronously, in real time, inside an AVAudioEngine.
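        For reference, the buffer-scheduling approach being asked about would have roughly this shape (a sketch under assumptions: `fillFromCoreCode` is a hypothetical call into the core code, and this is a possible pattern, not a confirmed solution):

```swift
import AVFoundation

// Possible shape of the "fill a buffer, schedule it on a player node" idea.
// The completion handler re-schedules, keeping a chain of buffers going.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)!

func startPlayback() throws {
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: format)
    try engine.start()
    scheduleNextBuffer()
    player.play()
}

func scheduleNextBuffer() {
    let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 4096)!
    buffer.frameLength = 4096
    // Hypothetical call into the core code to fill the channel data:
    // fillFromCoreCode(buffer.floatChannelData!, Int(buffer.frameLength))
    player.scheduleBuffer(buffer) {
        scheduleNextBuffer()   // keep the chain of buffers going
    }
}
```

        Note that the completion handler fires on a non-realtime thread, which is one reason this pattern is awkward for low-latency synchronous rendering.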


        Any advice?


        Thank you all



        • Re: AVAudioEngine & render callback
          thomas.hezard Level 1 (0 points)

          Dear all,




          If it helps, here is my question put more simply:


          How can I use my own audio processing/generating code within an AVAudioEngine?


          All the examples I found online explain how to use AVAudioPlayerNode to play a file or a pre-computed buffer, or how to use the built-in AVAudioUnitEffect subclasses (Delay, Distortion, EQ, Reverb) to apply effects to the mic input (and then send it to the headphones or save it to a file). But I can't find how to integrate my own processing, with its own render callback, into this architecture...


          Please help!





            • Re: AVAudioEngine & render callback
              ParanoidAndroid Level 1 (0 points)

              Not sure if this is what you're asking, but you can add a render callback on the last node that's pulled in the AVAudioEngine chain. The "correctness" of doing this is unclear, but it does work. Just set the kAudioUnitProperty_SetRenderCallback property on the underlying audio unit of the node you want to pass your audio into; that way you can generate your audio into the buffers in the callback. The documentation is all far too sparse, but from my experience I think AVAudioEngine is really just a higher-level, simpler way to build AUGraphs; underneath, it all still runs at the audio-component level. In AVAudioEngine most, but not all, nodes have an audioUnit property, depending on what the node is. The mixer node, for example, doesn't, but you (I) generally don't need a mixer, and I think it isn't even instantiated until you actually try to use it (it's created lazily when you access the mainMixerNode property).
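              A minimal sketch of the trick described above, assuming you have an AVAudioUnit (which exposes its underlying AudioUnit) to install the callback on:

```swift
import AVFoundation
import AudioToolbox

// Sketch: install a classic C render callback on the AudioUnit behind an
// AVAudioUnit node. Whether this is "correct" usage is unclear (see above).
let generatorCallback: AURenderCallback = { _, _, _, _, inNumberFrames, ioData in
    // Generate inNumberFrames frames into ioData here.
    return noErr
}

func installRenderCallback(on node: AVAudioUnit) {
    var callbackStruct = AURenderCallbackStruct(inputProc: generatorCallback,
                                                inputProcRefCon: nil)
    AudioUnitSetProperty(node.audioUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input,
                         0,  // input bus
                         &callbackStruct,
                         UInt32(MemoryLayout<AURenderCallbackStruct>.size))
}
```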


              Hope this helps.  Anyone feel free to correct me because I'm still getting my head around all the latest changes too.

                • Re: AVAudioEngine & render callback
                  thomas.hezard Level 1 (0 points)



                  Thanks for this trick. As you say, the "correctness" of this seems rather unclear.


                  Moreover, since I asked this question, I finally understood something important: for audio processing apps on iOS, AVAudioEngine in its iOS 8.0 version seems rather limited. It makes a lot more sense, however, with the AudioUnit v3 API introduced in iOS 9.0, which lets you use custom AudioUnits on iOS and therefore take full advantage of the AVAudioEngine architecture even if you have a lot of custom audio processing code.


                  As I always keep my apps compatible with the previous iOS version, I'll explore that next year.

                    • Re: AVAudioEngine & render callback
                      thomas.hezard Level 1 (0 points)

                      Hi all,


                      Two years later, I still have the same problem.


                      I've recently heard that AUGraph will be deprecated in 2018, so I'm looking for a way to use my own code to manually generate/process audio data within the new AVAudioEngine framework.


                      Does anybody have a clue how we are supposed to do that?


                      Thank you all for your help



                        • Re: AVAudioEngine & render callback
                          NikoloziApps Level 2 (85 points)

                          I'm not 100% sure what your goal is, but you should be able to create AVAudioUnits backed by AUAudioUnits, and in an AUAudioUnit you can have your own render block. Additionally, as of iOS 11 you have manual rendering options in AVAudioEngine (https://developer.apple.com/videos/play/wwdc2017/501/)
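                          The manual rendering mode mentioned above looks roughly like this in its offline variant (a sketch only; the sample rate, frame counts, and the idea that the engine's input would come from your own source nodes are assumptions):

```swift
import AVFoundation

// Sketch of AVAudioEngine's iOS 11 manual rendering mode (offline variant):
// the app, not the hardware, drives rendering and receives the output buffers.
func renderOfflineExample(engine: AVAudioEngine) throws {
    let format = AVAudioFormat(standardFormatWithSampleRate: 44100,
                               channels: 2)!
    try engine.enableManualRenderingMode(.offline,
                                         format: format,
                                         maximumFrameCount: 4096)
    try engine.start()

    let output = AVAudioPCMBuffer(
        pcmFormat: engine.manualRenderingFormat,
        frameCapacity: engine.manualRenderingMaximumFrameCount)!
    // Pull one block of audio through the whole engine graph.
    let status = try engine.renderOffline(
        engine.manualRenderingMaximumFrameCount, to: output)
    // status is .success when `output` now holds the rendered frames.
    _ = status
}
```

                          There is also a `.realtime` manual rendering mode, where rendering is driven from a realtime context via the engine's manualRenderingBlock.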

                            • Re: AVAudioEngine & render callback
                              thomas.hezard Level 1 (0 points)

                              Wow, I missed that, thank you. Block-based real-time audio rendering looks very promising!

                              AVAudioEngine finally seems to be approaching a "fully-featured" state; now I understand why they want to deprecate AUGraph and AudioToolbox in favor of AVFoundation and AVAudioEngine.

                            • Re: AVAudioEngine & render callback
                              hotpaw2 Level 2 (80 points)

                              I also heard the bit in a WWDC session about AUGraph being deprecated. I wanted a future-proof solution using AVAudioEngine instead, so I wrote a test app that instantiates an AUAudioUnit subclass with a callback block. The unit is then connected to AVAudioEngine to play the generated sound samples. It seems to work under iOS (device and Simulator), and with fairly low latency (small callback buffers). The source code for my test app is posted on GitHub (search for hotpaw2). There's a hotpaw2 GitHub gist on recording audio as well. Let me know if any of that helps, or if there is a better way to do this.
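                              A minimal sketch of the shape described above (illustrative only, not the poster's actual code; a working subclass must also override outputBusses and manage render resources, which is elided here):

```swift
import AVFoundation
import AudioToolbox

// Illustrative AUAudioUnit subclass: its internalRenderBlock is the
// AUv3 replacement for a classic render callback. Bus and resource
// management are elided for brevity.
class GeneratorUnit: AUAudioUnit {
    override var internalRenderBlock: AUInternalRenderBlock {
        return { actionFlags, timestamp, frameCount, outputBusNumber,
                 outputData, realtimeEventListHead, pullInputBlock in
            // Fill outputData with frameCount frames of generated
            // samples here, without allocating or locking.
            return noErr
        }
    }
}
```

                              Such a subclass can be registered with AUAudioUnit.registerSubclass(_:as:name:version:), wrapped via AVAudioUnit.instantiate(with:options:completionHandler:), attached to an AVAudioEngine, and connected to the main mixer.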