AVAudioEngine & render callback

Hello all,


I'm currently trying to integrate AVAudioEngine into my apps, but I'm stuck on what seems to be the most basic question!


Here is my problem:


I have a large sound processing and generating program with a mixed C/Objective-C core. Until now I've been using AVAudioSession together with an AUGraph (a mixer unit, preferred and maximum frame counts, etc.), and I set a render callback with AUGraphSetNodeInputCallback, which links to my core code, sends audio input to it if needed, and fills the AudioBufferList for the hardware output.
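
To be concrete, the current setup looks roughly like this (heavily simplified; MyCoreState and MyCoreRender stand in for my actual core code):

```objc
#import <AudioToolbox/AudioToolbox.h>

// Heavily simplified sketch of the current setup; MyCoreState and MyCoreRender
// are placeholders for my actual core code.
static OSStatus MyRenderCallback(void                        *inRefCon,
                                 AudioUnitRenderActionFlags  *ioActionFlags,
                                 const AudioTimeStamp        *inTimeStamp,
                                 UInt32                       inBusNumber,
                                 UInt32                       inNumberFrames,
                                 AudioBufferList             *ioData)
{
    MyCoreState *core = (MyCoreState *)inRefCon;
    // The core code generates/processes inNumberFrames frames directly into ioData.
    MyCoreRender(core, ioData, inNumberFrames);
    return noErr;
}

// During graph setup (graph and mixerNode already created with NewAUGraph / AUGraphAddNode):
AURenderCallbackStruct callbackStruct = {
    .inputProc       = MyRenderCallback,
    .inputProcRefCon = coreState
};
AUGraphSetNodeInputCallback(graph, mixerNode, 0, &callbackStruct);
```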


I now want to transfer all the graph management from the AUGraph to AVAudioEngine, in order to simplify audio conversion and to implement new features with the new AVAudioNode system. But for this, I need to plug my render callback in somewhere. If I understood correctly what was presented in the WWDC videos about Core Audio and AVAudioEngine (WWDC 2014 sessions 501 & 502, WWDC 2015 sessions 507 & 508), I need to output my generated audio into an AVAudioPCMBuffer and read it with an AVAudioPlayerNode. So my question is:

Where and how do I plug in my render callback function so that my core code fills the AVAudioPCMBuffer?


I've been searching the web about this for hours, and all I found is how to use AVAudioPlayerNode to play a file, or how to read an AVAudioPCMBuffer generated in advance. I can't find how to fill an AVAudioPCMBuffer with a custom render callback and read it with an AVAudioPlayerNode synchronously and in real time inside an AVAudioEngine.
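
To show the kind of thing I'm imagining, here is an untested sketch (MyCoreRenderFloat is just a placeholder for my core render function): a player node fed by buffers that my code fills, with each completion handler refilling and rescheduling the next buffer.

```objc
#import <AVFoundation/AVFoundation.h>

// MyCoreRenderFloat is a placeholder for my core render function; it fills
// non-interleaved float channel buffers with 'frames' frames of audio.
static void ScheduleNextBuffer(AVAudioPlayerNode *player, AVAudioFormat *format)
{
    const AVAudioFrameCount frames = 1024;
    AVAudioPCMBuffer *buffer =
        [[AVAudioPCMBuffer alloc] initWithPCMFormat:format frameCapacity:frames];
    buffer.frameLength = frames;

    // Ask the core code to fill buffer.floatChannelData[0 .. channelCount-1].
    MyCoreRenderFloat(buffer.floatChannelData, format.channelCount, frames);

    [player scheduleBuffer:buffer completionHandler:^{
        // The buffer has been consumed: refill and reschedule the next one.
        // This handler is not called on the real-time audio thread.
        ScheduleNextBuffer(player, format);
    }];
}
```

But this is push-driven from completion handlers instead of being pulled by the hardware like a render callback, so I'm not sure it is the intended way to feed an AVAudioPlayerNode in real time.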


Any advice?


Thank you all


Thomas

Replies

I completely see your point about running the analysis job outside the audio thread, even though:

- I never had any trouble related to the time spent on analysis inside the audio thread (I always take a close look at the computational cost of audio generation/processing/analysis algorithms before integrating one into the audio render loop).

- In signal processing it is very common to need precise scheduling of the analysis over the input data, so it seems weird to me to schedule the analysis job with UI calls; take the example of a classic f0-detection algorithm working on 2048-frame input buffers with a 1024-frame overlap (sketched below). Even more so when the result of the analysis is not used for the UI but for storage or real-time audio feedback. But I must admit that I have much more background in theoretical signal processing than in programming, so I'm far from an expert in threading 😉
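
Just to make the windowing concrete, here is an untested sketch with an input tap ('engine' is the running AVAudioEngine, RunF0Detection is a placeholder, and it assumes a mono, non-interleaved float format):

```objc
// Sketch of the window/hop scheduling I mean, using an input tap.
static const AVAudioFrameCount kWindow = 2048;  // analysis window
static const AVAudioFrameCount kHop    = 1024;  // 50% overlap

// Inside my setup code; 'engine' is the running AVAudioEngine.
NSMutableData *fifo = [NSMutableData data];     // samples waiting to be analysed

AVAudioFormat *inputFormat = [engine.inputNode outputFormatForBus:0];
[engine.inputNode installTapOnBus:0
                       bufferSize:kHop
                           format:inputFormat
                            block:^(AVAudioPCMBuffer *buf, AVAudioTime *when) {
    // Append channel 0 (the tap's actual buffer size is not guaranteed to match kHop).
    [fifo appendBytes:buf.floatChannelData[0]
               length:buf.frameLength * sizeof(float)];

    // Run the analysis every kHop frames, on each successive kWindow-frame window.
    while (fifo.length >= kWindow * sizeof(float)) {
        RunF0Detection((const float *)fifo.bytes, kWindow);   // placeholder analysis call
        [fifo replaceBytesInRange:NSMakeRange(0, kHop * sizeof(float))
                        withBytes:NULL
                           length:0];                         // drop one hop, keep the overlap
    }
}];
```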


Moreover, after spending some time testing and playing around with the one-to-many connection, I realised that the problem is the same in a pure audio-processing context, where it becomes much more serious. Take my previous graph and replace "analysis" with "side FX". In that case too, playing with the switches (connecting/disconnecting the inputs of the "side FX mixer" or the output of the "side FX process") will inevitably break the engine after a few occurrences, even if you stop and restart the engine at each connection change (which causes audible pops, by the way).

And in the case of an advanced modular audio app, this feature doesn't seem far-fetched to me but really useful!
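
For reference, the wiring I'm talking about looks roughly like this (untested; 'player', 'sideFXMixer' and 'sideFX' are placeholder names for my actual nodes, all attached to the engine beforehand):

```objc
// Rough shape of the graph:
//
//   player ──► mainMixerNode ──► outputNode
//      └─────► sideFXMixer ──► sideFX ──► mainMixerNode     (the switchable branch)

AVAudioFormat *fmt = [player outputFormatForBus:0];

// One-to-many connection: the player feeds both the main mix and the side-FX chain.
NSArray<AVAudioConnectionPoint *> *destinations = @[
    [[AVAudioConnectionPoint alloc] initWithNode:engine.mainMixerNode bus:0],
    [[AVAudioConnectionPoint alloc] initWithNode:sideFXMixer bus:0],
];
[engine connect:player toConnectionPoints:destinations fromBus:0 format:fmt];
[engine connect:sideFXMixer to:sideFX format:fmt];
[engine connect:sideFX to:engine.mainMixerNode format:fmt];

// The "switch": opening the branch while the engine runs...
[engine disconnectNodeInput:sideFXMixer bus:0];
// ...and closing it again by re-establishing both destinations.
[engine connect:player toConnectionPoints:destinations fromBus:0 format:fmt];
```

It is this last disconnect/reconnect pair that ends up breaking the engine after a few repetitions.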

I'm currently building a minimal project to illustrate the problem and file a bug report. I'll try to make it available to you if I can manage to avoid using any of my company's proprietary code.

Thanks again for all your help, it is much appreciated 🙂

It seems that not many of us are using these advanced features of AVFoundation and AVAudioEngine, and I must say that the documentation, as well as the WWDC videos on these subjects, is a bit frustrating when you try to understand how to implement these promising new features 😉

If you can create a reproducible example, please send a bug report to Apple. That's how they prioritize fixes. There should be a link at the bottom of the forum web page.


BTW, my suggestion of doing the audio analysis outside the graph was for visualization purposes. For live (re)synthesis, the analysis should probably go in the audio output graph for lowest latency. Another suggestion: when making a change to the graph, you might want to consider doing a quick fade to/from silence before/after the change.
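
Something along these lines is what I had in mind (untested; the ramp length and step count are arbitrary, and you would probably want to run it off the main thread or use a proper parameter ramp):

```objc
#import <AVFoundation/AVFoundation.h>
#include <unistd.h>

// Quick fade on the main mixer around a graph change (sketch only).
static void ChangeGraphWithFade(AVAudioEngine *engine, dispatch_block_t makeGraphChanges)
{
    AVAudioMixerNode *mixer = engine.mainMixerNode;
    const int steps = 20;

    for (int i = steps; i >= 0; i--) {      // fade out (~20 ms; naive: blocks the calling thread)
        mixer.outputVolume = (float)i / steps;
        usleep(1000);
    }
    makeGraphChanges();                     // do the connect/disconnect here
    for (int i = 0; i <= steps; i++) {      // fade back in
        mixer.outputVolume = (float)i / steps;
        usleep(1000);
    }
}
```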

Yes, I already sent the bug report to Apple (actually I regularly send bug reports about AVFoundation).


You can find my sample project on GitHub (my username is AudioScientist); it is the exact same one I sent to Apple in the bug report. I haven't implemented the fade in/out yet, as my first thought was that I would be able to plug/unplug the analysis module without pausing the engine. But as you will see in the project, problems occur with or without pausing the engine.

I'm really interested in your feedback about this. Don't hesitate to contact me if you have any questions.

Hey guys, it looks like you are both experienced with AVAudioEngine. Can you help me with this?

https://stackoverflow.com/questions/48911800/avaudioengine-realtime-frequency-modulation

As you can see in my snippet, I just want to take the inputNode, somehow modulate the frequency (for testing I used a reverb), and then send it to the outputNode. But instead of the reverb I would like to do some manual operations on the signal, e.g. invert it (multiply by -1). I don't know what the correct approach for such a task is.
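
In essence, my setup is something like this (written here in Objective-C; it is not my exact snippet from the question, just the shape of it):

```objc
#import <AVFoundation/AVFoundation.h>

// inputNode -> reverb -> output, with the reverb standing in for the manual
// per-sample processing (e.g. multiplying every sample by -1) that I actually want.
// (Assumes the audio session / microphone permission is already set up.)
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioUnitReverb *reverb = [[AVAudioUnitReverb alloc] init];
[reverb loadFactoryPreset:AVAudioUnitReverbPresetMediumHall];
reverb.wetDryMix = 50;

[engine attachNode:reverb];
AVAudioFormat *fmt = [engine.inputNode outputFormatForBus:0];
[engine connect:engine.inputNode to:reverb format:fmt];
[engine connect:reverb to:engine.mainMixerNode format:fmt];

NSError *error = nil;
[engine startAndReturnError:&error];
// Where would the per-sample code go instead of the reverb node?
```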