AVAudioPlayerNode with generated data?

Hi,


What I need to do is have a custom AVAudioPlayerNode that generates samples for playback. I am unclear as to how this works.


I know that AVAudioPlayerNode has a scheduleBuffer: method, but this seems to imply that I would need to create all of the audio up front, which is not what I want to do. I need to create it a little bit at a time, on demand (like when you use Audio Queues).


Is this possible? How is it done?


Also, if anyone can recommend a tutorial, book, or some other resource that explains how this family of APIs works, I'd really appreciate it. Yes, Apple documents each class and method, but it doesn't provide a good high-level overview of how everything fits together. The linked AVFoundation Programming Guide doesn't cover any of this.


Thanks,

Frank

Accepted Reply

You can schedule buffers for playback as they are being generated. There's no requirement that a single scheduled buffer contain everything you want played back, or that you may only ever schedule one buffer; this is what you do when using an output Audio Queue, and it's the same concept here. Check out the WWDC session videos dedicated to AVAudioEngine, which should clarify the architecture for you. There are two: AVAudioEngine in Practice (WWDC 2014) and What's New in Core Audio (WWDC 2015).
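Roughly, the idea looks like this in Swift (a minimal sketch, not production code: makeBuffer and scheduleNextBuffer are just illustrative helper names, and the 440 Hz test tone and 4096-frame buffer size are arbitrary):

```swift
import AVFoundation

// Sketch: a player node fed with buffers that are generated on demand.
// Only the AVAudioEngine/AVAudioPlayerNode calls are framework API;
// the helpers below are hypothetical.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)!

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)

var phase = 0.0
let frequency = 440.0   // assumed test tone

// Fill one buffer with the next chunk of generated samples.
func makeBuffer(frameCount: AVAudioFrameCount) -> AVAudioPCMBuffer {
    let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
    buffer.frameLength = frameCount
    let samples = buffer.floatChannelData![0]
    let increment = 2.0 * Double.pi * frequency / format.sampleRate
    for frame in 0..<Int(frameCount) {
        samples[frame] = Float(sin(phase)) * 0.25
        phase += increment
        if phase > 2.0 * Double.pi { phase -= 2.0 * Double.pi }
    }
    return buffer
}

// Schedule one buffer; when the node has consumed it,
// generate and schedule the next one.
func scheduleNextBuffer() {
    let buffer = makeBuffer(frameCount: 4096)
    player.scheduleBuffer(buffer) {
        scheduleNextBuffer()
    }
}

do {
    try engine.start()
    scheduleNextBuffer()
    player.play()
} catch {
    print("Could not start engine: \(error)")
}
```

Each completion handler fires once the node has consumed that buffer, so the next chunk is generated just in time rather than all up front.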

Replies


Great, thanks for the tip. I found that if you schedule only one buffer at a time and wait for its completion callback before scheduling the next, you get gaps in the audio, but I was able to get around this by keeping at least one extra buffer scheduled ahead.
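In code, the workaround looks roughly like this (a sketch reusing the hypothetical scheduleNextBuffer helper from the example above):

```swift
// Prime the queue with two buffers so one is always waiting while the
// next is being generated; each completion handler schedules a replacement.
for _ in 0..<2 {
    scheduleNextBuffer()
}
player.play()
```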


Appreciate the pointers to the videos. I always forget to look there.


Frank