maximumFramesToRender

init(componentDescription: AudioComponentDescription, options: AudioComponentInstantiationOptions = [])


I built an Audio Unit v3 effect and I set the property maximumFramesToRender within the above initializer. My preference is to make this value 256 or 1024; however, it doesn't appear to matter what I set it to, because it always changes to 512. The effect does work, but I can't change the number of frames to render.


I would dispense with creating an effect entirely if I could get an AVAudioPlayerNode to render a maximum of 256 frames.


self.maximumFramesToRender = 256


The documentation says you must set the value before render resources are allocated, and I have done so.
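
For reference, here is roughly the shape of my subclass (a minimal sketch; MyEffectUnit is just a placeholder name, and the render block and bus setup are omitted):

import AVFoundation

class MyEffectUnit: AUAudioUnit {
    override init(componentDescription: AudioComponentDescription,
                  options: AudioComponentInstantiationOptions = []) throws {
        try super.init(componentDescription: componentDescription, options: options)
        // Set before render resources are allocated, as the documentation requires.
        self.maximumFramesToRender = 256
    }
}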


I have two questions:


1) Is there more than one place you must set this value in an Audio Unit v3 effect unit?

2) Can you set the frame rendering for an AVAudioPlayerNode, regardless of whether you are rendering online or offline?


try! self.audioEngine.enableManualRenderingMode(.offline, format: self.audioFormat, maximumFrameCount: 4096)


I do realize that for manual rendering I can change the maximumFrameCount to 256. However, I want either the effect or the player node to render at a different rate, because I built a render block around specific timings. So I need this specific effect or node to render at a defined rate, regardless of whether all the other downstream nodes are rendering at a higher frame rate.
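
For what it's worth, here is the kind of control I do have in manual rendering mode: each render call can request fewer frames than maximumFrameCount (a rough sketch; lengthInSamples stands in for the total number of frames I want to render):

let buffer = AVAudioPCMBuffer(pcmFormat: self.audioEngine.manualRenderingFormat,
                              frameCapacity: 256)!
while self.audioEngine.manualRenderingSampleTime < lengthInSamples {
    // Ask for 256 frames per call, even though maximumFrameCount is 4096.
    let status = try self.audioEngine.renderOffline(256, to: buffer)
    guard status == .success else { break }
    // ... consume buffer ...
}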

Dear MisterE,


I posted a related question a few days ago: https://forums.developer.apple.com/thread/111714


I don't know about offline rendering, as I've never used it, but here is what I can tell you about online mode from my experience.


From what I understood, the AVAudioEngine sets the maximumFramesToRender of every single node it hosts when prepare is called (or when start is called, if prepare has not been called before). So whatever you put in the init function of your AUAudioUnit will be overridden by the AVAudioEngine.


AVAudioEngine doesn't seem to offer any way to set the maximumFramesToRender for online mode. I found a workaround using AudioUnitSetProperty, and it seems to work:


AudioUnitSetProperty(self.audioEngine.outputNode.audioUnit,
                     kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global,
                     0,
                     &_maxFramesPerSlice,
                     sizeof(_maxFramesPerSlice));
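
If you're working in Swift, I believe the equivalent would be something like this (an untested sketch on my side; note that outputNode.audioUnit is optional there):

if let outputAU = self.audioEngine.outputNode.audioUnit {
    var maxFramesPerSlice: UInt32 = 256   // whatever value you want
    AudioUnitSetProperty(outputAU,
                         kAudioUnitProperty_MaximumFramesPerSlice,
                         kAudioUnitScope_Global,
                         0,
                         &maxFramesPerSlice,
                         UInt32(MemoryLayout<UInt32>.size))
}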


However, I'm not sure I'm supposed to do it, as this page mentions that you're not supposed to set the MaximumFramesPerSlice of output audio units, and the value can actually be modified by the AVAudioEngine (especially when the audio configuration changes). My guess is that you're not supposed to have any control over maximumFramesToRender inside an AVAudioEngine in online mode, but I never found a proper confirmation of that assertion.


That's all I can tell you for now. I'm looking for a proper, definitive answer myself, so I'll let you know if I find anything more.


It would be nice to have some insight from the Apple team!

>>AVAudioEngine doesn't seem to offer any way to set the maximumFramesToRender for online mode. I found a workaround using AudioUnitSetProperty, and it seems to work:



Excellent find. Unfortunately this did not help my particular case.



>>My guess is that you're not supposed to have any control over maximumFramesToRender inside an AVAudioEngine in online mode, but I never found a proper confirmation of that assertion.



I think you are correct, except when you are using your own audio units. There, the documentation states that you should set maximumFramesToRender before resources are allocated.


It's interesting that you mention preparing the engine before starting it. I previously tried to prepare the engine, but it didn't change the outcome. The funny thing is that I created the audio unit effect for one reason: so that I could have control over maximumFramesToRender. I spent two weeks getting the unit to work solely in Swift. I finally set the frame size, only to discover it was being overwritten.


If I figure anything out or someone provides a solution, I will be sure to let you know.

Yes, exactly, the maximumFramesToRender is overwritten by the system anyway.


Just a few clarifications, to be sure I explained things clearly.


First, a bit of context. I've been working on iOS apps with real-time audio synthesis, processing, and analysis code for several years. Until recently, I was using AUGraph with the render callback mechanism. I've been playing with AVAudioEngine for a while and started switching my apps to it last year, when Apple announced that AUGraph (in AudioToolbox) would soon be deprecated. So now I'm using AVAudioEngine, mainly with "home-made" audio units to embed my real-time code, and only in online mode.


I've always believed you're supposed to set the maximumFramesPerSlice of the output audio unit yourself, and that this is the one that matters for the whole graph. But I recently found the previously mentioned page, stating that you're not supposed to set maximumFramesPerSlice on the output node (or the RemoteIO unit within the good old AUGraph).


Here is what I observed during my tests with AUAudioUnits inside AVAudioEngine. To see it happen, you can override your AUAudioUnit's maximumFramesToRender setter with the following line:

super.maximumFramesToRender = maximumFramesToRender;

and put an NSLog or a breakpoint inside it.
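
In Swift, the same observation can be made by overriding the property with a didSet observer (a sketch):

override var maximumFramesToRender: AUAudioFrameCount {
    didSet {
        // Fires every time the host or the system changes the value.
        NSLog("maximumFramesToRender: %u -> %u", oldValue, maximumFramesToRender)
    }
}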

- The maximumFramesToRender property of AUAudioUnits is set every time the AVAudioEngine prepares (or starts, if not previously prepared), just before allocateRenderResourcesAndReturnError is called (see the sketch after this list).

- The value of maximumFramesToRender set on all AUAudioUnits inside the AVAudioEngine seems to be the one from the outputNode.

- This value can be modified live by the system, even if you set it with the previously mentioned workaround. It happens especially when the input is enabled or disabled, or when the audio I/O configuration changes (the AVAudioEngineConfigurationChange notification). This behaviour does not seem to be the same on every iPhone and iPad; I think it depends on the audio hardware chip.
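
To make the sequence concrete, here is roughly the order of events as I observe it (playerNode and format are placeholders for whatever your graph uses):

let engine = AVAudioEngine()
engine.attach(playerNode)
engine.connect(playerNode, to: engine.mainMixerNode, format: format)
engine.prepare()     // here the engine (re)sets maximumFramesToRender on every
                     // AUAudioUnit, then allocates their render resources
try engine.start()   // the same happens here if prepare() wasn't called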


Here is one example of the weird things that can happen, to illustrate this behaviour:

- Set the AVAudioSession to the standard 48 kHz sample rate with the preferredIOBufferDuration you need (session setup sketched after this list). I say 48 kHz because this has been the new standard for a few years, since the iPhone 6s I think; the audio hardware of some of the newest phones doesn't even support 44.1 kHz anymore. Build an AVAudioEngine with some nodes, a custom AUAudioUnit, etc., and connect everything with a 48 kHz format. When you start the AVAudioEngine, the standard maximumFramesToRender used for every node is usually (always?) 4096.

- While the app is running, connect a bluetooth audio speaker.

- The AVAudioSession switches to 44.1 kHz, the AVAudioEngine is reset (meaning the resources of all nodes are deallocated), and the configuration-change notification is triggered. If you don't change the internal connections and just restart the AVAudioEngine as is, the nodes will keep working at 48 kHz, and the output node will take care of the 48 kHz to 44.1 kHz sample rate conversion. The thing is that the output node still has a maximumFramesToRender of 4096, but at 44.1 kHz this time. In order to satisfy that, the AVAudioEngine will then set the maximumFramesToRender of each node working at 48 kHz to something around 4460 when it starts (or prepares) again after the configuration change, before reallocating the resources.
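
For reference, the session setup I have in mind for the first step is something like this (values are illustrative):

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default, options: [])
try session.setPreferredSampleRate(48_000)
try session.setPreferredIOBufferDuration(256.0 / 48_000.0)   // ask for ~256-frame I/O buffers
try session.setActive(true)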


To conclude:

- It seems that you're not supposed to control the maximumFramesToRender of the output node, and therefore of the whole graph.

- You have to take into account in your programming that maximumFramesToRender will change sometimes, and that the resources will be deallocated and reallocated each time the configuration changes. The maximumFramesToRender can have weird values if sample rate conversions are involved.

- All you can do is specify a preferredIOBufferDuration on the AVAudioSession to control the buffer size, and therefore the audio latency, of your app. While your app stays in the foreground, the system seems quite obedient about that.

- When your app goes into the background, or when the screen goes to sleep, the system can decide to use large audio buffers, and it seems we can't do much about it...



I may be wrong about some of these assumptions and conclusions, but unfortunately there aren't many precise answers out there about all these questions...
