AUAudioUnit – Impossible to handle frame count limitation in render?

Summary: I've created an AUAudioUnit to host some third-party signal processing code and am running into an edge-case limitation: I can only process and supply output audio (from the internalRenderBlock) when the requested frame count is an exact multiple of a specific number of frames.

More Detail:

This third-party code ONLY works with exactly 10 ms of data at a time. For example, with 48 kHz audio, it accepts exactly 480 frames on each processing call. If the AUAudioUnit's internalRenderBlock is called with a frame count of 1024, I can use the pullInputBlock to get 480 frames and process them, then pull and process another 480 frames, but what should I then do with the remaining 64 frames?
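To make the constraint concrete, here's a minimal sketch of what the render block's chunking loop ends up looking like. process480Frames() is a hypothetical stand-in for the third-party call, and the scratch buffer and copy into outputData are elided:

```swift
import AudioToolbox

// Inside an AUAudioUnit subclass. process480Frames() is a hypothetical
// wrapper around the third-party 10 ms processing call; buffer plumbing
// is elided for brevity.
override var internalRenderBlock: AUInternalRenderBlock {
    { actionFlags, timestamp, frameCount, outputBusNumber,
      outputData, realtimeEventListHead, pullInputBlock in

        let chunk: AUAudioFrameCount = 480   // 10 ms at 48 kHz
        var done: AUAudioFrameCount = 0

        // Pull and process as many whole 480-frame chunks as fit.
        while frameCount - done >= chunk {
            // ...use pullInputBlock to fetch `chunk` frames at offset
            // `done`, call process480Frames(), and write the results
            // into outputData at the same offset...
            done += chunk
        }

        // With frameCount == 1024 we reach this point with done == 960
        // and 64 frames left over, and no supported way to tell the host
        // "I consumed and produced only 960 of the 1024 frames."
        return noErr
    }
}
```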

Possible Solutions Foiled:

a) It seems there's no way to indicate to the host that I have consumed only 960 frames and will supply only 960 frames of output. I thought that if the outputData ABL buffers contained fewer frames than the frame count passed into the internalRenderBlock, the host might notice and advance the timestamp by only that much on the next call, but it does not.

So all of the audio must be processed before the block returns, but I can only do that if the block is asked to handle an exact multiple of 10 ms of data.

b) I can't buffer up the "remainder" input and process it on the next internalRenderBlock cycle, because all of the output must be provided on return, as discussed in (a).

c) As an alternative, I see no way for the unit to explicitly tell the host how many frames it can process at a time. maximumFramesToRender is the host telling the unit (not the reverse), and in any case it's only a maximum, not also a minimum.
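For reference, that property flows host-to-unit only; a host typically sets it before allocating render resources, roughly like this (unit being an already-instantiated AUAudioUnit):

```swift
// Host side (sketch): the host dictates the ceiling before allocating
// render resources. The unit has no counterpart property for telling
// the host a required or minimum per-call frame count.
unit.maximumFramesToRender = 4096    // an upper bound only
try unit.allocateRenderResources()
```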

What can I do?

Answered by swillits in 725775022

DTS has confirmed to me that there really is no API path that allows the audio unit to coordinate with the host in such a way that it processes only a given number of frames (or a multiple thereof) at a time. Their solution was to insert latency into the stream: buffer the incoming audio and emit silence until enough audio has come in that we can always process the internally required number of frames (10 ms / 480 frames in my example above) and send that to the output. Introducing latency is not a great solution, but it'll work.
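Here's a minimal sketch of that workaround for a single channel of Float samples, assuming the render block feeds this object. processTenMilliseconds() is a hypothetical stand-in for the third-party call, and the plain Arrays stand in for a preallocated, real-time-safe ring buffer:

```swift
// Sketch of the DTS-suggested workaround for one Float channel.
// Plain Arrays are used for clarity only; a real render path needs
// a lock-free, preallocated FIFO instead.
final class ChunkedProcessor {
    private let chunk = 480              // 10 ms at 48 kHz
    private var inputFIFO: [Float] = []
    private var outputFIFO: [Float]

    init() {
        // Prime the output with one chunk of silence. This is the
        // introduced latency: 480 frames = 10 ms.
        outputFIFO = [Float](repeating: 0, count: chunk)
    }

    func render(input: UnsafePointer<Float>,
                output: UnsafeMutablePointer<Float>,
                frameCount: Int) {
        // 1. Buffer all incoming audio, including any "remainder".
        inputFIFO.append(contentsOf: UnsafeBufferPointer(start: input, count: frameCount))

        // 2. Process every complete 480-frame chunk now available.
        while inputFIFO.count >= chunk {
            let block = Array(inputFIFO.prefix(chunk))
            inputFIFO.removeFirst(chunk)
            outputFIFO.append(contentsOf: processTenMilliseconds(block))
        }

        // 3. Emit frameCount frames. Because the unprocessed remainder
        //    is always smaller than one chunk, the one-chunk priming
        //    guarantees outputFIFO holds at least frameCount samples.
        for i in 0..<frameCount {
            output[i] = outputFIFO.removeFirst()
        }
    }

    // Hypothetical stand-in for the third-party 10 ms processing call.
    private func processTenMilliseconds(_ samples: [Float]) -> [Float] {
        samples
    }
}
```

The subclass should also override AUAudioUnit's read-only latency property to return the corresponding delay (480.0 / 48000.0 seconds in this example) so that hosts can compensate for it.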
