AVAudioSession: understanding and controlling Input/Output latency

Dear all,


I'm trying to get a full understanding of the AVAudioSession, AUGraph and AudioUnit classes in order to build clean and stable audio apps with precisely defined behaviours.


I'm stuck right now on one point: input and output latency (more specifically, input latency). Basically, my questions are the following:

1. Where do the latencies come from?

2. On what parameters do they depend?

3. How can I reduce them?


For now, I have noticed that the session mode AVAudioSessionModeMeasurement results in very low latency, but also in a very low input volume (and, I guess, less audio input processing), which is not really usable for a music app.


On an iPad Air 2, with the built-in microphone:

- with AVAudioSessionModeMeasurement, I obtain an input latency of 0.1 ms!

- with AVAudioSessionModeDefault, I obtain an input latency of 58 ms!


Any tips about my three questions?


Thank you


Thomas

There are likely audio filters and hardware sample buffers that affect latency, but these are opaque to the app and may differ between device hardware models.


The AVAudioSession preferredIOBufferDuration setting has an obvious effect on latency. The actual RemoteIO buffer latency will often vary between foreground and background mode, and depending on whether any other audio apps are running. Latency might also be larger if the RemoteIO buffer sample rate is different from the hardware sample rate. Don't assume that 44.1 kHz is the hardware sample rate on the newest devices.
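
Something along these lines (a rough sketch, not production code) requests a smaller buffer and then reads back what the hardware actually granted, including the real sample rate:

#import <AVFoundation/AVFoundation.h>

// Sketch: request a small I/O buffer, then read back what the system actually granted.
static void ConfigureAudioSession(void)
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    [session setPreferredIOBufferDuration:0.005 error:&error];  // ~5 ms is a request, not a guarantee
    [session setActive:YES error:&error];
    NSLog(@"hardware sample rate: %f Hz", session.sampleRate);  // may well be 48000, not 44100
    NSLog(@"granted IO buffer duration: %f s", session.IOBufferDuration);
}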


You might want to measure the actual input-to-output latency (with an oscilloscope or similar; some reports put the actual minimum at 7 to 11 ms) to see whether and how the latency numbers you obtain correspond.

AVAudioSessionModeMeasurement removes (or minimizes, according to the header comments) all system-supplied signal processing for I/O, which makes sense since an application wanting to do any type of measurement requires the cleanest possible signal. Different routes and modes will indeed change things, so don't assume anything about the audio system.
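
As a minimal sketch, assuming a play-and-record app, switching the shared session into measurement mode and logging the reported latencies looks something like this:

#import <AVFoundation/AVFoundation.h>

// Sketch: enable measurement mode, which strips system-supplied input processing,
// then log the latencies the session reports for the current route.
static void EnableMeasurementMode(void)
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    [session setMode:AVAudioSessionModeMeasurement error:&error];
    [session setActive:YES error:&error];
    NSLog(@"input latency:  %f s", session.inputLatency);
    NSLog(@"output latency: %f s", session.outputLatency);
}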


On the system, output-side latency is computed as:

Audio Device I/O Buffer Frame Size + Output Safety Offset + Output Stream Latency + Output Device Latency


If you're trying to calculate total roundtrip latency you can add:

Input Latency + Input Safety Offset to the above.
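
For example, with purely hypothetical numbers at a 48 kHz hardware rate: a 256-frame I/O buffer is 256 / 48000 ≈ 5.3 ms; add, say, a 1 ms output safety offset, 0.1 ms of output stream latency and 2 ms of output device latency, and the output path alone comes to roughly 8.4 ms before any input-side terms are added.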

The timestamp you see at the render proc accounts for the buffer frame size and the safety offset, but the stream and device latencies are not accounted for.
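
To make that concrete, here is a sketch of a RemoteIO render callback (my own names, nothing official): the timestamp it receives already covers the buffer size and safety offset, but you still need to add the session's outputLatency to know when the rendered audio actually leaves the hardware.

#include <string.h>
#include <AudioToolbox/AudioToolbox.h>

// Sketch of a render callback. inTimeStamp already reflects the I/O buffer frame size
// and the safety offset; the stream and device latencies are NOT included, so add
// AVAudioSession's outputLatency when mapping this to real-world time.
static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    Float64 presentationSampleTime = inTimeStamp->mSampleTime; // buffer size + safety offset included
    (void)presentationSampleTime;                              // combine with outputLatency for scheduling

    // Output silence in this sketch.
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }
    return noErr;
}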


iOS gives you access to the most important of the above information via AVAudioSession, and as mentioned you can also use the "preferred" session settings (setPreferredIOBufferDuration and preferredIOBufferDuration) for further control.

/* The current hardware input latency in seconds. */

@property(readonly) NSTimeInterval inputLatency NS_AVAILABLE_IOS(6_0);

/* The current hardware output latency in seconds. */

@property(readonly) NSTimeInterval outputLatency NS_AVAILABLE_IOS(6_0);

/* The current hardware IO buffer duration in seconds. */

@property(readonly) NSTimeInterval IOBufferDuration NS_AVAILABLE_IOS(6_0);


Audio Units also have the kAudioUnitProperty_Latency property you can query.
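
For example, a small helper along these lines (assuming you already have an initialized AudioUnit, e.g. your RemoteIO instance) reads that property:

#include <AudioToolbox/AudioToolbox.h>

// Sketch: read the latency an Audio Unit itself introduces, in seconds.
// `unit` is assumed to be an already-initialized AudioUnit.
static Float64 UnitLatencySeconds(AudioUnit unit)
{
    Float64 latency = 0;
    UInt32 size = sizeof(latency);
    OSStatus status = AudioUnitGetProperty(unit,
                                           kAudioUnitProperty_Latency,
                                           kAudioUnitScope_Global,
                                           0,
                                           &latency,
                                           &size);
    return (status == noErr) ? latency : 0;
}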

Thanks, this is more or less what I understood.


However, I tried to build a simple case study, but I can't get it to work properly.


Within a simple AUGraph (AVAudioSession category PlayAndRecord) containing only a RemoteIO, I set up a render callback in which, sequentially,

- I play a test sound through the speaker (very short sinusoidal burst),

- I capture sound from microphone and store the data.


If I understood correctly, the latency between the two files should be equal to inputLatency + outputLatency + IOBufferDuration. But I always measure a far greater latency...


I can't put together a clear picture of the timing, with all the delays involved from reading the audio file to writing the captured audio, through the output and input latencies. So maybe my theoretical latency is wrong.


Any ideas?

What delay value are you getting, and what are the system reported latencies? Also, what sample rate on what device, as that may make a difference in how much extra (unreported?) processing is being done by the hardware and driver.

I think you need to add the IOBufferDuration twice; once on the input side and once on the output side:

totalLatency = (inputLatency + IOBufferDuration) + (outputLatency + IOBufferDuration)

This is what I use in a recording app for automatic latency compensation. But I think there might be other, hidden latencies involved…
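
In rough code, that compensation looks something like the sketch below (my own helper; any hidden latencies or safety offsets are not included):

#import <AVFoundation/AVFoundation.h>

// Sketch: estimate roundtrip latency, counting the I/O buffer once per direction.
// Whether the buffer really needs to be counted twice is exactly the open question here.
static NSTimeInterval EstimatedRoundtripLatency(void)
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSTimeInterval inputSide  = session.inputLatency  + session.IOBufferDuration;
    NSTimeInterval outputSide = session.outputLatency + session.IOBufferDuration;
    return inputSide + outputSide;
}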

There are also two safety offsets that may need to be added; e.g., the audio buffers are not supplied or taken "just in time", but seem to be delayed by some fraction of their duration.

How do you get the safety offsets on iOS? I have searched everywhere.

Is this how you would calculate the total roundtrip latency as you described with AVAudioSession?


([AVAudioSession sharedInstance].IOBufferDuration + [AVAudioSession sharedInstance].inputLatency + [AVAudioSession sharedInstance].outputLatency);


My understanding is that both inputLatency and outputLatency account for device and stream latency. I can't find how to get the input and output safety offsets in iOS.


Also, should I add the IOBufferDuration twice for the roundtrip? Once for output and once for input?
