[AVAudioSession setPreferredSampleRate:] use

"Discussion: This method requests a change to the input and output audio sample rate."


I have 2 questions:

  1. In combination with [AVAudioSession sampleRate], is the "sample rate" here referring to the hardware input and output sample rates? Why would they necessarily be coupled?
  2. If I'm using a Remote IO Audio Unit only to write audio samples, do I care about this at all? For example: as long as I'm able to set the stream format of my Audio Unit instance's output bus to whatever I want (say, 44.1kHz), do I care what [AVAudioSession sampleRate] is? (Performance aside.) A rough sketch of the setup I have in mind is below.
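
For concreteness, this is roughly what I mean; just a sketch with illustrative names and flags, not my actual code. The client format goes on the input scope of the Remote IO output element (bus 0), independently of whatever the session reports as the hardware rate:

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

static AudioUnit CreateRemoteIOAt44_1(void)
{
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit ioUnit = NULL;
    AudioComponentInstanceNew(comp, &ioUnit);

    // Client format the app renders in: 44.1 kHz, stereo, Float32, non-interleaved.
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = 44100.0;
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
    clientFormat.mChannelsPerFrame = 2;
    clientFormat.mBitsPerChannel   = 32;
    clientFormat.mBytesPerFrame    = 4;
    clientFormat.mBytesPerPacket   = 4;
    clientFormat.mFramesPerPacket  = 1;

    // Input scope of the output element (bus 0): the format I feed the speaker path.
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0,
                         &clientFormat, sizeof(clientFormat));

    // The hardware rate lives on the session and may be something else entirely (e.g. 48 kHz).
    double hwRate = [AVAudioSession sharedInstance].sampleRate;
    NSLog(@"client format: 44100 Hz, hardware: %.0f Hz", hwRate);

    return ioUnit;
}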

Accepted Reply

1. Yes, it controls both. Currently the session is limited to a single sample rate setting for both input and output.


2. It depends on what you're doing and whether you care about sample rate conversion. If you don't, then it doesn't matter what the device sample rate is vs. your client format sample rate (although running the hardware at 16kHz while you require 44.1kHz would be quite odd, while it's generally fine to run the device at a higher sample rate than your client-side format if required).


If you don't want any rate conversion done, then you would indeed care on some level. There are sometimes other considerations as well. For example, on newer devices the built-in speaker hardware only supports 48kHz, so asking for 44.1kHz will fail in that case; you can't make any assumptions. Another is writing to a file, where yet another rate conversion may happen (say the file format's sample rate is 16kHz while the client format is 44.1kHz and you've set the device format to 48kHz for whatever reason). So, performance aside, you could also think about audio quality, audio data size, and even power consumption.
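
For illustration, a minimal sketch of requesting a rate and then checking what the hardware actually granted (assuming a simple playback category; your session configuration may differ):

#import <AVFoundation/AVFoundation.h>

static void ConfigureSessionRate(void)
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;

    [session setCategory:AVAudioSessionCategoryPlayback error:&error];
    [session setPreferredSampleRate:44100.0 error:&error];   // a request, not a guarantee
    [session setActive:YES error:&error];

    double actualRate = session.sampleRate;                   // what the hardware is really doing
    if (actualRate != 44100.0) {
        // e.g. 48000 on the built-in speaker: a rate converter will sit somewhere
        // between the 44.1 kHz client format and the device.
        NSLog(@"Requested 44.1 kHz, hardware is running at %.0f Hz", actualRate);
    }
}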

Replies


Thank you very much, great answer!


So to recap, a sane strategy would be:

  • Audio files are saved at 44.1kHz
  • The application is programmed to output audio at 44.1kHz
  • I'm able to set the stream format of my Remote IO Audio Unit to 44.1kHz
  • I *try* to set the device's output sample rate to 44.1kHz
    • If this succeeds, we're good: no resampling is needed. Perfecto.
    • If not, *someone* would have had to resample between my audio files and the hardware anyway, so let Apple do that for me automatically. (Rough sketch below.)
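
Something like this, roughly; the helper and parameter names are just for illustration:

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

static void ApplyStrategy(AudioUnit ioUnit, AudioStreamBasicDescription clientFormat44_1)
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;

    // 1. Try for 44.1 kHz at the device.
    [session setPreferredSampleRate:44100.0 error:&error];
    [session setActive:YES error:&error];

    // 2. The client format stays 44.1 kHz either way.
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0,
                         &clientFormat44_1, sizeof(clientFormat44_1));
    AudioUnitInitialize(ioUnit);

    // 3. If the device ended up elsewhere (e.g. 48 kHz), no extra work on my end:
    //    the output unit converts from the client format to the device format.
    if (session.sampleRate != 44100.0) {
        NSLog(@"Device at %.0f Hz; Remote IO will resample from 44.1 kHz.", session.sampleRate);
    }
}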