How to handle different sample rates in AVAudioEngine?

This is on a Mac mini M1 with macOS Monterey.

I am trying to write an audio network using AVAudioEngine rather than AUGraph (which I understand is deprecated in favor of AVAudioEngine). My code works properly with AUGraph.

The input is a microphone with a sample rate of 8 kHz. In the render proc, the data is written to a ring buffer. Debugging shows that the render proc is called every 0.064 seconds and writes 512 samples (8000 × 0.064 = 512).
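For reference, the render proc follows the usual AUGraph-era input pattern, roughly like this (heavily simplified; `InputContext` and `RingBuffer` are stand-ins for my own types, included only so the sketch is self-contained):

import AudioToolbox

// Stand-ins for my own types.
final class RingBuffer {
    func write(_ abl: UnsafeMutablePointer<AudioBufferList>, frames: UInt32) { /* ... */ }
}
final class InputContext {
    var inputUnit: AudioUnit!
    var bufferList: UnsafeMutablePointer<AudioBufferList>!
    let ringBuffer = RingBuffer()
}

// Render proc on the HAL input unit: render the captured samples, then
// push them into the ring buffer (8000 Hz × 0.064 s = 512 frames per call).
let inputRenderProc: AURenderCallback = { inRefCon, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, _ in
    let ctx = Unmanaged<InputContext>.fromOpaque(inRefCon).takeUnretainedValue()
    let status = AudioUnitRender(ctx.inputUnit, ioActionFlags, inTimeStamp,
                                 inBusNumber, inNumberFrames, ctx.bufferList)
    if status == noErr {
        ctx.ringBuffer.write(ctx.bufferList, frames: inNumberFrames)
    }
    return status
}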

The program creates an AVAudioSourceNode. The render block for that node pulls data from the above ring buffer. But debugging shows that it is trying to take 512 samples about every 0.0107 seconds. That works out to 48000 samples per second, which is the output device sample rate. Obviously the ring buffer can't keep up.
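The source node is created more or less like this (simplified; `ringBuffer.read(into:count:)` stands in for my own API, and the real render block handles underruns). I create the node without a format here; the 8 kHz format is specified in the connect call below:

let srcNode = AVAudioSourceNode { (_, _, frameCount, audioBufferList) -> OSStatus in
    let abl = UnsafeMutableAudioBufferListPointer(audioBufferList)
    if let out = abl[0].mData?.assumingMemoryBound(to: Float.self) {
        // Pull frameCount samples from the ring buffer that the input proc fills.
        ringBuffer.read(into: out, count: Int(frameCount))
    }
    return noErr
}
engine.attach(srcNode)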

In the statement connecting the above source node to the AVAudioEngine's mixer node, I specify (or at least I think I do) a sample rate of 8000, but it still seems to be running at 48000:

let inputFormat = AVAudioFormat(
    commonFormat: outputFormat.commonFormat,
    sampleRate: 8000,
    channels: 1,
    interleaved: outputFormat.isInterleaved)!  // failable initializer, force-unwrapped here

engine.connect(srcNode, to: mixerNode, fromBus: 0, toBus: 0, format: inputFormat)

Also, looking at the microphone input in Audio MIDI Setup shows the microphone format as 8000 Hz, 1 channel, 16-bit integer, but when I examine the input format of the audio unit it is reported as 8000 Hz, 1 channel, 32-bit float. The input node is using the HAL. Obviously, somewhere in the internals of the node the samples are being converted from 16-bit ints to 32-bit floats. Is there a way to also have the sample rate changed?
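If there's no built-in way, I assume I could do the rate conversion myself with AVAudioConverter, something like the sketch below (the concrete formats and the resample helper are illustrative assumptions, not my current code):

import AVFoundation

// Illustrative formats: what Audio MIDI Setup reports for the mic, and the
// 48 kHz float format the engine runs at. Both initializers are failable.
let micFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                              sampleRate: 8000, channels: 1, interleaved: true)!
let engineFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                 sampleRate: 48000, channels: 1, interleaved: false)!
let converter = AVAudioConverter(from: micFormat, to: engineFormat)!

func resample(_ inBuffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    let ratio = engineFormat.sampleRate / micFormat.sampleRate
    let capacity = AVAudioFrameCount(Double(inBuffer.frameLength) * ratio)
    guard let outBuffer = AVAudioPCMBuffer(pcmFormat: engineFormat,
                                           frameCapacity: capacity) else { return nil }
    var consumed = false
    var error: NSError?
    let status = converter.convert(to: outBuffer, error: &error) { _, outStatus in
        // Hand the converter the input buffer once, then report no more data.
        if consumed {
            outStatus.pointee = .noDataNow
            return nil
        }
        consumed = true
        outStatus.pointee = .haveData
        return inBuffer
    }
    return (error == nil && status != .error) ? outBuffer : nil
}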

Am I doing this wrong? The HAL unit worked with AUGraph; is there a different node that should be used with AVAudioEngine? I see that AVAudioEngine has an input node, but it seems that if I connect it to the microphone, the input goes straight to the hardware output without going through the mixer node (where I want to mix in other audio sources).
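For clarity, this is the wiring I expected to be able to use (a sketch; whether the engine then resamples the 8 kHz input at the mixer is exactly what I'm unsure about):

// Expected wiring: mic input -> main mixer -> hardware output,
// with other sources mixed in on other mixer buses.
let engine = AVAudioEngine()
let micFormat = engine.inputNode.outputFormat(forBus: 0)  // 8000 Hz, Float32 here
engine.connect(engine.inputNode, to: engine.mainMixerNode, format: micFormat)
// ...attach and connect other source nodes to engine.mainMixerNode...
do {
    try engine.start()
} catch {
    print("Engine failed to start: \(error)")
}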

The original AUGraph code was modeled on the code in "Learning Core Audio" by Adamson & Avila, which, although old (it pre-dates Swift and AVAudioEngine), is the only detailed reference on Core Audio I have been able to find. Is there a newer reference?

Thanks, Mark

Update: I've done additional testing, and it appears my conclusion about the sample rate was wrong. When I change the format to an 8000 Hz sample rate, the srcNode render block is still called every 0.0106 seconds, but it now requests only 85 or 86 samples, which corresponds to the lower sample rate (8000 × 0.0106 ≈ 85). So I think the ring buffer is working.
