I'm working on an application that outputs audio via an AURenderCallback, and I'm trying to fully test a situation (similar to https://github.com/libpd/libpd/issues/177) that has so far only come up in the simulator:
What happens is that rather than getting a consistent, nice power-of-two inNumberFrames, the callback gets called with a varying number of frames (e.g. toggling between 557 and 558 on successive calls) when running on the simulator. Because I am processing the audio through an algorithm that works in power-of-two blocks, I have had to add some buffering to handle this situation.
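For reference, the buffering I added is along these lines: accumulate the variable-sized callback buffers in a FIFO and hand fixed power-of-two blocks to the algorithm once enough frames are available. This is only an illustrative sketch (the names `FrameFifo`, `fifo_push`, and `fifo_pop_block` are hypothetical, and it assumes mono float samples), not my exact code:

```c
#include <stddef.h>
#include <string.h>

#define FIFO_CAP 8192  /* must exceed block size + largest callback size */

typedef struct {
    float data[FIFO_CAP];
    size_t count;              /* frames currently buffered */
} FrameFifo;

/* Append n frames delivered by a render callback. */
static void fifo_push(FrameFifo *f, const float *in, size_t n) {
    memcpy(f->data + f->count, in, n * sizeof(float));
    f->count += n;
}

/* Pop one fixed-size block if enough frames are buffered; returns 1 on success. */
static int fifo_pop_block(FrameFifo *f, float *out, size_t block) {
    if (f->count < block)
        return 0;
    memcpy(out, f->data, block * sizeof(float));
    memmove(f->data, f->data + block, (f->count - block) * sizeof(float));
    f->count -= block;
    return 1;
}
```

In the real callback the output side has the same mismatch, of course: the algorithm's power-of-two output blocks have to be drained back out in whatever size Core Audio asks for, which adds up to one block of latency.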
My code seems to be working well, but I have been unable to reproduce the issue on an actual device. I want to confirm it works well there too, including in conjunction with some external audio devices that aren't available to the simulator. In the libpd thread linked above, it seemed the "culprit" was sample rate conversion, which could be triggered by using a Bluetooth headset or an iPhone 6S with a Lightning headphone adapter. However, we have tried those scenarios and the device keeps giving us a nice 1024 frames on each callback.
Has something changed in the audio stack in a release since July of last year? How can I trigger the behavior we see in the simulator (odd and variable-sized Core Audio output buffers) on an iOS hardware device?
In the libpd thread someone claimed:
…this is the scenario on the iPhone 6s: Instead of render callbacks with a buffer size of 256 (no SRC), you get four callbacks at 235 followed by a fifth at 236, in a repeated fashion. This is the effect of the [Sample Rate Conversion] between 48kHz/44.1kHz: (44100/48000) * 256 = 235.2
However, I had a team member test on a 6S, and the logs they sent back showed no indication of this happening. Is there an extra factor that must be present before the odd-sized callbacks kick in?