Proper way to pass samples into/out of RemoteIO callbacks?

According to the Audio Unit documentation (and 2016 WWDC Session 507), inside the real-time input or render callbacks:

- You can't block.
- You can't use mutexes.
- You can't access the file system or sockets.
- You can't even call dispatch_async, because it allocates continuations.

These rules matter especially for real-time callback buffers of less than about 3 ms.

It seems that both the current Swift language definition and the ARM memory ordering model allow data passed between threads (and processor cores) to become visible out of program order. That means an audio sample buffer could have holes when accessed from the audio thread: the last sample written might be visible while the second-to-last write is not yet visible and still reads as garbage, depending on the code.
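To make the failure mode (and the ordering discipline that prevents it) concrete, here is a minimal C11 sketch; the names are mine, not from any Apple API. The writer fills the payload first and then publishes a count with a release store; the reader's matching acquire load guarantees it never observes the count without all of the samples. Drop the release/acquire pair and ARM is allowed to make the count visible first:

```c
#include <stdatomic.h>

static float       samples[64];
static _Atomic int count;   // stays 0 until the samples are published

// Writer thread: fill the payload first, then publish.
void produce(void) {
    for (int i = 0; i < 64; i++)
        samples[i] = 0.5f;
    // Release store: none of the sample stores above may be reordered past it.
    atomic_store_explicit(&count, 64, memory_order_release);
}

// Reader thread: the acquire load pairs with the release store, so a
// nonzero count guarantees every published sample is visible. Without the
// pairing, the reader could see count == 64 while some slots still hold
// garbage: exactly the "holes" described above.
int consume(float *dst) {
    int n = atomic_load_explicit(&count, memory_order_acquire);
    for (int i = 0; i < n; i++)
        dst[i] = samples[i];
    return n;
}
```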

The RemoteIO Audio Unit seems pretty useless unless there is a way to get data in (from the UI, for synthesis, etc.) or out (to the UI, for visualization or analysis, etc.).

So: is there a documented way to get samples into or out of an Audio Unit callback with absolute safety, without even the theoretical possibility of stale data in the sample array (or in other shared data structures)? One guaranteed to work for code written in Swift and on ARM processors (i.e. all iOS devices), without breaking the real-time rules above?

Currently I'm jumping out of the Swift Audio Unit v3 callback block into some C code that calls OSMemoryBarrier() (from libkern/OSAtomic.h) around every important parameter-struct element access, with matching barriers on the UI end. That works, but it seems expensive and inelegant just for passing audio samples into or out of lock-free circular buffers, and it's the only non-Swift code in my newest app.
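For reference, here is roughly the shape that C shim could take using C11 `<stdatomic.h>` acquire/release operations, with one ordered store per push or pop instead of a full OSMemoryBarrier() around every element access. All names are illustrative rather than from any Apple header, and the sketch assumes exactly one producer thread and one consumer thread:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RB_CAPACITY 8192u   // must be a power of two for the index mask

typedef struct {
    float            buf[RB_CAPACITY];
    _Atomic uint32_t head;  // total samples written (producer-owned)
    _Atomic uint32_t tail;  // total samples read (consumer-owned)
} RingBuffer;

// Producer side (e.g. the UI thread feeding a synth). Never blocks or
// allocates; returns false if there isn't room for all n samples.
static bool rb_write(RingBuffer *rb, const float *src, uint32_t n) {
    uint32_t head = atomic_load_explicit(&rb->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&rb->tail, memory_order_acquire);
    if (RB_CAPACITY - (head - tail) < n)
        return false;                       // full; caller can retry later
    for (uint32_t i = 0; i < n; i++)
        rb->buf[(head + i) & (RB_CAPACITY - 1)] = src[i];
    // Publish: the sample stores above become visible before the new head.
    atomic_store_explicit(&rb->head, head + n, memory_order_release);
    return true;
}

// Consumer side (the audio render callback). Never blocks or allocates;
// returns false if n samples aren't available yet (caller outputs silence).
static bool rb_read(RingBuffer *rb, float *dst, uint32_t n) {
    uint32_t tail = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&rb->head, memory_order_acquire);
    if (head - tail < n)
        return false;
    for (uint32_t i = 0; i < n; i++)
        dst[i] = rb->buf[(tail + i) & (RB_CAPACITY - 1)];
    // Release the slots so the producer may safely overwrite them.
    atomic_store_explicit(&rb->tail, tail + n, memory_order_release);
    return true;
}
```

That confines the ordering to one acquire/release pair per transfer instead of a barrier per element, but it still leaves a C file in an otherwise all-Swift app.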

Better solutions?

Replies

Any updates on this? I'm also wondering how to go about using the AUAudioUnit v3 API in Swift. All the callbacks are nicely exposed as objects, but what is the recommended way of using them? If the answer is to fall back to C, then why bother exposing these APIs in Swift at all?