(iOS) Issues with 'aumi' Audio Units and AVAudioEngine

Hi all,


I just filed bug report 39600781, regarding 'aumi' (kAudioUnitType_MIDIProcessor) audio units and AVAudioEngine. In short: on iOS, when 'aumi' audio units are attached to an AVAudioEngine, the engine never calls their renderBlock / internalRenderBlock, rendering them (pun 😝) useless.


I am writing this post because I know that 'aumi' audio units are undocumented on iOS, so some quirks are to be expected 🙂. But since some developers are already implementing them (I am beta testing their apps), I think it is important to raise awareness that 'aumi' audio units do not work with hosts that use AVAudioEngine. Hosts using AUGraph do work, because they explicitly call the renderBlock of their audio units. But since AUGraph is going to be deprecated... well, if "aumis" are to be officially introduced in the next iOS version, it is important to raise awareness about this particular issue.


That's all. Thanks for reading 😉.

Replies

Looking at the example hosting code, I found that attaching a dormant player node to each aumi would cause its render block to be called. That worked until recently; now this technique no longer works either.


Did you get any resolution with the bug report at all? Is there a workaround?

The bug report was closed, but I still have the same problem (renderBlock / internalRenderBlock is never called). And unfortunately, I know of no workaround for this issue 😟.

This is still a problem.


Has anyone found a good workaround for this by now?

My current workaround is to create a "fake" music device node (a simple AUAudioUnit subclass that just generates silence), connect that to the main mixer, and then connect each MIDI processor unit to it using AVAudioEngine's connectMIDI(_:to:format:block:). In the block parameter I can then supply my AUMIDIOutputEventBlock to receive the MIDI data from the MIDI processor.

This seems to work (though I have no idea about its stability yet, I only got it working around 30 minutes ago :-) ), but it is kind of ugly, as I need to keep an otherwise unneeded node active, one per MIDI processor I use (because connectMIDI is only many-to-one).
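Roughly, the wiring looks like this. This is only a simplified sketch, not my exact code: a stock AVAudioUnitSampler with no instrument loaded stands in for my silent AUAudioUnit subclass, and the component description values for the 'aumi' unit are placeholders.

```swift
import AVFoundation
import AudioToolbox

let engine = AVAudioEngine()

// Dormant music-device node whose only job is to sit in the render graph.
let silentAnchor = AVAudioUnitSampler()
engine.attach(silentAnchor)
engine.connect(silentAnchor, to: engine.mainMixerNode, format: nil)

// Placeholder description of the 'aumi' (MIDI processor) unit to host.
let aumiDescription = AudioComponentDescription(
    componentType: kAudioUnitType_MIDIProcessor,
    componentSubType: 0,          // fill in the real subtype
    componentManufacturer: 0,     // fill in the real manufacturer
    componentFlags: 0,
    componentFlagsMask: 0
)

AVAudioUnit.instantiate(with: aumiDescription, options: []) { avUnit, error in
    guard let processor = avUnit else {
        print("Could not instantiate MIDI processor: \(String(describing: error))")
        return
    }
    engine.attach(processor)

    // Connect the processor's MIDI output to the dormant node. The block
    // receives the MIDI bytes the processor emits on each render cycle.
    // (Printing here is not real-time safe; do real work off the render thread.)
    engine.connectMIDI(processor, to: silentAnchor, format: nil) { sampleTime, cable, length, bytes in
        let midi = [UInt8](UnsafeBufferPointer(start: bytes, count: length))
        print("MIDI at \(sampleTime), cable \(cable): \(midi)")
        return noErr
    }

    do {
        try engine.start()
    } catch {
        print("Engine failed to start: \(error)")
    }
}
```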

So: Does anyone have a better solution???
  • Hi Snarp,

    Is it possible for you to send the code? I tried your solution, but it doesn't work, and I'm not sure at which step I'm doing something wrong.

    Thanks,

    Ali


Any update on this issue?

In my app (KeyStage), I'm manually calling the render blocks of all MIDI processor plug-ins from an AURenderObserverBlock, while letting AVAudioEngine handle all instrument and effect audio units. But I'm having lots of issues with this approach: EXC_BAD_ACCESS errors out of the blue, high DSP load for no apparent reason, etc.
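Roughly, the approach looks like this. It is a simplified sketch with hypothetical names, not my actual KeyStage code; it assumes the processor's render resources are already allocated and that a scratch buffer with sufficient frame capacity has been preallocated.

```swift
import AVFoundation
import AudioToolbox

// hostAU:       the AUAudioUnit of a node AVAudioEngine does render
//               (e.g. engine.mainMixerNode.auAudioUnit)
// processorAU:  the AUAudioUnit of an attached 'aumi' unit whose render
//               resources have already been allocated
// scratchBuffer: a preallocated AVAudioPCMBuffer whose frameCapacity is at
//               least the engine's maximum frames per render cycle; it only
//               satisfies the renderBlock's buffer-list parameter
func installManualRender(hostAU: AUAudioUnit,
                         processorAU: AUAudioUnit,
                         scratchBuffer: AVAudioPCMBuffer) -> Int {
    let processorRender = processorAU.renderBlock
    return hostAU.token(byAddingRenderObserver: { _, timestamp, frameCount, _ in
        var flags = AudioUnitRenderActionFlags()
        scratchBuffer.frameLength = frameCount
        // Drive the MIDI processor once per host render cycle.
        _ = processorRender(&flags,
                            timestamp,
                            frameCount,
                            0,                                      // output bus
                            scratchBuffer.mutableAudioBufferList,
                            nil)                                    // no input pull block
    })
}
```

The returned token can later be passed to hostAU.removeRenderObserver(_:) to stop the manual rendering.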

I can't believe that Apple keeps promoting the AVAudioEngine framework and still hasn't resolved this major issue.

A MIDI processor node in AVAudioEngine needs an active output (MIDI) connection, rather than merely the usual audio connection used for other generator or effect nodes. This may be the issue for many developers here. It also seems logical: it does not make sense to call the render block of a MIDI processor if the unit is not actively connected to something via MIDI, as it merely generates MIDI events. If it isn't connected this way, it does not need to be called in any render cycle. ^^

So connecting it with only connect(...) probably does not work; it must be connected with connectMIDI(...) to something that is actually part of an active audio signal flow.

And more: I found by testing (or rather discovered by accident) that a MIDI processor node does not even need any real audio connection (via connect()); it merely needs to be attached to the engine. So in fact, the output bus of that unit does not need to be connected to the engine at all!

But as soon as you make a MIDI connection to an active receiver unit that is currently rendering, the processor will start getting called as part of that render cycle. I assume this is just the way it is intended to work…

And I also assume that it is internally attached to the MIDI destination node's render cycle and busses (i.e. to share the correct sample rate, which is required for correct MIDI event processing).
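If that reading is right, the minimal wiring would be something like this (hypothetical names, just a sketch of the observation above): the 'aumi' node is only attached, never connect()-ed, and gets pulled into the render cycle solely through its connectMIDI() link to a node that actually renders.

```swift
import AVFoundation

// Hypothetical wiring, based on the observations above.
func wireMIDIProcessor(_ processor: AVAudioUnit,        // the 'aumi' unit
                       to sampler: AVAudioUnitSampler,  // a destination that renders audio
                       in engine: AVAudioEngine) {
    engine.attach(processor)   // attached only; no connect(_:to:format:) for its output bus
    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)  // sampler is in the audio path
    // The MIDI-only link is what should get the processor's render block called.
    engine.connectMIDI(processor, to: sampler, format: nil, block: nil)
}
```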

By the way, the addition of multiple MIDI connection points to AVAudioEngine does not make much sense either. In practice you won't do it that way; you want multiple MIDI destinations with sensible filtering and routing mechanisms. And therefore this multi-connection feature is basically ill-conceived. However, it does make sense where single connections are required.

.

Addenda:

What actually bothers me much more is the general inconsistency of the entire AVAudioEngine implementation after all these years, and the many missing but necessary API functions inside the framework.

In the case of the aforementioned MIDI processors:

Why do I get a hard crash if I try to connect a music effect unit (which understands MIDI messages perfectly well) to a MIDI processor unit? This is illogical and also described very differently in the API documentation. It should and must work that way, as music effects expect MIDI messages in order to work; otherwise they are useless.

This, and the entire s**tload of crash reports with their endless, ridiculously annoying, cryptic error messages without any real meaning, drives me crazy.

Why, for instance, is there no function to query the currently connected items of an AVAudioNode instance, and similar very basic things required for proper programming? The whole of AVFoundation is a highly inconsistent, opaque puzzle game that unfortunately does not allow you to build complex, professional-grade audio apps, on any of Apple's platforms, by the way. If you want to do so, you probably have to build on something completely different, from the ground up.

As soon as an AV project becomes more complex than connecting a simple audio player or sampler with some additional effects for playback or the like, it turns into a completely dysfunctional nightmare of randomly acting audio black boxes led by a conductor gone crazy, giving the impression that all of this was merely made as a joke.

The opaqueness and the limitations of this framework (like many other Apple frameworks, too) are nearly unacceptable. But an alternative framework from Apple for audio apps in Swift is simply not available, not planned, and this one is not marked deprecated either.

All of this is basically a waste of precious development time, as I unfortunately had to discover lately…

.