I created a multitimbral sampler based on 16 instances of AVAudioUnitSampler (one per MIDI channel/part).
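For reference, here's roughly what that setup looks like (a simplified sketch, not my exact code; loading an .aupreset per part is omitted):

```swift
import AVFoundation

// Sketch: 16 samplers, one per MIDI channel/part, all attached to one
// engine and connected to the main mixer.
func makeMultitimbralEngine() throws -> (AVAudioEngine, [AVAudioUnitSampler]) {
    let engine = AVAudioEngine()
    let samplers = (0..<16).map { _ in AVAudioUnitSampler() }

    for sampler in samplers {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
    }

    try engine.start()
    return (engine, samplers)
}

// Incoming MIDI on channel n is then forwarded to samplers[n], e.g.:
// samplers[0].startNote(60, withVelocity: 100, onChannel: 0)
```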
It plays fine when receiving MIDI messages (or when using the application's virtual keyboard) on a single part.
However, I ran into problems when playing a MIDI file with AVAudioSequencer (assigning each AVMusicTrack's destination to the corresponding AVAudioUnitSampler instance). The file uses 3 parts: one with a pad sound (samples up to 30 s long, 100 MB total for the aupreset, many simultaneous notes, up to 6, with many sustain messages), one with a bass sound (a single note at a time, 10 MB aupreset), and one with a lead sound (also one note at a time).
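Roughly, the sequencer side looks like this (again a simplified sketch; it assumes the track order matches the part order, and midiFileURL stands in for the actual file):

```swift
import AVFoundation

// Sketch: load the MIDI file and route each track to "its" sampler.
// Mapping tracks to parts by index is a simplification of my real code.
func playMIDIFile(at midiFileURL: URL,
                  engine: AVAudioEngine,
                  samplers: [AVAudioUnitSampler]) throws {
    let sequencer = AVAudioSequencer(audioEngine: engine)
    try sequencer.load(from: midiFileURL, options: [])

    for (index, track) in sequencer.tracks.enumerated() where index < samplers.count {
        track.destinationAudioUnit = samplers[index]
    }

    sequencer.prepareToPlay()
    try sequencer.start()
}
```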
Some notes are cut off before their end or do not play at all (mainly on the third part), as if there were no resources left. I'm testing in the Simulator and can no longer try on my real iPad (it won't boot anymore and will need repair or replacement).
The Xcode monitoring tab shows only 2 to 3 percent CPU usage (and 60 MB of memory used). However, the Simulator runs on an old Mac (a mid-2010 Mac mini, Core 2 Duo at 2.4 GHz).
Is AVAudioUnitSampler suited to being used this way, or should I subclass AVAudioUnitMIDIInstrument (creating an audio unit with the kAudioUnitSubType_MIDISynth subtype, as detailed in Gene De Lisa's blog post, and loading a SoundFont bank using kMusicDevicePropertySoundBankURL)? In that case, would the only way to change a part's instrument be to send a program change to the AVAudioUnitMIDIInstrument subclass? I don't know how to do that.
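From what I understand of that approach, it would look something like the sketch below, but I'm not sure this is right (the class name and soundFontURL are placeholders, and error handling is omitted):

```swift
import AVFoundation
import AudioToolbox

// Sketch of the kAudioUnitSubType_MIDISynth approach described in the
// blog post: wrap Apple's MIDI synth in an AVAudioUnitMIDIInstrument
// subclass and point it at a SoundFont.
class MIDISynthInstrument: AVAudioUnitMIDIInstrument {
    init() {
        var description = AudioComponentDescription()
        description.componentType = kAudioUnitType_MusicDevice
        description.componentSubType = kAudioUnitSubType_MIDISynth
        description.componentManufacturer = kAudioUnitManufacturer_Apple
        super.init(audioComponentDescription: description)
    }

    // Load a SoundFont bank into the underlying audio unit.
    func loadSoundBank(at url: URL) -> OSStatus {
        var bankURL = url as CFURL
        return AudioUnitSetProperty(audioUnit,
                                    AudioUnitPropertyID(kMusicDevicePropertySoundBankURL),
                                    AudioUnitScope(kAudioUnitScope_Global),
                                    0,
                                    &bankURL,
                                    UInt32(MemoryLayout<CFURL>.size))
    }
}

// Changing a part's instrument would then presumably be a plain MIDI
// program change on that part's channel, e.g.:
// let synth = MIDISynthInstrument()
// engine.attach(synth)
// engine.connect(synth, to: engine.mainMixerNode, format: nil)
// _ = synth.loadSoundBank(at: soundFontURL)
// synth.sendProgramChange(12, onChannel: 2)
```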
Or should I use kAudioUnitSubType_DLSSynth instead?