Audio renderer fails to render CMSampleBuffer

I am trying to render audio using AVSampleBufferAudioRenderer, but no sound comes from my speakers and the following log message repeats:


[AQ] 405: SSP::Render: CopySlice returned 1


I am creating a CMSampleBuffer from an AudioBufferList. This is the relevant code:


var sampleBuffer: CMSampleBuffer!
try runDarwin(CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                                   dataBuffer: nil,
                                   dataReady: false,
                                   makeDataReadyCallback: nil,
                                   refcon: nil,
                                   formatDescription: formatDescription,
                                   sampleCount: sampleCount,
                                   sampleTimingEntryCount: 1,
                                   sampleTimingArray: &timingInfo,
                                   sampleSizeEntryCount: sampleSizeEntryCount,
                                   sampleSizeArray: sampleSizeArray,
                                   sampleBufferOut: &sampleBuffer))

try runDarwin(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
                                                             blockBufferAllocator: kCFAllocatorDefault,
                                                             blockBufferMemoryAllocator: kCFAllocatorDefault,
                                                             flags: 0,
                                                             bufferList: audioBufferList.unsafePointer))
try runDarwin(CMSampleBufferSetDataReady(sampleBuffer))
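(`runDarwin` is a small helper that throws when a Core Media call returns a nonzero OSStatus; a minimal sketch of such a helper — the error type name is illustrative, not from my actual project:)

```swift
// Stand-in so the sketch is self-contained; on Apple platforms OSStatus
// comes from the SDK and this typealias should be omitted.
typealias OSStatus = Int32

// Illustrative error type wrapping the failed status code.
struct DarwinError: Error {
    let status: OSStatus
}

// Throws when a Core Media / Core Audio style call returns a nonzero status.
func runDarwin(_ status: OSStatus) throws {
    guard status == 0 else { throw DarwinError(status: status) }  // 0 == noErr
}
```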


I am pretty confident that my audio format description is correct because CMSampleBufferSetDataBufferFromAudioBufferList, which performs a laundry list of validations, returns no error.


I tried to reverse-engineer the CopySlice function, but I’m lost without the parameter names.


int ScheduledSlicePlayer::CopySlice(
   long long,
   ScheduledSlicePlayer::XScheduledAudioSlice*,
   int,
   AudioBufferList&,
   int,
   int,
   bool
)


Does anyone have any ideas on what’s wrong? For the Apple engineers reading this, can you tell me the parameter names of the CopySlice function so that I can more easily reverse-engineer the function to see what the problem is?

If your app does not stop on the message `[AQ] 405: SSP::Render: CopySlice returned 1`, it may just be a debug message inside the framework; you can ignore it and look for other reasons why no sound is coming out.


I recommend you first check whether you really built your `audioBufferList` properly. Can you show how you made it?

I have created a sample project that exhibits the problem. The audio plays if `isInterleaved == true || channelCount == 1`, but it doesn’t play otherwise. Thanks for helping!


---


I am pretty much at a dead end with reverse engineering. For posterity’s sake, here is what I found when `isInterleaved == false && channelCount == 2`:


  • ScheduledSlicePlayer::CopySlice(long long, ScheduledSlicePlayer::XScheduledAudioSlice* arg1, int, AudioBufferList& arg3, int, int, bool)
    • Copies data from the AudioBufferList pointed to by arg3 into the AudioBufferList nested in arg1.
    • Fails because the audio buffer count of the source AudioBufferList (2) does not equal the audio buffer count of the destination AudioBufferList (1).
  • AudioQueueObject::ConvertOutput(AudioQueueObject::ConverterConnection*, AQBufferCommand* arg1, int, bool)
    • Initializes the destination AudioBufferList with 1 AudioBuffer when `*(int32_t *)(arg1 + 0x8) == 1`. arg1 + 0x8 is likely an enum; its type is AQCommand::ECommand.
  • AQBufferCommand::AQBufferCommand(AQCommand::ECommand arg0, AQBuffer*, unsigned int, AudioStreamPacketDescription const*, XAudioTimeStamp const&)
    • Sets `*(int32_t *)(this + 0x8) = arg0`.
  • AudioQueueObject::EnqueueBuffer(AudioQueueBuffer*, unsigned int, AudioStreamPacketDescription const&, int, int, unsigned int, AudioQueueParameterEvent*, XAudioTimeStamp const&, XAudioTimeStamp*)
    • Calls AQBufferCommand::AQBufferCommand with arg0 = 1.
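To restate the failure mode in plain terms, here is a simplified pure-Swift model of the logic above (my own sketch, not the actual framework code):

```swift
// Simplified model of the failure described above — not actual framework code.
// A planar (non-interleaved) AudioBufferList carries one AudioBuffer per
// channel; an interleaved one carries a single AudioBuffer for all channels.
func bufferCount(channelCount: Int, isInterleaved: Bool) -> Int {
    isInterleaved ? 1 : channelCount
}

// CopySlice's apparent precondition: source and destination buffer counts match.
func copySliceWouldSucceed(sourceBuffers: Int, destinationBuffers: Int) -> Bool {
    sourceBuffers == destinationBuffers
}

let source = bufferCount(channelCount: 2, isInterleaved: false)  // planar stereo: 2
let destination = 1  // ConvertOutput initializes the destination with 1 buffer
// The counts differ, matching the observed "CopySlice returned 1" failure.
```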

For any Apple engineers who are reading this, I’ve filed this bug as rdar://45068930.

(Moving to a reply rather than a comment to be more easily found.)

I believe that interleaving is the key. I see the same symptoms on planar formats, but interleaved formats work, even for 2 channel audio. I expect this is an undocumented requirement (and an Apple bug). Tested on iOS 16.1.
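If planar input really is the culprit, one workaround is to interleave the channel data yourself before building the CMSampleBuffer, so the renderer receives a single buffer. A sketch of the interleaving step in pure Swift (the function name is my own):

```swift
// Hypothetical workaround: interleave planar channel data frame-by-frame so the
// resulting AudioBufferList needs only a single AudioBuffer. Assumes all
// channels contain the same number of frames.
func interleave<T>(planarChannels: [[T]]) -> [T] {
    guard let frameCount = planarChannels.first?.count else { return [] }
    var interleaved: [T] = []
    interleaved.reserveCapacity(frameCount * planarChannels.count)
    for frame in 0..<frameCount {
        for channel in planarChannels {
            interleaved.append(channel[frame])  // L, R, L, R, … for stereo
        }
    }
    return interleaved
}

// Stereo example: two planar channels become one interleaved stream.
let left: [Float] = [1, 2, 3]
let right: [Float] = [10, 20, 30]
let mixed = interleave(planarChannels: [left, right])  // [1, 10, 2, 20, 3, 30]
```

If you go this route, the format description has to match: drop kAudioFormatFlagIsNonInterleaved from the AudioStreamBasicDescription’s mFormatFlags, and remember that for interleaved audio mBytesPerFrame covers all channels of one frame.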
