CMSampleBufferSetDataBufferFromAudioBufferList returning -12731

I am trying to take a video file, read it in using AVAssetReader, and pass the audio off to Core Audio for processing (adding effects and such) before saving it back out to disk using AVAssetWriter. I would like to point out that if I set the componentSubType on the AudioComponentDescription of my output node to RemoteIO, things play correctly through the speakers. This makes me confident that my AUGraph is properly set up, as I can hear things working. I am setting the subType to GenericOutput, though, so I can do the rendering myself and get back the adjusted audio.
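For reference, the only thing I change between the two behaviors is the componentSubType on the output node's description. A minimal sketch of what I mean (the variable name is illustrative):

#import <AudioToolbox/AudioToolbox.h>

AudioComponentDescription outputDescription = {0};
outputDescription.componentType         = kAudioUnitType_Output;
// kAudioUnitSubType_RemoteIO      -> audio plays correctly through the speakers
// kAudioUnitSubType_GenericOutput -> I drive the rendering manually instead
outputDescription.componentSubType      = kAudioUnitSubType_GenericOutput;
outputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;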

I am reading in the audio and passing each CMSampleBufferRef off to copyBuffer. This puts the audio into a circular buffer that will be read from later.


- (void)copyBuffer:(CMSampleBufferRef)buf {
    if (_readyForMoreBytes == NO)
    {
        return;
    }

    AudioBufferList abl;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);

    UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf);
    BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size);

    if (!bytesCopied){
        // The circular buffer is full; stash this frame in the rescue buffer below.
        _readyForMoreBytes = NO;

        if (size > kRescueBufferSize){
            NSLog(@"Unable to allocate enought space for rescue buffer, dropping audio frame");
        } else {
            if (rescueBuffer == nil) {
                rescueBuffer = malloc(kRescueBufferSize);
            }

            rescueBufferSize = size;
            memcpy(rescueBuffer, abl.mBuffers[0].mData, size);
        }
    }

    CFRelease(blockBuffer);
    if (!self.hasBuffer && bytesCopied > 0)
    {
        self.hasBuffer = YES;
    }
}


Next I call processOutput. This does a manual render on the outputUnit. When AudioUnitRender is called, it invokes the playbackCallback below, which is what is hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList passed in. Like I said before, if the output is set to RemoteIO, this causes the audio to be played correctly through the speakers. When AudioUnitRender finishes, it returns noErr and the bufferList object contains valid data. When I call CMSampleBufferSetDataBufferFromAudioBufferList, though, I get kCMSampleBufferError_RequiredParameterMissing (-12731).


-(CMSampleBufferRef)processOutput
{
    if(self.offline == NO)
    {
        return NULL;
    }

    AudioUnitRenderActionFlags flags = 0;
    AudioTimeStamp inTimeStamp;
    memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));
    inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    UInt32 busNumber = 0;

    UInt32 numberFrames = 512;
    inTimeStamp.mSampleTime = 0;
    UInt32 channelCount = 2;

    AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));
    bufferList->mNumberBuffers = channelCount;
    for (int j=0; j<channelCount; j++)
    {
        AudioBuffer buffer = {0};
        buffer.mNumberChannels = 1;
        buffer.mDataByteSize = numberFrames*sizeof(SInt32);
        buffer.mData = calloc(numberFrames,sizeof(SInt32));

        bufferList->mBuffers[j] = buffer;

    }
    CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit");

    CMSampleBufferRef sampleBufferRef = NULL;
    CMFormatDescriptionRef format = NULL;
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };
    AudioStreamBasicDescription audioFormat = self.audioFormat;
    CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate");
    CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate");
    CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList");

    return sampleBufferRef;
}



static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    int numberOfChannels = ioData->mBuffers[0].mNumberChannels;
    SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData;

    // Zero the output buffer first so silence plays if no data is available.
    memset(outSample, 0, ioData->mBuffers[0].mDataByteSize);

    MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon;

    if (p.hasBuffer){
        int32_t availableBytes;
        SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes);

        int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels;

        int bytesToRead = MIN(availableBytes, requestedBytesSize);
        memcpy(outSample, bufferTail, bytesToRead);
        TPCircularBufferConsume([p getBuffer], bytesToRead);

        if (availableBytes <= requestedBytesSize*2){
            [p setReadyForMoreBytes];
        }

        if (availableBytes <= requestedBytesSize) {
            p.hasBuffer = NO;
        }  
    }
    return noErr;
}


The CMSampleBufferRef I pass in looks valid (below is a dump of the object from the debugger):


CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180
  invalid = NO
  dataReady = NO
  makeDataReadyCallback = 0x0
  makeDataReadyRefcon = 0x0
  formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> {
  mediaType:'soun'
  mediaSubType:'lpcm'
  mediaSpecific: {
  ASBD: {
  mSampleRate: 44100.000000
  mFormatID: 'lpcm'
  mFormatFlags: 0xc2c
  mBytesPerPacket: 2
  mFramesPerPacket: 1
  mBytesPerFrame: 2
  mChannelsPerFrame: 1
  mBitsPerChannel: 16 }
  cookie: {(null)}
  ACL: {(null)}
  }
  extensions: {(null)}
}
  sbufToTrackReadiness = 0x0
  numSamples = 512
  sampleTimingArray[1] = {
  {PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}},
  }
  dataBuffer = 0x0


The buffer list looks like this:


Printing description of bufferList:
(AudioBufferList *) bufferList = 0x00007f87d280b0a0
Printing description of bufferList->mNumberBuffers:
(UInt32) mNumberBuffers = 2
Printing description of bufferList->mBuffers:
(AudioBuffer [1]) mBuffers = {
  [0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00)
}



Really at a loss here, hoping someone can help. Thanks,


In case it matters, I am debugging this in the iOS 8.3 simulator, and the audio is coming from an MP4 that I shot on my iPhone 6 and then saved to my laptop.

Accepted Reply

The kCMSampleBufferError_RequiredParameterMissing error (-12731), when returned from CMSampleBufferSetDataBufferFromAudioBufferList, can be quite perplexing. Why? Because this one generic error is returned for a number of different "sanity check" failures and, much like the historic paramErr (-50), leaves much to the imagination.

CMSampleBufferSetDataBufferFromAudioBufferList is described in CMSampleBuffer.h as follows:

"Creates a CMBlockBuffer containing a copy of the data from the AudioBufferList, and sets that as the CMSampleBuffer's data buffer. The resulting buffer(s) in the sample buffer will be 16-byte-aligned if kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment is passed in."

OSStatus CMSampleBufferSetDataBufferFromAudioBufferList(
  CMSampleBufferRef sbuf, /*! @param sbuf CMSampleBuffer being modified. */
  CFAllocatorRef bbufStructAllocator, /*! @param bbufStructAllocator Allocator to use when creating the CMBlockBuffer structure. */
  CFAllocatorRef bbufMemoryAllocator, /*! @param bbufMemoryAllocator Allocator to use for memory block held by the CMBlockBuffer. */
  uint32_t flags, /*! @param flags Flags controlling operation. */
  const AudioBufferList *bufferList) /*! @param bufferList Buffer list whose data will be copied into the new CMBlockBuffer. */


OK, that's pretty simple, right? We need a CMSampleBufferRef, which we can make using CMSampleBufferCreate, and an AudioBufferList. Let's assume we have an ABL that audio data was rendered into, and that this part is all working fine.

Where can we go wrong?

Before we list all the different cases where a -12731 can be returned from this API: by far the number one reason is that the numSamples parameter passed to CMSampleBufferCreate doesn't jibe with the AudioBuffer's buffer size divided by the audio format's bytes-per-frame value.

For example, let's assume a mono PCM 32-bit float audio format rendering 512 frames. The numSamples passed to CMSampleBufferCreate will be 512, along with an ASBD (as a CMAudioFormatDescriptionRef) that looks something like this:

Printing description of outBufASBD:
(AudioStreamBasicDescription) outBufASBD = {
  mSampleRate = 44100
  mFormatID = 1819304813 'lpcm'
  mFormatFlags = 41
  mBytesPerPacket = 4
  mFramesPerPacket = 1
  mBytesPerFrame = 4
  mChannelsPerFrame = 1
  mBitsPerChannel = 32
  mReserved = 0
}


Let's take a look at the ABL:


Printing description of self->outputBufferList->mBufferList->mBuffers:
(AudioBuffer [1]) mBuffers = {
  [0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x0000000105006200)
}


mDataByteSize is 2048

mBytesPerFrame is 4

2048 / 4 == 512 == numSamples passed to CMSampleBufferCreate == we're cool!
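In code, that consistency check is simple integer arithmetic. A minimal sketch, using the names from the dumps above (abl is my name for the rendered AudioBufferList):

#include <assert.h>

// numSamples handed to CMSampleBufferCreate must equal the frame count
// actually sitting in the ABL: bytes divided by bytes-per-frame.
UInt32 framesInABL = abl->mBuffers[0].mDataByteSize / outBufASBD.mBytesPerFrame; // 2048 / 4 == 512
assert(framesInABL == numSamples); // a mismatch here is the classic -12731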

With that out of the way, here's the list of "sanity checks" that will return -12731 if you're doing it wrong:

1) sbuf or bufferList is NULL.

2) The bufferList's mNumberBuffers does not match the expected number of buffers (1 if interleaved, or asbd.mChannelsPerFrame if non-interleaved).

3) A specific AudioBuffer's number of channels (mNumberChannels) does not match the expected number of channels per buffer (asbd.mChannelsPerFrame if interleaved, or 1 if non-interleaved). All buffers are verified.

4) A specific AudioBuffer's data byte size (mDataByteSize) is 0. All buffers are verified.

5) A specific AudioBuffer's data pointer (mData) is NULL. All buffers are verified.

6) The byte size (mDataByteSize) of the very first buffer is not consistent across all supplied buffers; in other words, we expect all buffer sizes to be equal.

7) The CMSampleBuffer passed in was created with a NULL sampleTimingArray.

8) The CMSampleBuffer passed in has a numSampleTimingEntries that is not equal to 1 for PCM.

9) The CMSampleBuffer's sampleTimingArray[0].duration entry is different from CMTimeMake(1, asbd.mSampleRate) for PCM.

And finally the most common failure...

10) The numSamples parameter passed to CMSampleBufferCreate does not match each AudioBuffer's data byte size divided by the audio format's bytes-per-frame value.

Holy sanity checking Batman!
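To make that list concrete, here is a sketch of a pre-flight check you could run on your ABL/ASBD pair before calling the API. The function name and structure are mine, not part of Core Media; it mirrors checks 1-6 and 10 above (checks 7-9 live in the CMSampleBuffer's timing info, so they can't be seen from the ABL):

#include <CoreAudio/CoreAudioTypes.h>
#include <stdbool.h>

// Hypothetical helper: returns true if an AudioBufferList is consistent with
// the ASBD and the numSamples value you plan to pass to CMSampleBufferCreate.
static bool ABLLooksSaneForCMSampleBuffer(const AudioBufferList *abl,
                                          const AudioStreamBasicDescription *asbd,
                                          UInt32 numSamples)
{
    if (abl == NULL || asbd == NULL) return false;                            // check 1

    bool interleaved = !(asbd->mFormatFlags & kAudioFormatFlagIsNonInterleaved);
    UInt32 expectedBuffers  = interleaved ? 1 : asbd->mChannelsPerFrame;
    UInt32 expectedChannels = interleaved ? asbd->mChannelsPerFrame : 1;

    if (abl->mNumberBuffers != expectedBuffers) return false;                 // check 2

    UInt32 firstSize = abl->mBuffers[0].mDataByteSize;
    for (UInt32 i = 0; i < abl->mNumberBuffers; i++) {
        if (abl->mBuffers[i].mNumberChannels != expectedChannels) return false; // check 3
        if (abl->mBuffers[i].mDataByteSize == 0) return false;                // check 4
        if (abl->mBuffers[i].mData == NULL) return false;                     // check 5
        if (abl->mBuffers[i].mDataByteSize != firstSize) return false;        // check 6
    }

    // check 10: numSamples must equal bytes-in-a-buffer / bytes-per-frame.
    // For non-interleaved PCM, mBytesPerFrame is per channel, so this
    // division works for both layouts.
    return (firstSize / asbd->mBytesPerFrame) == numSamples;
}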

Replies

I poked around some more and noticed that my AudioBufferList, right before AudioUnitRender runs, looks like this:


bufferList->mNumberBuffers = 2,

bufferList->mBuffers[0].mNumberChannels = 1,

bufferList->mBuffers[0].mDataByteSize = 2048


mDataByteSize is numberFrames*sizeof(SInt32), which is 512 * 4. When I look at the AudioBufferList passed to playbackCallback, the list looks like this:


bufferList->mNumberBuffers = 1,

bufferList->mBuffers[0].mNumberChannels = 1,

bufferList->mBuffers[0].mDataByteSize = 1024


Not really sure where that other buffer is going, or what happened to the other 1024 bytes...


If, when I am finished calling AudioUnitRender, I do something like this:


AudioBufferList newbuff;
newbuff.mNumberBuffers = 1;
newbuff.mBuffers[0] = bufferList->mBuffers[0];
newbuff.mBuffers[0].mDataByteSize = 1024;


and pass newbuff off to CMSampleBufferSetDataBufferFromAudioBufferList, the error goes away.


If I try setting up bufferList with mNumberBuffers of 1, or with its mDataByteSize set to numberFrames*sizeof(SInt16), I get a -50 when calling AudioUnitRender.

In my experience with CMSampleBufferSetDataBufferFromAudioBufferList and the -12731 error, the majority of cases come down not to that call itself but to the CMSampleBufferCreate call being passed a numSamples value that doesn't match what is actually contained in the ABL. You seem to have confirmed this yourself.
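One defensive option, sketched here with the variable names from your processOutput (so treat it as illustrative, not definitive), is to derive numSamples from what the render actually produced rather than from the frame count you requested:

// After AudioUnitRender returns, trust the ABL contents, not the requested count.
UInt32 renderedFrames = bufferList->mBuffers[0].mDataByteSize / audioFormat.mBytesPerFrame;
CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL,
                                format, renderedFrames, 1, &timing, 0, NULL,
                                &sampleBufferRef), @"CMSampleBufferCreate");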


I've found that a convenient (easy?) way to mess around with these Core Media APIs is using one of the "AVCaptureAudioDataOutput To AudioUnit" samples (either the iOS one or the OS X one is fine). < https://developer.apple.com/library/mac/samplecode/AVCaptureToAudioUnitOSX/Introduction/Intro.html >


What's nice about the sample is that it captures using AVFoundation returning CMSampleBufferRefs but then processes them with a plain old AU requiring a call to CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer.


This allows you to see how this one conversion is done to get to a valid ABL you can pass to an AudioUnitRender call. The sample then just uses ExtAudioFile to write out the result.

But, it's now super easy to experiment further since you have this valid nicely returned ABL that contains your processed data and can now convert it back to a CMSampleBuffer.


Here's some test code injected into the OS X version of that sample after:

if ((noErr == err) && extAudioFile) { ...

Setting a breakpoint and stepping through while keeping tabs on data sizes and format settings is super helpful.

        /* Testing CMSampleBufferSetDataBufferFromAudioBufferList
            At this point we have an AudioBufferList containing the rendered audio from the delay effect
            which we are writing using extAudioFile
            Before we write it out, simply attempt to create a CMSampleBuffer from this ABL.
  
            Information we need:
            The ASBD describing the contents of the output ABL
            The number of frames rendered into the output ABL which is numberOfFrames as passed to AudioUnitRender
        */

        OSStatus status;

        CMAudioFormatDescriptionRef cmFormatDescription;
        CMSampleBufferRef cmSampleBufferRef;

        AudioStreamBasicDescription outBufASBD = outputBufferList->GetFormat();

        CMSampleTimingInfo timingInfo = { CMTimeMake(1, outBufASBD.mSampleRate), kCMTimeZero, kCMTimeInvalid };

        status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                            &outBufASBD,           // the ASBD describing the rendered output
                                            0, NULL, 0, NULL, NULL,
                                            &cmFormatDescription); // out: the new audio format description
        if ( status ) { NSLog(@"CMAudioFormatDescriptionCreate error: %d", status); }

        status = CMSampleBufferCreate(kCFAllocatorDefault,
                                  NULL, false, NULL, NULL,
                                  cmFormatDescription,
                                  numberOfFrames,      // must accurately reflect the amount of data in the ABL
                                  1, &timingInfo,
                                  0, NULL,
                                  &cmSampleBufferRef);
        if ( status ) { NSLog(@"CMSampleBufferCreate error: %d", status); }

        status = CMSampleBufferSetDataBufferFromAudioBufferList(cmSampleBufferRef,
                                                            kCFAllocatorDefault,
                                                            kCFAllocatorDefault,
                                                            0,
                                                            outputBufferList->ABL());
        if ( status ) { NSLog(@"CMSampleBufferSetDataBufferFromAudioBufferList error: %d", status); }

        // ***** End Testing Code --------------------


The other nice thing about these samples is that they use some Core Audio Public Utility classes like AUOutputBL and CAAudioBufferList, which greatly simplify working with ABLs (I use them as much as I can), and with Core Audio we all know that simpler is betterer.


I had this code around so there you go, hope it helps you debug further.

Holy cow, does that list of sanity checks ever need to be in the documentation. Thanks.

  • Since I just spent far too long, once again stuck with this error, for posterity it's worth noting that:

    For non-interleaved PCM audio, the mBytesPerFrame value should be the number of bytes for a single channel. The header comment for mBytesPerFrame itself is not very clear, but you can see this in FillOutASBDForLPCM() in CoreAudioBaseTypes.h, for example.

    This ends up failing check #10 above; see the sketch below.
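To illustrate with a concrete sketch (my example, following the FillOutASBDForLPCM() convention mentioned above): for stereo non-interleaved 32-bit float PCM, the ABL carries two buffers of one channel each, and mBytesPerFrame describes a single channel:

AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 44100;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsFloat
                       | kAudioFormatFlagIsPacked
                       | kAudioFormatFlagIsNonInterleaved;
asbd.mChannelsPerFrame = 2;  // two buffers in the ABL, one channel each
asbd.mBitsPerChannel   = 32;
asbd.mBytesPerFrame    = 4;  // bytes for a SINGLE channel, not all channels
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerPacket   = 4;

// Check #10 then holds per buffer: numSamples == mDataByteSize / mBytesPerFrame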
