I am trying to take a video file, read it in using AVAssetReader, and pass the audio off to CoreAudio for processing (adding effects and such) before saving it back out to disk using AVAssetWriter. I would like to point out that if I set the componentSubType on the AudioComponentDescription of my output node to RemoteIO, things play correctly through the speakers. This makes me confident that my AUGraph is properly set up, since I can hear things working. I am setting the subType to GenericOutput, though, so I can do the rendering myself and get back the adjusted audio.
I read in the audio and pass the CMSampleBufferRef off to copyBuffer, which puts the audio into a circular buffer that will be read later.
- (void)copyBuffer:(CMSampleBufferRef)buf {
    if (_readyForMoreBytes == NO) {
        return;
    }

    AudioBufferList abl;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);

    UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf);
    BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size);

    if (!bytesCopied) {
        // Not enough room in the circular buffer; stash the bytes in a rescue buffer
        _readyForMoreBytes = NO;

        if (size > kRescueBufferSize) {
            NSLog(@"Unable to allocate enough space for rescue buffer, dropping audio frame");
        } else {
            if (rescueBuffer == nil) {
                rescueBuffer = malloc(kRescueBufferSize);
            }
            rescueBufferSize = size;
            memcpy(rescueBuffer, abl.mBuffers[0].mData, size);
        }
    }

    CFRelease(blockBuffer);

    if (!self.hasBuffer && bytesCopied > 0) {
        self.hasBuffer = YES;
    }
}
Next I call processOutput. This does a manual render on the outputUnit. When AudioUnitRender is called it invokes the playbackCallback below, which is what is hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList passed in. Like I said before, if the output is set to RemoteIO this causes the audio to be played correctly on the speakers. When AudioUnitRender finishes, it returns noErr and the bufferList object contains valid data. When I call CMSampleBufferSetDataBufferFromAudioBufferList, though, I get kCMSampleBufferError_RequiredParameterMissing (-12731).
-(CMSampleBufferRef)processOutput
{
    if (self.offline == NO) {
        return NULL;
    }

    AudioUnitRenderActionFlags flags = 0;
    AudioTimeStamp inTimeStamp;
    memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));
    inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    inTimeStamp.mSampleTime = 0;

    UInt32 busNumber = 0;
    UInt32 numberFrames = 512;
    UInt32 channelCount = 2;

    AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (channelCount - 1));
    bufferList->mNumberBuffers = channelCount;
    for (int j = 0; j < channelCount; j++) {
        AudioBuffer buffer = {0};
        buffer.mNumberChannels = 1;
        buffer.mDataByteSize = numberFrames * sizeof(SInt32);
        buffer.mData = calloc(numberFrames, sizeof(SInt32));
        bufferList->mBuffers[j] = buffer;
    }

    CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit");

    CMSampleBufferRef sampleBufferRef = NULL;
    CMFormatDescriptionRef format = NULL;
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };
    AudioStreamBasicDescription audioFormat = self.audioFormat;
    CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate");
    CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate");
    CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList");

    return sampleBufferRef;
}
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    int numberOfChannels = ioData->mBuffers[0].mNumberChannels;
    SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData;

    // Zero the output first in case we can't fill the whole buffer
    memset(outSample, 0, ioData->mBuffers[0].mDataByteSize);

    MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon;

    if (p.hasBuffer) {
        int32_t availableBytes;
        SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes);

        int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels;
        int bytesToRead = MIN(availableBytes, requestedBytesSize);
        memcpy(outSample, bufferTail, bytesToRead);
        TPCircularBufferConsume([p getBuffer], bytesToRead);

        if (availableBytes <= requestedBytesSize * 2) {
            [p setReadyForMoreBytes];
        }

        if (availableBytes <= requestedBytesSize) {
            p.hasBuffer = NO;
        }
    }

    return noErr;
}
The CMSampleBufferRef I pass in looks valid (below is a dump of the object from the debugger):
CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180
invalid = NO
dataReady = NO
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> {
mediaType:'soun'
mediaSubType:'lpcm'
mediaSpecific: {
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'lpcm'
mFormatFlags: 0xc2c
mBytesPerPacket: 2
mFramesPerPacket: 1
mBytesPerFrame: 2
mChannelsPerFrame: 1
mBitsPerChannel: 16 }
cookie: {(null)}
ACL: {(null)}
}
extensions: {(null)}
}
sbufToTrackReadiness = 0x0
numSamples = 512
sampleTimingArray[1] = {
{PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}},
}
dataBuffer = 0x0
The buffer list looks like this:
Printing description of bufferList:
(AudioBufferList *) bufferList = 0x00007f87d280b0a0
Printing description of bufferList->mNumberBuffers:
(UInt32) mNumberBuffers = 2
Printing description of bufferList->mBuffers:
(AudioBuffer [1]) mBuffers = {
[0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00)
}
Really at a loss here, hoping someone can help. Thanks,
In case it matters, I am debugging this in the iOS 8.3 simulator, and the audio is coming from an mp4 that I shot on my iPhone 6 and then saved to my laptop.
The kCMSampleBufferError_RequiredParameterMissing error (-12731) returned from CMSampleBufferSetDataBufferFromAudioBufferList can be quite perplexing. Why? Because this one generic error is returned in a number of different "sanity check" failure cases and, much like the historic paramErr (-50), leaves much to the imagination.
CMSampleBufferSetDataBufferFromAudioBufferList is described in CMSampleBuffer.h as follows:
"Creates a CMBlockBuffer containing a copy of the data from the AudioBufferList, and sets that as the CMSampleBuffer's data buffer. The resulting buffer(s) in the sample buffer will be 16-byte-aligned if kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment is passed in."
OSStatus CMSampleBufferSetDataBufferFromAudioBufferList(
CMSampleBufferRef sbuf, /*! @param sbuf CMSampleBuffer being modified. */
CFAllocatorRef bbufStructAllocator, /*! @param bbufStructAllocator Allocator to use when creating the CMBlockBuffer structure. */
CFAllocatorRef bbufMemoryAllocator, /*! @param bbufMemoryAllocator Allocator to use for memory block held by the CMBlockBuffer. */
uint32_t flags, /*! @param flags Flags controlling operation. */
const AudioBufferList *bufferList) /*! @param bufferList Buffer list whose data will be copied into the new CMBlockBuffer. */
OK, that's pretty simple, right? We need a CMSampleBufferRef, which we can make using CMSampleBufferCreate, and an AudioBufferList. Let's assume we have an ABL that some audio data was rendered into and that this is all working fine.
Where can we go wrong?
Before we list all the different cases in which this API can return -12731, by far the number one reason is that the numSamples parameter passed to CMSampleBufferCreate doesn't jibe with the AudioBuffer's buffer size divided by the audio format's bytes per frame.
For example, let's assume a mono PCM 32bit float audio format rendering 512 frames. The numSamples passed to CMSampleBufferCreate will be 512 along with an ASBD (as a CMAudioFormatDescriptionRef) that looks something like this:
Printing description of outBufASBD:
(AudioStreamBasicDescription) outBufASBD = {
mSampleRate = 44100
mFormatID = 1819304813 'lpcm'
mFormatFlags = 41
mBytesPerPacket = 4
mFramesPerPacket = 1
mBytesPerFrame = 4
mChannelsPerFrame = 1
mBitsPerChannel = 32
mReserved = 0
}
Let's take a look at the ABL:
Printing description of self->outputBufferList->mBufferList->mBuffers:
(AudioBuffer [1]) mBuffers = {
[0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x0000000105006200)
}
mDataByteSize is 2048
mBytesPerFrame is 4
2048 / 4 == 512 == numSamples passed to CMSampleBufferCreate == we're cool!
With that out of the way, here's the list of "sanity checks" that will return -12731 if you're doing it wrong:
1) sbuf or bufferList is NULL.
2) The bufferList's mNumberBuffers does not match the expected number of buffers (1 if interleaved, or asbd.mChannelsPerFrame if non-interleaved).
3) A specific AudioBuffer's number of channels (mNumberChannels) does not match the expected number of channels per buffer (asbd.mChannelsPerFrame if interleaved, or 1 if non-interleaved) (all buffers are verified).
4) A specific AudioBuffer's data byte size (mDataByteSize) is 0 (all buffers are verified).
5) A specific AudioBuffer's data pointer (mData) is NULL (all buffers are verified).
6) The byte size (mDataByteSize) of the very first buffer is not consistent across all supplied buffers; in other words, all buffer sizes are expected to be equal.
7) The CMSampleBuffer passed in was created with a NULL sampleTimingArray.
8) The CMSampleBuffer passed in has a numSampleTimingEntries that is not equal to 1 for PCM.
9) The CMSampleBuffer's sampleTimingArray[0].duration entry is different from {1, asbd.mSampleRate} for PCM.
And finally the most common failure...
10) The numSamples parameter passed to CMSampleBufferCreate does not match the AudioBuffer's buffer size divided by the audio format's bytes per frame.
Holy sanity checking Batman!