There's a difference between El Capitan and Sierra in the AVAudioCompressedBuffers they create. On Sierra, the mutableAudioBufferList's sole buffer has an mDataByteSize of zero, unlike on El Capitan. This causes a problem when the output buffer is then passed to an AVAudioConverter, which fails with -50 (paramErr). It took several hours and some assembly diving to track this down. The code in the slides from WWDC 2015 Session 507 is now wrong, and I'm sure this is going to trip up a lot of people.
Why was this changed? I don't understand the logic behind it.
Everything else I inspect on the buffer is the same between El Capitan and Sierra, which is confusing. On top of that, you can't simply fix the mDataByteSize value through mutableAudioBufferList, because each call to mutableAudioBufferList resets it back to zero. So I can't figure out how the buffer is supposed to be used anymore: if the data size keeps resetting to zero, and AVAudioConverter (AudioConverterFillComplexBuffer underneath) throws paramErr whenever it's zero, then how is it meant to be used? The only way I can see to make this work is to cast the non-mutable audioBufferList to a mutable pointer and set the size directly, like so:
UnsafeMutablePointer<AudioBufferList>(outputBuffer.audioBufferList).memory.mBuffers.mDataByteSize = UInt32(outputBuffer.packetCapacity) * UInt32(outputBuffer.maximumPacketSize)
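Since the mutable list resets the size on every access, one way to keep the workaround contained is to wrap it in a small helper called before each conversion. This is only a sketch of my workaround above; the helper name is mine, not an official API:

```swift
import AVFoundation

// Sketch: force the compressed buffer's byte size back to its full capacity.
// Goes through the non-mutable audioBufferList, because on Sierra each call
// to mutableAudioBufferList appears to zero mDataByteSize again.
func resetByteSize(buffer: AVAudioCompressedBuffer) {
    let abl = UnsafeMutablePointer<AudioBufferList>(buffer.audioBufferList)
    abl.memory.mBuffers.mDataByteSize =
        UInt32(buffer.packetCapacity) * UInt32(buffer.maximumPacketSize)
}
```

Call resetByteSize(outputBuffer) immediately before each convertToBuffer call, so the size is valid at the moment the converter reads it.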
static func AAC() -> AVAudioFormat {
    var outDesc = AudioStreamBasicDescription(
        mSampleRate: 44100, mFormatID: kAudioFormatMPEG4AAC, mFormatFlags: 0,
        mBytesPerPacket: 0, mFramesPerPacket: 0, mBytesPerFrame: 0,
        mChannelsPerFrame: 2, mBitsPerChannel: 0, mReserved: 0)
    let outFormat = AVAudioFormat(streamDescription: &outDesc)
    return outFormat
}
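The usage example further down also calls AVAudioFormat.processing(), which isn't shown here. A minimal companion helper, assuming it just returns the standard deinterleaved-float processing format at 44.1 kHz stereo (my assumption, not from the original post), might look like:

```swift
import AVFoundation

// Sketch of the input-side format helper the example assumes exists.
// Assumption: the standard processing format (deinterleaved Float32),
// 44.1 kHz stereo, to match the AAC output description above.
static func processing() -> AVAudioFormat {
    return AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)
}
```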
let outputBuffer = AVAudioCompressedBuffer(format: avconv.outputFormat, packetCapacity: 8, maximumPacketSize: avconv.maximumOutputPacketSize)
print(outputBuffer.mutableAudioBufferList.memory)
// ---- El Capitan ----------------
// (AudioBufferList) $R1 = {
// mNumberBuffers = 1
// mBuffers = {
// mNumberChannels = 1
// mDataByteSize = 12288
// mData = 0x00000001020af400
// }
// }
//
// ---- Sierra --------------------
// (AudioBufferList) $R0 = {
// mNumberBuffers = 1
// mBuffers = {
// mNumberChannels = 1
// mDataByteSize = 0
// mData = 0x00000001018cac00
// }
// }
// Example usage, taken straight from the WWDC slides, which no longer works on Sierra
{
    let avconv = AVAudioConverter(fromFormat: AVAudioFormat.processing(), toFormat: AVAudioFormat.AAC())
    let outputBuffer = AVAudioCompressedBuffer(format: avconv.outputFormat, packetCapacity: 8, maximumPacketSize: avconv.maximumOutputPacketSize)
    // On El Cap this was not required, but it seems to be the only way to make this work now.
    // We must cast the non-mutable ABL because changes to the mutable one are overwritten each time mutableAudioBufferList is called.
    // UnsafeMutablePointer<AudioBufferList>(outputBuffer.audioBufferList).memory.mBuffers.mDataByteSize = UInt32(outputBuffer.packetCapacity) * UInt32(outputBuffer.maximumPacketSize)
    var nserror: NSError? = nil
    let status = avconv.convertToBuffer(outputBuffer, error: &nserror) {
        (requestedFrameCount: AVAudioPacketCount, inputStatus: UnsafeMutablePointer<AVAudioConverterInputStatus>) -> AVAudioBuffer? in
        print("callback called")
        inputStatus.memory = .NoDataNow
        return nil
    }
    if status == .Error {
        print(nserror!)
    } else {
        print("No error")
    }
}