Hi everyone!
I'm developing an application that ingests raw AAC data directly from a socket, so I have "packets" in ADTS format. I have been trying to convert these compressed audio packets to PCM so I can enqueue and play them. I have the PCM player working, but I have been unable to do the conversion, no matter which approach I try. I'm using AudioConverterFillComplexBuffer,
but I always receive some kind of error that I cannot solve. So far, the log shows something like:
2018-07-08 21:27:51.359716+0200 streaming-test-v3[43319:14013195] AACDecoder.cpp:189:Deserialize: Too few bits left in input buffer
2018-07-08 21:27:51.360047+0200 streaming-test-v3[43319:14013195] AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
2018-07-08 21:27:51.360393+0200 streaming-test-v3[43319:14013195] [ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x7ff4f705b240) Error decoding packet 54: err = -1, packet length: 1
2018-07-08 21:27:51.360691+0200 streaming-test-v3[43319:14013195] [ac] ACMP4AACBaseDecoder.cpp:1346:ProduceOutputBufferList: 'A0'
AudioConverterFillComplexBuffer error: 1852797029
I have a lot of doubts about these methods:
- Should I skip the ADTS header? (7-9 bytes)
- What should the output buffer's length and allocation capacity be? How can I set it? I don't know what size the decoded frames are going to have.
- Why is it saying "Too few bits left in input buffer"?
- I'm streaming a single-channel stream, but I already had some problems with this when streaming raw PCM and playing it.
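Regarding the first doubt, this is how I understand the ADTS framing (a sketch of a header parser, with field offsets taken from the MPEG-4 ADTS layout; `parseADTSHeader` is just an illustrative name). The header is 7 bytes, or 9 when `protection_absent` is 0 and a CRC follows, and `aac_frame_length` includes the header itself:

```swift
import Foundation

// Minimal ADTS header reader (a sketch). Returns the header length and the
// total frame length (header + AAC payload), or nil if the syncword is missing.
func parseADTSHeader(_ data: [UInt8]) -> (headerLength: Int, frameLength: Int)? {
    guard data.count >= 7,
          data[0] == 0xFF, (data[1] & 0xF0) == 0xF0 else { return nil }
    // protection_absent == 0 means a 2-byte CRC follows the 7-byte header.
    let headerLength = (data[1] & 0x01) == 0 ? 9 : 7
    // aac_frame_length is 13 bits spanning bytes 3..5 and includes the header.
    let frameLength = (Int(data[3] & 0x03) << 11)
                    | (Int(data[4]) << 3)
                    | (Int(data[5] & 0xE0) >> 5)
    return (headerLength, frameLength)
}
```

So the raw AAC payload handed to the converter would be `frameLength - headerLength` bytes starting at `headerLength`, if the header indeed has to be stripped.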
I'm posting my code in case someone can help me.
1. Set up the audio converter
func setupAudioConverter() {
    var outputFormat = AudioStreamBasicDescription(
        mSampleRate: 44100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger,
        mBytesPerPacket: 2,
        mFramesPerPacket: 1,
        mBytesPerFrame: 2,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 16,
        mReserved: 0)
    // let outputFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat, sampleRate: 44100.0, channels: 1, interleaved: false)
    var inputFormat = AudioStreamBasicDescription(
        mSampleRate: 44100,
        mFormatID: kAudioFormatMPEG4AAC,
        mFormatFlags: UInt32(MPEG4ObjectID.AAC_LC.rawValue),
        mBytesPerPacket: 0,
        mFramesPerPacket: 0,
        mBytesPerFrame: 0,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 0,
        mReserved: 0)
    // let inputFormat = AVAudioFormat(streamDescription: &inputDesc)
    let status: OSStatus = AudioConverterNew(&inputFormat, &outputFormat, &audioConverter)
    if status != noErr {
        print("setup converter error, status: \(status)")
    }
    print("audioConverter: \(String(describing: audioConverter))")
}
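I also wondered whether I should let Core Audio complete the input description instead of filling it by hand (a sketch, assuming AAC-LC at 44.1 kHz mono; I haven't confirmed this is required here):

```swift
import AudioToolbox

// A sketch: give Core Audio the format ID, sample rate and channel count,
// and ask it to fill in the remaining ASBD fields.
var inputFormat = AudioStreamBasicDescription()
inputFormat.mSampleRate = 44100
inputFormat.mFormatID = kAudioFormatMPEG4AAC
inputFormat.mChannelsPerFrame = 1
var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
let err = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
                                 0, nil, &size, &inputFormat)
// On success, mFramesPerPacket should come back as 1024 for AAC-LC.
```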
2. This is the input callback for FillComplexBuffer
var inputDataVar: AudioConverterComplexInputDataProc = { (
    aAudioConverter: AudioConverterRef,
    aNumDataPackets: UnsafeMutablePointer<UInt32>,
    aData: UnsafeMutablePointer<AudioBufferList>,
    aPacketDesc: UnsafeMutablePointer<UnsafeMutablePointer<AudioStreamPacketDescription>?>?,
    aUserData: UnsafeMutableRawPointer?) -> OSStatus in

    var userData = UnsafeMutablePointer<PassthroughUserData>(OpaquePointer(aUserData)!).pointee
    if userData.mDataSize == 0 {
        aNumDataPackets.pointee = 0
        return -9078 // custom "no more data" status
    }
    print("aUserData: \(String(describing: aUserData))")
    print("UserData: \(userData)")
    if aPacketDesc != nil {
        userData.mPacket.mStartOffset = 0
        userData.mPacket.mVariableFramesInPacket = 0
        userData.mPacket.mDataByteSize = userData.mDataSize
        aPacketDesc?.pointee = UnsafeMutablePointer<AudioStreamPacketDescription>(&userData.mPacket)
    }
    aData.pointee.mBuffers.mNumberChannels = userData.mChannels
    aData.pointee.mBuffers.mDataByteSize = userData.mDataSize
    aData.pointee.mBuffers.mData = userData.mData
    userData.mDataSize = 0
    return noErr
}
3. And the decode frame function
func decodeAudioFrame(frame: Data) {
    var frameCopy = frame
    if audioConverter == nil {
        self.setupAudioConverter()
    }
    // Ask the converter for the largest decoded packet it may produce.
    var maxOutputPacketSize: UInt32 = 0
    var propSize = UInt32(MemoryLayout<UInt32>.size)
    let propStatus = AudioConverterGetProperty(audioConverter!,
                                               kAudioConverterPropertyMaximumOutputPacketSize,
                                               &propSize,
                                               &maxOutputPacketSize)
    let packetDescription = AudioStreamPacketDescription(mStartOffset: 0,
                                                        mVariableFramesInPacket: 0,
                                                        mDataByteSize: UInt32(frameCopy.count))
    var userData = PassthroughUserData(mChannels: 1,
                                       mDataSize: UInt32(frame.count),
                                       mData: &frameCopy,
                                       mPacket: packetDescription)
    let buffer = UnsafeMutablePointer<Int16>.allocate(capacity: frameCopy.count)
    // Note: the byte size must describe the whole allocation, not the pointer.
    let audioBuffer = AudioBuffer(mNumberChannels: 1,
                                  mDataByteSize: UInt32(frameCopy.count * MemoryLayout<Int16>.size),
                                  mData: buffer)
    var decBuffer = AudioBufferList(mNumberBuffers: 1, mBuffers: audioBuffer)
    var outPacketDescription = AudioStreamPacketDescription()
    var numFrames: UInt32 = 1
    let status = AudioConverterFillComplexBuffer(
        audioConverter!,
        inputDataVar,
        &userData,
        &numFrames,
        &decBuffer,
        &outPacketDescription)
    if status != noErr {
        print("AudioConverterFillComplexBuffer error: \(status)")
    }
    print("numFrames: \(numFrames)")
}
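For the output buffer sizing doubt, my understanding is that one AAC-LC packet always decodes to 1024 PCM frames, so the required capacity is just frames x channels x bytes per sample (a sketch; `pcmBytesPerAACPacket` is a hypothetical helper name):

```swift
// Rough PCM capacity needed for one decoded AAC packet (a sketch, assuming
// AAC-LC, which always produces 1024 PCM frames per packet).
func pcmBytesPerAACPacket(channels: Int,
                          bytesPerSample: Int = 2,
                          framesPerPacket: Int = 1024) -> Int {
    return framesPerPacket * channels * bytesPerSample
}
```

For this mono 16-bit stream that would be 1024 * 1 * 2 = 2048 bytes per packet, which seems much smaller than what I'd get from sizing the buffer off the compressed frame length.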
Thank you, everyone. In fact, I hope @theAnalogKid can shed some light on my path, because this is getting hard.