I want to stream audio on iOS and use AVAudioEngine for that use case. Currently I'm not sure what the best solution for my problem is.
I receive RTP data from the network and want to play this audio data back with AVAudioEngine. I use the iOS Network.framework to receive the network data, then I decode the voice data, and now I want to play it back.
Here is my receive code:
connection.receiveMessage { (data, context, isComplete, error) in
    if isComplete {
        // decode the raw network data with the G.711 audio codec
        let decodedData = AudioDecoder.decode(enc: data, frames: 160)

        // create a PCM buffer for the audio data for playback
        let format = AVAudioFormat(settings: [AVFormatIDKey: NSNumber(value: kAudioFormatALaw),
                                              AVSampleRateKey: 8000,
                                              AVNumberOfChannelsKey: 1])
        // let format = AVAudioFormat(standardFormatWithSampleRate: 8000, channels: 1)
        let buffer = AVAudioPCMBuffer(pcmFormat: format!, frameCapacity: 160)!
        buffer.frameLength = buffer.frameCapacity

        // TODO: copy decodedData into buffer (AVAudioPCMBuffer)

        if error == nil {
            // call receive() again for the next message
            self.receive(on: connection)
        }
    }
}
How do I have to copy my decoded data into the AVAudioPCMBuffer? Currently the AVAudioPCMBuffer is created, but it does not contain any audio data.
Background information: my general approach would be to cache the PCM buffers (at the TODO line in the code above) in a collection and play that collection back with AVAudioEngine on a background thread.
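For the playback part, this is a rough sketch of what I have in mind, not working code from my project: an AVAudioPlayerNode attached to an AVAudioEngine, with a method that the receive callback would call for each new buffer (the class name StreamPlayer and the method name enqueue are just placeholders). I'm also not sure yet whether the player node accepts an Int16 connection format, or whether the buffers have to be converted to Float32 first.

import AVFoundation

final class StreamPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()

    // 8 kHz mono Int16, matching the decoded G.711 data (my assumption)
    private let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                       sampleRate: 8000,
                                       channels: 1,
                                       interleaved: false)!

    init() throws {
        engine.attach(player)
        // the main mixer should convert from 8 kHz to the hardware output format
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try engine.start()
        player.play()
    }

    // placeholder name; would be called from the receive callback for each new buffer
    func enqueue(_ buffer: AVAudioPCMBuffer) {
        // scheduleBuffer appends the buffer after everything already scheduled
        player.scheduleBuffer(buffer, completionHandler: nil)
    }
}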
My decoded linear data is cached in an array of type Int16, so the variable decodedData is of type [Int16]. Maybe there is a possibility to consume this data directly? The scheduleBuffer function only accepts an AVAudioPCMBuffer as input.
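To make the question more concrete, this is roughly how I imagine filling the buffer, assuming the decoded samples are plain linear PCM Int16 (the helper name makeBuffer is just a placeholder). As far as I understand, AVAudioPCMBuffer wants an uncompressed PCM format, so here I would use .pcmFormatInt16 instead of the kAudioFormatALaw settings from above and write the samples through int16ChannelData:

import AVFoundation

// placeholder helper: build an AVAudioPCMBuffer from the decoded Int16 samples
func makeBuffer(from decodedData: [Int16]) -> AVAudioPCMBuffer? {
    // 8 kHz mono, linear PCM Int16 (the data is already decoded, so no A-law format here)
    guard let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                     sampleRate: 8000,
                                     channels: 1,
                                     interleaved: false),
          let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(decodedData.count)) else {
        return nil
    }
    buffer.frameLength = buffer.frameCapacity

    // int16ChannelData exposes one pointer per channel; channel 0 is the mono channel
    if let channelData = buffer.int16ChannelData {
        for (index, sample) in decodedData.enumerated() {
            channelData[0][index] = sample
        }
    }
    return buffer
}

Would writing the samples through int16ChannelData like this be the intended way, or is there a better approach?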