Get mic data callbacks every 20 milliseconds

I am developing a VoIP calling app and am now at the stage where I need to transfer voice data to the server. For that I want to get real-time audio data from the mic in 20 millisecond callbacks.

I have searched through many links but am unable to find a solution, as I am new to the audio frameworks.


Details

We have our own WebRTC-like stack that delivers RTP data from the remote side every 20 milliseconds and asks for mic data every 20 milliseconds. What I am trying to achieve is to get 20 milliseconds of data from the mic and pass it on to the stack, so I need to know how to do that. The audio format is pcmFormatInt16 and the sample rate is 8000 Hz, with 20 milliseconds of data per packet.
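For those parameters, each 20 ms packet has a fixed frame and byte count; a quick sketch of the arithmetic:

```swift
// 20 ms of 16-bit mono PCM at 8000 Hz
let sampleRate = 8000.0
let packetDuration = 0.020                               // 20 ms
let bytesPerFrame = 2                                    // Int16, 1 channel

let framesPerPacket = Int(sampleRate * packetDuration)   // 160 frames
let bytesPerPacket = framesPerPacket * bytesPerFrame     // 320 bytes

print(framesPerPacket, bytesPerPacket)                   // 160 320
```

So whatever the hardware callback size ends up being, the stack ultimately needs exactly 160 frames (320 bytes) per packet.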


I have searched for


AVAudioEngine

AUAudioUnit

AVCaptureSession, etc.


I am using AVAudioSession and AUAudioUnit, but setPreferredIOBufferDuration on the audio session does not take exactly the value I set. As a result I am not getting the exact data size. Can anybody help me with setPreferredIOBufferDuration?



let hwSRate = audioSession.sampleRate
print("native hardware rate: \(hwSRate)")

// Preferred values are only hints; set them before activating the session.
try audioSession.setPreferredIOBufferDuration(preferredIOBufferDuration)
try audioSession.setPreferredSampleRate(8000) // at 8000.0 Hz
try audioSession.setActive(true)

print("changed native hardware rate: \(audioSession.sampleRate) buffer duration \(audioSession.ioBufferDuration)")


Log:-

Changed native Hardware rate : 8000.0 buffer duration 0.01600000075995922
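The 0.016 s in that log is not a failure: setPreferredIOBufferDuration is only a hint, and the hardware typically quantizes it to a power-of-two frame count at the current sample rate. A sketch of that rounding (the exact rounding policy is my assumption, but it matches the logged value):

```swift
import Foundation

// Model of how a requested I/O buffer duration gets quantized:
// the hardware picks a power-of-two frame count near the request.
let sampleRate = 8000.0
let requested = 0.020                               // what we asked for

let requestedFrames = sampleRate * requested        // 160 frames
// Round down to the nearest power of two: 128 frames
let grantedFrames = pow(2.0, floor(log2(requestedFrames)))
let grantedDuration = grantedFrames / sampleRate

print(grantedFrames, grantedDuration)               // 128.0 0.016
```

Since 128 frames per callback is less than the 160 frames a 20 ms packet needs, the callbacks cannot be made to deliver exactly 20 ms; the data has to be accumulated across callbacks instead.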


One more issue: auAudioUnit.outputProvider gives the inputData as an UnsafeMutableAudioBufferListPointer. That list has two elements and I want only one channel. Can anybody help me convert it into a Data format that can be played with AVAudioPlayer?



auAudioUnit = try AUAudioUnit(componentDescription: self.audioComponentDescription)

auAudioUnit.outputProvider = { // AURenderPullInputBlock
    (actionFlags, timestamp, frameCount, inputBusNumber, inputData) -> AUAudioUnitStatus in
    let inputDataPtr = UnsafeMutableAudioBufferListPointer(inputData)
    let mBuffers: AudioBuffer = inputDataPtr[0]
    // ... fill the buffer here ...
    return noErr // the block must return a status
}
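For the second question, one way to take just the first buffer of the list and copy its bytes out as Data (a sketch; in a real render callback you would avoid allocating, but it shows the conversion):

```swift
import AVFoundation

// Copy the first (mono) AudioBuffer out of an AudioBufferList as Data.
// `inputData` is the UnsafeMutablePointer<AudioBufferList> handed to the callback.
func dataFromFirstBuffer(_ inputData: UnsafeMutablePointer<AudioBufferList>) -> Data {
    let abl = UnsafeMutableAudioBufferListPointer(inputData)
    let buffer = abl[0]                      // first channel/buffer only
    guard let mData = buffer.mData else { return Data() }
    return Data(bytes: mData, count: Int(buffer.mDataByteSize))
}
```

Note that AVAudioPlayer plays audio files, not raw PCM; to play raw Int16 samples you would wrap them in an AVAudioPCMBuffer and schedule them on an AVAudioPlayerNode instead.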

Replies

My apps just use a circular buffer for audio samples, and wait until the buffer has received enough 512-sample (or whatever size) blocks from RemoteIO to hold a total of 20 mS or more (or whatever amount is needed) of audio. The app can then process or send the required amount by consuming from the circular buffer on another thread (so as not to block the real-time audio thread).
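A minimal sketch of that pattern (my own simplified ring buffer, not production code): the audio callback pushes whatever block size the hardware delivers, and a consumer pulls fixed 160-frame (20 ms at 8000 Hz) packets only once enough samples have accumulated.

```swift
import Foundation

// Simplified circular buffer of Int16 samples (single producer / single consumer).
// A truly real-time-safe version would avoid locks; this sketch uses one for clarity.
final class SampleRingBuffer {
    private var storage: [Int16]
    private var readIndex = 0
    private var writeIndex = 0
    private var count = 0
    private let lock = NSLock()

    init(capacity: Int) { storage = [Int16](repeating: 0, count: capacity) }

    // Producer: called from the audio callback with each hardware block.
    func write(_ samples: [Int16]) {
        lock.lock(); defer { lock.unlock() }
        for s in samples where count < storage.count {
            storage[writeIndex] = s
            writeIndex = (writeIndex + 1) % storage.count
            count += 1
        }
    }

    // Consumer: returns exactly `frames` samples, or nil if not enough yet.
    func read(frames: Int) -> [Int16]? {
        lock.lock(); defer { lock.unlock() }
        guard count >= frames else { return nil }
        var out = [Int16]()
        out.reserveCapacity(frames)
        for _ in 0..<frames {
            out.append(storage[readIndex])
            readIndex = (readIndex + 1) % storage.count
            count -= 1
        }
        return out
    }
}
```

With 128-frame hardware blocks, the first 20 ms packet becomes available once the second block arrives (256 ≥ 160), so the consumer simply polls until read returns non-nil.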

I have implemented it the same way, but now I am getting noise on the speaker side in my VoIP app. The code is below:


auAudioUnit?.outputProvider = { ( // AURenderPullInputBlock?
    actionFlags,
    timestamp,
    frameCount,
    inputBusNumber,
    inputDataList ) -> AUAudioUnitStatus in
    if let block = self.renderBlock {
        // print("Action Flag: \(actionFlags) and timeStamp: \(timestamp) and inputBusNumber: \(inputBusNumber) and dataListBuffersCount: \(inputDataList[0].mBuffers)")
        if let dataReceived = self.getInputDataForConsumption() {
            let mutabledata = NSMutableData(data: dataReceived)
            var bufferListSpeaker = AudioBufferList(
                mNumberBuffers: 1,
                mBuffers: AudioBuffer(
                    mNumberChannels: 1,
                    mDataByteSize: 0,
                    mData: nil))

            let err: OSStatus = block(actionFlags,
                                      timestamp,
                                      frameCount,
                                      1,
                                      &bufferListSpeaker,
                                      .none)
            if err == noErr {
                print("Output Provider before Buffer size \(inputDataList[0].mBuffers.mDataByteSize) and at Index 1 \(inputDataList[1].mBuffers.mDataByteSize)")
                bufferListSpeaker.mBuffers.mDataByteSize = UInt32(mutabledata.length)
                bufferListSpeaker.mBuffers.mData = mutabledata.mutableBytes
                inputDataList[0] = bufferListSpeaker
                #if DEBUG
                print("Output Provider mDataByteSize: \(inputDataList[0].mBuffers.mDataByteSize) and data Size: \(mutabledata.length) output FrameCount: \(frameCount)")
                #endif
                return err
            } else {
                #if DEBUG
                print("Output Provider \(err)")
                #endif
                return err
            }
        }
    }
    return 0
}
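One common cause of that kind of noise is a size mismatch: the outputProvider must hand the unit exactly frameCount frames on every callback, so if getInputDataForConsumption returns a 20 ms (160-frame, 320-byte) packet while the hardware asks for, say, 128 frames, the renderer reads past or short of the real samples. A sketch of forcing the consumed data to the exact requested size, zero-filling (silence) on underrun; the sizing logic here is my assumption about the bug, not confirmed from the post:

```swift
import Foundation

// Return exactly `byteCount` bytes for the renderer: take what the
// jitter/circular buffer has, then truncate or zero-pad to the exact size.
func packetForRenderer(_ available: Data?, byteCount: Int) -> Data {
    var out = available ?? Data()
    if out.count > byteCount {
        out = out.prefix(byteCount)                     // truncate extra samples
    } else if out.count < byteCount {
        out.append(Data(count: byteCount - out.count))  // zero-pad with silence
    }
    return out
}
```

In the callback above, that would mean sizing the packet to `Int(frameCount) * 2` bytes (Int16 mono) before assigning it into inputDataList[0], and carrying any leftover samples over to the next callback rather than dropping them.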