Recording Audio Noise

Hi


I have problems recording audio for a VoIP app on iPhone 6s and iPhone 7. The same code works fine on iPhone 6. All the devices are running the latest iOS version (11.1.2).

I get distortion/noise in the audio, like a metallic voice.

I am using Audio Queue, and the AudioStreamBasicDescription is set up as:

Format: LinearPCM

Flags: PCMIsSignedInteger, PCMIsPacked

FramesPerPacket: 1

ChannelsPerFrame: 1

BitsPerChannel: 16

BytesPerPacket: 2

BytesPerFrame: 2

Reserved: 0


I allocate and enqueue 3 buffers of 60 ms before starting. Each buffer is re-enqueued in the callback.
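For reference, at this format a 60 ms buffer works out to 480 frames of 2 bytes each (a quick check; the helper name is mine, not part of the Audio Queue API):

```csharp
using System;

static class BufferMath
{
    // Bytes needed for one linear-PCM capture buffer of the given duration:
    // frames = duration * sampleRate, bytes = frames * bytesPerFrame.
    public static int BufferBytes(double seconds, int sampleRate, int bytesPerFrame)
        => (int)Math.Round(seconds * sampleRate) * bytesPerFrame;

    static void Main()
    {
        // 60 ms at 8 kHz, mono, 16-bit: 480 frames = 960 bytes per buffer.
        Console.WriteLine(BufferBytes(0.060, 8000, 2));
    }
}
```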


I have noticed the following lines in the Xcode log:

Dec 1 14:00:43 DEs-iPhone TetraFlexClientMobile(AudioToolbox)[644] <Notice>: AUBase.cpp:1474:DoRender: /BuildRoot/Library/Caches/com.apple.xbs/Sources/CoreAudioServices/CoreAudioServices-975.2.5/CoreAudioUtility/Source/CADSP/AUPublic/AUBase/AUBase.cpp:1474 86 frames, 2 bytes/frame, expected 172-byte buffer; ioData.mBuffers[0].mDataByteSize=170; kAudio_ParamError

Dec 1 14:00:43 DEs-iPhone TetraFlexClientMobile(AudioToolbox)[644] <Notice>: AUBase.cpp:1554:DoRender: from <private>, render err: -50


Can somebody give me some hints on where to search for the problem?

Thanks


Best regards

Replies

What sample rate are you using? Are you respecting the exact number of frames requested by the Audio Unit callback (which can change)?
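The point behind the second question: Core Audio expects the buffer handed to Render to hold exactly numberFrames * bytesPerFrame bytes for that particular callback, and numberFrames can change from one callback to the next. A minimal sketch of that size check (the helper name is mine, not from the API):

```csharp
using System;

static class RenderCheck
{
    // Render fails with kAudio_ParamError (-50) when the supplied buffer
    // is smaller than numberFrames * bytesPerFrame for that callback.
    public static bool BufferLargeEnough(uint numberFrames, int bytesPerFrame, int bufferByteSize)
        => bufferByteSize >= (int)numberFrames * bytesPerFrame;

    static void Main()
    {
        // The log in the question: 86 frames * 2 bytes = 172 bytes expected,
        // but only 170 bytes were supplied.
        Console.WriteLine(BufferLargeEnough(86, 2, 170)); // False
        Console.WriteLine(BufferLargeEnough(86, 2, 172)); // True
    }
}
```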

Hi hotpaw2


I am using an 8 kHz sample rate.


I am using Xamarin and C# since I share most of the code with the other platforms (Android and PC).

Anyway, I set up the VoiceProcessingIO Audio Unit using this code:


AudioStreamBasicDescription audioFormat = new AudioStreamBasicDescription()
{
    SampleRate = SAMPLERATE_8000,
    Format = AudioFormatType.LinearPCM,
    FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked,
    FramesPerPacket = 1,
    ChannelsPerFrame = CHANNELS,
    BitsPerChannel = BITS_X_SAMPLE,
    BytesPerPacket = BYTES_X_SAMPLE,
    BytesPerFrame = BYTES_X_FRAME,
    Reserved = 0
};

AudioComponent audioComp = AudioComponent.FindComponent(AudioTypeOutput.VoiceProcessingIO);
AudioUnit.AudioUnit voiceProcessing = new AudioUnit.AudioUnit(audioComp);

AudioUnitStatus unitStatus = AudioUnitStatus.NoError;

unitStatus = voiceProcessing.SetEnableIO(true, AudioUnitScopeType.Input, ELEM_Mic);
if (unitStatus != AudioUnitStatus.NoError) ... // log error

unitStatus = voiceProcessing.SetEnableIO(false, AudioUnitScopeType.Output, ELEM_Speaker);
if (unitStatus != AudioUnitStatus.NoError) ... // log error

unitStatus = voiceProcessing.SetFormat(audioFormat, AudioUnitScopeType.Output, ELEM_Mic);
if (unitStatus != AudioUnitStatus.NoError) ... // log error

unitStatus = voiceProcessing.SetInputCallback(AudioUnit_InputCallback, AudioUnitScopeType.Output, ELEM_Mic);
if (unitStatus != AudioUnitStatus.NoError) ... // log error

// allocate audio buffer
uint maxSamplesXFrame = voiceProcessing.GetMaximumFramesPerSlice();
uint maxBytesXFrame = maxSamplesXFrame * BYTES_X_FRAME;

AudioBuffers audioBuffers = new AudioBuffers(1);
IntPtr micBuffer = Marshal.AllocHGlobal((int) maxBytesXFrame);
audioBuffers.SetData(0, micBuffer, (int) maxBytesXFrame);

voiceProcessing.Initialize();
voiceProcessing.Start();



The callback is:

private AudioUnitStatus AudioUnit_InputCallback(AudioUnitRenderActionFlags actionFlags, AudioTimeStamp timeStamp, uint busNumber, uint numberFrames, AudioUnit.AudioUnit audioUnit)
{
    AudioBuffers micBuffers;
    ....

    // the buffer is filled with the captured frames (numberFrames)
    audioUnit.Render(ref actionFlags, timeStamp, busNumber, numberFrames, micBuffers);


At this point the captured frames are sent over the UDP connection.

I have also tried changing the preferred IO buffer duration. Currently we request a 60 ms buffer duration:


AVAudioSession audioSession = AVAudioSession.SharedInstance();
audioSession.SetPreferredIOBufferDuration(0.06, out error);
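A side note: the session treats this only as a preference, and the hardware may round it to something else. It can be worth reading back the value actually granted after activating the session (a sketch assuming the Xamarin AVFoundation binding; I have not verified it against this exact code):

```csharp
// The preferred duration is only a hint -- the hardware may round it.
// Read back the value actually granted after activating the session.
AVAudioSession audioSession = AVAudioSession.SharedInstance();
audioSession.SetPreferredIOBufferDuration(0.06, out NSError error);
audioSession.SetActive(true);
double granted = audioSession.IOBufferDuration; // actual duration, not necessarily 0.06
```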


And if I change the preferred buffer duration, then of course the "numberFrames" in the callback changes, but I get new errors like the ones reported above:


AUBase.cpp:1474:DoRender: /BuildRoot/Library/Caches/com.apple.xbs/Sources/CoreAudioServices/CoreAudioServices-975.2.5/CoreAudioUtility/Source/CADSP/AUPublic/AUBase/AUBase.cpp:1474 86 frames, 2 bytes/frame, expected 172-byte buffer; ioData.mBuffers[0].mDataByteSize=170; kAudio_ParamError

AUBase.cpp:1554:DoRender: from <private>, render err: -50


Here the frame count (e.g. 86 frames) is the same as the one in my callback, and the system always reports that 2 bytes are missing.


Furthermore, note that 86 frames are only 10.75 ms. I was expecting about 480 frames == 60 ms at 8 kHz.
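The arithmetic behind those two numbers (a quick check; the helper names are mine):

```csharp
using System;

static class FrameMath
{
    // Duration in milliseconds of a frame count at a given sample rate.
    public static double Ms(int frames, int sampleRate)
        => frames * 1000.0 / sampleRate;

    // Frames needed for a duration (in seconds) at a given sample rate.
    public static int Frames(double seconds, int sampleRate)
        => (int)Math.Round(seconds * sampleRate);

    static void Main()
    {
        Console.WriteLine(Ms(86, 8000));        // 10.75 ms per 86-frame callback
        Console.WriteLine(Frames(0.060, 8000)); // 480 frames for 60 ms
    }
}
```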


Thanks for the help