Hi hotpaw2
I am using 8 KHz sample rate.
I am using Xamarin and C# since I share the major part of the code with other platforms (Android and PC).
Anyway, I set up the VoiceProcessingIO Audio Unit using this code:
AudioStreamBasicDescription audioFormat = new AudioStreamBasicDescription()
{
    SampleRate = SAMPLERATE_8000,
    Format = AudioFormatType.LinearPCM,
    FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked,
    FramesPerPacket = 1,
    ChannelsPerFrame = CHANNELS,
    BitsPerChannel = BITS_X_SAMPLE,
    BytesPerPacket = BYTES_X_SAMPLE,
    BytesPerFrame = BYTES_X_FRAME,
    Reserved = 0
};
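(For reference, the constants above are my own; assuming they describe 8 kHz, mono, 16-bit signed linear PCM, they would be something like this sketch:)

```csharp
// Hypothetical definitions of the constants used above,
// assuming 8 kHz, mono, 16-bit signed linear PCM.
const double SAMPLERATE_8000 = 8000.0;
const int CHANNELS = 1;                               // mono
const int BITS_X_SAMPLE = 16;                         // 16-bit samples
const int BYTES_X_SAMPLE = BITS_X_SAMPLE / 8;         // 2 bytes per sample
const int BYTES_X_FRAME = BYTES_X_SAMPLE * CHANNELS;  // 2 bytes per frame (mono)
```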
AudioComponent audioComp = AudioComponent.FindComponent(AudioTypeOutput.VoiceProcessingIO);
AudioUnit.AudioUnit voiceProcessing = new AudioUnit.AudioUnit(audioComp);
AudioUnitStatus unitStatus = AudioUnitStatus.NoError;
unitStatus = voiceProcessing.SetEnableIO(true, AudioUnitScopeType.Input, ELEM_Mic);
if (unitStatus != AudioUnitStatus.NoError) ... // log error
unitStatus = voiceProcessing.SetEnableIO(false, AudioUnitScopeType.Output, ELEM_Speaker);
if (unitStatus != AudioUnitStatus.NoError) ... // log error
unitStatus = voiceProcessing.SetFormat(audioFormat, AudioUnitScopeType.Output, ELEM_Mic);
if (unitStatus != AudioUnitStatus.NoError) ... // log error
unitStatus = voiceProcessing.SetInputCallback(AudioUnit_InputCallback, AudioUnitScopeType.Output, ELEM_Mic);
if (unitStatus != AudioUnitStatus.NoError) ... // log error
// allocate audio buffer
uint maxSamplesXFrame = voiceProcessing.GetMaximumFramesPerSlice();
uint maxBytesXFrame = maxSamplesXFrame * BYTES_X_FRAME;
AudioBuffers audioBuffers = new AudioBuffers(1);
IntPtr micBuffer = Marshal.AllocHGlobal((int) maxBytesXFrame);
audioBuffers.SetData(0, micBuffer, (int) maxBytesXFrame);
voiceProcessing.Initialize();
voiceProcessing.Start();
The callback is:
private AudioUnitStatus AudioUnit_InputCallback(AudioUnitRenderActionFlags actionFlags, AudioTimeStamp timeStamp, uint busNumber, uint numberFrames, AudioUnit.AudioUnit audioUnit)
{
AudioBuffers micBuffers;
....
// the buffer is filled with the captured frames (numberFrames)
audioUnit.Render(ref actionFlags, timeStamp, busNumber, numberFrames, micBuffers);
At this point the captured frames are sent over the UDP connection.
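For clarity, here is a hedged sketch of what the elided callback body might look like; the buffer list is rebuilt on each render with the byte size implied by numberFrames (micBuffer is the AllocHGlobal pointer from the setup code, and BYTES_X_FRAME is assumed to be 2 for 16-bit mono):

```csharp
// Sketch only: one possible shape of the elided callback body.
private AudioUnitStatus AudioUnit_InputCallback(AudioUnitRenderActionFlags actionFlags,
    AudioTimeStamp timeStamp, uint busNumber, uint numberFrames, AudioUnit.AudioUnit audioUnit)
{
    AudioBuffers micBuffers = new AudioBuffers(1);
    // Size the buffer from the frame count the system actually delivers:
    int byteCount = (int)(numberFrames * BYTES_X_FRAME);
    micBuffers.SetData(0, micBuffer, byteCount);
    AudioUnitStatus status = audioUnit.Render(ref actionFlags, timeStamp, busNumber, numberFrames, micBuffers);
    // ... captured frames are then sent over UDP ...
    return status;
}
```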
I have also tried changing the preferred IO buffer duration. Currently we are requesting a 60 ms buffer duration:
AVAudioSession audioSession = AVAudioSession.SharedInstance();
audioSession.SetPreferredIOBufferDuration(0.06, out error);
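Note that the preferred duration is only a hint to the system; a sketch of reading back what was actually granted (property names as in Xamarin.iOS's AVAudioSession binding):

```csharp
AVAudioSession audioSession = AVAudioSession.SharedInstance();
NSError error;
audioSession.SetPreferredIOBufferDuration(0.06, out error);
// The system may grant a different duration and runs the hardware at its own rate:
double actualDuration = audioSession.IOBufferDuration;  // may differ from the requested 0.06 s
double hardwareRate = audioSession.SampleRate;          // typically 44100 or 48000 Hz
Console.WriteLine($"granted {actualDuration * 1000} ms at {hardwareRate} Hz");
```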
And if I change the preferred buffer duration then of course "numberFrames" in the callback changes, but I get new errors like this one:
AUBase.cpp:1474:DoRender: /BuildRoot/Library/Caches/com.apple.xbs/Sources/CoreAudioServices/CoreAudioServices-975.2.5/CoreAudioUtility/Source/CADSP/AUPublic/AUBase/AUBase.cpp:1474 86 frames, 2 bytes/frame, expected 172-byte buffer; ioData.mBuffers[0].mDataByteSize=170; kAudio_ParamError
AUBase.cpp:1554:DoRender: from <private>, render err: -50
where the frame count (e.g. 86 frames) matches the one in my callback, and the system always reports that 2 bytes are missing.
Furthermore, note that 86 frames is only 10.75 ms. I was expecting about 480 frames (60 ms at 8 kHz).
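To make the mismatch concrete, here is the arithmetic behind the numbers in the error message (assuming 8 kHz mono 16-bit, i.e. 2 bytes per frame):

```csharp
// Sanity-check arithmetic for the numbers reported by DoRender:
int frames = 86;
int bytesPerFrame = 2;                          // 16-bit mono
int expectedBytes = frames * bytesPerFrame;     // 172 bytes, as the error expects
int suppliedBytes = 170;                        // ioData.mBuffers[0].mDataByteSize, 2 bytes short
double durationMs = frames * 1000.0 / 8000.0;   // 10.75 ms at 8 kHz
int framesFor60ms = (int)(0.06 * 8000);         // 480 frames expected for a 60 ms buffer
```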
Thanks for the help