How to use AVAudioEngine to play binary PCM data received from the network

I want to feed binary PCM data received from the network into AVAudioEngine as it arrives, but when I receive the data on a background thread and call scheduleBuffer, CPU consumption becomes very high and the playback contains noise. My code is below; how can I optimize it?


- (void)viewDidLoad {
    [super viewDidLoad];

    [self initWebSocket:server_ip];

    self.engine = [[AVAudioEngine alloc] init];
    self.playerNode = [[AVAudioPlayerNode alloc] init];
    [self.engine attachNode:self.playerNode];

    self.format = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                   sampleRate:(double)48000.0
                                                     channels:(AVAudioChannelCount)1
                                                  interleaved:NO];

    AVAudioMixerNode *mainMixer = [self.engine mainMixerNode];
    [self.engine connect:self.playerNode to:mainMixer format:self.format];

    if (!self.engine.isRunning) {
        [self.engine prepare];

        NSError *error;
        BOOL success = [self.engine startAndReturnError:&error];
        NSAssert(success, @"couldn't start engine, %@", [error localizedDescription]);
    }

    [self.playerNode play];
}

- (void)webSocket:(SRWebSocket *)webSocket didReceiveMessage:(id)message {
    NSData *data = message;

    AVAudioPCMBuffer *pcmBuffer = [[AVAudioPCMBuffer alloc]
                                   initWithPCMFormat:self.format
                                       frameCapacity:(uint32_t)(data.length)
                                                     /self.format.streamDescription->mBytesPerFrame];
    pcmBuffer.frameLength = pcmBuffer.frameCapacity;
    [data getBytes:*pcmBuffer.floatChannelData length:data.length];

    [self.playerNode scheduleBuffer:pcmBuffer completionHandler:nil];
}

It looks like the problem is here:


      frameCapacity:(uint32_t)(data.length)


The frame capacity is a number of frames, which are 4 bytes each (for float samples), but data.length is a number of bytes. You're asking for 4x too many frames in your AVAudioPCMBuffer!


It wouldn't really matter that you made the buffer too big, except for this:


    pcmBuffer.frameLength = pcmBuffer.frameCapacity;


You don't actually copy data.length frames, only data.length bytes:


    [data getBytes:*pcmBuffer.floatChannelData length:data.length];


So, most of your AVAudioPCMBuffer contains junk. That's where the "noise" comes from.
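In case it helps, here is a minimal sketch of what I mean, assuming the incoming bytes really are non-interleaved Float32 mono matching your self.format:


    // Derive the frame count from the byte count (a sketch; assumes the
    // network bytes are Float32 mono samples matching self.format).
    UInt32 bytesPerFrame = self.format.streamDescription->mBytesPerFrame; // 4 for Float32 mono
    AVAudioFrameCount frameCount = (AVAudioFrameCount)(data.length / bytesPerFrame);

    AVAudioPCMBuffer *pcmBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:self.format
                                                                frameCapacity:frameCount];
    pcmBuffer.frameLength = frameCount;
    // floatChannelData[0] is channel 0; copy exactly the frames we claimed.
    memcpy(pcmBuffer.floatChannelData[0], data.bytes, frameCount * bytesPerFrame);


That way frameCapacity, frameLength, and the number of bytes actually copied all agree.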

First of all, thank you very much for your reply. I adjusted my code according to the official example. If I follow the official example and load the whole file from local storage in one go and then play it, my audio plays back normally. But if the audio data arrives from the network in real time, for example 1024 bytes at a time, then there is noise. While debugging I found that while data is arriving from the network, CPU usage rises above 80% and the noise appears; I don't know whether it is caused by playback latency. Once the data reception is complete, playback is normal. Is there any documentation or example that demonstrates how to use AVAudioEngine to play real-time audio?

Hmm, I may have misread your original code, assuming you haven't edited it:


frameCapacity:(uint32_t)(data.length)
              /self.format.streamDescription->mBytesPerFrame];


So you are computing the number of frames.


Maybe you need to start looking at overall app performance using Instruments? It's hard to give any particular advice until you can narrow down the possibilities.

For real-time audio output, using Audio Units (RemoteIO on iOS devices) with a circular/ring buffer is likely to work more reliably than trying to schedule buffers on a player node. Audio Unit render callbacks run on a higher-priority real-time thread and schedule themselves, so you don't have to schedule anything; you just have to make sure there is always something in the ring buffer to prevent glitches.
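To illustrate the idea only (the names and sizes below are made up, and production code would more likely use an existing implementation such as TPCircularBuffer), a single-producer/single-consumer ring buffer in plain C can be as small as this:


    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define RING_CAPACITY (48000 * 4) // ~1 s of Float32 mono at 48 kHz, in bytes

    typedef struct {
        uint8_t        bytes[RING_CAPACITY];
        _Atomic size_t head; // total bytes ever written
        _Atomic size_t tail; // total bytes ever read
    } RingBuffer;

    // Producer (network thread): copy in as much as fits, return bytes stored.
    static size_t RingWrite(RingBuffer *rb, const void *src, size_t len) {
        size_t head = atomic_load(&rb->head);
        size_t freeBytes = RING_CAPACITY - (head - atomic_load(&rb->tail));
        if (len > freeBytes) len = freeBytes; // drop the excess instead of growing
        for (size_t i = 0; i < len; i++)
            rb->bytes[(head + i) % RING_CAPACITY] = ((const uint8_t *)src)[i];
        atomic_store(&rb->head, head + len);
        return len;
    }

    // Consumer (render callback): fill dst, zero-padding on underrun.
    static void RingRead(RingBuffer *rb, void *dst, size_t len) {
        size_t tail = atomic_load(&rb->tail);
        size_t avail = atomic_load(&rb->head) - tail;
        size_t n = avail < len ? avail : len;
        for (size_t i = 0; i < n; i++)
            ((uint8_t *)dst)[i] = rb->bytes[(tail + i) % RING_CAPACITY];
        memset((uint8_t *)dst + n, 0, len - n); // silence when we run dry
        atomic_store(&rb->tail, tail + n);
    }


Note that the storage is one fixed-size array, so memory use stays flat; when the network produces faster than the hardware consumes, RingWrite drops the excess instead of letting anything accumulate.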

Hello, following your suggestion I have successfully used an AudioUnit to play the PCM data, and the CPU usage is now very low. However, I found that memory usage grows steadily. As I understand it, a ring buffer should recycle the same memory, so memory usage should fluctuate around a constant level, yet mine grows continuously. My current approach is to allocate an AudioBufferList and then copy data into AudioBufferList->mBuffers[0].mData inside the render callback. My code is below. Is my understanding of ring buffers wrong, or is there somewhere I need to release memory manually? Admittedly, the network delivers data much faster than I play it back, but when I play the same stream with Android's oboe there is no continuous memory growth.


- (void)initPlayer {
    NSError *error = nil;
    OSStatus status = noErr;

    // set audio session
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryPlayback error:&error];

    AudioComponentDescription audioDesc;
    audioDesc.componentType = kAudioUnitType_Output;
    audioDesc.componentSubType = kAudioUnitSubType_RemoteIO;
    audioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    audioDesc.componentFlags = 0;
    audioDesc.componentFlagsMask = 0;

    AudioComponent inputComponent = AudioComponentFindNext(NULL, &audioDesc);
    AudioComponentInstanceNew(inputComponent, &audioUnit);

    // buffer
    buffList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
    buffList->mNumberBuffers = 1;
    buffList->mBuffers[0].mNumberChannels = 1;
    buffList->mBuffers[0].mDataByteSize = CONST_BUFFER_SIZE;
    buffList->mBuffers[0].mData = malloc(CONST_BUFFER_SIZE);

    // audio property
    UInt32 flag = 1;
    if (flag) {
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Output,
                                      OUTPUT_BUS,
                                      &flag,
                                      sizeof(flag));
    }
    if (status) {
        NSLog(@"AudioUnitSetProperty error with status:%d", status);
    }

    // format
    AudioStreamBasicDescription outputFormat;
    memset(&outputFormat, 0, sizeof(outputFormat));
    outputFormat.mSampleRate = 44100;
    outputFormat.mFormatID = kAudioFormatLinearPCM;
    outputFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger;
    outputFormat.mFramesPerPacket = 1;
    outputFormat.mChannelsPerFrame = 1;
    outputFormat.mBytesPerFrame = 2;
    outputFormat.mBytesPerPacket = 2;
    outputFormat.mBitsPerChannel = 16;
    [self printAudioStreamBasicDescription:outputFormat];
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  OUTPUT_BUS,
                                  &outputFormat,
                                  sizeof(outputFormat));
    if (status) {
        NSLog(@"AudioUnitSetProperty error with status:%d", status);
    }

    // callback
    AURenderCallbackStruct playCallback;
    playCallback.inputProc = PlayCallback;
    playCallback.inputProcRefCon = (__bridge void *)self;
    AudioUnitSetProperty(audioUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input,
                         OUTPUT_BUS,
                         &playCallback,
                         sizeof(playCallback));

    OSStatus result = AudioUnitInitialize(audioUnit);
    NSLog(@"result %d", result);
}



static OSStatus PlayCallback(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    LYPlayer *player = (__bridge LYPlayer *)inRefCon;
    uint8_t *out = (uint8_t *)ioData->mBuffers[0].mData;
    int buffSize = 2 * inNumberFrames; // 16-bit mono: 2 bytes per frame
    int offset = 0;                    // write position within the output buffer
    while (buffSize > 0) {
        player->webpp.g_lock.lock();   // take the lock before touching the list
        if (player->webpp.m_list.empty()) {
            player->webpp.g_lock.unlock();
            memset(out + offset, 0, buffSize); // underrun: pad the rest with silence
            return noErr;
        }
        std::string dataString = player->webpp.m_list.front();
        player->webpp.m_list.pop_front();
        int dataSize = std::min(buffSize, (int)dataString.size());
        memcpy(out + offset, dataString.data(), dataSize); // append, don't overwrite
        if ((int)dataString.size() > buffSize) {
            // push the unconsumed tail back for the next callback
            player->webpp.m_list.emplace_front(dataString.substr(dataSize));
        }
        player->webpp.g_lock.unlock();
        offset += dataSize;
        buffSize -= dataSize;
    }
    return noErr;
}
