I've been unable to get AVAudioEngine connect:to:format: to work with an AVAudioFormat created via initWithCommonFormat: using AVAudioPCMFormatInt16. The call always fails with a kAudioUnitErr_FormatNotSupported error:

ERROR: AVAudioNode.mm:521: AUSetFormat: error -10868
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'error -10868'

What do I need to do to play a sound buffer in AVAudioPCMFormatInt16 format using AVAudioEngine?

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
[engine attachNode:player];
AVAudioFormat *format = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatInt16
                                                          sampleRate:22050.0
                                                            channels:1
                                                         interleaved:NO];
AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:format
                                                         frameCapacity:1024];
buffer.frameLength = buffer.frameCapacity;
memset(buffer.int16ChannelData[0], 0, buffer.frameLength * format.streamDescription->mBytesPerFrame); // zero fill
AVAudioMixerNode *mainMixer = [engine mainMixerNode];
// The following line results in a kAudioUnitErr_FormatNotSupported -10868 error
[engine connect:player to:mainMixer format:buffer.format];
[player scheduleBuffer:buffer completionHandler:nil];
NSError *error;
[engine startAndReturnError:&error];
[player play];

As background, my app needs to queue audio buffers generated by third-party software so that they play sequentially. The buffers play fine (individually) using AVAudioPlayer. The AVAudioFormat settings in the code above come from inspecting the AVAudioPlayer settings property while playing one of the generated buffers. I am new to Core Audio and AVAudioEngine.
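One approach I'm considering (an untested sketch, assuming AVAudioConverter, which requires iOS 9 / OS X 10.11 or later): connect the player to the mixer using an equivalent Float32 format, then convert each Int16 buffer before scheduling it. The names below reuse the variables from the code above.

AVAudioFormat *floatFormat =
    [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                     sampleRate:format.sampleRate
                                       channels:format.channelCount
                                    interleaved:NO];
// Connect with a float format; the mixer handles any sample rate conversion.
[engine connect:player to:mainMixer format:floatFormat];

// Same sample rate and channel count, so convertToBuffer:fromBuffer:error:
// only has to change the sample format (Int16 -> Float32).
AVAudioConverter *converter = [[AVAudioConverter alloc] initFromFormat:format
                                                              toFormat:floatFormat];
AVAudioPCMBuffer *floatBuffer =
    [[AVAudioPCMBuffer alloc] initWithPCMFormat:floatFormat
                                  frameCapacity:buffer.frameCapacity];
NSError *conversionError = nil;
if ([converter convertToBuffer:floatBuffer fromBuffer:buffer error:&conversionError]) {
    [player scheduleBuffer:floatBuffer completionHandler:nil];
}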
Which session does "- While Yusef helped us add some style to our user interfaces." refer to? It sounds interesting, and I'd like to watch the video.
In the Settings app, when I go to Accessibility, the only item listed in the SPEECH section is Live Speech. Can I create a Personal Voice on an iPad?
I am using an iPad 8th gen (32 GB) running iPadOS 17 beta 2 (17.0, 21A5268h).
My iPad 8th gen running iPadOS 17.0 RC does not list Personal Voice in the Accessibility settings. Is Personal Voice creation supported on iPad? If so, which iPad models support it?
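For anyone who wants to check from code rather than Settings, here is a minimal sketch (assuming the iOS 17 AVSpeechSynthesizer Personal Voice APIs) that requests authorization and lists any personal voices; a status of AVSpeechSynthesisPersonalVoiceAuthorizationStatusUnsupported would suggest the device cannot use the feature:

#import <AVFoundation/AVFoundation.h>

// Request Personal Voice authorization, then list any personal voices the
// user has created and allowed this app to use (iOS 17 or later).
[AVSpeechSynthesizer requestPersonalVoiceAuthorizationWithCompletionHandler:
        ^(AVSpeechSynthesisPersonalVoiceAuthorizationStatus status) {
    if (status == AVSpeechSynthesisPersonalVoiceAuthorizationStatusAuthorized) {
        for (AVSpeechSynthesisVoice *voice in [AVSpeechSynthesisVoice speechVoices]) {
            if (voice.voiceTraits & AVSpeechSynthesisVoiceTraitIsPersonalVoice) {
                NSLog(@"Personal voice: %@", voice.name);
            }
        }
    } else {
        NSLog(@"Personal Voice authorization status: %ld", (long)status);
    }
}];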
Very excited about the new eye tracking in iPadOS and iOS 18. Some general eye tracking questions.
Does the initial iPadOS 18 beta include eye tracking? If not, in which beta will it be included?
Do developers need to do anything to their app for users to control their app using eye tracking?
Will all standard UIKit and SwiftUI views and controls work with eye tracking without code changes?
Will custom subclasses of UIControl work with eye tracking without code changes?
Looking forward to testing eye tracking.
Is there a way for developers to generate IPA notation from user voice input like in the Settings app (Accessibility > VoiceOver > Speech > Pronunciations)?
Thought this might be a useful option for AAC apps.
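For context, here's what I'd feed the IPA into: AVSpeechUtterance accepts an IPA pronunciation through an attributed string. A small sketch using the documented AVSpeechSynthesisIPANotationAttribute key (the IPA value here is just an illustration):

#import <AVFoundation/AVFoundation.h>

// Speak "tomato" with a custom IPA pronunciation supplied via an
// attributed string (the IPA string below is illustrative).
NSMutableAttributedString *text =
    [[NSMutableAttributedString alloc] initWithString:@"Hello tomato"];
[text addAttribute:AVSpeechSynthesisIPANotationAttribute
             value:@"tə.ˈme͡ɪ.do͡ʊ"
             range:[text.string rangeOfString:@"tomato"]];

AVSpeechUtterance *utterance =
    [[AVSpeechUtterance alloc] initWithAttributedString:text];
utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];

AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
// Keep a strong reference to the synthesizer in real code; a local variable
// can be released before speech finishes.
[synthesizer speakUtterance:utterance];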