Posts

Post not yet marked as solved
0 Replies
485 Views
I'm encountering a strange, intermittent error thrown by AVAudioFile.write(from:). The error domain is com.apple.coreaudio.avfaudio and the code is -40. According to this thread, the error means "The Audio Buffer has Become Miscoded", though I'm not sure what that actually signifies.

The error manifests while my app is rendering audio from an offline AVAudioEngine to a file, which happens on a serial dispatch queue. It is not deterministic: most of the time it doesn't happen at all, and when it does, my app automatically retries and the rendering usually succeeds after one or more attempts. Occasionally the error keeps recurring until the app is terminated and relaunched.

The file I am writing to is created using the fileFormat.settings of an audio file I am reading from, and the PCM buffer is created using that file's processingFormat, so the two should always be compatible. In practice the format is always the same: stereo deinterleaved PCM, 192 kHz, 32-bit float, in a CAF container.

I have also noticed that since iOS 16.0 there seems to be a bug affecting writing with AVAudioFile: the framePosition property of the file being written to does not update immediately upon calling AVAudioFile.write(from:) to reflect the most recent write. I have stopped relying on that property when writing to files, but I wonder if these issues are somehow related ... (possibly, but not necessarily...)

Any help or advice much appreciated.

~ Milo
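For reference, a minimal sketch of this kind of setup, assuming the standard AVAudioEngine offline manual-rendering pattern; sourceURL, outputURL, and the engine graph are illustrative placeholders, not taken from the post:

import AVFoundation

let sourceFile = try AVAudioFile(forReading: sourceURL)

// Output file created from the source's container settings; the client
// (processing) format stays deinterleaved 32-bit float PCM.
let outputFile = try AVAudioFile(forWriting: outputURL,
                                 settings: sourceFile.fileFormat.settings,
                                 commonFormat: .pcmFormatFloat32,
                                 interleaved: false)

let engine = AVAudioEngine()
// ... attach and connect player/effect nodes here ...
try engine.enableManualRenderingMode(.offline,
                                     format: sourceFile.processingFormat,
                                     maximumFrameCount: 4096)
try engine.start()

let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                              frameCapacity: engine.manualRenderingMaximumFrameCount)!

while engine.manualRenderingSampleTime < sourceFile.length {
    let remaining = sourceFile.length - engine.manualRenderingSampleTime
    let frames = min(AVAudioFrameCount(remaining), buffer.frameCapacity)
    if try engine.renderOffline(frames, to: buffer) == .success {
        try outputFile.write(from: buffer) // the call that intermittently throws -40
    }
}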
Posted by Nutter.
Post not yet marked as solved
1 Reply
1.3k Views
I'd like to use AVAudioConverter to convert audio captured from the microphone to μLaw. Unfortunately, when I try to create an output buffer to convert into, I get an exception. I've tried both AVAudioPCMBuffer and AVAudioCompressedBuffer, and neither works for me. Is this supposed to work? Thanks!

let format = AVAudioFormat(settings: [AVFormatIDKey: NSNumber(value: kAudioFormatULaw),
                                      AVSampleRateKey: 8000,
                                      AVNumberOfChannelsKey: 1])!

// Crashes: required condition is false: isPCMFormat
let pcmBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 1000)

// Crashes: required condition is false: !(fmt.IsPCM() || fmt.mFormatID == kAudioFormatALaw || fmt.mFormatID == kAudioFormatULaw)
let compressedBuffer = AVAudioCompressedBuffer(format: format, packetCapacity: 1000)
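The second assertion message itself shows that AVAudioCompressedBuffer deliberately rejects μLaw and aLaw (along with PCM), so neither buffer type will accept this format directly. For comparison, here is the general AVAudioConverter pattern with a compressed output format the buffer does accept; a hedged sketch using AAC as a stand-in, with all sizes and formats illustrative rather than taken from the post, and not a solution for μLaw itself:

import AVFoundation

let inputFormat = AVAudioFormat(standardFormatWithSampleRate: 8000, channels: 1)!

// Describe an AAC output stream; AAC appears here only because
// AVAudioCompressedBuffer accepts it, unlike uLaw/aLaw.
var aacDesc = AudioStreamBasicDescription(mSampleRate: 8000,
                                          mFormatID: kAudioFormatMPEG4AAC,
                                          mFormatFlags: 0,
                                          mBytesPerPacket: 0,
                                          mFramesPerPacket: 1024,
                                          mBytesPerFrame: 0,
                                          mChannelsPerFrame: 1,
                                          mBitsPerChannel: 0,
                                          mReserved: 0)
let outputFormat = AVAudioFormat(streamDescription: &aacDesc)!
let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!

let inputBuffer = AVAudioPCMBuffer(pcmFormat: inputFormat, frameCapacity: 8192)!
inputBuffer.frameLength = inputBuffer.frameCapacity // pretend this holds captured audio

let outputBuffer = AVAudioCompressedBuffer(format: outputFormat,
                                           packetCapacity: 8,
                                           maximumPacketSize: converter.maximumOutputPacketSize)

var consumed = false
var error: NSError?
let status = converter.convert(to: outputBuffer, error: &error) { _, inputStatus in
    // Feed the input buffer exactly once, then report no more data.
    if consumed {
        inputStatus.pointee = .noDataNow
        return nil
    }
    consumed = true
    inputStatus.pointee = .haveData
    return inputBuffer
}
print(status.rawValue, outputBuffer.packetCount, error as Any)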
Posted by Nutter.
Post not yet marked as solved
0 Replies
439 Views
Hi, I have a couple of doubts about the API for installing a tap on an AVAudioEngine node.

1) Does the timestamp in the callback refer to the time at which the buffer starts? And is it right that I should compensate for input latency if I want to calculate the host time corresponding to these samples as accurately as possible? At the moment I'm doing this calculation to find the host time corresponding to the first sample in the buffer:

let inputLatency = AVAudioSession.sharedInstance().inputLatency
input.installTap(onBus: 0, bufferSize: AVAudioFrameCount(sampleRate / 10), format: nil) { buffer, timestamp in
    let bufferStart = AVAudioTime.seconds(forHostTime: timestamp.hostTime) - inputLatency
}

2) The documentation states that the callback may happen off the main thread, and in practice this always seems to be the case. Could there be any negative consequences to performing signal processing on the thread on which the callback occurs? Or is it essentially a serial queue set up just for this tap? Obviously the safest thing would be to dispatch straight away to my own context, but is that actually necessary in practice?

Thanks!

~Milo
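For what it's worth, the conservative option from (2) might look like the sketch below. It assumes the tap's buffer can be retained past the callback, and process(_:at:) is a hypothetical handler, not something from the post:

import AVFoundation

// Illustrative: a private serial queue dedicated to this tap's DSP work.
let processingQueue = DispatchQueue(label: "com.example.tap-processing")

input.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, when in
    // Hop off the callback thread immediately; assumes the buffer
    // remains valid once retained by this closure.
    processingQueue.async {
        process(buffer, at: when) // hypothetical signal-processing entry point
    }
}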
Posted by Nutter.