Sometimes when I call AudioWorkIntervalCreate, the call hangs with the following stack trace. The call is made on the main thread.
mach_msg2_trap 0x00007ff801f0b3ce
mach_msg2_internal 0x00007ff801f19d80
mach_msg_overwrite 0x00007ff801f12510
mach_msg 0x00007ff801f0b6bd
HALC_Object_AddPropertyListener 0x00007ff8049ea43e
HALC_ProxyObject::HALC_ProxyObject(unsigned int, unsigned int, unsigned int, unsigned int) 0x00007ff8047f97f2
HALC_ProxyObjectMap::_CreateObject(unsigned int, unsigned int, unsigned int, unsigned int) 0x00007ff80490f69c
HALC_ProxyObjectMap::CopyObjectByObjectID(unsigned int) 0x00007ff80490ecd6
HALC_ShellPlugIn::_ReconcileDeviceList(bool, bool, std::__1::vector<unsigned int, std::__1::allocator<unsigned int>>&, std::__1::vector<unsigned int, std::__1::allocator<unsigned int>>&) 0x00007ff8045d68cf
HALB_CommandGate::ExecuteCommand(void () block_pointer) const 0x00007ff80492ed14
HALC_ShellObject::ExecuteCommand(void () block_pointer) const 0x00007ff80470f554
HALC_ShellPlugIn::ReconcileDeviceList(bool, bool) 0x00007ff8045d6414
HALC_ShellPlugIn::ConnectToServer() 0x00007ff8045d74a4
HAL_HardwarePlugIn_InitializeWithObjectID(AudioHardwarePlugInInterface**, unsigned int) 0x00007ff8045da256
HALPlugInManagement::CreateHALPlugIn(HALCFPlugIn const*) 0x00007ff80442f828
HALSystem::InitializeDevices() 0x00007ff80442ebc3
HALSystem::CheckOutInstance() 0x00007ff80442b696
AudioObjectAddPropertyListener_mac_imp 0x00007ff80469b431
auoop::WorkgroupManager_macOS::WorkgroupManager_macOS() 0x00007ff8040fc3d5
auoop::gWorkgroupManager() 0x00007ff8040fc245
AudioWorkIntervalCreate 0x00007ff804034a33
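The trace shows the call blocking inside the HAL's first connection to coreaudiod (HALSystem::CheckOutInstance), which AudioWorkIntervalCreate triggers lazily on first use. A possible mitigation, assuming the hang only affects that first HAL connection, is to warm up the HAL from a background queue at startup so the blocking mach_msg round-trip never happens on the main thread. This is only a sketch; the property queried here is arbitrary, and any call that forces HAL initialization should behave the same way:
import CoreAudio
import Foundation

func warmUpCoreAudioHAL() {
    DispatchQueue.global(qos: .userInitiated).async {
        // Any property read on the system object forces HALSystem::CheckOutInstance,
        // so a later AudioWorkIntervalCreate finds the HAL already initialized.
        var address = AudioObjectPropertyAddress(
            mSelector: kAudioHardwarePropertyDefaultOutputDevice,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var deviceID = AudioObjectID(kAudioObjectUnknown)
        var size = UInt32(MemoryLayout<AudioObjectID>.size)
        let status = AudioObjectGetPropertyData(
            AudioObjectID(kAudioObjectSystemObject), &address, 0, nil, &size, &deviceID)
        if status != noErr {
            print("HAL warm-up failed with status \(status)")
        }
    }
}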
Hey all!
I'm building a Camera app using AVFoundation, and I am using the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I am doing some processing in between.)
When writing the audio CMSampleBuffers to the AVAssetWriter, I noticed that, compared to the stock iOS camera app, the recording is mono, not stereo.
I wonder how recording in stereo works here. Are there any guides or documentation available for that?
Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently?
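For what it's worth, one way to see whether the capture side already delivers stereo is to inspect the format description of the audio buffers in the data-output callback: a stereo stream from AVCaptureAudioDataOutput arrives as a single CMSampleBuffer whose stream description reports two channels, so there is nothing extra to synchronize. A minimal sketch, using the standard AVCaptureAudioDataOutputSampleBufferDelegate method:
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard output is AVCaptureAudioDataOutput,
          let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription)?.pointee else {
        return
    }
    // mChannelsPerFrame == 2 means the buffer itself is already stereo.
    print("channels: \(asbd.mChannelsPerFrame), sample rate: \(asbd.mSampleRate)")
}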
This is my Audio Session code:
func configureAudioSession(configuration: CameraConfiguration) throws {
    ReactLogger.log(level: .info, message: "Configuring Audio Session...")
    // Prevent iOS from automatically configuring the Audio Session for us
    audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
    let enableAudio = configuration.audio != .disabled

    // Check microphone permission
    if enableAudio {
        let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
        if audioPermissionStatus != .authorized {
            throw CameraError.permission(.microphone)
        }
    }

    // Remove all current inputs
    for input in audioCaptureSession.inputs {
        audioCaptureSession.removeInput(input)
    }
    audioDeviceInput = nil

    // Audio Input (Microphone)
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio input...")
        guard let microphone = AVCaptureDevice.default(for: .audio) else {
            throw CameraError.device(.microphoneUnavailable)
        }
        let input = try AVCaptureDeviceInput(device: microphone)
        guard audioCaptureSession.canAddInput(input) else {
            throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
        }
        audioCaptureSession.addInput(input)
        audioDeviceInput = input
    }

    // Remove all current outputs
    for output in audioCaptureSession.outputs {
        audioCaptureSession.removeOutput(output)
    }
    audioOutput = nil

    // Audio Output
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio Data output...")
        let output = AVCaptureAudioDataOutput()
        guard audioCaptureSession.canAddOutput(output) else {
            throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
        }
        output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
        audioCaptureSession.addOutput(output)
        audioOutput = output
    }
}
This is how I activate the audio session just before I start recording:
let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers,
                                          .allowBluetoothA2DP,
                                          .defaultToSpeaker,
                                          .allowAirPlay])
if #available(iOS 14.5, *) {
    // prevents the audio session from being interrupted by a phone call
    try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}
if #available(iOS 13.0, *) {
    // allow system sounds (notifications, calls, music) to play while recording
    try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}
audioCaptureSession.startRunning()
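As far as I can tell, the built-in microphones only deliver stereo once the audio session is asked for a stereo polar pattern on the built-in mic's data source (iOS 14+, and only on devices whose data sources list .stereo in supportedPolarPatterns). A sketch of what that could look like, placed before startRunning(); the input orientation value is just a placeholder:
// Hypothetical stereo setup: request a stereo polar pattern on the built-in mic.
if let builtInMic = audioSession.availableInputs?.first(where: { $0.portType == .builtInMic }),
   let stereoSource = builtInMic.dataSources?.first(where: { $0.supportedPolarPatterns?.contains(.stereo) == true }) {
    try stereoSource.setPreferredPolarPattern(.stereo)
    try builtInMic.setPreferredDataSource(stereoSource)
    try audioSession.setPreferredInput(builtInMic)
    // The stereo image follows device orientation; .portrait is just an example.
    try audioSession.setPreferredInputOrientation(.portrait)
}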
And this is how I set up the AVAssetWriter:
let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)
ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")
The rest is straightforward: I receive the audio CMSampleBuffers in my delegate's callback and write them to the audioWriter, and they end up in the .mov file - but the audio is mono, not stereo.
Is there anything I'm missing here?
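In case the capture side does produce two channels but the file still comes out mono, the writer input can also be given explicit stereo output settings instead of relying on recommendedAudioSettingsForAssetWriter. A sketch, reusing the format hint from the snippet above; the sample rate and bit rate are arbitrary examples:
// Hypothetical explicit stereo settings for the audio AVAssetWriterInput.
var stereoLayout = AudioChannelLayout()
stereoLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo
let stereoSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 2,
    AVEncoderBitRateKey: 128_000,
    AVChannelLayoutKey: Data(bytes: &stereoLayout, count: MemoryLayout<AudioChannelLayout>.size)
]
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: stereoSettings, sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)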
I'm attempting to record from a device's microphone (under iOS) using AVAudioRecorder. The examples are all quite simple, and I'm following the same method. But I'm getting error messages on attempts to record, and the resulting M4A file (after several seconds of recording) is only 552 bytes long and won't load. Here's the recorder usage:
func startRecording()
{
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
        recorder?.delegate = self
        recorder!.record()
        recording = true
    }
    catch
    {
        recording = false
        recordingFinished(success: false)
    }
}
The immediate sign of trouble appears to be the following, in the console. Note the 0 bits per channel and irrelevant 8K sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 8000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 8000 Hz, Int16
A subsequent attempt to load the file into AVAudioPlayer results in:
MP4_BoxParser.cpp:1089   DataSource read failed
MP4AudioFile.cpp:4365    MP4Parser_PacketProvider->GetASBD() failed
AudioFileObject.cpp:105  OpenFromDataSource failed
AudioFileObject.cpp:80   Open failed
But that's not surprising given that it's only 500+ bytes and we had the earlier error. Anybody have an idea here? Every example on the Web shows essentially this exact method.
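One thing the snippet above never does is configure and activate an AVAudioSession before recording. On iOS, AVAudioRecorder only gets a usable input format once the shared session uses a record-capable category and is active; a source format of 0 ch / 0 bits per channel, as in the log above, is what typically shows up when that step is missing. A sketch of the kind of setup I mean, run inside the do block before record() (the category and mode are just examples):
// Hypothetical session setup before creating/starting the recorder.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .default)
try session.setActive(true)

recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
recorder?.delegate = self
recorder?.prepareToRecord()   // creates the file and the format converter up front
recorder?.record()
recording = true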
I've also tried constructing the recorder with
let audioFormat = AVAudioFormat.init(standardFormatWithSampleRate: 44100, channels: 1)
if audioFormat == nil
{
    print("Audio format failed.")
}
else
{
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), format: audioFormat!)
        ...
with mostly the same result. In that case the instantiation error message was the following, which at least mentions the requested sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 44100 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 44100 Hz, Int32
How do I add AudioToolbox to a project in Xcode 15.2?