With the AudioServicesPlaySystemSound function in AudioToolbox, you can pass a SystemSoundID to play one of the sound effects that ship with the system. However, I can't tell which sound effect each number corresponds to, so I'd like a list of all the sound effects in visionOS and their corresponding SystemSoundIDs.
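There is no public table that maps SystemSoundID values to specific sounds, so one practical approach is to audition IDs by ear. A minimal sketch; the ID below is only an example value to try, since the valid range on visionOS is not documented:

import AudioToolbox

// Play a built-in system sound by ID. Which IDs exist on visionOS is undocumented,
// so 1057 here is just an example value to audition, not a known mapping.
func audition(_ id: SystemSoundID) {
    AudioServicesPlaySystemSound(id)
}

audition(1057)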
AudioToolbox
Record or play audio, convert formats, parse audio streams, and configure your audio session using AudioToolbox.
Posts under AudioToolbox tag
At a 48 kHz sample rate, Vision Pro cannot capture recorded content in the 16 kHz to 24 kHz range. Why? Or can you tell me how to configure it so that it can?
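A note on one likely cause: when the input path runs with voice processing or a similar "default" mode, the system itself rolls off content above roughly 16 kHz regardless of the 48 kHz sample rate. A minimal sketch of requesting an unprocessed, full-rate input via AVAudioSession; this assumes an AVAudioSession-based capture path, and whether visionOS actually delivers content above 16 kHz is ultimately up to the platform:

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    // .measurement asks the system to minimize its own input processing,
    // which is a common cause of the ~16 kHz roll-off on built-in mics.
    try session.setCategory(.playAndRecord, mode: .measurement, options: [])
    try session.setPreferredSampleRate(48_000)   // request 48 kHz end to end
    try session.setActive(true)
    print("Actual sample rate:", session.sampleRate)
} catch {
    print("Audio session configuration failed:", error)
}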
Hi,
we have multiple threads in our CoreAudio server plugin carrying out necessary asynchronous work (namely handling USB callbacks and shuffling the required data to the IO).
Although these threads have been set up with the appropriate THREAD_TIME_CONSTRAINT_POLICY (which does help), on M-series processors we still see an extremely high, non-realtime amount of jitter of more than 10 ms(!).
So either the run-loop notification from the USB stack arrives that late, or the thread driving the run loop hasn't been set up to handle the callbacks in a timely manner.
Since Audio Unit threads that have to meet frame deadlines can join the audio device's workgroup, is there a similar option for CoreAudio server plug-in threads? And if so, how should they be set up correctly?
Thanks for any hints! Or pointing me to the docs :)
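For reference, the pattern on the Audio Unit side is to fetch the device's IO-thread workgroup and join the real-time thread to it; whether a server plug-in's own threads may do the same is exactly the open question here. A rough sketch of that pattern (macOS 11 or later; the Swift bridging of the os_workgroup C API and the property fetch below are assumptions, and error handling is omitted):

import AudioToolbox
import os

// Sketch only: join the calling worker thread to the audio device's IO workgroup,
// do the deadline-driven work, then leave again.
func joinDeviceWorkgroup(deviceID: AudioObjectID) {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyIOThreadOSWorkgroup,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    var workgroup: OSWorkgroup?
    var size = UInt32(MemoryLayout<OSWorkgroup?>.size)
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &workgroup) == noErr,
          let wg = workgroup else { return }

    var token = os_workgroup_join_token_s()
    guard os_workgroup_join(wg, &token) == 0 else { return }
    // ... perform the work for this IO cycle ...
    os_workgroup_leave(wg, &token)
}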
Hi,
I'm hitting a trap (crash). Please check the stack trace below; how do I fix this?
regards, Joël
stack-trace with ExtAudioFileWrite
When I connect my MacBook to my living room AirPort (older-gen wall wart) via the Music app, the music output in both rooms is synced.
When I try to set up a Multi-Output Device in Audio MIDI Setup, I can't get them synced. I'm outputting to the same devices, they're all on the same sample rate, and I've played with the various settings (Primary Clock Source and Drift Sync). What gives? How are these connections different?
Intel MacBook Pro 2018 running Sonoma 14.5
Sometimes when I call AudioWorkIntervalCreate, the call hangs with the following stack trace. The call is made on the main thread.
mach_msg2_trap 0x00007ff801f0b3ce
mach_msg2_internal 0x00007ff801f19d80
mach_msg_overwrite 0x00007ff801f12510
mach_msg 0x00007ff801f0b6bd
HALC_Object_AddPropertyListener 0x00007ff8049ea43e
HALC_ProxyObject::HALC_ProxyObject(unsigned int, unsigned int, unsigned int, unsigned int) 0x00007ff8047f97f2
HALC_ProxyObjectMap::_CreateObject(unsigned int, unsigned int, unsigned int, unsigned int) 0x00007ff80490f69c
HALC_ProxyObjectMap::CopyObjectByObjectID(unsigned int) 0x00007ff80490ecd6
HALC_ShellPlugIn::_ReconcileDeviceList(bool, bool, std::__1::vector<unsigned int, std::__1::allocator<unsigned int>>&, std::__1::vector<unsigned int, std::__1::allocator<unsigned int>>&) 0x00007ff8045d68cf
HALB_CommandGate::ExecuteCommand(void () block_pointer) const 0x00007ff80492ed14
HALC_ShellObject::ExecuteCommand(void () block_pointer) const 0x00007ff80470f554
HALC_ShellPlugIn::ReconcileDeviceList(bool, bool) 0x00007ff8045d6414
HALC_ShellPlugIn::ConnectToServer() 0x00007ff8045d74a4
HAL_HardwarePlugIn_InitializeWithObjectID(AudioHardwarePlugInInterface**, unsigned int) 0x00007ff8045da256
HALPlugInManagement::CreateHALPlugIn(HALCFPlugIn const*) 0x00007ff80442f828
HALSystem::InitializeDevices() 0x00007ff80442ebc3
HALSystem::CheckOutInstance() 0x00007ff80442b696
AudioObjectAddPropertyListener_mac_imp 0x00007ff80469b431
auoop::WorkgroupManager_macOS::WorkgroupManager_macOS() 0x00007ff8040fc3d5
auoop::gWorkgroupManager() 0x00007ff8040fc245
AudioWorkIntervalCreate 0x00007ff804034a33
Hey all!
I'm building a Camera app using AVFoundation, and I am using the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I am doing some processing in between.)
When recording the audio CMSampleBuffers to the AVAssetWriter, I noticed that compared to the stock iOS camera app, my recordings are mono, not stereo.
I wonder how recording stereo audio works; are there any guides or documentation available for that?
Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently?
This is my Audio Session code:
func configureAudioSession(configuration: CameraConfiguration) throws {
  ReactLogger.log(level: .info, message: "Configuring Audio Session...")
  // Prevent iOS from automatically configuring the Audio Session for us
  audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
  let enableAudio = configuration.audio != .disabled

  // Check microphone permission
  if enableAudio {
    let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
    if audioPermissionStatus != .authorized {
      throw CameraError.permission(.microphone)
    }
  }

  // Remove all current inputs
  for input in audioCaptureSession.inputs {
    audioCaptureSession.removeInput(input)
  }
  audioDeviceInput = nil

  // Audio Input (Microphone)
  if enableAudio {
    ReactLogger.log(level: .info, message: "Adding Audio input...")
    guard let microphone = AVCaptureDevice.default(for: .audio) else {
      throw CameraError.device(.microphoneUnavailable)
    }
    let input = try AVCaptureDeviceInput(device: microphone)
    guard audioCaptureSession.canAddInput(input) else {
      throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
    }
    audioCaptureSession.addInput(input)
    audioDeviceInput = input
  }

  // Remove all current outputs
  for output in audioCaptureSession.outputs {
    audioCaptureSession.removeOutput(output)
  }
  audioOutput = nil

  // Audio Output
  if enableAudio {
    ReactLogger.log(level: .info, message: "Adding Audio Data output...")
    let output = AVCaptureAudioDataOutput()
    guard audioCaptureSession.canAddOutput(output) else {
      throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
    }
    output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
    audioCaptureSession.addOutput(output)
    audioOutput = output
  }
}
This is how I activate the audio session just before I start recording:
let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers,
                                          .allowBluetoothA2DP,
                                          .defaultToSpeaker,
                                          .allowAirPlay])
if #available(iOS 14.5, *) {
  // prevents the audio session from being interrupted by a phone call
  try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}
if #available(iOS 13.0, *) {
  // allow system sounds (notifications, calls, music) to play while recording
  try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}
audioCaptureSession.startRunning()
And this is how I set up the AVAssetWriter:
let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)
ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")
The rest is trivial - I receive CMSampleBuffers of the audio in my delegate's callback, write them to the audioWriter, and it ends up in the .mov file - but it is not stereo, it's mono.
Is there anything I'm missing here?
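One likely missing piece is the input configuration: on devices that support it, the built-in mics only deliver stereo if you select a data source whose polar pattern is set to stereo and give the session an input orientation. A hedged sketch of that setup (iOS 14 or later; how it interacts with automaticallyConfiguresApplicationAudioSession = false is an assumption to verify):

import AVFoundation

// Sketch: configure the shared audio session for stereo capture from the built-in mics.
func enableStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()

    // Find the built-in microphone input.
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) else {
        return
    }

    // Pick a data source that supports the stereo polar pattern.
    guard let dataSource = builtInMic.dataSources?.first(where: {
        $0.supportedPolarPatterns?.contains(.stereo) == true
    }) else {
        return
    }

    try dataSource.setPreferredPolarPattern(.stereo)
    try builtInMic.setPreferredDataSource(dataSource)
    try session.setPreferredInput(builtInMic)

    // Tell the session how the device is held so left and right are mapped correctly.
    try session.setPreferredInputOrientation(.portrait)
}

As far as I understand, a stereo frame still arrives as a single CMSampleBuffer whose format description simply reports two channels, so no extra synchronization should be needed; the AVAssetWriterInput setup can stay the same as long as the recommended settings report two channels.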
I'm attempting to record from a device's microphone (under iOS) using AVAudioRecorder. The examples are all quite simple, and I'm following the same method. But I'm getting error messages on attempts to record, and the resulting M4A file (after several seconds of recording) is only 552 bytes long and won't load. Here's the recorder usage:
func startRecording()
{
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
        recorder?.delegate = self
        recorder!.record()
        recording = true
    }
    catch
    {
        recording = false
        recordingFinished(success: false)
    }
}
The immediate sign of trouble appears to be the following, in the console. Note the 0 bits per channel and irrelevant 8K sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 8000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 8000 Hz, Int16
A subsequent attempt to load the file into AVAudioPlayer results in:
MP4_BoxParser.cpp:1089 DataSource read failed MP4AudioFile.cpp:4365 MP4Parser_PacketProvider->GetASBD() failed AudioFileObject.cpp:105 OpenFromDataSource failed AudioFileObject.cpp:80 Open failed
But that's not surprising given that it's only 500+ bytes and we had the earlier error. Anybody have an idea here? Every example on the Web shows essentially this exact method.
I've also tried constructing the recorder with
let audioFormat = AVAudioFormat.init(standardFormatWithSampleRate: 44100, channels: 1)
if audioFormat == nil
{
    print("Audio format failed.")
}
else
{
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), format: audioFormat!)
        ...
with mostly the same result. In that case the instantiation error message was the following, which at least mentions the requested sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 44100 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 44100 Hz, Int32
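One thing the common examples leave out, and which would match the "0 ch, 8000 Hz" source format in these logs, is the audio session: if the shared AVAudioSession is not in a record-capable category and active before record() is called, the recorder can come up with no usable input. A sketch of what to check first; this is an assumed cause, not a confirmed fix:

import AVFoundation

// Sketch: put the shared audio session into a record-capable state before creating
// the AVAudioRecorder. In the default .soloAmbient category the recorder may see
// no usable input format. Also make sure microphone permission has been granted
// (AVAudioSession.requestRecordPermission) before the first recording attempt.
func prepareSessionForRecording() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)
}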
I'm trying to add a USB mic to my Mac mini running the latest Sonoma software, but the audio is full of crackles. Why isn't it clean?
I'm developing an iOS application that uses Core Audio. When I run the app on an Apple Silicon MacBook, the first time I call AudioUnitSetProperty the following error is logged:
CARP violation: using HAL semantics (AUIOImpl_Base)
Are others seeing this, and is it part of the normal process?
I'm also getting AQMEIO_HAL.cpp:862 kAudioDevicePropertyMute returned err 2003332927 when I set kAudioOutputUnitProperty_EnableIO for input.
There is a CustomPlayer class that uses an MTAudioProcessingTap internally to modify the audio buffer.
Say there are two instances, A and B, of the CustomPlayer class.
While A and B are both running, the moment A finishes its work and its instance is deallocated, B's MTAudioProcessingTap process callback stops and its finalize callback fires, even though B still has work left to do.
With the same code and the same project on iOS 17.0 or lower this does not happen: when A is terminated, B completes its task without any impact.
What change in iOS 17.1 is causing this behavior? I'd appreciate an answer on how to avoid the issue.
let audioMix = AVMutableAudioMix()
var audioMixParameters: [AVMutableAudioMixInputParameters] = []

try composition.tracks(withMediaType: .audio).forEach { track in
    let inputParameter = AVMutableAudioMixInputParameters(track: track)
    inputParameter.trackID = track.trackID

    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: UnsafeMutableRawPointer(
            Unmanaged.passRetained(clientInfo).toOpaque()
        ),
        init: { tap, clientInfo, tapStorageOut in
            tapStorageOut.pointee = clientInfo
        },
        finalize: { tap in
            Unmanaged<ClientInfo>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
        },
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
            var timeRange = CMTimeRange.zero
            let status = MTAudioProcessingTapGetSourceAudio(tap,
                                                            numberFrames,
                                                            bufferListInOut,
                                                            flagsOut,
                                                            &timeRange,
                                                            numberFramesOut)
            if noErr == status {
                ....
            }
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault,
                                            &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects,
                                            &tap)
    guard noErr == status else {
        return
    }

    inputParameter.audioTapProcessor = tap?.takeUnretainedValue()
    audioMixParameters.append(inputParameter)
    tap?.release()
}

audioMix.inputParameters = audioMixParameters
return audioMix
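For completeness, a short sketch of how a mix built this way would typically be attached for playback, with each CustomPlayer instance owning its own item and mix; `makeAudioMix(for:)` is a hypothetical wrapper around the code above, not the poster's actual API:

import AVFoundation

// Sketch: each player instance owns its own AVPlayerItem and attaches its own audio mix.
// `makeAudioMix(for:)` is a hypothetical wrapper around the tap-creation code above.
func makePlayer(for composition: AVComposition) throws -> AVPlayer {
    let item = AVPlayerItem(asset: composition)
    item.audioMix = try makeAudioMix(for: composition)
    return AVPlayer(playerItem: item)
}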
How do I add AudioToolbox in Xcode 15.2? And is there a way to generate tones, etc., in code?
Regards, Patrick
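AudioToolbox ships with the SDK, so `import AudioToolbox` (or adding the framework under the target's "Frameworks, Libraries, and Embedded Content" section) is normally all that's required. For generating tones, a hedged sketch using AVAudioEngine's AVAudioSourceNode, which is usually simpler than driving an AudioQueue from AudioToolbox directly; 440 Hz and the two-second run are arbitrary example values:

import AVFoundation

// Sketch: render a 440 Hz sine tone through AVAudioEngine for two seconds.
let engine = AVAudioEngine()
let sampleRate = engine.outputNode.inputFormat(forBus: 0).sampleRate
let frequency = 440.0
var phase = 0.0

let source = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let phaseIncrement = 2.0 * Double.pi * frequency / sampleRate
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase)) * 0.25              // keep the level modest
        phase += phaseIncrement
        if phase > 2.0 * Double.pi { phase -= 2.0 * Double.pi }
        for buffer in buffers {                            // fill every output channel
            let channel = UnsafeMutableBufferPointer<Float>(buffer)
            channel[frame] = sample
        }
    }
    return noErr
}

engine.attach(source)
// Use a mono format at the hardware sample rate; the mixer upmixes to the output.
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)
engine.connect(source, to: engine.mainMixerNode, format: monoFormat)

do {
    try engine.start()
    Thread.sleep(forTimeInterval: 2)
    engine.stop()
} catch {
    print("Could not start the engine:", error)
}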