AudioToolbox

Record or play audio, convert formats, parse audio streams, and configure your audio session using AudioToolbox.

Posts under AudioToolbox tag

43 Posts

Each post below is listed with its reply count, boost count, view count, and most recent activity.

How to request permission for System Audio Recording Only?
Hi community, I'm wondering how I can request the "System Audio Recording Only" permission (under Privacy & Security > Screen & System Audio Recording) from Swift. I did a bunch of searching but didn't find good documentation on it. I tried another approach, https://github.com/insidegui/AudioCap/blob/main/AudioCap/ProcessTap/AudioRecordingPermission.swift, but it doesn't work very reliably.
Replies: 1 · Boosts: 0 · Views: 206 · Activity: 2w
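
There doesn't appear to be a documented public API for requesting the "System Audio Recording Only" entry specifically; the closest supported calls cover the broader Screen & System Audio Recording permission. A minimal sketch of that fallback (not an answer to the audio-only case), using CGPreflightScreenCaptureAccess and CGRequestScreenCaptureAccess from CoreGraphics:

import CoreGraphics

// Sketch only: this requests the broader Screen & System Audio Recording permission,
// not the audio-only TCC entry the post is asking about.
func ensureScreenAndSystemAudioCaptureAccess() -> Bool {
    // Returns true if the app already has Screen & System Audio Recording access.
    if CGPreflightScreenCaptureAccess() {
        return true
    }
    // Triggers the system permission flow if access has not yet been granted.
    return CGRequestScreenCaptureAccess()
}
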
Using AudioToolbox to capture audio
Hi all, I am developing a digital signal processing application using AudioToolbox to capture audio from an audio loop application (BlackHole).

Environment: macOS Sonoma 14.4.1, Xcode 15.4, QuickTime 10.5 (I also tested with JRiver Media Center), BlackHole 2ch and 16ch

Problem: All audio samples received are zero.

Steps to recreate:
Set Mac Settings > Sound audio output to BlackHole 2ch.
Set Mac Settings > Sound audio input to BlackHole 2ch.
Authorise Xcode to access the Microphone.
In Audio MIDI Setup, set "Use this device for sound input" and "Use this device for sound output". Set the volume of both to 1.0.
Play a 44.1 kHz 16-bit signed integer stereo FLAC file using QuickTime.
Start the C++ application.

Key details of my code below...

AudioStreamBasicDescription asbd = { 0 };
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
asbd.mSampleRate = 48000;
asbd.mBitsPerChannel = 32;
asbd.mBytesPerFrame = 8;
asbd.mChannelsPerFrame = 2;
asbd.mBytesPerPacket = asbd.mBytesPerFrame;
asbd.mFramesPerPacket = 1;

status = AudioQueueNewInput(&asbd, read_audio_callback, &userdata, NULL, NULL, 0, &queue_ref);

for (uint8_t b = 0; b < num_buffers; b++) {
    AudioQueueBufferRef buf_ref;
    status = AudioQueueAllocateBuffer(queue_ref, audio_buf_size, &buf_ref);
    printf("Allocate buffer status: %d length %d\n", status, buf_ref->mAudioDataByteSize);
    status = AudioQueueEnqueueBuffer(queue_ref, buf_ref, 0, NULL);
    printf("Initial Enqueue Buffer status: %d\n", status);
}

status = AudioQueueStart(queue_ref, NULL);

Here is my callback:

void read_audio_callback(void * ptr, AudioQueueRef queue_ref, AudioQueueBufferRef buf_ref, const AudioTimeStamp * ts_not_used, uint32_t num_packets, const AudioStreamPacketDescription * aspd_not_used) {
    if (num_packets > 0) {
        uint32_t bytesize = buf_ref->mAudioDataByteSize;
        float * sample_buf_float = (float *)buf_ref->mAudioData;
        float data[bytesize / 4];
        memcpy(data, sample_buf_float, bytesize);
        OSStatus status = AudioQueueEnqueueBuffer(queue_ref, buf_ref, 0, NULL);
        printf("Enqueue buffer status: %d\n", status);
        printf("Buffer length %d Packets received %d\n", bytesize, num_packets);
        for (int j = 0; j < bytesize / 4; j++) {
            printf("%f", data[j]);
        }
    }
    printf("read_audio_callback called!\n");
}

All calls to Apple Audio functions return a status of 0, yet the samples in the buffer are all 0.0. Why would this be the case? Also, my callback is called even when playback is stopped, and num_packets is always > 0. Appreciate any help. Thanks in advance, Geoff.
Replies: 2 · Boosts: 0 · Views: 192 · Activity: 3w
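
One thing worth verifying for the capture question above (a diagnostic sketch, not a diagnosis): the queue asks for 48 kHz float input while the source file is 44.1 kHz, so it can help to confirm the nominal sample rate the BlackHole input device is actually running at. A Swift sketch using Core Audio HAL properties; the C++ calls are the same:

import CoreAudio

// Query the default input device and its nominal sample rate (diagnostic only).
func defaultInputDeviceSampleRate() -> Double? {
    var deviceID = AudioObjectID(kAudioObjectUnknown)
    var size = UInt32(MemoryLayout<AudioObjectID>.size)
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultInputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var status = AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                            &address, 0, nil, &size, &deviceID)
    guard status == noErr, deviceID != kAudioObjectUnknown else { return nil }

    var sampleRate: Float64 = 0
    size = UInt32(MemoryLayout<Float64>.size)
    address.mSelector = kAudioDevicePropertyNominalSampleRate
    status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &sampleRate)
    return status == noErr ? sampleRate : nil
}
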
Only Apple based music devices show view
The following is my playground code. Any of the Apple audio units show the plugin view; however, anything else (e.g. Kontakt, Spitfire, etc.) does not. It does not error, just where the visual is expected is blank.

import AppKit
import PlaygroundSupport
import AudioToolbox
import AVFoundation
import CoreAudioKit

let manager = AVAudioUnitComponentManager.shared()
let description = AudioComponentDescription(componentType: kAudioUnitType_MusicDevice, componentSubType: 0, componentManufacturer: 0, componentFlags: 0, componentFlagsMask: 0)
var deviceComponents = manager.components(matching: description)
var names = deviceComponents.map { $0.name }

let pluginName: String = "AUSampler" // This works
//let pluginName: String = "Kontakt" // This does not

var plugin = deviceComponents.filter { $0.name.contains(pluginName) }.first!
print("Plugin name: \(plugin.name)")

var customViewController: NSViewController?

AVAudioUnit.instantiate(with: plugin.audioComponentDescription, options: []) { avAudioUnit, error in
    var ilip = avAudioUnit!.auAudioUnit.isLoadedInProcess
    print("Loaded in process: \(ilip)")
    guard error == nil else {
        print("Error: \(error!.localizedDescription)")
        return
    }
    print("AudioUnit successfully created.")
    let audioUnit = avAudioUnit!.auAudioUnit
    audioUnit.requestViewController { vc in
        if let viewCtrl = vc {
            customViewController = vc
            var b = vc?.view.bounds
            PlaygroundPage.current.liveView = vc
            print("Successfully added view controller.")
        } else {
            print("Failed to load controller.")
        }
    }
}
Replies: 0 · Boosts: 0 · Views: 154 · Activity: Dec ’24
AVAudioEngine Hangs/Locks Apps After Call to -connect:to:format:
Periodically when testing I run into a situation where the app hangs and beach-balls forever when using AVAudioEngine. This seems to get logged when this effect happens: Now, when this happens, if I pause the debugger it is hanging at a call to:

[engine connect:playerNode to:engine.mainMixerNode format:buffer.format];

#0 0x000000019391ca9c in __psynch_mutexwait ()
#1 0x0000000104d49100 in _pthread_mutex_firstfit_lock_wait ()
#2 0x0000000104d49014 in _pthread_mutex_firstfit_lock_slow ()
#3 0x00000001938928ec in std::__1::recursive_mutex::lock ()
#4 0x00000001ef80e988 in CADeprecated::RealtimeMessenger::_PerformPendingMessages ()
#5 0x00000001ef818868 in AVAudioNodeTap::Uninitialize ()
#6 0x00000001ef7fdc68 in AUGraphNodeBase::Uninitialize ()
#7 0x00000001ef884f38 in AVAudioEngineGraph::PerformCommand ()
#8 0x00000001ef88e780 in AVAudioEngineGraph::_Connect ()
#9 0x00000001ef8b7e70 in AVAudioEngineImpl::Connect ()
#10 0x00000001ef8bc05c in -[AVAudioEngine connect:to:format:] ()

Currently all my audio-engine-related calls are on the main queue (though I am curious about this: https://forums.developer.apple.com/forums/thread/123540?answerId=816827022#816827022). In any case, does anyone know where I'm going wrong here?
Replies: 6 · Boosts: 0 · Views: 322 · Activity: Dec ’24
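
For readers unfamiliar with the call being discussed above, a minimal sketch of the usual attach/connect/start sequence around -connect:to:format:. It does not reproduce the hang; `buffer` is assumed to be an existing AVAudioPCMBuffer:

import AVFoundation

let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()

// Typical order: attach the node, wire it into the graph, start the engine, then schedule audio.
func playBuffer(_ buffer: AVAudioPCMBuffer) throws {
    engine.attach(playerNode)                                          // node must be attached before connecting
    engine.connect(playerNode, to: engine.mainMixerNode, format: buffer.format)
    try engine.start()                                                 // start once the graph is wired
    playerNode.scheduleBuffer(buffer, completionHandler: nil)
    playerNode.play()
}
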
Are some backgrounded apps allowed to record phone calls but not others?
It’s been established that, generally speaking, background apps cannot record audio while the foreground app is already reading audio data from the microphone, but are there exceptions? For instance, is there an exception for certain Apple apps? If so, and there’s a special exception that most programmers don’t know about but some of Apple’s engineers do, and perhaps some hackers do as well, wouldn’t the mechanism that allows it eventually be exploited?
Replies: 0 · Boosts: 0 · Views: 261 · Activity: Dec ’24
Can backgrounded app record phone calls?
I'd like to know: Let's say there's a backgrounded app which has microphone access, such as Signal or SoundHound or Shazam. It's established that these apps are allowed to record audio in the user's environment even after being backgrounded, seemingly for as long as they want and even upload that sound data. But can they ALSO continue recording even while another app that is in the foreground is using the microphone, such as the Phone app or Signal?
Replies: 1 · Boosts: 0 · Views: 243 · Activity: Dec ’24
AUv3 recent "Failed to find component with type..." frequent issues
I've been generating new Audio Unit Extension apps with Xcode 16 (and newer), and although they generally work initially, it is easy (although I'm not sure how to do it reliably) to cause the app to no longer be able to instantiate the audio unit. Generally the call to AVAudioUnit.findComponent fails and SimplePlayEngine hits the fatalError("Failed to find component with type..."). In the most recent project, merely adding files to the extension (without making any use of them) caused it to go off the rails. If I "Archive" the app+plugin, there is no audio unit extension in the bundle. If I switch to the audio unit extension target and build it, it's fine. If I look at the build folder in Library/Developer/Xcode/project_folder, the extension_name.appex is there. Any ideas? If I can coax an unmodified audio unit extension project to exhibit this behavior I'll attach it here; right now what I have has code I don't want to share.
Replies: 2 · Boosts: 1 · Views: 277 · Activity: 6d
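
As a reference point for the failure mode above, here is a sketch of a component lookup equivalent to what the template performs before instantiating. The subtype and manufacturer codes are placeholders for your extension's actual values; if this query returns an empty array on device, the .appex genuinely isn't registered in the installed bundle:

import AVFoundation
import AudioToolbox

// Placeholder four-character codes — substitute your extension's componentSubType/manufacturer.
let description = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: 0x64656d6f,        // 'demo' — placeholder
    componentManufacturer: 0x44656d6f,   // 'Demo' — placeholder
    componentFlags: 0,
    componentFlagsMask: 0)

let matches = AVAudioUnitComponentManager.shared().components(matching: description)
if matches.isEmpty {
    // A missing .appex inside the installed host bundle is one reason this comes back empty.
    print("Failed to find component with type \(description.componentType)")
} else {
    print("Found: \(matches.map(\.name))")
}
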
AudioServicesPlaySystemSound not playing through BluetoothA2DP device
Hello, we have an application that plays some sounds via the system sound APIs from the AudioToolbox framework:

AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
AudioServicesPlaySystemSoundWithCompletion(soundID)

We make sure that an active audio session is available before playing the system sound. But when the device is connected to a Bluetooth A2DP device, the sounds are played through the device speaker and not through the Bluetooth A2DP device. Our AudioSession is configured with the following options: [.allowBluetooth, .defaultToSpeaker, .allowBluetoothA2DP]. Sounds played with AVAudioPlayer are routed to the Bluetooth A2DP device with similar code. Is this a bug in the AudioToolbox framework?
Replies: 2 · Boosts: 0 · Views: 221 · Activity: Dec ’24
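
For context on the two APIs being compared in the post above, a minimal sketch of both playback paths under the session options it quotes. The .playAndRecord category and the "chime.caf" resource are assumptions, since the post doesn't state them:

import AVFoundation
import AudioToolbox

let url = Bundle.main.url(forResource: "chime", withExtension: "caf")!   // placeholder resource

// Session configured with the options quoted in the post (category is an assumption).
func configureSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            options: [.allowBluetooth, .defaultToSpeaker, .allowBluetoothA2DP])
    try session.setActive(true)
}

// Path 1: System Sound Services (the API the post says routes to the built-in speaker).
var soundID: SystemSoundID = 0
func playSystemSound() {
    AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
    AudioServicesPlaySystemSoundWithCompletion(soundID, nil)
}

// Path 2: AVAudioPlayer (the API the post says does reach the A2DP device).
var player: AVAudioPlayer?
func playWithAVAudioPlayer() throws {
    player = try AVAudioPlayer(contentsOf: url)
    player?.play()
}
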
Intercepting incoming video and/or audio for stopping scams
I'm trying to make an app that is able to quietly run in the background. It needs to detect other apps' or the system's incoming video and/or audio, using only on-device resources, to determine if it might be a scam caller. It will tap into an escalating cascade of resources to do so. For video/image scam detection, it uses OpenCV to detect faces, then refers to a known database of reported scam imagery. For audio scam calls, we defer to known techniques of voice modulation in frequency and/or amplitude. Each video and/or audio result will be relayed via notification banner as well as recorded in-app. Crucially, if the results are uncertain, users have the option to submit them to a global collaborative cloud database for investigative teams: 60-second audio snippets, or series of images where faces were detected (60-second equivalent). In the end, we expect to deploy this app across most parts of Asia and Africa, thereby protecting generations of iPhone and iPad users. However, we have not been able to find a method that does this, and no correspondence so far has been able to provide such technical guidance. Please assist.
Replies: 0 · Boosts: 0 · Views: 177 · Activity: Nov ’24
AVAudioEngine - How to archive configured nodes to file?
I’m looking to add DAW-like capabilities to my macOS music app, and AVAudioEngine seems like the right tool for the job. However, I haven’t been able to find any documentation on how to save the user’s AVAudioEngine configuration—specifically the connections between nodes and the internal states of each node—to a file. Does AVAudioEngine provide any API for saving and restoring this state, or does it need to be handled manually? If it’s manual, are there any sample "DAW" apps or resources that demonstrate how this can be implemented? Any guidance would be greatly appreciated. Thanks, BD
Replies: 1 · Boosts: 0 · Views: 273 · Activity: Dec ’24
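
As far as I know, AVAudioEngine does not offer a built-in archiving API for its node graph, so the state generally has to be captured by hand. A minimal sketch of one manual approach (the model types and fields here are illustrative, app-defined structures, not an established format): describe each node and connection in a Codable document, write it to disk, and rebuild the graph from it on load.

import Foundation

// Hypothetical, app-defined document model — not an AVAudioEngine API.
struct NodeDescription: Codable {
    var id: UUID
    var kind: String                    // e.g. "player", "eq", "reverb"
    var parameters: [String: Double]    // each node's own restorable state
}

struct ConnectionDescription: Codable {
    var sourceID: UUID
    var destinationID: UUID
    var outputBus: Int
    var inputBus: Int
}

struct EngineDocument: Codable {
    var nodes: [NodeDescription]
    var connections: [ConnectionDescription]
}

func save(_ document: EngineDocument, to url: URL) throws {
    let data = try JSONEncoder().encode(document)
    try data.write(to: url, options: .atomic)
}

func load(from url: URL) throws -> EngineDocument {
    try JSONDecoder().decode(EngineDocument.self, from: Data(contentsOf: url))
}
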
arm64 Logic Leaking Plugins (Not Calling AP_Close)
I'm running into an issue where in some cases, when the AUHostingServiceXPC_arrow process is shut down by Logic, the process is terminated abruptly without calling AP_Close on all of the plugins hosted in the process. In our case, we have filesystem resources we need to clean up, and having stale files around from the last run can cause issues in new sessions, so this leak is having some pretty gnarly effects. I can reproduce the issue using only Apple sample plugins, and it seems to be triggered by a timeout. If I have two different AU plugins in the session, and I add a 1 second sleep to the destructor of one of the sample plugins, Logic will force terminate the process and the remaining destructors are not called (even for the plugins without the 1 second sleep). Is there a way to avoid this behavior? Or to safely clean up our plugin even if other plugins in the session take a second to tear down?
Replies: 1 · Boosts: 1 · Views: 305 · Activity: Oct ’24
Does iPhone 15 Pro Use a Single Microphone or Multiple Microphones for Voice and Sound Recognition?
Hello, I have a question regarding the voice and sound recognition features on the iPhone 15 Pro. The iPhone 15 Pro is equipped with four microphones, and I understand that for features like Apple’s sound recognition and when invoking Siri, the microphone(s) must always be active. My question is whether the device uses a single microphone (mono channel) for these functions or if multiple microphones are activated simultaneously. I would appreciate clarification on how the microphones are utilized in sound and voice recognition features. Thank you for your assistance. Best regards.
Replies: 1 · Boosts: 0 · Views: 516 · Activity: Oct ’24
CMSAMPLEBuffer: audio PCM to MP4 AAC
Hello, as explained in this link, AVAssetReaderTrackOutput.copyNextSampleBuffer() returns a CMSampleBuffer in linear PCM audio format. I want to place this audio buffer into an AVAssetWriterInput of type kAudioFormatMPEG4AAC, but I can't manage the conversion. Could you help me by providing an extension that returns a CMSampleBuffer converted from linear PCM audio format to kAudioFormatMPEG4AAC? Example:

extension CMSampleBuffer {
    func fromPCMToAAC() -> CMSampleBuffer? {
        // Here, get a new AudioStreamBasicDescription, create a CMSampleBuffer and a CMBlockBuffer
    }
}

I've tried multiple times but without success. Software: iOS 18.1, Xcode 16.0. Thank you!
Replies: 1 · Boosts: 0 · Views: 441 · Activity: Oct ’24
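
One route that may avoid the manual CMSampleBuffer conversion asked about above (a sketch, not a drop-in answer): give the AVAssetWriterInput AAC output settings and append the linear-PCM sample buffers from the reader directly, letting the writer perform the encode. Sample rate, channel count, and bit rate below are illustrative values:

import AVFoundation

// An audio AVAssetWriterInput configured to compress incoming linear PCM to AAC.
let aacSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 2,
    AVEncoderBitRateKey: 128_000
]

let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: aacSettings)
audioInput.expectsMediaDataInRealTime = false

// Append-loop sketch: reader/writer session setup omitted; `output` is the AVAssetReaderTrackOutput.
func drain(_ output: AVAssetReaderTrackOutput, into input: AVAssetWriterInput) {
    while input.isReadyForMoreMediaData, let sample = output.copyNextSampleBuffer() {
        if !input.append(sample) { break }   // the writer compresses each PCM buffer to AAC on append
    }
}
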
AudioConverterFillComplexBuffer not working for (E)AC3 in tvOS 18
Since upgrading to tvOS 18, the above function isn't working for me in converting a stream with these formats. It does work in decoding AAC, however. https://developer.apple.com/documentation/audiotoolbox/1503098-audioconverterfillcomplexbuffer?language=objc

I pass a valid ioOutputDataPacketSize in, but it always comes out as zero. Has anyone else observed this too? I wonder if this is related to the issue being discussed widely about 5.1 sound being broken for many people after upgrading to tvOS 18? https://discussions.apple.com/thread/255769102?login=true&sortBy=rank

EDIT: further information; the callback gets called once, asking for 1 packet (which is ok). I give it one packet and return noErr. However, after this, the callback is never invoked again. Must be a bug?

EDIT2: the same code continues to work correctly on macOS in decoding the same audio stream.
Replies: 4 · Boosts: 2 · Views: 472 · Activity: Nov ’24
BuildConverter: AudioConverterNew returned -50
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50
from: 0 ch, 16000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
to: 2 ch, 16000 Hz, Int16, interleaved
AQMEIO_HAL.cpp:2773 iOSSimulatorAudioDevice-15111-0: Abandoning I/O cycle because reconfig pending (1).
HALC_ProxySystem.cpp:163 HALC_ProxySystem::GetObjectInfo: got an error from the server, Error: 560947818 (!obj)
HALC_ShellObject.mm:213 HALC_ShellObject::HasProperty: there is no proxy object
AudioHardware-mac-imp.cpp:1224 AudioObjectRemovePropertyListener: no object with given ID 160
HALSystem.cpp:2216 AudioObjectPropertiesChanged: no such object

Why? I can't record on iOS 17; recording was normal before, on iOS 16.
Replies: 0 · Boosts: 0 · Views: 357 · Activity: Sep ’24
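
In the log above, -50 is kAudio_ParamError and the "from" format shows zero channels and zero bits per channel, which usually points at an AudioStreamBasicDescription that was never fully filled in somewhere in the recording path. For comparison, a sketch of a completely specified 16 kHz, 16-bit, 2-channel interleaved PCM description (values illustrative; the point is that no field is left at zero):

import AudioToolbox

// Every field of the ASBD filled in for 16 kHz, 16-bit, 2-channel interleaved linear PCM.
var format = AudioStreamBasicDescription(
    mSampleRate: 16_000,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 4,        // 2 channels * 2 bytes per sample
    mFramesPerPacket: 1,       // always 1 for uncompressed PCM
    mBytesPerFrame: 4,
    mChannelsPerFrame: 2,
    mBitsPerChannel: 16,
    mReserved: 0)
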
ExtAudioFileRead throwing AVAudioSessionErrorCodeResourceNotAvailable error on iOS and iPadOS 18
Calls to ExtAudioFileRead are throwing OSStatus 561145203 (AVAudioSessionErrorCodeResourceNotAvailable) on iOS and iPadOS 18 -- earlier versions of iOS have not exhibited this behavior. This is a longstanding code path that has seen a spike of these error codes since iOS 18's release. The following is also printed to the Xcode 16 console:
Replies: 2 · Boosts: 1 · Views: 508 · Activity: Sep ’24