Hello everyone,
I'm new to Core Audio and still finding my footing. I'm learning how to capture audio from the default input device using Audio Units. On my MacBook, the default audio input is mono, but when I write code to capture audio using AUHAL, I find that I need to provide an AudioBufferList with two channels, not one. Likewise, when I try to capture audio from an audio interface with 20 audio inputs, I must still provide an AudioBufferList with two channels, not with 20. To investigate the issue, I wrote a small diagnostic program, which opens the default audio device and probes it for the number of channels. Depending on how I probe, I get different results: when I probe the stream format, it reports 1 channel, but when I probe the input audio unit, it reports 2 input channels.
Here's my program to demonstrate the issue:
// InputDeviceChannels.m
// Compile with:
// clang -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -framework AudioUnit -o InputDeviceChannels InputDeviceChannels.m
//
// On my system, this prints:
// Device Name: MacBook Pro Microphone
// Number of Channels (Stream Format): 1
// Number of Elements (Element Count): 2
#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>
#import <CoreAudio/CoreAudio.h>
#import <Foundation/Foundation.h>
void printDeviceInfo(AudioUnit audioUnit) {
  UInt32 size;
  OSStatus err;

  AudioStreamBasicDescription streamFormat;
  size = sizeof(streamFormat);
  err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1,
                             &streamFormat, &size);
  if (err != noErr) {
    printf("Error getting stream format\n");
    exit(1);
  }
  int numChannels = streamFormat.mChannelsPerFrame;

  UInt32 elementCount;
  size = sizeof(elementCount);
  err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0,
                             &elementCount, &size);
  if (err != noErr) {
    printf("Error getting element count\n");
    exit(1);
  }

  printf("Number of Channels (Stream Format): %d\n", numChannels);
  printf("Number of Elements (Element Count): %d\n", elementCount);
}
void printDeviceName(AudioDeviceID deviceID) {
  UInt32 size;
  OSStatus err;

  CFStringRef deviceName = NULL;
  size = sizeof(deviceName);
  err = AudioObjectGetPropertyData(
      deviceID,
      &(AudioObjectPropertyAddress){kAudioDevicePropertyDeviceNameCFString,
                                    kAudioObjectPropertyScopeGlobal,
                                    kAudioObjectPropertyElementMain},
      0, NULL, &size, &deviceName);
  if (err != noErr) {
    printf("Error getting device name\n");
    exit(1);
  }

  char deviceNameStr[256];
  if (!CFStringGetCString(deviceName, deviceNameStr, sizeof(deviceNameStr),
                          kCFStringEncodingUTF8)) {
    printf("Error converting device name to C string\n");
    exit(1);
  }
  CFRelease(deviceName);

  printf("Device Name: %s\n", deviceNameStr);
}
int main(int argc, const char *argv[]) {
  @autoreleasepool {
    OSStatus err;

    // Get the default input device ID
    AudioDeviceID input_device_id = kAudioObjectUnknown;
    {
      UInt32 property_size = sizeof(input_device_id);
      AudioObjectPropertyAddress input_device_property = {
          kAudioHardwarePropertyDefaultInputDevice,
          kAudioObjectPropertyScopeGlobal,
          kAudioObjectPropertyElementMain,
      };
      err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &input_device_property, 0, NULL,
                                       &property_size, &input_device_id);
      if (err != noErr || input_device_id == kAudioObjectUnknown) {
        printf("Error getting default input device ID\n");
        exit(1);
      }
    }

    // Print the device name using the input device ID
    printDeviceName(input_device_id);

    // Open audio unit for the input device
    AudioComponentDescription desc = {kAudioUnitType_Output, kAudioUnitSubType_HALOutput,
                                      kAudioUnitManufacturer_Apple, 0, 0};
    AudioComponent component = AudioComponentFindNext(NULL, &desc);
    AudioUnit audioUnit;
    err = AudioComponentInstanceNew(component, &audioUnit);
    if (err != noErr) {
      printf("Error creating AudioUnit\n");
      exit(1);
    }

    // Enable IO for input on the AudioUnit and disable output
    UInt32 enableInput = 1;
    UInt32 disableOutput = 0;
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input,
                               1, &enableInput, sizeof(enableInput));
    if (err != noErr) {
      printf("Error enabling input on AudioUnit\n");
      exit(1);
    }
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output,
                               0, &disableOutput, sizeof(disableOutput));
    if (err != noErr) {
      printf("Error disabling output on AudioUnit\n");
      exit(1);
    }

    // Set the current device to the input device
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice,
                               kAudioUnitScope_Global, 0, &input_device_id, sizeof(input_device_id));
    if (err != noErr) {
      printf("Error setting device for AudioUnit\n");
      exit(1);
    }

    // Initialize AudioUnit
    err = AudioUnitInitialize(audioUnit);
    if (err != noErr) {
      printf("Error initializing AudioUnit\n");
      exit(1);
    }

    // Print device info
    printDeviceInfo(audioUnit);

    // Clean up
    AudioUnitUninitialize(audioUnit);
    AudioComponentInstanceDispose(audioUnit);
  }
  return 0;
}
It prints:
Device Name: MacBook Pro Microphone
Number of Channels (Stream Format): 1
Number of Elements (Element Count): 2
I tried to set the number of channels to 1 on the input unit, but it didn’t change anything. After calling setNumberOfChannels(1, audioUnit), I’m still getting the same output.
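For clarity, setNumberOfChannels is a small helper of mine; it boils down to rewriting the client-side (output scope of element 1) stream format, roughly like this (simplified sketch, error handling omitted):

static void setNumberOfChannels(UInt32 numChannels, AudioUnit audioUnit) {
  AudioStreamBasicDescription fmt;
  UInt32 size = sizeof(fmt);
  // Element 1 / output scope is the side of AUHAL that hands input data to the app.
  AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                       kAudioUnitScope_Output, 1, &fmt, &size);
  fmt.mChannelsPerFrame = numChannels;
  AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                       kAudioUnitScope_Output, 1, &fmt, sizeof(fmt));
}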
Note 1: I know that I can just ignore the extra channel, and so on. My purpose here is not to "somehow get it to work"; I already did that. My purpose is to understand the API, so that I'll be able to write code that handles any number of audio inputs.
Note 2: I've already read a bunch of documentation, especially this technote: https://developer.apple.com/library/archive/technotes/tn2091/ - perhaps the channel map could help here, but I can't make sense of it. I tried to use it based on my understanding, but I only got a -50 OSStatus.
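Roughly, this is the channel-map call I attempted, based on my reading of TN2091 (the scope/element choice is my own interpretation, and this is the call that fails with -50 for me):

// My understanding of the input-side channel map: one SInt32 entry per
// client-side channel, each holding the device input channel that should be
// routed there (or -1 for silence).
SInt32 channelMap[1] = {0};  // route device input channel 0 to client channel 0
OSStatus err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_ChannelMap,
                                    kAudioUnitScope_Output, 1,
                                    channelMap, sizeof(channelMap));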
How should I understand this? Is it that the audio unit is an abstraction layer that automatically converts mono input into stereo input? Can I ask AUHAL to provide me the same number of input channels that the audio device has?
AudioUnit
Create audio unit extensions and add sophisticated audio manipulation and processing capabilities to your app using AudioUnit.
Posts under AudioUnit tag (30 posts)
Hello macOS gurus, I am writing an AUv3 plug-in and want to add support for additional formats such as CLAP and VST3. These plug-ins must reside in an appropriate folder, /Library/Audio/Plug-Ins/ or ~/Library/Audio/Plug-Ins/, and the typical way they are delivered is with old-school installers.
I have been experimenting with delivering these formats in a sandboxed app. I was using the com.apple.security.temporary-exception.files.absolute-path.read-write entitlement to place a symlink in the system folder that points to my CLAP and VST3 plug-ins in the bundle. Everything was working very nicely until I realized that on my Mac I had changed the permissions on these folders from the original system permissions to something more permissive.
The problem is that when the folder has the original system permissions, my attempt to place the symlink fails, even with the temporary exception entitlement.
Here's the code I'm using with systemPath = "/Library/Audio/Plug-Ins/VST3/"
static func symlinkToBundle(fileName: String, fileExt: String, from systemPath: String) throws {
    guard let bundlePath = Bundle.main.resourcePath?.appending("/\(fileName).\(fileExt)") else {
        print("File not in bundle")
        return
    }
    let fileManager = FileManager.default
    do {
        try fileManager.createSymbolicLink(atPath: systemPath, withDestinationPath: bundlePath)
    } catch {
        print(error.localizedDescription)
    }
}
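For reference, the temporary exception in my .entitlements file looks roughly like this (the paths are just the ones I am granting; treat them as an example):

<key>com.apple.security.temporary-exception.files.absolute-path.read-write</key>
<array>
    <string>/Library/Audio/Plug-Ins/VST3/</string>
    <string>/Library/Audio/Plug-Ins/CLAP/</string>
</array>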
So the question is ... Is there a way to reliably place this symlink in /Library/... from a sandboxed app using the temporary exception entitlements? I understand there will probably be issues with App Review but for now I am just trying to explore my options.
Thanks.
Hello,
my app works as an AUv3 plugin.
I am interested in copying/pasting the Logic Pro chord track.
After I copy the chord track in Logic Pro and read UIPasteboard.general in the app, I can see:
["LogicPasteBoardMarker": <OS_dispatch_data: data[0x3024599c0] = { leaf, size = 1, buf = 0x10a758000 }>]
How can I access this data? Thank you.
I connect two AVAudioNodes by using
- (void)connectMIDI:(AVAudioNode *)sourceNode to:(AVAudioNode *)destinationNode format:(AVAudioFormat * __nullable)format eventListBlock:(AUMIDIEventListBlock __nullable)tapBlock
and add an AUMIDIEventListBlock tap block to it to capture the MIDI events.
Both AUAudioUnits of the AVAudioNodes involved in this connection are set to use MIDI 1.0 UMP events:
[[avAudioUnit AUAudioUnit] setHostMIDIProtocol:(kMIDIProtocol_1_0)];
But all the MIDI voice channel events received are automatically converted to UMP MIDI 2.0 format. Is there something else I need to set so that the tap receives MIDI 1.0 UMPs?
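For context, the tap itself is nothing special; it looks roughly like this (simplified; engine, sourceNode and destinationNode stand in for my actual objects):

[engine connectMIDI:sourceNode
                 to:destinationNode
             format:nil
     eventListBlock:^OSStatus(AUEventSampleTime eventSampleTime, uint8_t cable,
                              const struct MIDIEventList *eventList) {
       // Despite setHostMIDIProtocol(kMIDIProtocol_1_0), voice channel events
       // arrive here with eventList->protocol == kMIDIProtocol_2_0.
       NSLog(@"tap: protocol=%d, packets=%u", (int)eventList->protocol,
             (unsigned)eventList->numPackets);
       return noErr;
     }];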
(Note: my app can handle MIDI 2.0, so this is not really a problem; the question is mainly to find out whether I forgot to set the protocol somewhere.)
Thanks!!
Hello,
We are trying to use audio calling functionality on visionOS, with no success since the visionOS update. We do not use CallKit for this flow.
We set the AudioSession as follows:
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord mode:AVAudioSessionModeVoiceChat options: (AVAudioSessionCategoryOptionAllowBluetooth | AVAudioSessionCategoryOptionAllowBluetoothA2DP | AVAudioSessionCategoryOptionMixWithOthers) error:&error_];
We create our audio unit as follows:
AudioComponentDescription desc_;
desc_.componentType = kAudioUnitType_Output;
desc_.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc_.componentManufacturer = kAudioUnitManufacturer_Apple;
desc_.componentFlags = 0;
desc_.componentFlagsMask = 0;
AudioComponent comp_ = AudioComponentFindNext(NULL, &desc_);
IMSXThrowIfError(AudioComponentInstanceNew(comp_, &_audioUnit),"couldn't create a new instance of Apple Voice Processing IO.");
UInt32 one_ = 1;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, audioUnitElementIOInput, &one_, sizeof(one_)), "could not enable input on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, audioUnitElementIOOutput, &one_, sizeof(one_)), "could not enable output on Apple Voice Processing IO");
IMSTagLogInfo(kIMSTagAudio, @"Rate: %ld", _rate);
bool isInterleaved = _channel == 2 ? true : false;
self.ioFormat = CAStreamBasicDescription(_rate, _channel, CAStreamBasicDescription::kPCMFormatInt16, isInterleaved);
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &_ioFormat, sizeof(self.ioFormat)), "couldn't set the input client format on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &_ioFormat, sizeof(self.ioFormat)), "couldn't set the output client format on Apple Voice Processing IO");
UInt32 maxFramesPerSlice_ = 4096;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice_, sizeof(UInt32)), "couldn't set max frames per slice on Apple Voice Processing IO");
UInt32 propSize_ = sizeof(UInt32);
IMSXThrowIfError(AudioUnitGetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice_, &propSize_), "couldn't get max frames per slice on Apple Voice Processing IO");
AURenderCallbackStruct renderCallbackStruct_;
renderCallbackStruct_.inputProc = playbackCallback;
renderCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Output, 0, &renderCallbackStruct_, sizeof(renderCallbackStruct_)), "couldn't set render callback on Apple Voice Processing IO");
AURenderCallbackStruct inputCallbackStruct_;
inputCallbackStruct_.inputProc = recordingCallback;
inputCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Input, 0, &inputCallbackStruct_, sizeof(inputCallbackStruct_)), "couldn't set render callback on Apple Voice Processing IO");
And as soon as we try to start the AudioUnit we have the following error:
PhaseIOImpl.mm:1514 phaseextio@0x107a54320: failed to start IO directions 0x3, num IO streams [1, 1]: Error Domain=com.apple.coreaudio.phase Code=1346924646 "failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6" UserInfo={NSLocalizedDescription=failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6}
We do not use the PHASE framework on our side, and the error is neither clear to us nor documented anywhere.
We also tried using an AudioUnit that only does speaker output, which works perfectly, but as soon as we try to record from an AudioUnit, the start fails as well, with the error AVAudioSessionErrorCodeCannotStartRecording.
We suppose that somehow inside PHASE an IO VoIP audio unit is running that prevents us from stopping/killing it when we try to create our own, which stops the whole flow.
It used to work on visionOS 1.0.1.
Regards,
Summit-tech
Some of our installers have suddenly become broken for users running the latest version of macOS. I found that the reason is that we install a Core Audio HAL driver, and because I wanted to avoid a system reboot, I relaunched the Core Audio daemon from a pkg post-install script:
sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
With the OS update, the command fails if the computer has SIP enabled (which is the default).
sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
Password:
Could not kickstart service "com.apple.audio.coreaudiod": 1: Operation not permitted
It would be super nice if either:
the change could be reverted, OR
we could learn a workaround for how to hot-plug (and unplug) such a HAL driver.
We develop virtual instruments for Mac/AU and are trying to get our AU-Plugins and our Standalone player to work with Audio Workgroups.
When the standalone app or Logic Pro is in the foreground and active, all is well and as expected.
However, when the app or Logic Pro is not in focus, all my auxiliary threads run on E-cores, even though they are properly joined to the processing thread's workgroup. This leads to a lot of audible dropouts because deadlines are no longer met.
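For reference, each auxiliary thread joins the workgroup with what I believe is the standard pattern, roughly like this (simplified; in the plug-in case the os_workgroup_t comes from the host's render context, in the standalone case from the device's kAudioDevicePropertyIOThreadOSWorkgroup property):

#include <os/workgroup.h>

// Sketch: an auxiliary real-time thread joins the audio workgroup before doing
// its share of the render work, and leaves again when it is done.
static void auxiliaryRenderWork(os_workgroup_t workgroup) {
  os_workgroup_join_token_s joinToken;
  if (os_workgroup_join(workgroup, &joinToken) != 0) {
    return;  // workgroup was canceled; skip the work
  }
  // ... do this thread's portion of the DSP work ...
  os_workgroup_leave(workgroup, &joinToken);
}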
The processing thread itself stays on a P-core, but it has to wait for the other threads to finish.
How can I opt out of this behaviour? Our users certainly have use cases where they expect the player to run smoothly even though they currently have a different app in focus.
I have some visualisation-heavy AUv3s, and the goal is not to perform graphics-intensive tasks if the plugin window is not open inside the host app (such as Logic Pro).
On iOS, this is easily accomplished with the viewWillAppear etc. overrides, but on macOS it seems these overrides are not called every time the user opens/closes the plugin window in the host application.
I did try some alternative approaches, like traversing the view/controller hierarchy or making use of the window property, to no avail.
What substitute mechanism could I use to determine visibility status of an AUv3 on macOS?
Thanks in advance,
Zoltan
I'm experimenting with getting my AUv3 plugins working correctly on iOS and macOS using Catalyst.
I'm having trouble getting the plugin windows to look right in Logic Pro X on macOS.
My plugin is designed to look right in GarageBand's minimal 'letterbox' layout (1024x335, or roughly a 3:1 aspect ratio).
I have implemented supportedViewConfigurations to help the host choose the best display dimensions.
On iOS this works, although Logic Pro for iPad doesn't seem to call supportedViewConfigurations at all; only GarageBand does.
On macOS, Logic Pro does call supportedViewConfigurations but only provides oversized screen sizes, making the plugin look awkward.
I can also remove the supportedViewConfigurations method on macOS, but this introduces other issues.
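For reference, my implementation is essentially the standard pattern; it looks roughly like this (sketched in Objective-C, and the size check is a simplification of my real logic):

// In my AUAudioUnit subclass: report which of the host-offered view
// configurations the plugin can render sensibly.
- (NSIndexSet *)supportedViewConfigurations:
    (NSArray<AUAudioUnitViewConfiguration *> *)availableViewConfigurations {
  NSMutableIndexSet *supported = [NSMutableIndexSet indexSet];
  [availableViewConfigurations enumerateObjectsUsingBlock:^(AUAudioUnitViewConfiguration *config,
                                                            NSUInteger idx, BOOL *stop) {
    // Accept anything at least as large as the GarageBand letterbox layout (1024x335).
    if (config.width >= 1024 && config.height >= 335) {
      [supported addIndex:idx];
    }
  }];
  return supported;
}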
I guess my question boils down to this: how do I tell Logic Pro X on macOS what the optimal window size of my plugin is, using Mac Catalyst?
Hi,
I'm having trouble saving user presets in the plugin for Audio Units. Saving user presets in the host works well, but I get an error when trying to save them in the plugin.
I'm not using a parameter tree, but instead the fullState getter and setter for saving and retrieving a dictionary with the state.
With some simplified parameters it looks something like this:
var gain: Double = 0.0
var frequency: Double = 440.0

private var currentState: [String: Any] = [:]

override var fullState: [String: Any]? {
    get {
        // Save the current state
        currentState["gain"] = gain
        currentState["frequency"] = frequency

        // Return the preset state
        return ["myPresetKey": currentState]
    }
    set {
        // Extract the preset state
        currentState = newValue?["myPresetKey"] as? [String: Any] ?? [:]

        // Set the Audio Unit's properties
        gain = currentState["gain"] as? Double ?? 0.0
        frequency = currentState["frequency"] as? Double ?? 440.0
    }
}
This works perfectly well for storing user presets when saved in the host. When trying to save them in the plugin to be able to reuse them across hosts, I get the following error in the interface: "Missing key in preset state map". Note that I am testing mostly in AUM.
I could not find any documentation about what the missing key is or how I can get around it. Any ideas?