On iOS/iPadOS, with a USB microphone connected, the 'availableInputs' property of AVAudioSession's sharedInstance returns AVAudioSessionPortDescriptions for both the built-in microphone and the USB microphone. On macOS, in an iPad app built with Mac Catalyst, the only AVAudioSessionPortDescription returned is for a device named 'CADefaultDeviceAggregate-xxxx-x' (where 'x' stands for varying digits), even when several USB audio devices are connected to the Mac.
We would like our app to be able to switch between the built-in microphone and external USB microphones (or other USB audio devices) in the Mac version, just as it does in the iPad version. I understand that the user already has several ways to select the system's current audio input/output devices (System Preferences: Sound, and Audio MIDI Setup). However, for the Mac version of our iPad app to work, we at least need to get the correct device name and UID for the selected input device, rather than CADefaultDeviceAggregate.
For a specific example, we would like to see the following when a miniDSP UMIK-1 is attached to the Mac (this is some of the info provided in the AVAudioSessionPortDescription for this device on iOS/iPadOS):
Port Name: Umik-1 Gain: 18dB
Type: USBAudio
UID: AppleUSBAudioEngine:miniDSP :Umik-1 Gain: 18dB :00002:1
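For context, this is roughly how we enumerate inputs through AVAudioSession on iOS/iPadOS to get the port info shown above (a sketch; our actual session configuration is omitted):

```swift
import AVFoundation

// Sketch of the iOS/iPadOS enumeration that yields the port info above.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.playAndRecord)
for port in session.availableInputs ?? [] {
    // On iPadOS with a USB mic attached this lists both the built-in
    // mic and the USB device; under Catalyst it lists only the
    // "CADefaultDeviceAggregate-xxxx-x" device.
    print("Port Name: \(port.portName)")
    print("Type: \(port.portType.rawValue)")
    print("UID: \(port.uid)")
}
```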
Does anyone know of a solution or workaround to this problem (it's currently a show-stopper for a Catalyst version of our app)?
It turns out that a lot more of the Core Audio framework is available under Catalyst than on iOS. So we can get all the info we need through Core Audio in the Mac version of the app. We can even let the user choose the selected audio input and output devices from within our own app.
Here's a simple Swift example to get the name and UID of the current input device (accessible from our existing Objective-C code):
// Requires: import CoreAudio
@objc open class func getSelectedInputDeviceInfo() -> [String: Any]
{
    let keys = FSTCAKey()
    var devInfo: [String: Any] = [keys.inputDeviceName: "Audio Input",
                                  keys.inputDeviceUID: "Audio Input"]

    // Ask the system object for the current default input device.
    var deviceId = AudioDeviceID(0)
    var deviceSize = UInt32(MemoryLayout.size(ofValue: deviceId))
    var address = AudioObjectPropertyAddress(mSelector: kAudioHardwarePropertyDefaultInputDevice,
                                             mScope: kAudioObjectPropertyScopeGlobal,
                                             mElement: kAudioObjectPropertyElementMaster)
    var err = AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &deviceSize, &deviceId)
    if err == noErr
    {
        devInfo[keys.inputDeviceID] = deviceId

        // Fetch the device's human-readable name.
        address.mSelector = kAudioDevicePropertyDeviceNameCFString
        var deviceName = "" as CFString
        deviceSize = UInt32(MemoryLayout.size(ofValue: deviceName))
        err = AudioObjectGetPropertyData(deviceId, &address, 0, nil, &deviceSize, &deviceName)
        if err == noErr
        {
            devInfo[keys.inputDeviceName] = deviceName as String
            print("### current input device: \(deviceName) ")
        }

        // Fetch the device's unique identifier (UID).
        address.mSelector = kAudioDevicePropertyDeviceUID
        err = AudioObjectGetPropertyData(deviceId, &address, 0, nil, &deviceSize, &deviceName)
        if err == noErr
        {
            devInfo[keys.inputDeviceUID] = deviceName as String
            print("### current input device UID: \(deviceName) ")
        }
    }
    return devInfo
}
The name and UID returned from CoreAudio give us what we were hoping to get from AVAudioSession.
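Building on the same approach, here is a hedged sketch of how one could enumerate all audio devices (not just the default input) and make a chosen device the system default input. Error handling is minimal, and the helper names (allAudioDevices, setDefaultInputDevice) are our own, not part of any API:

```swift
import CoreAudio
import Foundation

/// Returns the ID, name, and UID of every audio device Core Audio knows about.
func allAudioDevices() -> [(id: AudioDeviceID, name: String, uid: String)] {
    var address = AudioObjectPropertyAddress(mSelector: kAudioHardwarePropertyDevices,
                                             mScope: kAudioObjectPropertyScopeGlobal,
                                             mElement: kAudioObjectPropertyElementMaster)
    // Ask how many devices there are, then fetch their IDs.
    var dataSize = UInt32(0)
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &dataSize) == noErr else { return [] }
    let count = Int(dataSize) / MemoryLayout<AudioDeviceID>.size
    var deviceIds = [AudioDeviceID](repeating: 0, count: count)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &dataSize, &deviceIds) == noErr else { return [] }

    return deviceIds.compactMap { id in
        var name = "" as CFString
        var uid = "" as CFString
        var size = UInt32(MemoryLayout<CFString>.size)
        address.mSelector = kAudioDevicePropertyDeviceNameCFString
        guard AudioObjectGetPropertyData(id, &address, 0, nil, &size, &name) == noErr else { return nil }
        address.mSelector = kAudioDevicePropertyDeviceUID
        guard AudioObjectGetPropertyData(id, &address, 0, nil, &size, &uid) == noErr else { return nil }
        return (id, name as String, uid as String)
    }
}

/// Makes the given device the system default input.
/// Assumption: this property is settable from a Catalyst app.
func setDefaultInputDevice(_ id: AudioDeviceID) -> Bool {
    var address = AudioObjectPropertyAddress(mSelector: kAudioHardwarePropertyDefaultInputDevice,
                                             mScope: kAudioObjectPropertyScopeGlobal,
                                             mElement: kAudioObjectPropertyElementMaster)
    var deviceId = id
    return AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                      &address, 0, nil,
                                      UInt32(MemoryLayout<AudioDeviceID>.size),
                                      &deviceId) == noErr
}
```

With something like this, the Mac version can present its own input-device picker instead of relying on the CADefaultDeviceAggregate placeholder that AVAudioSession reports.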