Mac Catalyst Audio Device Name CADefaultDeviceAggregate-xxxx-x

On iOS/iPadOS, with a USB microphone connected, the 'availableInputs' property of AVAudioSession's sharedInstance returns AVAudioSessionPortDescriptions for both the built-in microphone and the USB microphone. On macOS, in an iPad app built for the Mac with Catalyst, the only AVAudioSessionPortDescription returned is for a device named 'CADefaultDeviceAggregate-xxxx-x' (where the 'x' characters stand for varying digits), even when several USB audio devices are connected to the Mac.


We would like our app to be able to switch between the built-in microphone and external USB microphones (or other USB audio devices) in the Mac version, just as it does in the iPad version. I understand that the user has multiple ways to select the system's current audio input/output devices (System Preferences: Sound, and Audio MIDI Setup). However, for the Mac version of our iPad app to work, we at least need the correct device name and UID for the selected input device, rather than CADefaultDeviceAggregate.


For a specific example, we would like to see the following when a miniDSP UMIK-1 is attached to the Mac (this is some of the info provided in the AVAudioSessionPortDescription for this device on iOS/iPadOS):

Port Name: Umik-1 Gain: 18dB

Type: USBAudio

UID: AppleUSBAudioEngine:miniDSP :Umik-1 Gain: 18dB :00002:1
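
For reference, here's roughly how we read those fields on iOS/iPadOS (a minimal sketch, error handling omitted; the logging helper is ours):

import AVFoundation

// Minimal sketch: log the name, type, and UID of every available input port.
func logAvailableInputs() {
    let session = AVAudioSession.sharedInstance()
    for port in session.availableInputs ?? [] {
        print("Port Name: \(port.portName)")
        print("Type: \(port.portType.rawValue)")
        print("UID: \(port.uid)")
    }
}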


Does anyone know of a solution or workaround to this problem (it's currently a show-stopper for a Catalyst version of our app)?


In a way, it seems to be an architecture mismatch between macOS and iOS audio. On iOS, there can be multiple audio routes (that is, possible paths from input to output ports). On macOS, there is only one audio route, from the user's selected input device to the user's selected output device. There's no system reason to switch ports on macOS, unlike iOS, where the system switches between ports a lot.


That's fine on the macOS side as far as it goes: it ensures that the defaults actually used are the defaults the user chose. However, it would be helpful if individual input and output devices were also exposed as input and output ports in Catalyst. There's a similar problem with AVAudioEngine's input and output nodes under Catalyst.


I think this mainly represents a lack of foresight (or perhaps time ran out before these niceties could be added). File a bug report. It does no harm to nudge the audio engineers in the right direction. 🙂

Accepted Answer

It turns out that a lot more is possible with the CoreAudio framework in Catalyst than on iOS. So we can get all the info we need through CoreAudio in the Mac version of the app. We can even let the user choose the audio input and output devices from within our own app.


Here's a simple Swift example to get the name and UID of the current input device (accessible from our existing Objective-C code):


// Requires "import CoreAudio" at the top of the file.
@objc open class func getSelectedInputDeviceInfo() -> [String: Any]
{
    let keys = FSTCAKey()
    var devInfo: [String: Any] = [keys.inputDeviceName: "Audio Input",
                                  keys.inputDeviceUID: "Audio Input"]

    // Ask the HAL for the system's default input device.
    var deviceId = AudioDeviceID(0)
    var deviceSize = UInt32(MemoryLayout.size(ofValue: deviceId))
    var address = AudioObjectPropertyAddress(mSelector: kAudioHardwarePropertyDefaultInputDevice,
                                             mScope: kAudioObjectPropertyScopeGlobal,
                                             mElement: kAudioObjectPropertyElementMaster)
    var err = AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &deviceSize, &deviceId)

    if err == noErr
    {
        devInfo[keys.inputDeviceID] = deviceId

        // Fetch the device's human-readable name.
        address.mSelector = kAudioDevicePropertyDeviceNameCFString
        var deviceName = "" as CFString
        deviceSize = UInt32(MemoryLayout.size(ofValue: deviceName))
        err = AudioObjectGetPropertyData(deviceId, &address, 0, nil, &deviceSize, &deviceName)
        if err == noErr
        {
            devInfo[keys.inputDeviceName] = deviceName as String
            print("### current input device: \(deviceName)")
        }

        // Fetch the device's UID (same address, different selector).
        address.mSelector = kAudioDevicePropertyDeviceUID
        err = AudioObjectGetPropertyData(deviceId, &address, 0, nil, &deviceSize, &deviceName)
        if err == noErr
        {
            devInfo[keys.inputDeviceUID] = deviceName as String
            print("### current input device UID: \(deviceName)")
        }
    }

    return devInfo
}

The name and UID returned from CoreAudio give us what we were hoping to get from AVAudioSession.
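
To illustrate the device-selection claim above, here's a minimal sketch (the helper names are ours, and it's untested beyond our use case) that lists every audio device and makes one of them the system default input:

import CoreAudio

// Minimal sketch: enumerate all audio devices known to the HAL.
func allAudioDeviceIDs() -> [AudioDeviceID] {
    var address = AudioObjectPropertyAddress(mSelector: kAudioHardwarePropertyDevices,
                                             mScope: kAudioObjectPropertyScopeGlobal,
                                             mElement: kAudioObjectPropertyElementMaster)
    var dataSize = UInt32(0)
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &dataSize) == noErr else { return [] }
    var deviceIDs = [AudioDeviceID](repeating: 0,
                                    count: Int(dataSize) / MemoryLayout<AudioDeviceID>.size)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &dataSize, &deviceIDs) == noErr else { return [] }
    return deviceIDs
}

// Minimal sketch: make the given device the system default input.
func setDefaultInputDevice(_ deviceId: AudioDeviceID) -> OSStatus {
    var address = AudioObjectPropertyAddress(mSelector: kAudioHardwarePropertyDefaultInputDevice,
                                             mScope: kAudioObjectPropertyScopeGlobal,
                                             mElement: kAudioObjectPropertyElementMaster)
    var device = deviceId
    return AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                      &address, 0, nil,
                                      UInt32(MemoryLayout<AudioDeviceID>.size), &device)
}

The same name/UID queries shown in getSelectedInputDeviceInfo() can then be run against each returned AudioDeviceID to build an in-app device picker.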

On the Mac, an app can choose one or more audio input and/or output devices to work with, independent of the system-wide input/output selection. Even so, I don't see any reason why a Mac Catalyst audio app couldn't or shouldn't operate the same way on the Mac as it does on iPad (even if that isn't as elegant as a native Mac app's ability to work with devices independently of the system selection).


In any case, I did submit a formal request to Apple (FB7328546) to provide similar behavior in AVAudioSession's 'availableInputs' property.


Also, although I have been using the RemoteIO audio unit to interact with the selected audio devices, I did test this issue with AVAudioEngine and got the same result for the input device name ('CADefaultDeviceAggregate-xxxx-x').
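
For anyone taking the audio-unit route: the usual macOS approach for pointing an I/O unit at a specific device is kAudioOutputUnitProperty_CurrentDevice. Whether that behaves the same under Catalyst is an assumption I haven't verified; a minimal sketch:

import AudioToolbox
import CoreAudio

// Minimal sketch (unverified under Catalyst): bind an I/O audio unit
// to a specific device instead of the default aggregate.
func setCurrentDevice(_ deviceId: AudioDeviceID, on ioUnit: AudioUnit) -> OSStatus {
    var device = deviceId
    return AudioUnitSetProperty(ioUnit,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global,
                                0, // element 0 is the unit itself
                                &device,
                                UInt32(MemoryLayout<AudioDeviceID>.size))
}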

So, basically, we are back to having two different code branches: one for selecting the input device on macOS and another for iOS?

I was really hoping I could write the same code for both systems...

Do you have to check at run time what system the app is running on?

Or at compile time, using preprocessor directives?
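
For example, something like this (a sketch using Swift's targetEnvironment condition; the Catalyst branch's return value is just a placeholder):

import AVFoundation

// Minimal sketch: pick the code path at compile time rather than at run time.
func currentInputName() -> String {
    #if targetEnvironment(macCatalyst)
    // Catalyst build: use the CoreAudio query from the accepted answer here.
    return "CoreAudio path"
    #else
    // iOS/iPadOS build: AVAudioSession reports real port names.
    return AVAudioSession.sharedInstance().availableInputs?.first?.portName ?? "unknown"
    #endif
}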
