a virtual macOS microphone

hello,

how do i create a virtual microphone on macOS that can be selected as a default input device in System Settings or in apps like FaceTime / QuickTime Player / Skype, etc?

is an Audio HAL plug-in the way to go?

i've seen this macOS 10.15 note: "Legacy Core Audio HAL audio hardware plug-ins are no longer supported. Use Audio Server plug-ins for audio drivers." though i am not sure if that's applicable, as i can think of these interpretations:
  • 1 "Legacy Core Audio HAL audio hardware plug-ins are no longer supported (but you can still use non-legacy ones.)

  • 2 "Legacy Core Audio HAL audio hardware plug-ins are no longer supported." (but you can still use non-hardware ones".)

  • 3 "Legacy Core Audio HAL audio hardware plug-ins are no longer supported". (if you used that functionality to implement audio hardware drivers then your you can use Audio Server plug-ins instead, otherwise you are screwed.)

The "Audio Server plugin" documentation is minimalistic:

https://developer.apple.com/library/archive/qa/qa1811/_index.html

which leads to a 2013 code sample:

https://developer.apple.com/library/archive/samplecode/AudioDriverExamples/Introduction/Intro.html

which contains a "NullAudio" plug-in and a kernel-extension-backed plug-in; i wasn't able to resurrect either of them (i'm on macOS Catalina now).

any hints?

what can i say...

i've been following apple's audio success story practically from the very beginning. when the thing was called sound driver on a system that was called just that, system, and it was revolutionary to hear 8-bit 22 kHz sound compared to what we had on other systems back at the time. then with the sound manager when we finally learned the true meaning of "square" waves (which weren't so much square up until then). i could probably still recall some intimate details of the double buffer proc, the beast that surfaced ca. IM VI time. later on it was an amazing journey with CoreAudio which raised the bar once again. with audio units, HAL, inter-app audio, aggregate devices, audio engine, and many more things big or small, that i won't mention now to keep this opus short -- things that continued to match and in many cases exceed our expectations. and that was a fun journey, all the way back from the very first chime, through sosumi and wall-e up to airpods (the rant about AirPods latency would be elsewhere).

but boy, look where we are today. for such a simple thing as a "virtual microphone", apparently "there is no supported way to achieve the desired functionality given the currently shipping system configurations" (follow-up: 744010100).

terminally frustrated. guys, you really need more jim reekeses onboard...

Accepted Answer
Ignoring your rant, AudioServerPlugIns are (and remain) supported, but there is next to no guidance on how to write (or debug!) them.

You can either bite the bullet and start from scratch, go with NullAudio (or the C++ SimpleAudio (?) variant that uses the CoreAudioHelper classes), both of which are old but can be brought forward reasonably easily, or look for a different starting point.

There are several open source projects with working AudioServerPlugIns, for example SoundFlower, BlackHole, BackgroundMusic, and SoundPusher. A commercial solution (which you may be able to license) is Rogue Amoeba's Loopback.
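
For orientation, here is a minimal sketch of the entry point such a plug-in exposes, modelled on NullAudio. The function and variable names are made up for illustration, and the driver table behind gDriverRef is assumed to be defined elsewhere:

```c
#include <CoreAudio/AudioServerPlugIn.h>

// The bundle is a CFPlugIn: its Info.plist maps kAudioServerPlugInTypeUUID to
// this factory by name, and coreaudiod calls the factory when it scans
// /Library/Audio/Plug-Ins/HAL at startup. The factory hands back a COM-style
// reference, i.e. a pointer to a pointer to the driver's
// AudioServerPlugInDriverInterface.
extern AudioServerPlugInDriverRef gDriverRef;   // assumed to be defined elsewhere

void* VirtualMic_CreateDriver(CFAllocatorRef inAllocator, CFUUIDRef inRequestedTypeUUID)
{
    // Only answer for the one plug-in type the HAL asks about.
    if (inRequestedTypeUUID != NULL && CFEqual(inRequestedTypeUUID, kAudioServerPlugInTypeUUID))
    {
        return gDriverRef;
    }
    return NULL;
}
```

The factory name has to match the CFPlugInFactories entry in the bundle's Info.plist, and coreaudiod only picks the bundle up after it restarts (e.g. `sudo killall coreaudiod`); after that the device becomes selectable as an input in the Sound preferences and in apps.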
thank you for your answer and the references to other projects!

i managed to get NullAudio working. the worrying bit is the blanket DTS answer that "there is no supported way", which automatically makes whatever currently works "unsupported", which in turn means it may disappear on a whim, e.g. in macOS XI.

Hi, I have a similar kind of requirement. I'm trying to create two different virtual devices, one for input and another for output. Were you able to figure this out?
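
For what it's worth, publishing two devices from one plug-in mostly comes down to what the plug-in object reports for kAudioPlugInPropertyDeviceList; whether each device then shows up as input or output follows from the direction of the streams it publishes. A rough sketch with made-up object IDs (only kAudioObjectPlugInObject, kAudioPlugInPropertyDeviceList, and kAudioHardwareNoError are real constants, the rest is hypothetical):

```c
#include <string.h>
#include <CoreAudio/AudioServerPlugIn.h>

// Hypothetical object IDs: every object a driver publishes (plug-in, devices,
// streams, controls) needs a unique AudioObjectID, with ID 1 reserved for the
// plug-in object itself.
enum
{
    kObjectID_PlugIn       = kAudioObjectPlugInObject,
    kObjectID_InputDevice  = 2,
    kObjectID_InputStream  = 3,
    kObjectID_OutputDevice = 4,
    kObjectID_OutputStream = 5
};

// Sketch of the kAudioPlugInPropertyDeviceList case of GetPropertyData on the
// plug-in object: returning two AudioObjectIDs is what makes two separate
// devices appear to the HAL.
static OSStatus CopyDeviceList(UInt32 inDataSize, UInt32* outDataSize, void* outData)
{
    static const AudioObjectID theDevices[] = { kObjectID_InputDevice, kObjectID_OutputDevice };
    UInt32 theCount = inDataSize / sizeof(AudioObjectID);
    if (theCount > 2)
    {
        theCount = 2;
    }
    memcpy(outData, theDevices, theCount * sizeof(AudioObjectID));
    *outDataSize = theCount * sizeof(AudioObjectID);
    return kAudioHardwareNoError;
}
```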

Hello, I have to do the same thing and I can't find any reference to move forward with. It would be really helpful if you could explain how you did it. I was able to create a virtual device and select it, but I don't know how to feed data to it.
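
In case it helps: the place where a virtual microphone actually hands audio to its clients is the DoIOOperation callback of the AudioServerPlugInDriverInterface, specifically the kAudioServerPlugInIOOperationReadInput case. Below is a rough sketch that just synthesizes a test tone, assuming an interleaved stereo 32-bit float stream at 44.1 kHz; a real driver (BlackHole, for instance) instead copies from a ring buffer that its output side filled during kAudioServerPlugInIOOperationWriteMix. The function name and the format assumptions are mine, not from the sample:

```c
#include <math.h>
#include <CoreAudio/AudioServerPlugIn.h>

static OSStatus VirtualMic_DoIOOperation(AudioServerPlugInDriverRef inDriver,
                                         AudioObjectID inDeviceObjectID,
                                         AudioObjectID inStreamObjectID,
                                         UInt32 inClientID,
                                         UInt32 inOperationID,
                                         UInt32 inIOBufferFrameSize,
                                         const AudioServerPlugInIOCycleInfo* inIOCycleInfo,
                                         void* ioMainBuffer,
                                         void* ioSecondaryBuffer)
{
    if (inOperationID == kAudioServerPlugInIOOperationReadInput)
    {
        // ioMainBuffer is where the "recorded" samples go; whatever is written
        // here is what FaceTime / QuickTime / Skype receive from the device.
        Float32* theBuffer = (Float32*)ioMainBuffer;
        Float64  theSampleTime = inIOCycleInfo->mInputTime.mSampleTime;
        for (UInt32 theFrame = 0; theFrame < inIOBufferFrameSize; ++theFrame)
        {
            // 440 Hz test tone at -12 dB, interleaved stereo.
            Float32 theSample = (Float32)(0.25 * sin(2.0 * M_PI * 440.0 * (theSampleTime + theFrame) / 44100.0));
            theBuffer[2 * theFrame + 0] = theSample;
            theBuffer[2 * theFrame + 1] = theSample;
        }
    }
    return kAudioHardwareNoError;
}
```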
