AVAudioEngine and Voice Processing Unit on macOS

Hi,


I am trying to build a very simple test for receiving input audio using AVAudioEngine, in which the input node has voice processing enabled, on macOS. I've seen examples for iOS that make use of AVAudioSession, which is not available on macOS. My code is quite simple and shown below. If I don't enable voice processing, I am able to start the engine. However, if I enable voice processing and then start the engine, I get a crash with this error:


AUVPAggregate.cpp:1432:AUVoiceProcessor: couldn't create the aggregate device (err=-10876)


Is there something I am missing? Do I need to perform other configuration on the input node to get this to work?
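
For reference, the iOS examples I've seen do roughly the following before enabling voice processing. This is just my understanding of the iOS pattern; since AVAudioSession is unavailable on macOS, I can't do the equivalent here:

// iOS-only sketch of the session setup I've seen in samples.
// Does not compile on macOS, where AVAudioSession is unavailable.
import AVFoundation

func configureSessionForVoiceChat() throws {
    let session = AVAudioSession.sharedInstance()
    // Voice chat category/mode, as used in the iOS voice processing examples.
    try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
    try session.setActive(true)
}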


Thanks so much in advance.


import Cocoa
import AVFoundation

class ViewController: NSViewController {

    private var audioEngine = AVAudioEngine()

    override func viewDidLoad() {
        super.viewDidLoad()
        setupAudioEngine()
        startAudioEngine()
    }

    func setupAudioEngine() {
        let input = audioEngine.inputNode
        do {
            // Enabling voice processing on the input node is the step that
            // leads to the crash once the engine is started.
            try input.setVoiceProcessingEnabled(true)
        } catch {
            print("could not enable voice processing \(error)")
            return
        }
    }

    // Assumed: simply starts the engine; this is where the crash occurs
    // when voice processing is enabled.
    func startAudioEngine() {
        do {
            try audioEngine.start()
        } catch {
            print("could not start audio engine \(error)")
        }
    }
}