Generating microphone live predictions in SwiftUI

Hello everyone,

I'm pretty new to SwiftUI (and to Swift in general), and I'd like to create an app that can recognise when my water tap is running (it's part of an app that helps me manage the watering of my plants).

To do this, I've created an ML model (which works pretty well, according to my previews in Create ML).

Right now, I'm working on the integration into my SwiftUI app: I've imported the ML model, but I don't know how to generate live predictions from the current microphone input.

When I try to make predictions (soundClassifier.prediction), I don't know what to pass for the audioSamples argument. It's an MLMultiArray, but how do I create one live from my microphone input?
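From what I've read in the SoundAnalysis documentation, it looks like I might not need to build the MLMultiArray myself at all: SNClassifySoundRequest can wrap the Core ML model, and SNAudioStreamAnalyzer can be fed buffers from an AVAudioEngine microphone tap. Here's a rough sketch of what I'm attempting (MySoundClassifier stands in for my generated model class; the rest is my guess from the docs):

```swift
import AVFoundation
import CoreML
import SoundAnalysis

// Receives classification results from the stream analyzer
class ResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    func request(_ request: SNRequest, didFailWithError error: Error) {
        print("Analysis failed: \(error.localizedDescription)")
    }
}

class AudioAnalyzer {
    private let audioEngine = AVAudioEngine()
    private var streamAnalyzer: SNAudioStreamAnalyzer?
    private let resultsObserver = ResultsObserver()
    private let analysisQueue = DispatchQueue(label: "com.example.AnalysisQueue")

    func start() throws {
        let inputNode = audioEngine.inputNode
        let inputFormat = inputNode.outputFormat(forBus: 0)

        // Create the stream analyzer with the microphone's native format
        let analyzer = SNAudioStreamAnalyzer(format: inputFormat)
        streamAnalyzer = analyzer

        // Wrap the Create ML model in a classification request
        // ("MySoundClassifier" is a placeholder for the generated class)
        let model = try MySoundClassifier(configuration: MLModelConfiguration()).model
        let request = try SNClassifySoundRequest(mlModel: model)
        try analyzer.add(request, withObserver: resultsObserver)

        // Tap the microphone and forward each buffer to the analyzer
        inputNode.installTap(onBus: 0, bufferSize: 8192, format: inputFormat) { buffer, time in
            self.analysisQueue.async {
                analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
            }
        }

        audioEngine.prepare()
        try audioEngine.start()
    }
}
```

Is this the right direction, or is there a case where I really do have to fill the MLMultiArray myself?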

As you can see, I have plenty of questions about how Core ML and SoundAnalysis work with SwiftUI. If any of you could enlighten me, I'd be very grateful.
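For the SwiftUI side, my current guess is to wrap the analyzer in an ObservableObject and publish the latest label so a view can display it, along these lines (again, just a sketch):

```swift
import SwiftUI

// Publishes the latest classification so a SwiftUI view can react to it
class TapDetector: ObservableObject {
    @Published var currentSound = "–"
    // ...would own the AudioAnalyzer above and update currentSound
    //    from the observer's results, dispatched to the main thread
}

struct ContentView: View {
    @StateObject private var detector = TapDetector()

    var body: some View {
        Text("Detected: \(detector.currentSound)")
    }
}
```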

Thank you in advance.

Théo L.