Analyzing Audio to Classify Sounds With Core ML

Hi, I was following along with this documentation (https://developer.apple.com/documentation/soundanalysis/analyzing_audio_to_classify_sounds) trying to classify sounds within my SwiftUI app. Here's what I have:

Code Block swift
import AVFoundation
import CoreML
import SoundAnalysis

// NoiseDetector is the Xcode-generated class for my Core ML sound classifier.
let noiseDetector = NoiseDetector()
let model: MLModel = noiseDetector.model
let analysisQueue = DispatchQueue(label: "com.apple.AnalysisQueue")
public var noiseType: String = "default"


Code Block swift
class ResultsObserver: NSObject, SNResultsObserving, ObservableObject {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
            let classification = result.classifications.first else { return }
        noiseType = classification.identifier
        let formattedTime = String(format: "%.2f", result.timeRange.start.seconds)
        print("Analysis result for audio at time: \(formattedTime)")
        let confidence = classification.confidence * 100.0
        let percent = String(format: "%.2f%%", confidence)
        print("\(classification.identifier): \(percent) confidence.\n")
    }
    func request(_ request: SNRequest, didFailWithError error: Error) {
        print("The the analysis failed: \(error.localizedDescription)")
    }
    func requestDidComplete(_ request: SNRequest) {
        print("The request completed successfully!")
    }
}


Code Block swift
func startAudioEngine() {
    // Set up the engine and grab the mic input format.
    let audioEngine: AVAudioEngine = AVAudioEngine()
    let inputBus = AVAudioNodeBus(0)
    let inputFormat = audioEngine.inputNode.inputFormat(forBus: inputBus)
    do {
        try audioEngine.start()
    } catch {
        print("Unable to start AVAudioEngine: \(error.localizedDescription)")
    }
    // Create the stream analyzer and attach the classification request and observer.
    let streamAnalyzer = SNAudioStreamAnalyzer(format: inputFormat)
    let resultsObserver = ResultsObserver()
    do {
        let request = try SNClassifySoundRequest(mlModel: model)
        try streamAnalyzer.add(request, withObserver: resultsObserver)
    } catch {
        print("Unable to prepare request: \(error.localizedDescription)")
        return
    }
    // Tap the input node and hand each buffer to the analyzer on a serial queue.
    let analysisQueue = DispatchQueue(label: "com.apple.AnalysisQueue")
    audioEngine.inputNode.installTap(onBus: inputBus,
                                     bufferSize: 8192,
                                     format: inputFormat) { buffer, time in
        analysisQueue.async {
            streamAnalyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
        }
    }
}

Now obviously I wouldn't be asking this if it were working; I'm just not sure how it's broken. I'm sure it's because I've read the documentation wrong, but I'm not sure how else to interpret it. I also tried injecting some print statements into the startAudioEngine function, and from what I could tell it never actually reaches the streamAnalyzer.analyze call inside the tap closure, although I'm not entirely sure what's causing that.
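In case it matters, this is roughly how I injected the prints (the exact messages are just placeholders for whatever I typed at the time):

Code Block swift
// Inside startAudioEngine, same tap as above, just with a print added.
audioEngine.inputNode.installTap(onBus: inputBus,
                                 bufferSize: 8192,
                                 format: inputFormat) { buffer, time in
    analysisQueue.async {
        print("about to analyze")   // as far as I can tell, this never fires
        streamAnalyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
    }
}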

In case it's helpful context: all I want to do is display the classification as text in the UI.
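I gather the observer probably needs to publish noiseType itself rather than write to that global for the view to update, so the view side would look something like this (ContentView and the property are just my sketch, and wiring this up properly is part of what I'm unsure about):

Code Block swift
import SwiftUI

// Assumes ResultsObserver exposes `@Published var noiseType: String = "default"`.
struct ContentView: View {
    @ObservedObject var resultsObserver = ResultsObserver()

    var body: some View {
        // Just want the latest classification label rendered as text.
        Text(resultsObserver.noiseType)
    }
}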

Thanks for any help; I'm lost here.




