Audio crashes when connected to AirPods

Hi There,

Whenever I use the microphone in my ShazamKit app while connected to AirPods, the app crashes with an "Invalid input sample rate." message.

I've tried multiple formats but keep getting this crash. Any pointers would be really helpful.

```
func configureAudioEngine() {
    do {
        try audioSession.setCategory(.playAndRecord, options: [.mixWithOthers, .defaultToSpeaker, .allowAirPlay, .allowBluetoothA2DP, .allowBluetooth])
        try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
    } catch {
        print(error.localizedDescription)
    }

    guard let engine = audioEngine else { return }

    let inputNode = engine.inputNode
    let inputNodeFormat = inputNode.inputFormat(forBus: 0)

    let audioFormat = AVAudioFormat(
        standardFormatWithSampleRate: inputNodeFormat.sampleRate,
        channels: 1
    )

    // Install a "tap" on the audio engine's input so that we can send buffers
    // from the microphone to the signature generator.
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { buffer, audioTime in
        self.addAudio(buffer: buffer, audioTime: audioTime)
    }
}
```
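In case it helps with diagnosis: when AirPods are the active input route, the microphone usually runs over the Bluetooth HFP link at a much lower sample rate (often 8, 16, or 24 kHz) than the built-in mic, which the signature generator can reject. A quick way to confirm is to log the hardware input format and the current route before installing the tap. This is a hypothetical diagnostic snippet, not from the original post:

```
// Hypothetical diagnostic: print what the input node actually reports
// once the session is active and AirPods are the current route.
let inputNode = audioEngine.inputNode
let hwFormat = inputNode.inputFormat(forBus: 0)
print("Input sample rate:", hwFormat.sampleRate)
print("Input channels:", hwFormat.channelCount)

// The route description shows whether the Bluetooth HFP input is in use.
for input in AVAudioSession.sharedInstance().currentRoute.inputs {
    print("Input port type:", input.portType.rawValue)
}
```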
Answered by Frameworks Engineer in 716852022

Following up with the AVFAudio team on this, the recommended approach is to convert the buffer inside the installTap closure into a buffer with an AVAudioFormat that ShazamKit accepts. Here's sample code showing how that conversion could look.

Configure AVAudioEngine

You need to configure the audio engine to use the hardware's input node and install a tap on it. Inside the installTap closure, convert the incoming buffer into a buffer with the desired audio format.

```
func configureAudioEngine() {

    guard let outputAudioFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1) else { return }

    let inputNode = audioEngine.inputNode
    let inputFormat = inputNode.inputFormat(forBus: 0)

    inputNode.installTap(onBus: 0, bufferSize: 2048, format: inputFormat) { [weak self] buffer, audioTime in
        guard let self = self else { return }

        // Convert the buffer into the desired format.
        if let outputBuffer = self.convert(inputBuffer: buffer, outputFormat: outputAudioFormat) {
            // Use the converted buffer; AVAudioTime needs to be mapped in order to use it.
            self.session.matchStreamingBuffer(outputBuffer, at: nil)
        }
    }
}
```
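The comment in the tap notes that the AVAudioTime would need mapping before passing it instead of nil: the tap reports sample time in hardware sample-rate units, so a buffer converted to a different rate needs its sample time rescaled to that rate. A hypothetical sketch of that mapping (the helper's name and scaling are my assumptions, not part of the original answer):

```
// Hypothetical: rescale the tap's sample time from the hardware rate
// to the converted buffer's rate so the timestamp stays meaningful.
func map(_ time: AVAudioTime, from inputRate: Double, to outputRate: Double) -> AVAudioTime {
    let scaledSampleTime = AVAudioFramePosition(
        Double(time.sampleTime) * (outputRate / inputRate)
    )
    return AVAudioTime(sampleTime: scaledSampleTime, atRate: outputRate)
}
```

Passing nil, as the sample does, is also acceptable to matchStreamingBuffer; the mapping only matters if you need accurate match timing.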

Convert AVAudioPCMBuffer

Convert the AVAudioPCMBuffer using the desired AVAudioFormat and return the new buffer.

```
func convert(inputBuffer: AVAudioPCMBuffer, outputFormat: AVAudioFormat) -> AVAudioPCMBuffer? {

    // Size the output buffer to hold the input resampled to the output rate.
    let outputFrameCapacity = AVAudioFrameCount(
        round(Double(inputBuffer.frameLength) * (outputFormat.sampleRate / inputBuffer.format.sampleRate))
    )

    guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: outputFrameCapacity) else { return nil }
    guard let converter = AVAudioConverter(from: inputBuffer.format, to: outputFormat) else { return nil }

    converter.convert(to: outputBuffer, error: nil) { packetCount, status in
        status.pointee = .haveData
        return inputBuffer
    }

    return outputBuffer
}
```
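One thing worth knowing about AVAudioConverter's pull model: the input block above always reports .haveData with the same buffer, which is fine when the output buffer is sized to exactly one input buffer, but the converter may invoke the block more than once per convert call. A slightly more defensive variant (my own sketch, not part of the original answer) hands the buffer over once and then reports .noDataNow:

```
// Hypothetical: provide the input buffer exactly once per convert() call,
// then tell the converter there is no more data for now.
var consumed = false
converter.convert(to: outputBuffer, error: nil) { _, status in
    if consumed {
        status.pointee = .noDataNow
        return nil
    }
    consumed = true
    status.pointee = .haveData
    return inputBuffer
}
```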

Final Steps

Set the AVAudioSession category with the desired options and check for microphone permission; then you can start the AVAudioEngine. Finally, listen for results on the SHSession.

```
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: [.mixWithOthers, .defaultToSpeaker, .allowAirPlay, .allowBluetoothA2DP])

AVAudioSession.sharedInstance().requestRecordPermission { [weak self] success in
    guard success, let self = self else { return }
    Task.detached { try? self.audioEngine.start() }
}

Task { @MainActor in
    for await case .match(let match) in session.results {
        self.matchResult = match.matchResult
    }
}
```

It seems to have removed the ShazamKit crashes, but I am getting this error now:
