ARKit stops providing Audio

Hello all, this has been puzzling me for a good few hours now.

My setup is a SwiftUI view from which I call a UIViewRepresentable that hosts my ARView. The representable is passed a delegate and an ARView instance:

RealityKitView(arView: $arView, sceneDelegate: $delegate)

I have set up a delegate for my ARView like so:

class MySessionDelegate: NSObject, ARSessionDelegate, ARSessionObserver {
    var bufferArray: [CMSampleBuffer] = []

    func session(
        _ session: ARSession,
        didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer
    ){
        bufferArray.append(audioSampleBuffer)
    }
}

and of course I enable audio sample delivery with

config.providesAudioData = true

in the makeUIView function.

But for some reason I don't get more than 125 samples. I set up a counter in the session function to see how many samples arrive, and it always stops at 124. I have no idea why this is happening. I also tested on different devices (an iPad Pro from 2021 and an iPhone 11), and it always stops at 124 samples and then just doesn't do anything anymore... what the hell? :D

Do I need to do anything special when linking the delegate to my ARView? I am just doing session.delegate = delegate in the makeUIView function.
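
For context, here is roughly what that wiring looks like in my makeUIView (a trimmed-down sketch, not the full representable; ARWorldTrackingConfiguration is just what I happen to use, and the bindings match the RealityKitView call above):

```swift
import SwiftUI
import ARKit
import RealityKit

struct RealityKitView: UIViewRepresentable {
    @Binding var arView: ARView
    @Binding var sceneDelegate: MySessionDelegate

    func makeUIView(context: Context) -> ARView {
        let config = ARWorldTrackingConfiguration()
        // Ask ARKit to deliver microphone audio to the session delegate.
        config.providesAudioData = true

        // Hook up the delegate before running the session.
        arView.session.delegate = sceneDelegate
        arView.session.run(config)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
```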

Any and all help would be greatly appreciated!

Answered by ForumsContributor

Sorry, you'll have to post more code pertinent to what is giving trouble and details on what you're using the ARView for.

So I narrowed it down to the fact that I'm appending the audio samples to an array. It seems that I'm holding on to the sample buffers for too long, and that is causing this to happen. But I need to write the audio samples to an audio file... how can I get around this? I tried implementing an audio file writer with AVFoundation, but even that causes the audio samples to stop coming after a while (125 samples that I save)... it's honestly super weird.

Accepted Answer

I finally fixed this by not appending the buffers to an array and writing the audio file later, but instead setting up an audio-writer class that takes samples in real time and writes them to a file. For anybody who is facing the same problem, here is my class (probably very poorly and inefficiently written, so beware xD):

public class audioWriter: ObservableObject {
    var assetWriter: AVAssetWriter
    var audioInput: AVAssetWriterInput

    init(assetWriter: AVAssetWriter, audioInput: AVAssetWriterInput) {
        self.assetWriter = assetWriter
        self.audioInput = audioInput
    }

    func startWriting(firstBuffer: CMSampleBuffer) {
        audioInput.expectsMediaDataInRealTime = true
        assetWriter.add(audioInput)
        assetWriter.startWriting()
        // Start the session at the first buffer's timestamp so the file's
        // timeline lines up with the incoming samples.
        assetWriter.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(firstBuffer))
    }

    func addSample(sample: CMSampleBuffer) {
        // Only append when the input can take more data; appending while
        // the input is not ready is an error.
        if audioInput.isReadyForMoreMediaData {
            audioInput.append(sample)
        }
    }

    func finish() {
        audioInput.markAsFinished()
        assetWriter.finishWriting {}
    }
}

The class is then called like this:

func startAudio(url: URL) {
    let audioOutputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44100.0,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    // Bail out instead of force-unwrapping if the writer can't be created
    // or no audio buffer has arrived yet.
    guard let assetWriter = try? AVAssetWriter(outputURL: url, fileType: .wav),
          let firstBuffer = buffer else { return }
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioOutputSettings)
    recorder = audioWriter(assetWriter: assetWriter, audioInput: audioInput)
    recorder?.startWriting(firstBuffer: firstBuffer)
    recording = true
}

The recording = true flag is checked in my session func in the ARSessionObserver, which hands the incoming samples to the writer:

func session(_ session: ARSession, didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer) {
    buffer = audioSampleBuffer
    if recording {
        print("Audio Sample: \(count)")
        count += 1
        // Copy the buffer before handing it to the writer so ARKit can
        // recycle the original.
        let newBuffer = try? CMSampleBuffer(copying: buffer!)
        recorder?.addSample(sample: newBuffer!)
    }
}
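
To stop the recording I then just do something like this (a sketch; stopAudio is a hypothetical name, assuming the same recorder and recording properties as above):

```swift
func stopAudio() {
    recording = false   // stop feeding new samples to the writer
    recorder?.finish()  // marks the input finished and finalizes the file
    recorder = nil
}
```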

I am not sure if I need the detour of copying the buffer before writing it to my audio file with let newBuffer = try? CMSampleBuffer(copying: buffer!). If anybody knows more, please do tell : )

This has worked for me, but if anyone comes up with a more efficient way, feel free to share :D
