AVAudioEngine returning silence at the beginning of manual rendering

I am using AVAudioEngine to analyse some audio tracks, and quite often the `while engine.manualRenderingSampleTime < sourceFile.length {` loop takes a moment before it starts receiving audio data. Looking at the rendered waveform, the input appears to be simply delayed. This wouldn't be a problem if the length of the output varied to account for this latency, but unfortunately the length always stays the same, so the final part of the track is lost.

I took the code from the official tutorial, and this seems to happen regardless of whether I add an EQ effect or not. In fact, the two analyses (with and without EQ), run one after the other, return the same anomaly.



    let format: AVAudioFormat = sourceFile.processingFormat
   
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)
   
    if compress {
        let eq = AVAudioUnitEQ(numberOfBands: 2)
        engine.attach(eq)
        let lowPass = eq.bands[0]
        lowPass.filterType = .lowPass
        lowPass.frequency = 150.0
        lowPass.bypass = false
       
        let highPass = eq.bands[1]
        highPass.filterType = .highPass
        highPass.frequency = 100.0
        highPass.bypass = false
       
        engine.connect(player, to: eq, format: format)
        engine.connect(eq, to: engine.mainMixerNode, format: format)
    } else {
        engine.connect(player, to: engine.mainMixerNode, format: format)
    }
   
    do {
        let maxNumberOfFrames: AVAudioFrameCount = 4096
        try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: maxNumberOfFrames)
    } catch {
        fatalError("Could not enable manual rendering mode, \(error)")
    }
   
    player.scheduleFile(sourceFile, at: nil)
   
    do {
        try engine.start()
        player.play()
    } catch {
        fatalError("Could not start engine, \(error)")
    }
   
    // buffer to which the engine will render the processed data
    let buffer: AVAudioPCMBuffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat, frameCapacity: engine.manualRenderingMaximumFrameCount)!
    
    // per-frame peak values collected for analysis
    var points = [Float]()
    var pi = 0
    while engine.manualRenderingSampleTime < sourceFile.length {
        do {
            let framesToRender = min(buffer.frameCapacity, AVAudioFrameCount(sourceFile.length - engine.manualRenderingSampleTime))
            let status = try engine.renderOffline(framesToRender, to: buffer)
            switch status {
            case .success:
                // data rendered successfully
                let flength = Int(buffer.frameLength)
                points.reserveCapacity(pi + flength)
               
                if let chans = buffer.floatChannelData {
                    // floatChannelData is an array of per-channel pointers:
                    // chans[0] is the left channel, chans[1] the right
                    // (this assumes a stereo buffer)
                    let left = chans[0]
                    let right = chans[1]
                    for b in 0..<flength {
                        let v: Float = max(abs(left[b]), abs(right[b]))
                        points.append(v)
                    }
                }
                pi += flength
               
            case .insufficientDataFromInputNode:
                // applicable only if using the input node as one of the sources
                break
               
            case .cannotDoInCurrentContext:
                // engine could not render in the current render call, retry in next iteration
                break
               
            case .error:
                // error occurred while rendering
                fatalError("render failed")
            }
        } catch {
            fatalError("render failed, \(error)")
        }
    }
   
    player.stop()
    engine.stop()


I thought it was perhaps a simulator issue, but it also happens on a real device. Am I doing anything wrong? Thanks!

Replies

Did you solve this? I have the same issue, and the number of silenced frames is always a multiple of the manualRenderingMaximumFrameCount. It looks like the player node might not be ready as soon as rendering starts, and nothing seems to fix this (not even calling prepare() on the player node with the full file length).

A little late... but for anyone who comes across this issue in the future, I found a solution:

Do not use AVAudioPlayerNode with offline manual rendering. I believe this may be the source of the issue. Instead, use `setManualRenderingInputPCMFormat(_:inputBlock:)` on `engine.inputNode`. The inputBlock tells you how many frames it wants, and you must return an `AudioBufferList`. You can obtain one from an AVAudioPCMBuffer via `yourBuffer.audioBufferList`.

This is more of a solution for `player.scheduleBuffer` than for `player.scheduleFile`. If you are currently using `player.scheduleFile`, you will need to read the file one buffer at a time and feed it to the inputBlock.
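
For anyone wanting a concrete starting point, here is a rough, untested sketch of that approach (the function and `fileURL` parameter are placeholders of my own): it enables offline manual rendering, installs an input block on `engine.inputNode` that reads the source file one chunk at a time, and then drives the render loop much like the original code.

    import AVFoundation
    
    func renderOffline(from fileURL: URL) throws -> [Float] {
        let sourceFile = try AVAudioFile(forReading: fileURL)
        let format = sourceFile.processingFormat
    
        let engine = AVAudioEngine()
        try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
    
        // Scratch buffer the input block fills from the file, one chunk at a time.
        let readBuffer = AVAudioPCMBuffer(pcmFormat: format,
                                          frameCapacity: engine.manualRenderingMaximumFrameCount)!
    
        // Instead of an AVAudioPlayerNode, supply samples through the input node.
        let ok = engine.inputNode.setManualRenderingInputPCMFormat(format) { frameCount in
            readBuffer.frameLength = 0
            do {
                try sourceFile.read(into: readBuffer, frameCount: frameCount)
            } catch {
                return nil // no more input available
            }
            return readBuffer.audioBufferList
        }
        guard ok else { fatalError("Could not set manual rendering input format") }
    
        engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)
        try engine.start()
    
        let outBuffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                         frameCapacity: engine.manualRenderingMaximumFrameCount)!
        var points = [Float]()
        while engine.manualRenderingSampleTime < sourceFile.length {
            let frames = min(outBuffer.frameCapacity,
                             AVAudioFrameCount(sourceFile.length - engine.manualRenderingSampleTime))
            if try engine.renderOffline(frames, to: outBuffer) != .success { break }
            if let chans = outBuffer.floatChannelData {
                for b in 0..<Int(outBuffer.frameLength) {
                    points.append(abs(chans[0][b])) // channel 0 only; extend as needed
                }
            }
        }
        engine.stop()
        return points
    }

Note that the input block must be set before `engine.start()`, and returning nil from the block signals that no more input is available.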