AVAudioConverter buffers are over-retained in a DispatchQueue.concurrentPerform context

I have an audio analysis tool which I’m trying to parallelise, but it uses an escalating amount of heap space when working on a set of audio files.

It turns out that when I use the convert(to:error:withInputFrom:) method of AVAudioConverter inside a concurrentPerform block, a large amount of the memory associated with each conversion is retained until all of the iterations have finished, rather than being released at the end of each iteration. As a result, memory usage climbs steadily while a batch of files is processed, instead of staying effectively flat during the parallel work.

I’m assuming this is a flaw in my code rather than a bug in either framework, and I hope someone can help me find it. Below is a simplified version of the code, which converts the same file an arbitrary number of times (rather than operating on a large batch of different files, as my real application does).

Any help appreciated!

import AVFoundation

let outputFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 8000, channels: 1, interleaved: false)!

DispatchQueue.global(qos: .userInitiated).async {
    DispatchQueue.concurrentPerform(iterations: 1000) { index in

        let inputFile = try! AVAudioFile(forReading: URL(filePath: "/path/to/some/song.m4a"))

        let inputFormat = inputFile.processingFormat
        let inputFrameCount = AVAudioFrameCount(inputFile.length)
        let inputBuffer = AVAudioPCMBuffer(pcmFormat: inputFormat, frameCapacity: inputFrameCount)!

        try! inputFile.read(into: inputBuffer)

        let sampleRateCoefficient = inputFormat.sampleRate / outputFormat.sampleRate
        let outputFrameCount = UInt32(Double(inputFrameCount) / sampleRateCoefficient)
        let outputBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: outputFrameCount)!

        let converter = AVAudioConverter(from: inputBuffer.format, to: outputFormat)!
        let converterInputBlock: AVAudioConverterInputBlock = { _, outStatus in
            outStatus.pointee = AVAudioConverterInputStatus.haveData
            return inputBuffer
        }

        // If we just do some arbitrary work here, even big mallocs, memory usage is flat as everything is released at the end of the concurrentPerform closure.
        // But if we run this conversion, each iteration retains a big chunk of memory until they're all finished. :(

        converter.convert(
            to: outputBuffer,
            error: nil,
            withInputFrom: converterInputBlock
        )
    }
}

Looking closer at the debug memory graph, it seemed that the autorelease pool holding the audio buffers was too long-lived. I've not worked much with these pools before, but I was able to solve the memory issue by wrapping the conversion call tightly in its own pool:

_ = autoreleasepool {
    converter.convert(
        to: outputBuffer,
        error: nil,
        withInputFrom: converterInputBlock
    )
}

This appears to scope the lifetime of the buffers correctly. If anyone has a good explanation for why, I'd love to hear it!
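
For completeness, I'd guess that wrapping the entire body of the concurrentPerform closure in its own pool would bound memory just as well, since everything autoreleased during an iteration is then drained before the next iteration starts. Here's a rough, untested sketch of that variant, reusing the outputFormat defined above:

DispatchQueue.concurrentPerform(iterations: 1000) { index in
    autoreleasepool {
        // All of the per-iteration work happens inside the pool, so any
        // autoreleased objects created by the AVFoundation calls are
        // drained here rather than accumulating across iterations.
        let inputFile = try! AVAudioFile(forReading: URL(filePath: "/path/to/some/song.m4a"))
        let inputBuffer = AVAudioPCMBuffer(
            pcmFormat: inputFile.processingFormat,
            frameCapacity: AVAudioFrameCount(inputFile.length)
        )!
        try! inputFile.read(into: inputBuffer)

        let outputFrameCount = AVAudioFrameCount(
            Double(inputBuffer.frameLength) * outputFormat.sampleRate / inputBuffer.format.sampleRate
        )
        let outputBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: outputFrameCount)!

        let converter = AVAudioConverter(from: inputBuffer.format, to: outputFormat)!
        _ = converter.convert(to: outputBuffer, error: nil) { _, outStatus in
            outStatus.pointee = .haveData
            return inputBuffer
        }
    }
}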

Check out Objective-C Memory Management for Swift Programmers.

Share and Enjoy

Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
