processingFormat vs fileFormat, comparable

What is the difference between processingFormat and fileFormat?


When comparing two processingFormats (or AVAudioFormats in general), what is considered equal?


Can two processingFormats be considered the same format while having a different number of channels?


How does an AVAudioFile's processingFormat relate to an AVAudioBuffer's format property?

    /*! @property fileFormat
        @abstract The on-disk format of the file.
    */
    open var fileFormat: AVAudioFormat { get }

    /*! @property processingFormat
        @abstract The processing format of the file.
    */
    open var processingFormat: AVAudioFormat { get }


In summary:


AVAudioBuffer.format

AVAudioFile.processingFormat

AVAudioFile.fileFormat


What is the difference? And do they have to match in common format, sample rate, interleaving, and channel count to be considered == ?
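For illustration, here is the kind of property-by-property check I have in mind (a minimal sketch; the helper name `formatsMatch` is made up for this post, and AVAudioFormat's own `==` may well compare more than these four properties):

```swift
import AVFoundation

// Purely illustrative helper: compares the properties I care about one by one,
// rather than relying on AVAudioFormat's own == / isEqual(_:).
func formatsMatch(_ a: AVAudioFormat, _ b: AVAudioFormat) -> Bool {
    return a.commonFormat == b.commonFormat   // e.g. .pcmFormatFloat32
        && a.sampleRate == b.sampleRate
        && a.channelCount == b.channelCount
        && a.isInterleaved == b.isInterleaved
}
```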



I thought I understood how the formats interact when connecting nodes. I am checking the processingFormat before connecting a node and reconnecting if there is a discrepancy between the formats. I am also using an AVAudioMixerNode in order to convert formats and sample rates automatically...
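Roughly, the check-and-reconnect step looks like this (a simplified sketch; `engine`, `playerNode`, `audioFile`, and the `formatsMatch` helper above are my own names):

```swift
// Compare the player's current output format with the file's processing format
// and reconnect through the main mixer if they disagree.
let desiredFormat = audioFile.processingFormat
let currentFormat = playerNode.outputFormat(forBus: 0)

if !formatsMatch(currentFormat, desiredFormat) {
    engine.disconnectNodeOutput(playerNode)
    engine.connect(playerNode, to: engine.mainMixerNode, format: desiredFormat)
}
```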


I am trying to debug a crash in my app.

required condition is false: _outputFormat.channelCount == buffer.format.channelCount

Replies

According to the comments in the AVAudioFile.h header file:


AVAudioFile represents an audio file opened for reading or writing. Regardless of the file's actual format, reading and writing the file is done via `AVAudioPCMBuffer` objects, containing samples in an `AVAudioCommonFormat`, referred to as the file's "processing format." Conversions are performed to and from the file's actual format.


That suggests the file format is fixed by the actual content of the file. The processing format is some flavor of PCM, with a restricted choice of sample formats. The fact that reading from the file performs a conversion suggests that the file and processing formats don't have to match in their format details.


I don't know, but it's perhaps plausible that the channel counts don't have to match, so that you could (for example) read mono data from a file, and get it converted automatically to stereo in your sample buffers.
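In practice, the usual pattern is to let AVAudioFile do that conversion for you by allocating the read buffer in the processing format. A minimal sketch (error handling kept to a bare throw; `url` is assumed to point at some audio file):

```swift
import AVFoundation

// Read an entire file into a buffer allocated in its processing format.
func readWholeFile(at url: URL) throws -> AVAudioPCMBuffer {
    let file = try AVAudioFile(forReading: url)

    // The buffer is created with the processing format; AVAudioFile converts
    // from the on-disk fileFormat to this PCM format as it reads.
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        throw NSError(domain: "ReadWholeFile", code: -1)
    }
    try file.read(into: buffer)

    // buffer.format now matches file.processingFormat, not necessarily file.fileFormat.
    return buffer
}
```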

>trying to debug a crash in my app.

Do you have a line related to node connection(s) similar to:


audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: AVAudioFormat.init(standardFormatWithSampleRate: 96000, channels: 1))



...curious how many -channels- you've specified.
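If it is hard-coded like that, one thing worth trying (just a sketch of the idea, reusing the names from that line plus an `audioFile` that is assumed to be open already) is to connect with the file's processing format instead, so the connection format and the scheduled buffers can't disagree on channel count:

```swift
// Connect using the file's processing format rather than a hard-coded
// sample rate / channel count.
audioEngine.connect(playerNode,
                    to: audioEngine.mainMixerNode,
                    format: audioFile.processingFormat)
```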

Seems like you are new to digital audio processing. The primary difference between the formats is how the sample values in the file's audio buffers are represented. AVAudioFile.fileFormat will typically have samples represented as integers (usually 16-bit), while AVAudioFile.processingFormat represents the same samples as Float32.
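A quick way to see this (a sketch; `wavURL` is a made-up name pointing at a plain 16-bit WAV, and the force-try is only to keep it short):

```swift
import AVFoundation

let file = try! AVAudioFile(forReading: wavURL)
print(file.fileFormat)        // on-disk format: typically 16-bit integer PCM for a WAV
print(file.processingFormat)  // PCM in an AVAudioCommonFormat: typically Float32, deinterleaved
```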


Use the processing format when scheduling any buffers for playback.
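For example (a sketch, assuming an engine that is already set up and running, and an `audioFile` opened with AVAudioFile(forReading:)):

```swift
// Schedule either the file directly...
playerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
// ...or a buffer that was allocated with audioFile.processingFormat:
// playerNode.scheduleBuffer(buffer, at: nil, options: [], completionHandler: nil)
playerNode.play()
```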