In the AVAudioPlayerNode documentation (https://developer.apple.com/documentation/avfaudio/avaudioplayernode), it says:
[...]when playing file segments, the node will make sample rate conversions if necessary, but it's often preferable to configure the node’s output sample rate to match that of the files and use a mixer to perform the rate conversion.
When playing buffers, there's an implicit assumption that the buffers are at the same sample rate as the node’s output format.
I want to understand why it's often preferable. Is it because the AVAudioMixerNode can "sum" multiple signals at the same sample rate and then convert the mixed result all at once (as a single signal), making it "lighter" than resampling each signal separately inside its own AVAudioPlayerNode?
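To make the question concrete, here is a minimal sketch of the graph I understand the documentation to be suggesting: the player connects to the mixer at the file's native format, and the single mixer-to-output connection performs the one rate conversion. The file path is just a placeholder.

```swift
import AVFoundation

// Minimal sketch: connect the player to the mixer at the file's native
// sample rate and let the single mixer -> output connection perform
// the rate conversion for the mixed signal.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

// Placeholder path; any readable audio file works here.
let file = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/file.caf"))

engine.attach(player)

// Player -> mixer at the file's processing format (e.g. 44.1 kHz),
// so the player node itself does no resampling.
engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

// The mixer -> output connection runs at the hardware format (e.g. 48 kHz);
// the already-mixed signal is converted once at that point.
try engine.start()

player.scheduleFile(file, at: nil)
player.play()
```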
Recently, the Voice Memos app from Apple got a new feature: a magic wand that performs noise reduction.
This noise reduction seems to be applied live while the recording is playing, since playback doesn't pause.
In the Apple documentation, there's a single reference to noise reduction, Signal extraction from noise (https://developer.apple.com/documentation/accelerate/signal_extraction_from_noise), which performs a discrete cosine transform (https://en.wikipedia.org/wiki/Discrete_cosine_transform), zeroes out the frequency components that fall below a threshold, and then performs the inverse transform.
My question is: is this a viable approach for live processing? If so, how can I perform it? By calling installTap, or maybe by creating a custom AudioUnit?
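For context, here is a rough sketch of the thresholding idea from that Accelerate article, using vDSP's DCT. The buffer length and threshold value are illustrative assumptions, and this only shows the offline transform step, not the live-audio plumbing I'm asking about.

```swift
import Accelerate

// Rough sketch of the thresholding idea: forward DCT, zero the
// low-amplitude coefficients, then inverse DCT.
let count = 1024            // must be a valid vDSP DCT length (e.g. a power of two)
let threshold: Float = 0.1  // illustrative value

guard
    let forwardDCT = vDSP.DCT(count: count, transformType: .II),
    let inverseDCT = vDSP.DCT(count: count, transformType: .III)
else {
    fatalError("Unable to create DCT setups")
}

func denoise(_ input: [Float]) -> [Float] {
    precondition(input.count == count)

    // Time domain -> frequency domain.
    var coefficients = [Float](repeating: 0, count: count)
    forwardDCT.transform(input, result: &coefficients)

    // Zero the coefficients below the threshold, treating them as noise.
    vDSP.threshold(coefficients,
                   to: threshold,
                   with: .zeroFill,
                   result: &coefficients)

    // Frequency domain -> time domain.
    var output = [Float](repeating: 0, count: count)
    inverseDCT.transform(coefficients, result: &output)

    // A DCT-II followed by a DCT-III gains a factor of count / 2, so rescale.
    return vDSP.divide(output, Float(count) / 2)
}
```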