We’re working on a VoIP app that can also spontaneously play audio in the background for walkie-talkie-like functionality. We use AVAudioEngine for recording and playback, with voice processing enabled for echo cancellation. However, we’ve been struggling to disable the microphone of an AVAudioEngine when our app isn’t recording, so that the system doesn’t show the red recording indicator; it also seems that an engine that records cannot be started in the background. As far as we can tell, an engine’s input node / microphone cannot be disabled for this purpose once it has been used.

We therefore tried having two audio engines, one for recording and one for playback. However, this almost completely silenced our audio output; we assume that’s because voice processing only works correctly when input and output are routed through the same engine.

We then tried a third approach: one engine for playback only and one for both recording and playback. This works: we simply stop the combined recording-and-playback engine whenever we’re not recording. The trade-off is a bit of lag during playback whenever we switch the engine used for playback.
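For illustration, here is a minimal sketch of that third approach. Class and method names are our own, and audio-session configuration, buffer scheduling, and error handling are omitted:

```swift
import AVFoundation

/// Sketch of the workaround described above: one playback-only engine
/// (its input node is never touched, so no recording indicator) plus one
/// full-duplex engine with voice processing for walkie-talkie use.
final class AudioEngineSwitcher {
    private let playbackEngine = AVAudioEngine()   // playback only
    private let duplexEngine = AVAudioEngine()     // recording + playback
    private let playbackPlayer = AVAudioPlayerNode()
    private let duplexPlayer = AVAudioPlayerNode()

    init() throws {
        // Playback-only engine: never reference its inputNode, otherwise
        // the microphone is activated and the indicator appears.
        playbackEngine.attach(playbackPlayer)
        playbackEngine.connect(playbackPlayer,
                               to: playbackEngine.mainMixerNode,
                               format: nil)

        // Full-duplex engine: voice processing (echo cancellation) only
        // works when input and output run through the same engine.
        try duplexEngine.inputNode.setVoiceProcessingEnabled(true)
        duplexEngine.attach(duplexPlayer)
        duplexEngine.connect(duplexPlayer,
                             to: duplexEngine.mainMixerNode,
                             format: nil)
    }

    /// Walkie-talkie transmission: pause the playback-only engine and move
    /// playback over to the duplex engine. This switch is where we observe
    /// a brief playback glitch.
    func startRecording() throws {
        playbackEngine.pause()
        duplexEngine.prepare()
        try duplexEngine.start()
    }

    /// Back to listen-only mode: stop the duplex engine so the recording
    /// indicator disappears and background playback keeps working.
    func stopRecording() throws {
        duplexEngine.stop()
        playbackEngine.prepare()
        try playbackEngine.start()
    }
}
```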
It would be great if we could just have a single engine and temporarily disable its input node. Setting the isVoiceProcessingInputMuted property on the input node to true doesn't have the desired effect of removing the system's recording and privacy indicators (and, we assume, therefore also doesn't allow the engine to be started in the background).
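For reference, this is roughly what we tried, as a simplified sketch (node connections and audio-session setup omitted):

```swift
import AVFoundation

func muteInputOnSingleEngine() throws {
    let engine = AVAudioEngine()

    // Voice processing keeps echo cancellation working for the duplex path.
    try engine.inputNode.setVoiceProcessingEnabled(true)

    // ... attach and connect player/mixer nodes, then: ...
    engine.prepare()
    try engine.start()

    // Mutes the captured audio, but in our testing the engine still holds
    // the microphone, so the recording/privacy indicator is not removed.
    engine.inputNode.isVoiceProcessingInputMuted = true
}
```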
We discussed this issue during a WWDC lab session, where the engineers thought our current workaround is probably the best we can do right now, so please consider this a feature request.