Hi,
I'd like to develop an iOS application that keeps the microphone open for voice recording and processing even when the screen is off. Whenever speech is detected (using a voice activity detection library), I want to run a speech-to-text request on the captured audio and also send requests to the cloud based on what was spoken.
I've enabled the Audio background mode, and preliminary testing seems to indicate that this is working. That is, I can press "record" in my app, switch to another app, then shut the screen off and speak for several seconds before the app auto-stops the recording and sends it to an SFSpeechRecognizer task, which appears to succeed.
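For context, here's a stripped-down sketch of what my test does (names like BackgroundRecorder are just from my test code; permission requests, the VAD step, and the cloud upload are omitted):

```swift
import AVFoundation
import Speech

// Simplified test setup: record from the mic and feed buffers to on-device speech recognition.
// Assumes microphone and speech-recognition permissions were already granted.
final class BackgroundRecorder {
    private let audioEngine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    func startRecording() throws {
        // Configure the session for recording; with the Audio background mode
        // enabled, this is what appears to keep capture alive when the screen is off.
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.record, mode: .measurement, options: .duckOthers)
        try session.setActive(true, options: .notifyOthersOnDeactivation)

        request = SFSpeechAudioBufferRecognitionRequest()
        guard let request = request, let recognizer = recognizer else { return }

        // Feed mic buffers into the recognition request. In the real app, a VAD
        // step would decide when to start and stop appending buffers.
        let format = audioEngine.inputNode.outputFormat(forBus: 0)
        audioEngine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }

        audioEngine.prepare()
        try audioEngine.start()

        task = recognizer.recognitionTask(with: request) { result, _ in
            if let result = result {
                print("Transcript: \(result.bestTranscription.formattedString)")
            }
        }
    }

    func stopRecording() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        request?.endAudio()
        task = nil
        request = nil
    }
}
```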
However, I have read that this should not be supported, so before going further down this path I wanted to understand exactly what the processing limitations are in this mode. The documentation doesn't seem very clear to me on this.
Thanks,
-- B.