We have developed a website that uses WebRTC's getUserMedia() functionality. This appeared to work well on our iOS 12 and iOS 13 devices; however, after we record the microphone stream and send it to our server via Ajax, we play back an audio file using an <audio> element. The file plays, but only through the earpiece (the internal phone speaker) whenever we have called getUserMedia() to access the microphone. This is unexpected behavior, because a user cannot look at the webpage and hold the phone to their ear to hear the audio at the same time.
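For reference, a simplified sketch of our flow is below. The recording step is shown with MediaRecorder for brevity; the routing problem appears as soon as getUserMedia() is called, regardless of how the stream is recorded. The '/upload' and '/reply.mp3' URLs and the 'playback' element id are placeholders.

```javascript
// Simplified sketch; URLs and element id are placeholders.
async function recordThenPlayBack() {
  // Acquiring the microphone is what seems to switch the audio route.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (event) => chunks.push(event.data);

  recorder.onstop = async () => {
    // Send the recording to our server via Ajax.
    const blob = new Blob(chunks, { type: recorder.mimeType });
    await fetch('/upload', { method: 'POST', body: blob });

    // Play a server audio file through an <audio> element. On iOS 12/13
    // this comes out of the earpiece instead of the loudspeaker.
    const audio = document.getElementById('playback');
    audio.src = '/reply.mp3';
    await audio.play();
  };

  recorder.start();
  setTimeout(() => recorder.stop(), 5000); // stop after five seconds
}
```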
We have tried a number of things to remedy this, but we have yet to find anything that plays audio through the loudspeaker once getUserMedia() has been used to access the mic stream. We also tried a <video> element instead of the <audio> element, but no matter what we do, once we have requested microphone access via getUserMedia(), the audio always plays through the internal speaker.
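To give one concrete illustration of the kind of remedy we mean (a sketch; stream is the MediaStream returned by getUserMedia() above), releasing the microphone entirely before playback by stopping every captured track does not change the routing either:

```javascript
// Sketch of one attempted remedy: release the microphone before playback
// by stopping every track on the captured stream.
function releaseMicrophone(stream) {
  stream.getTracks().forEach((track) => track.stop());
}
```

Even with the microphone released this way, playback still comes out of the earpiece on our devices.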
We pulled up the WebRTC source and output device selection sample on our iOS devices, located here: webrtc.github.io/samples/src/content/devices/input-output/
But the audio output <select> box is never populated with any output destinations. Presumably we would be able to choose the loudspeaker from the list of MediaDevices, but this is currently not possible because iOS Safari does not expose audio output destinations through WebRTC. Additionally, we cannot get the audio to play through AirPods when using getUserMedia() to access the microphone.
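For concreteness, the device enumeration in that sample boils down to something like the sketch below. On iOS Safari, the 'audiooutput' list comes back empty, and HTMLMediaElement.setSinkId(), which the sample relies on to switch outputs, is not implemented there either:

```javascript
// Sketch: enumerate audio output devices, as the sample page does.
async function listAudioOutputs() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const outputs = devices.filter((device) => device.kind === 'audiooutput');
  console.log(outputs); // [] on iOS Safari: no output destinations exposed

  // Selecting an output would require setSinkId(), which iOS Safari
  // does not implement, so this branch is never reached there.
  const audio = document.querySelector('audio');
  if (outputs.length > 0 && typeof audio.setSinkId === 'function') {
    await audio.setSinkId(outputs[0].deviceId);
  }
}
```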
How can we get the loudspeaker to be the output for audio played back via <audio> elements?
This GitHub issue appears to describe the same or a related problem: https://github.com/webrtc/samples/issues/1186