I have an iPhone 11 Pro Max running iOS 14.2 and find that getUserMedia does not work in all scenarios. I set up my video feed as follows:
async function setupCamera() {
	if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
		throw new Error(
			'Browser API navigator.mediaDevices.getUserMedia not available');
	}
	const video = document.getElementById('video');
	const stream = await navigator.mediaDevices.getUserMedia({
		audio: false,
		video: {
			facingMode: 'environment',
			// TODO: Currently set to window.innerWidth and innerHeight elsewhere?
			width: undefined,
			height: undefined
		},
	});
	video.srcObject = stream;
	return new Promise((resolve) => {
		video.onloadedmetadata = () => {
			resolve(video);
		};
	});
}
const loadVideo = async () => {
	const video = await setupCamera();
	video.play();
	return video;
}
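As an aside, one variant I considered is passing explicit `ideal` dimensions instead of leaving `width`/`height` as `undefined`. A sketch of a constraint builder (the helper name and structure are mine, not from the demo code):

```javascript
// Sketch: build getUserMedia constraints, using explicit `ideal`
// width/height when dimensions are known, and omitting them otherwise.
// (buildVideoConstraints is a hypothetical helper, not from the demos.)
function buildVideoConstraints(width, height) {
	const video = { facingMode: 'environment' };
	if (width !== undefined) video.width = { ideal: width };
	if (height !== undefined) video.height = { ideal: height };
	return { audio: false, video };
}

// Usage: navigator.mediaDevices.getUserMedia(
//   buildVideoConstraints(window.innerWidth, window.innerHeight));
```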
My HTML is:
<canvas id="videocanvas" width="100%" height="100%">
</canvas>
<video id="video"
			 autoplay
			 playsinline
			 muted
			 style="display:none"
			 width='100%'>
</video>
I'm using three.js to render a video texture, but the problem is reproducible without it (see the TensorFlow PoseNet demo at https://github.com/tensorflow/tfjs-models/tree/master/posenet).
The errors displayed in the JavaScript console in all of these cases are:
[Error] A MediaStreamTrack ended due to a capture failure
[Error] Unhandled Promise Rejection: Error: The video element has not loaded data yet. Please wait for `loadeddata` event on the <video> element.
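To at least surface that second rejection instead of leaving it unhandled, the wait for the video event could be raced against a timeout. A sketch of such a wrapper (the helper name is mine, not part of either demo):

```javascript
// Sketch: race a promise against a timeout, so waiting for a video
// event that never fires rejects with a clear error instead of
// hanging or surfacing as an unhandled rejection.
// (withTimeout is a hypothetical helper, not from the PoseNet code.)
function withTimeout(promise, ms, message) {
	let timer;
	const timeout = new Promise((_, reject) => {
		timer = setTimeout(() => reject(new Error(message)), ms);
	});
	return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage (in setupCamera):
//   await withTimeout(
//     new Promise((resolve) => { video.onloadeddata = resolve; }),
//     5000, 'video never fired loadeddata');
```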
However, the BlazeFace demo (https://github.com/tensorflow/tfjs-models/tree/master/blazeface) works, even though it appears to capture video in exactly the same way as the PoseNet code.
I've tested this on two different iPhone 11s and an iPhone 12 mini, with the same result.
All of the above code works on iOS 13.5 on an iPhone X without any issues.
What has changed here between iOS 13.5 and iOS 14?