My goal is to add a moving background to a virtual camera implemented as a Camera Extension on macOS 13 and later. The moving background is available to the extension as an H.264 movie in its bundle.
I thought I could create an AVAsset from the movie's URL, make an AVPlayerItem from the asset, give the item to an AVQueuePlayer, and then attach an AVPlayerLooper to the queue player. I make an AVPlayerItemVideoOutput, add it to each of the looper's items, and set a delegate on the video output.
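Here is roughly what that setup looks like, simplified and renamed for this post. Because an item output can only be attached to one player item at a time, this sketch gives each of the looper's items its own output rather than sharing one:

```swift
import AVFoundation
import CoreVideo

// Simplified sketch of the setup; class and property names are placeholders.
final class LoopingBackgroundSource: NSObject, AVPlayerItemOutputPullDelegate {
    private let queuePlayer = AVQueuePlayer()
    private var looper: AVPlayerLooper?
    // One output per looping item, keyed by item, so the render loop can
    // query the output that belongs to the currently playing item.
    private var outputs: [AVPlayerItem: AVPlayerItemVideoOutput] = [:]
    private let outputQueue = DispatchQueue(label: "background.video.output")

    init(movieURL: URL) {
        super.init()

        let asset = AVAsset(url: movieURL)
        let templateItem = AVPlayerItem(asset: asset)
        looper = AVPlayerLooper(player: queuePlayer, templateItem: templateItem)

        let attributes: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ]
        for item in looper?.loopingPlayerItems ?? [] {
            let output = AVPlayerItemVideoOutput(pixelBufferAttributes: attributes)
            output.setDelegate(self, queue: outputQueue)
            item.add(output)
            outputs[item] = output
        }

        queuePlayer.play()
    }

    // Output for whichever looping item is currently playing.
    var currentOutput: AVPlayerItemVideoOutput? {
        queuePlayer.currentItem.flatMap { outputs[$0] }
    }
}
```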
This works in a normal app, which I use as a convenient environment for debugging my extension code. In my camera video rendering loop I check self.videoOutput.hasNewPixelBuffer(forItemTime:); it returns true at regular intervals, and I can fetch video frames with the output's copyPixelBuffer(forItemTime:itemTimeForDisplay:) and composite them with the camera frames.
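Continuing the sketch above, the render loop pulls frames roughly like this (the compositing with the camera frame is omitted):

```swift
extension LoopingBackgroundSource {
    func copyBackgroundFrame() -> CVPixelBuffer? {
        guard let output = currentOutput else { return nil }
        // Map the host clock onto the item's timeline, then pull a frame if one is ready.
        let hostSeconds = CMClockGetTime(CMClockGetHostTimeClock()).seconds
        let itemTime = output.itemTime(forHostTime: hostSeconds)
        guard output.hasNewPixelBuffer(forItemTime: itemTime) else { return nil }
        return output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil)
    }
}
```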
However, it doesn't work in the extension: hasNewPixelBuffer is never true, and the looping player's status becomes failed, with an error that says only "The operation could not be completed". I've tried simplifying things by removing the AVPlayerLooper and using a plain AVPlayer instead of an AVQueuePlayer, so the movie plays through just once, but I still never get any frames in the extension.
Could this be a sandbox issue, given that an AVPlayer normally renders to a user interface and camera extensions have no UI?
My fallback is AVAssetImageGenerator, which I drive by firing off a Task per frame: each time I render a frame, I request another one to keep the pipeline full. Unfortunately the Tasks don't finish in the order they are started, so I have to build frame-reordering logic into my frame buffer (something a player would handle for me). I'm also not sure whether AVAssetImageGenerator takes advantage of any hardware acceleration, and it seems inefficient because each Task produces a single frame and can't carry over any state from previous frames.
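The fallback looks roughly like this; FrameBuffer and the frame-index bookkeeping are simplified stand-ins for my real code:

```swift
import AVFoundation
import CoreGraphics

// Reorders frames that arrive out of order, keyed by frame index.
actor FrameBuffer {
    private var frames: [Int: CGImage] = [:]
    func insert(_ image: CGImage, at index: Int) { frames[index] = image }
    func take(at index: Int) -> CGImage? { frames.removeValue(forKey: index) }
}

final class ImageGeneratorFallback {
    private let generator: AVAssetImageGenerator
    let frameBuffer = FrameBuffer()

    init(asset: AVAsset) {
        generator = AVAssetImageGenerator(asset: asset)
        generator.requestedTimeToleranceBefore = .zero
        generator.requestedTimeToleranceAfter = .zero
    }

    // One Task per frame; Tasks may complete out of order, so each frame
    // is stored under its index and reordered by FrameBuffer.
    func requestFrame(at time: CMTime, index: Int) {
        Task {
            if let result = try? await self.generator.image(at: time) {
                await self.frameBuffer.insert(result.image, at: index)
            }
        }
    }
}
```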
Perhaps there's a much simpler way to do this and I'm just missing it? Anyone?