I believe you would need to run the AVPlayer instance in the application that hosts the Camera Extension, and then send the video frames to the sandboxed camera extension via a custom property. You could do the compositing inside the camera extension, but my recommendation would be to do all the video processing inside the application and use the extension only to vend CVPixelBuffers to the camera clients.
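For what it's worth, here is a minimal sketch of the app-side half of that: pulling CVPixelBuffers out of AVPlayer with AVPlayerItemVideoOutput. The class name is mine, and the transport of the buffers to the extension (custom property, sink stream, etc.) is deliberately left out.

```swift
import AVFoundation
import CoreVideo
import QuartzCore

// Sketch: tap an AVPlayer's video frames in the host application.
// Forwarding the frames to the camera extension is not shown here.
final class PlayerFrameTap {
    private let output: AVPlayerItemVideoOutput
    private let player: AVPlayer

    init(url: URL) {
        // Request BGRA buffers, which are convenient for later compositing.
        let attributes: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ]
        output = AVPlayerItemVideoOutput(pixelBufferAttributes: attributes)

        let item = AVPlayerItem(url: url)
        item.add(output)
        player = AVPlayer(playerItem: item)
        player.play()
    }

    // Call this from a display link or timer at your desired frame rate.
    func copyCurrentFrame() -> CVPixelBuffer? {
        let time = output.itemTime(forHostTime: CACurrentMediaTime())
        guard output.hasNewPixelBuffer(forItemTime: time) else { return nil }
        return output.copyPixelBuffer(forItemTime: time, itemTimeForDisplay: nil)
    }
}
```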
SceneKit allows rendering into a texture, so my recommendation would be to create an FBO (or, under Metal, a Metal texture render target) and render your scene into whatever texture resolution you desire (or that meets your performance requirements).
Then, draw the contents of the FBO to the screen.
A good starting point for this would be Dan Omachi's sample code "Mixing Metal and OpenGL in a View".
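As a rough sketch of the Metal variant (resolution, pixel format, and function names here are my own arbitrary choices), you can point an SCNRenderer at an offscreen texture and render into it each frame; the OpenGL/FBO path in Dan Omachi's sample follows the same idea.

```swift
import SceneKit
import Metal

// Sketch: create an offscreen render target and an SCNRenderer for it.
func makeOffscreenSceneRenderer(scene: SCNScene, device: MTLDevice) -> (SCNRenderer, MTLTexture) {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm, width: 1920, height: 1080, mipmapped: false)
    descriptor.usage = [.renderTarget, .shaderRead]
    let texture = device.makeTexture(descriptor: descriptor)!

    let renderer = SCNRenderer(device: device, options: nil)
    renderer.scene = scene
    return (renderer, texture)
}

// Sketch: render one frame of the scene into the offscreen texture.
func renderFrame(renderer: SCNRenderer, texture: MTLTexture,
                 queue: MTLCommandQueue, time: CFTimeInterval) {
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = texture
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].storeAction = .store
    pass.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)

    let commandBuffer = queue.makeCommandBuffer()!
    let viewport = CGRect(x: 0, y: 0, width: texture.width, height: texture.height)
    renderer.render(atTime: time, viewport: viewport,
                    commandBuffer: commandBuffer, passDescriptor: pass)
    commandBuffer.commit()
}
```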
Utilize the technique of rendering to an FBO backed by a CVPixelBufferRef. You can then take this CVPixelBufferRef and use it as the contents of any CALayer, including the layer of a basic NSView.
In my testing, CALayer.contents will not redraw correctly if the CVPixelBufferRef is set repeatedly to the same value, so you must double-buffer your rendering (i.e., alternate between two CVPixelBufferRefs).
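Here is a rough sketch of that double-buffering under Metal; the class name and the BGRA/IOSurface choices are mine, and the equivalent GL path would wrap the same IOSurface-backed buffers in an FBO instead of a Metal texture.

```swift
import CoreVideo
import Metal
import QuartzCore

// Sketch: two IOSurface-backed CVPixelBuffers, each wrapped as a Metal
// render target, presented by alternating which buffer the layer shows.
final class DoubleBufferedLayerTarget {
    private var pixelBuffers: [CVPixelBuffer] = []
    private var textures: [MTLTexture] = []
    private var index = 0

    init?(device: MTLDevice, width: Int, height: Int) {
        var cache: CVMetalTextureCache?
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache)
        guard let textureCache = cache else { return nil }

        let bufferAttributes: [String: Any] = [
            kCVPixelBufferIOSurfacePropertiesKey as String: [:],
            kCVPixelBufferMetalCompatibilityKey as String: true
        ]
        // Ask for render-target usage on the wrapped Metal textures.
        let textureAttributes: [String: Any] = [
            kCVMetalTextureUsage as String: MTLTextureUsage([.renderTarget, .shaderRead]).rawValue
        ]

        for _ in 0..<2 {
            var pb: CVPixelBuffer?
            CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                kCVPixelFormatType_32BGRA,
                                bufferAttributes as CFDictionary, &pb)
            guard let pixelBuffer = pb else { return nil }

            var cvTexture: CVMetalTexture?
            CVMetalTextureCacheCreateTextureFromImage(
                kCFAllocatorDefault, textureCache, pixelBuffer,
                textureAttributes as CFDictionary,
                .bgra8Unorm, width, height, 0, &cvTexture)
            guard let wrapped = cvTexture,
                  let texture = CVMetalTextureGetTexture(wrapped) else { return nil }

            pixelBuffers.append(pixelBuffer)
            textures.append(texture)
        }
    }

    // Render each frame into this texture, then call present(on:).
    func nextRenderTarget() -> MTLTexture { textures[index] }

    func present(on layer: CALayer) {
        layer.contents = pixelBuffers[index]
        index = (index + 1) % 2   // alternate so contents always changes
    }
}
```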