I need to record a modified camera feed, i.e. detect some objects, mask them, and save the masked video.
All samples I have looked at only demonstrate how to record a video, live-detect some features, and display the detected features.
What I'd like instead is to use captureOutput(_:didOutput:from:) to modify the sample buffers and then record the modified frames, i.e. use them as an AVCaptureInput for an AVCaptureSession.
Does anybody have a pointer to where I should look?
Another Q: What happens when an AVComposition is saved? Is the original video stream stored together with an effect track, or is the video stream "rendered down" into a single flattened stream?
This sample project is a very good place to start: https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/avmulticampip_capturing_from_multiple_cameras
It demonstrates how you can process the output of an AVCaptureSession using Metal, and then use an AVAssetWriter to save your processed output as a video to the photo library.
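In case a minimal outline helps alongside that sample: the core pattern is an AVCaptureVideoDataOutputSampleBufferDelegate that masks each frame and appends it to an AVAssetWriter. This is just a sketch — the `maskPixelBuffer(_:)` function is a hypothetical placeholder for your own detection/masking step (Metal, Core Image, Vision, etc.), and error handling is omitted:

```swift
import AVFoundation
import CoreVideo

final class MaskedRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let writer: AVAssetWriter
    private let writerInput: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var sessionStarted = false

    init(outputURL: URL, width: Int, height: Int) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        writerInput.expectsMediaDataInRealTime = true
        adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput,
                                                       sourcePixelBufferAttributes: nil)
        writer.add(writerInput)
        writer.startWriting()
        super.init()
    }

    // Called for every frame the camera delivers
    // (set this object as the AVCaptureVideoDataOutput's delegate).
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if !sessionStarted {
            writer.startSession(atSourceTime: timestamp)
            sessionStarted = true
        }
        // Run detection and draw the masks here, then append the result.
        let masked = maskPixelBuffer(pixelBuffer)
        if writerInput.isReadyForMoreMediaData {
            adaptor.append(masked, withPresentationTime: timestamp)
        }
    }

    func finish(completion: @escaping () -> Void) {
        writerInput.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }

    // Hypothetical masking step -- replace with your own processing.
    private func maskPixelBuffer(_ buffer: CVPixelBuffer) -> CVPixelBuffer {
        return buffer
    }
}
```

Note that the modified frames go straight into the AVAssetWriter; they are never fed back into the AVCaptureSession as an input, which is why the capture samples you found don't show that approach.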