I'm trying to create a camera capture session that has the following features:
- Preview
- Photo capture
- Video capture
- Realtime frame processing (AI)
Currently I use a single AVCaptureVideoDataOutput and run the video recording first, then the frame processing, in the same function and on the same queue (see code here and here).
The only problem is that the session captures 4k video, and I don't want the frame processor to receive 4k buffers, since processing them is very slow and blocks the video recording (frame drops).
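For context, the current setup looks roughly like this (simplified sketch; `assetWriterInput` and `frameProcessor` are placeholders for my actual recording and AI code, the real thing is in the linked files):

```swift
import AVFoundation

final class CameraSession: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    let videoDataOutput = AVCaptureVideoDataOutput()
    let videoQueue = DispatchQueue(label: "camera.video.queue")

    // Placeholders for the actual recording / AI code
    var assetWriterInput: AVAssetWriterInput?
    var frameProcessor: ((CMSampleBuffer) -> Void)?

    func configure() {
        session.beginConfiguration()
        session.sessionPreset = .hd4K3840x2160
        // ... add the AVCaptureDeviceInput for the camera here ...
        if session.canAddOutput(videoDataOutput) {
            session.addOutput(videoDataOutput)
        }
        videoDataOutput.setSampleBufferDelegate(self, queue: videoQueue)
        session.commitConfiguration()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // 1. Video recording first (4k buffers)
        if let writerInput = assetWriterInput, writerInput.isReadyForMoreMediaData {
            _ = writerInput.append(sampleBuffer)
        }
        // 2. Then realtime frame processing (AI) on the same 4k buffer,
        //    which blocks this queue and causes frame drops.
        frameProcessor?(sampleBuffer)
    }
}
```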
Ideally I would create one AVCaptureVideoDataOutput for 4k video recording and a second one that receives frames at a lower (preview-sized?) resolution - but you cannot add two AVCaptureVideoDataOutputs to the same capture session.
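What I'd ideally want is something like the sketch below - but in my testing the session rejects the second video data output at the `canAddOutput` check:

```swift
import AVFoundation

// Sketch of the desired setup: one full-resolution output for recording
// and a second, smaller output for the AI frame processor.
let session = AVCaptureSession()
session.beginConfiguration()
session.sessionPreset = .hd4K3840x2160
// ... add the AVCaptureDeviceInput for the camera here ...

let recordingOutput = AVCaptureVideoDataOutput()   // 4k buffers for the AVAssetWriter
let processingOutput = AVCaptureVideoDataOutput()  // lower-resolution buffers for the AI model

if session.canAddOutput(recordingOutput) {
    session.addOutput(recordingOutput)
}
// This is where it fails for me: canAddOutput returns false
// for a second video data output on the same session.
if session.canAddOutput(processingOutput) {
    session.addOutput(processingOutput)
}
session.commitConfiguration()
```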
I thought maybe I could "hook into" the preview layer and receive the CMSampleBuffers from there, just like in the captureOutput(...) func, since those are already preview-sized. Does anyone know whether that is somehow possible?