Camera Calibration Data

To get calibration data during video recording, I use AVCaptureDepthDataOutput together with AVCaptureVideoDataOutput, synchronized via AVCaptureDataOutputSynchronizer (using the dataOutputSynchronizer method of CameraController in the example).
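
For reference, here is a minimal sketch of that kind of setup (the class and property names below are illustrative, not the code from the example):

        import AVFoundation

        final class DepthVideoCapture: NSObject, AVCaptureDataOutputSynchronizerDelegate {
            let session = AVCaptureSession()
            private let videoOutput = AVCaptureVideoDataOutput()
            private let depthOutput = AVCaptureDepthDataOutput()
            private var synchronizer: AVCaptureDataOutputSynchronizer?
            private let outputQueue = DispatchQueue(label: "capture.output.queue")

            // Assumes `device` has an active format that supports depth data delivery.
            func configure(with device: AVCaptureDevice) throws {
                session.beginConfiguration()
                let input = try AVCaptureDeviceInput(device: device)
                if session.canAddInput(input) { session.addInput(input) }
                if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }
                if session.canAddOutput(depthOutput) { session.addOutput(depthOutput) }
                session.commitConfiguration()

                // Deliver matched video/depth pairs through a single callback.
                let sync = AVCaptureDataOutputSynchronizer(dataOutputs: [videoOutput, depthOutput])
                sync.setDelegate(self, queue: outputQueue)
                synchronizer = sync
            }

            func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                                        didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
                guard let syncedDepth = synchronizedDataCollection.synchronizedData(for: depthOutput)
                        as? AVCaptureSynchronizedDepthData,
                      !syncedDepth.depthDataWasDropped else { return }
                // Calibration data (intrinsics, distortion lookup table, etc.) rides along with the depth data.
                let calibration = syncedDepth.depthData.cameraCalibrationData

                // The matching video frame, if it was not dropped.
                let syncedVideo = synchronizedDataCollection.synchronizedData(for: videoOutput)
                        as? AVCaptureSynchronizedSampleBufferData
                _ = (calibration, syncedVideo?.sampleBuffer)
            }
        }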

The issue is that AVCaptureDepthDataOutput can only be used with .builtInLiDARDepthCamera, meaning it isn’t available for devices without LiDAR.

  1. Is it possible to obtain calibration data during video recording on devices without LiDAR, such as with .builtInWideAngleCamera?

  2. Can lensDistortionLookupTable and lensDistortionCenter be used to undistort the sampleBuffer I receive from AVCaptureVideoDataOutput?

Answered by DTS Engineer in 811858022

Hello @yury-podorozhnyy,

Thanks for reaching out about this!

"The issue is that AVCaptureDepthDataOutput can only be used with .builtInLiDARDepthCamera, meaning it isn’t available for devices without LiDAR."

Actually, any capture device that can produce depth data supports streaming it via AVCaptureDepthDataOutput. For example, the .builtInDualWideCamera supports this.

In general, you can check for support by iterating through the capture formats of a device:

        for format in device.formats {
            if !format.supportedDepthDataFormats.isEmpty {
                // Supports depth data.
            }
        }
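
As a concrete (and purely illustrative) way to apply that check, you could combine it with a discovery session; the device-type list here is just an example:

        import AVFoundation

        // Gather candidate cameras (example device types; adjust to your needs).
        let discovery = AVCaptureDevice.DiscoverySession(
            deviceTypes: [.builtInLiDARDepthCamera, .builtInDualWideCamera, .builtInTrueDepthCamera],
            mediaType: .video,
            position: .unspecified)

        // Keep only devices that have at least one format with depth support.
        let depthCapableDevices = discovery.devices.filter { device in
            device.formats.contains { !$0.supportedDepthDataFormats.isEmpty }
        }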

"Can lensDistortionLookupTable and lensDistortionCenter be used to undistort the sampleBuffer I receive from AVCaptureVideoDataOutput?"

Yes. For a reference implementation, take a look at the comments in AVCameraCalibrationData.h.
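
For context, here is a Swift sketch along the lines of what those header comments describe; the function name and details are illustrative, and the header remains the authoritative reference. The lookup table holds relative radial magnifications for radii linearly spaced from 0 (at the distortion center) out to the largest radius in the image:

        import CoreGraphics
        import Foundation

        func lensDistortionPoint(for point: CGPoint,
                                 lookupTable: Data,
                                 distortionCenter: CGPoint,
                                 imageSize: CGSize) -> CGPoint {
            // Largest possible radius from the distortion center to any corner of the image.
            let deltaXMax = Float(max(distortionCenter.x, imageSize.width - distortionCenter.x))
            let deltaYMax = Float(max(distortionCenter.y, imageSize.height - distortionCenter.y))
            let rMax = (deltaXMax * deltaXMax + deltaYMax * deltaYMax).squareRoot()

            // Vector from the distortion center to the point, and its radius.
            let vX = Float(point.x - distortionCenter.x)
            let vY = Float(point.y - distortionCenter.y)
            let r = (vX * vX + vY * vY).squareRoot()

            // Linearly interpolate the magnification for this radius from the lookup table.
            let magnification: Float = lookupTable.withUnsafeBytes { (buffer: UnsafeRawBufferPointer) -> Float in
                let values = buffer.bindMemory(to: Float.self)
                guard rMax > 0, r < rMax else { return values[values.count - 1] }
                let position = r / rMax * Float(values.count - 1)
                let lower = Int(position)
                let upper = min(lower + 1, values.count - 1)
                let frac = position - Float(lower)
                return (1 - frac) * values[lower] + frac * values[upper]
            }

            // Apply the radial magnification and convert back to image coordinates.
            return CGPoint(x: distortionCenter.x + CGFloat(vX + magnification * vX),
                           y: distortionCenter.y + CGFloat(vY + magnification * vY))
        }

To undistort a full frame from the sampleBuffer you would apply this mapping per pixel (or build a remap table once and reuse it). Note that the calibration values are expressed relative to intrinsicMatrixReferenceDimensions, so scale the distortion center and image size if your buffer has a different resolution.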

Best regards,

Greg

Echoing Greg, the AVFoundation capture APIs can generate depth in a few different ways. Rear-facing, the choices are the LiDAR Depth Camera, which fuses the RGB and LiDAR depth sensors to create depth, or the dual / dual-wide cameras, which create depth through stereo disparity. On the front, we support depth on devices that have the TrueDepth camera (RGB + infrared).
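
As a sketch of that device landscape (the fallback order here is just one possible choice, not a recommendation):

        import AVFoundation

        // Rear: LiDAR-assisted depth if available, otherwise stereo-disparity depth from the dual cameras.
        let rearDepthDevice = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back)
            ?? AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
            ?? AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back)

        // Front: depth comes from the TrueDepth camera (RGB + infrared).
        let frontDepthDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front)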
