How to attach a point cloud (or depth data) to HEIC?

I'm developing a 3D scanner that runs on iPad.

I'm using AVCapturePhoto and PhotogrammetrySession.

My photo capture delegate looks like this:

import AVFoundation
import CoreImage

extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        // Wrap the captured pixel buffer in a CIImage, carrying the photo metadata along.
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [.auxiliaryDepth: true, .properties: photo.metadata])
        // Convert the captured depth data to 32-bit float depth.
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        // Encode as HEIF, attaching the depth data as an auxiliary image.
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [.avDepthData: depthData])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}

But the PhotogrammetrySession emits these warning messages:

Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
Sample 2 missing LiDAR point cloud!
Sample 3 missing LiDAR point cloud!
Sample 4 missing LiDAR point cloud!
Sample 5 missing LiDAR point cloud!
Sample 6 missing LiDAR point cloud!
Sample 7 missing LiDAR point cloud!
Sample 8 missing LiDAR point cloud!
Sample 9 missing LiDAR point cloud!
Sample 10 missing LiDAR point cloud!

The session creates a USDZ 3D model, but the scale is not correct.

I think a point cloud could help the PhotogrammetrySession find the right scale, but I don't know how to attach one.

Did you ever get an answer to this problem? It's something we have struggled with since the iOS 17 release, and it still seems to be a problem in iOS 18 beta 3. We're using a TIFF container for the depth map and feeding this into the PhotogrammetrySample, but we still get this error message. We also see incorrect scale in the resulting USDZ models, and the scale isn't consistently too small or too large ... it seems fairly random.

This just started happening to us in iOS 18. I noticed that HEIC files produced by the Object Capture API didn't have this problem, and it turns out that the AVDepthData returned with AVCapturePhoto is in a disparity format, which needs to be converted to a depth format before retrieving it as a dictionary. I created an extension that handles this:

import AVFoundation
import Foundation

extension OSType {
    /// Renders a four-character code (such as a Core Video pixel format) as a readable string.
    fileprivate func fourCCToString() -> String {
        let utf16 = [
            UInt16((self >> 24) & 0xFF),
            UInt16((self >> 16) & 0xFF),
            UInt16((self >> 8) & 0xFF),
            UInt16(self & 0xFF),
        ]
        return String(utf16CodeUnits: utf16, count: 4)
    }
}

extension AVDepthData {
    /// Returns depth data in the 32-bit float depth format, converting from disparity
    /// (or half-float depth) when a conversion is available.
    public func formattedForPhotogrammetry() -> AVDepthData {
        if depthDataType == kCVPixelFormatType_DepthFloat32 {
            return self
        } else if canConvertToDepthFloat32() {
            print(
                "converting \(depthDataType.fourCCToString()) to \(kCVPixelFormatType_DepthFloat32.fourCCToString())"
            )
            return self.converting(
                toDepthDataType: kCVPixelFormatType_DepthFloat32)
        } else {
            return self
        }
    }

    public func canConvertToDepthFloat32() -> Bool {
        return availableDepthDataTypes.contains(kCVPixelFormatType_DepthFloat32)
    }
}

Now you can add this to an image destination and get the resulting Data to write to a file:

import AVFoundation
import Foundation
import ImageIO

extension CGImageDestination {
    /// Copies the primary image (and its properties) from the source into the destination.
    func addImage(from imageSource: CGImageSource) {
        CGImageDestinationAddImageFromSource(self, imageSource, 0, nil)
    }

    /// Embeds the depth map as an auxiliary image, converting it to DepthFloat32 first.
    func addDepthData(from depthData: AVDepthData) {
        guard
            var depthDictionary = depthData.formattedForPhotogrammetry()
                .dictionaryRepresentation(
                    forAuxiliaryDataType: nil)
        else { return }

        // looking at images from ObjectCaptureSession, depth metadata isn't supplied,
        // so no sense in including it ourselves
        depthDictionary.removeValue(forKey: kCGImageAuxiliaryDataInfoMetadata)

        CGImageDestinationAddAuxiliaryDataInfo(
            self, kCGImageAuxiliaryDataTypeDepth,
            depthDictionary as CFDictionary)
    }

    /// Finishes writing the image and auxiliary data; returns false if finalizing fails.
    func finalize() -> Bool {
        CGImageDestinationFinalize(self)
    }
}
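
For completeness, here is a minimal sketch of how these extensions could be wired together to produce a HEIC file with the depth map embedded. The function name, the use of photo.fileDataRepresentation() as the image source, and the error handling are assumptions for illustration, not part of the code above:

import AVFoundation
import Foundation
import ImageIO
import UniformTypeIdentifiers

// Hypothetical helper: writes an AVCapturePhoto to a HEIC file with an embedded
// auxiliary depth map, using the CGImageDestination extensions above.
func writeHEICWithDepth(photo: AVCapturePhoto, to outputURL: URL) throws {
    guard let photoData = photo.fileDataRepresentation(),
          let source = CGImageSourceCreateWithData(photoData as CFData, nil),
          let depthData = photo.depthData
    else {
        throw CocoaError(.fileWriteUnknown)  // assumption: surface failures however suits your app
    }

    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(
        output as CFMutableData, UTType.heic.identifier as CFString, 1, nil)
    else {
        throw CocoaError(.fileWriteUnknown)
    }

    destination.addImage(from: source)          // primary image plus its properties
    destination.addDepthData(from: depthData)   // converted to DepthFloat32 and added as auxiliary data

    guard destination.finalize() else {
        throw CocoaError(.fileWriteUnknown)
    }
    try (output as Data).write(to: outputURL, options: .atomic)
}

The resulting file carries the kCGImageAuxiliaryDataTypeDepth payload that the reconstruction reads for depth.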

Hope this helps.

This warning message is related to loading the internal LiDAR data saved by the ObjectCaptureSession front-end. You will see this warning if you do not use ObjectCaptureSession for capture. There is no public way to directly access or provide this raw LiDAR data -- you will need to use the Object Capture UI if you want the LiDAR improvements to textureless object reconstruction.

Using the AVDepthData dictionary as was shown in the original 2020 Object Capture release (and referred to by the examples here) is the public way to provide depth information to the reconstruction. This depth map may come from using stereo cameras to get disparity, or it could be an AVDepthData depth map derived from LiDAR. Note that regardless of which you provide, you will still see this warning about LiDAR. That said, if provided correctly, the depth data will be used to help with scale recovery. Note that in the current release, all four depth pixel formats can be loaded and used by the reconstruction: both half- and full-float disparity as well as depth.
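
For reference, here is a minimal sketch of the in-memory route mentioned above, providing depth through PhotogrammetrySample rather than a file container. The helper name and the use of photo.pixelBuffer are assumptions for illustration:

import AVFoundation
import RealityKit

// Hypothetical helper: builds a PhotogrammetrySample from a captured photo and
// attaches the depth map so the reconstruction can use it for scale recovery.
func makeSample(from photo: AVCapturePhoto, index: Int) -> PhotogrammetrySample? {
    guard let imageBuffer = photo.pixelBuffer else { return nil }
    var sample = PhotogrammetrySample(id: index, image: imageBuffer)
    if let depthData = photo.depthData {
        // Any of the four depth/disparity formats should load; converting to
        // 32-bit float depth here just keeps the samples uniform.
        sample.depthDataMap = depthData
            .converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
            .depthDataMap
    }
    return sample
}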

If you are providing depth or disparity data in the AVDepthData but there is still scale variance, please file a bug with Feedback Assistant so that our team can investigate.
