How to use Depth in the Object Capture API?

OS: 12.0 Beta

I'm interested in the Object Capture API and am trying to get the sample app to work.

https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture

https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app/

After a few attempts, I noticed that the output doesn't change much even when the depth (.TIF) and gravity (.TXT) files aren't in the input folder.
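
For reference, the folder-based flow I started from looks roughly like this (a minimal sketch following the command-line sample; paths are placeholders):

import Foundation
import RealityKit

// Folder-based input: the session scans the directory for images and,
// if present, is supposed to pick up the matching depth (.TIF) and
// gravity (.TXT) sidecar files automatically.
let inputFolder = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true)
let session = try PhotogrammetrySession(
    input: inputFolder,
    configuration: PhotogrammetrySession.Configuration()
)
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "/path/to/Model.usdz"), detail: .full)
])
// Progress and results are then delivered asynchronously via session.outputs.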

I wanted to make use of depth, so I switched to the PhotogrammetrySample API, since I noticed PhotogrammetrySample.depthDataMap.

The session was created successfully using the same kind of CVPixelBuffer as AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32).depthDataMap.
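
For what it's worth, my sample-based setup boils down to this (a simplified sketch; loading the color and depth buffers from disk is omitted):

import CoreVideo
import RealityKit

// Build one PhotogrammetrySample per image, attach its depth map, and
// create the session from the resulting sample sequence.
func makeSession(colorBuffers: [CVPixelBuffer],
                 depthBuffers: [CVPixelBuffer]) throws -> PhotogrammetrySession {
    var samples: [PhotogrammetrySample] = []
    for (index, color) in colorBuffers.enumerated() {
        var sample = PhotogrammetrySample(id: index, image: color)
        sample.depthDataMap = depthBuffers[index]  // DisparityFloat32 in my case
        samples.append(sample)
    }
    return try PhotogrammetrySession(
        input: samples,
        configuration: PhotogrammetrySession.Configuration()
    )
}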

However, the session outputs .invalidSample for every id that includes depth.

[command-line app log]

Successfully created session. (PhotogrammetrySample API)
Using request: modelFile(url: ***, detail: RealityFoundation.PhotogrammetrySession.Request.Detail.full, geometry: nil)
Invalid Sample! id=1  reason="The sample is not supported."
Invalid Sample! id=2  reason="The sample is not supported."
Invalid Sample! id=3  reason="The sample is not supported."
Invalid Sample! id=4  reason="The sample is not supported."
...

What does the reason "The sample is not supported." mean?

Is there any sample code showing how to use depth in this process?

@Kazuki_Fujimura have you figured it out?

Take a look at this post: https://developer.apple.com/forums/thread/697968

I copied/pasted my code there for reading images and converting them into color/depth/disparity CVPixelBuffers.
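
The gist of the depth part is something like this (a rough sketch, not the exact code from the post; it assumes CIContext can render into a one-channel DisparityFloat32 buffer):

import CoreImage
import CoreVideo
import Foundation

// Read a depth/disparity TIFF from disk into a DisparityFloat32 pixel
// buffer. The IOSurface backing is an assumption carried over from
// camera-produced buffers.
func loadDisparityBuffer(from url: URL) -> CVPixelBuffer? {
    guard let image = CIImage(contentsOf: url) else { return nil }
    var buffer: CVPixelBuffer?
    let attrs: [String: Any] = [kCVPixelBufferIOSurfacePropertiesKey as String: [:]]
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(image.extent.width),
                        Int(image.extent.height),
                        kCVPixelFormatType_DisparityFloat32,
                        attrs as CFDictionary,
                        &buffer)
    guard let pixelBuffer = buffer else { return nil }
    CIContext().render(image, to: pixelBuffer)
    return pixelBuffer
}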

I'm still trying to figure out the mask component, though. I now get "the sample is not supported" when I add the mask... There must be some setup that needs to happen on the mask CVPixelBuffer (as with disparity/depth), but there's no documentation whatsoever.
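
In case it helps anyone experiment, this is the kind of mask buffer I'm trying, assigned to PhotogrammetrySample.objectMask. The format is pure guesswork on my part, since it's undocumented: a single-channel 8-bit buffer with 255 for object pixels and 0 for background.

import CoreVideo
import Foundation

// Hypothetical mask layout: one byte per pixel, 255 = object, 0 = background.
func makeObjectMask(width: Int, height: Int, pixels: [UInt8]) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    let attrs: [String: Any] = [kCVPixelBufferIOSurfacePropertiesKey as String: [:]]
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_OneComponent8,
                        attrs as CFDictionary, &buffer)
    guard let mask = buffer else { return nil }
    CVPixelBufferLockBaseAddress(mask, [])
    defer { CVPixelBufferUnlockBaseAddress(mask, []) }
    let base = CVPixelBufferGetBaseAddress(mask)!
    let bytesPerRow = CVPixelBufferGetBytesPerRow(mask)
    for row in 0..<height {
        // Copy row by row so the buffer's row padding is respected.
        pixels[(row * width)..<((row + 1) * width)].withUnsafeBytes {
            memcpy(base + row * bytesPerRow, $0.baseAddress!, width)
        }
    }
    return mask
}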
