RealityKit PhotogrammetrySession not recognizing depth scale

I'm creating a custom scanning solution for iOS and using the RealityKit Object Capture PhotogrammetrySession API to build a 3D model. I'm finding that the session ignores the depth data I send it and doesn't build the model to scale. The documentation is a little light on how to format the depth, so I'm wondering if someone could take a look at some example files I send to the PhotogrammetrySession. Would you be able to tell me what I'm not doing correctly?

https://drive.google.com/file/d/1-GoeR_KMhX_i7-y8M8ElDRrySasOdoHy/view?usp=sharing

Thank you!

Replies

Hello,

  1. In the sample files provided, each set has *_depth.tiff, *_gravity.txt, and *.png. This follows the data format generated by the CaptureSample app, except that your data contains *.png images instead of *.HEIC images, so it's likely that you did not use the CaptureSample app. With the CaptureSample app, the depth data is embedded in the *.HEIC files, so you do not need to read the depth data explicitly and it would still produce the model at the correct scale. If you generated the *_depth.tiff files yourself, see point 2 below.

  2. If you generated the *_depth.tiff files yourself, make sure the depth data is in either kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32 format. You can follow the instructions in Creating Auxiliary Depth Data Manually. If you did follow these instructions and correctly created the depth data manually, see point 3 below.

  3. Make sure you are actually reading these files in your code to create the PhotogrammetrySample. If you simply keep the *_depth.tiff files in the input folder, but do not read the depthDataMap while creating the PhotogrammetrySample, then these depth files will not be used.
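
Putting points 2 and 3 together, a rough sketch of creating a DepthFloat32 buffer and attaching it to a PhotogrammetrySample might look like this (this is not Apple sample code; depthValues, frameIndex, colorBuffer, and the dimensions are placeholders you would fill from your own capture pipeline):

    import RealityKit
    import CoreVideo

    // Wrap metric depth values (meters, one Float32 per pixel, row-major)
    // in a CVPixelBuffer with the pixel format PhotogrammetrySession expects.
    func makeDepthBuffer(depthValues: [Float32], width: Int, height: Int) -> CVPixelBuffer? {
        var buffer: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                            kCVPixelFormatType_DepthFloat32, nil, &buffer)
        guard let depthBuffer = buffer else { return nil }

        CVPixelBufferLockBaseAddress(depthBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(depthBuffer, []) }

        guard let base = CVPixelBufferGetBaseAddress(depthBuffer) else { return nil }
        let rowBytes = CVPixelBufferGetBytesPerRow(depthBuffer)
        // Copy row by row: the buffer's bytes-per-row may be padded.
        depthValues.withUnsafeBytes { src in
            for row in 0..<height {
                base.advanced(by: row * rowBytes)
                    .copyMemory(from: src.baseAddress!.advanced(by: row * width * 4),
                                byteCount: width * 4)
            }
        }
        return depthBuffer
    }

    // Point 3: actually attach the buffer when you build each sample.
    // If depthDataMap is never set, the *_depth.tiff files on disk are ignored.
    var sample = PhotogrammetrySample(id: frameIndex, image: colorBuffer)
    sample.depthDataMap = makeDepthBuffer(depthValues: depths, width: w, height: h)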

  • The CaptureSample app also saves the depth maps separately as TIFF images. Is this just for illustrative purposes? From my understanding and your response above, the HEIC images already have the depth data embedded and are used that way when building the PhotogrammetrySample, so saving all the TIFF files in the CaptureSample app is superfluous..? Am I correct in my understanding? I have also created my own capture app (in Objective-C) with manual camera controls (which are pretty much essential!); from your response it appears I can save the space and not write the additional TIFF depth images.

    Also, given there is a response from Apple staff, two things: 1) it's a shame the code of the GUI reality capture app with bounding-box adjustment was not released as sample code (wwdc21-10076), and 2) can you bring back the iMessage USDZ thumbnail that was removed in iOS 15 for security reasons? People are afraid to open USDZs now because they look a bit like malware...

  • I have a similar problem: I use ARKit to capture the image and depth, but I can't find where to get the gravity value. Can you help me?

  • @Chengliang you can get the gravity vector with Core Motion. Note that deviceMotion is nil until you start updates:

    import CoreMotion
    
    let motionManager = CMMotionManager()
    motionManager.startDeviceMotionUpdates()
    
    // Later, at the moment you capture a frame:
    if let gravity = motionManager.deviceMotion?.gravity {
        print(gravity.x, gravity.y, gravity.z)
    }
    
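    That reading can go straight into the sample as well, since PhotogrammetrySample's gravity property takes a CMAcceleration directly. A minimal sketch, where frameIndex and imageBuffer are placeholders:

    var sample = PhotogrammetrySample(id: frameIndex, image: imageBuffer)
    sample.gravity = motionManager.deviceMotion?.gravity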