Recovering the real size of a mesh created with photogrammetry using depth map and gravity vector

So, I've modified the CaptureSample iOS app to take photos using the front TrueDepth camera. It worked perfectly, and I have TIF depth maps together with the gravity vector and the photos I took.
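Roughly, the change looks something like this (a simplified sketch rather than my actual code; the class and method names are just illustrative): the front TrueDepth camera is selected, depth delivery is enabled on the photo output, and gravity comes from Core Motion's device-motion updates.

```swift
import AVFoundation
import CoreMotion

final class TrueDepthCapture: NSObject, AVCapturePhotoCaptureDelegate {
    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()
    private let motion = CMMotionManager()

    func configure() throws {
        session.beginConfiguration()
        session.sessionPreset = .photo

        // Front TrueDepth camera instead of the rear camera used by the original sample.
        guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video,
                                                   position: .front) else {
            fatalError("No TrueDepth camera available")
        }
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }

        // Ask AVFoundation to deliver depth data alongside the photo.
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        session.commitConfiguration()

        // The gravity vector comes from Core Motion's device-motion updates.
        motion.startDeviceMotionUpdates()
        session.startRunning()
    }

    func takePhoto() {
        let settings = AVCapturePhotoSettings()
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        settings.embedsDepthDataInPhoto = false   // the depth map is written out separately as a TIF
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // The depth map and gravity vector would be saved next to the photo here.
        let depth = photo.depthData
        let gravity = motion.deviceMotion?.gravity
        print("Got depth: \(depth != nil), gravity: \(String(describing: gravity))")
    }
}
```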

Using the HelloPhotogrammetry command-line tool, I created the meshes without any problems.
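For context, the HelloPhotogrammetry sample is built on RealityKit's PhotogrammetrySession; the core of what I'm running looks roughly like this (a simplified sketch with placeholder paths, not the sample's actual code):

```swift
import Foundation
import RealityKit

func makeModel() throws {
    let inputFolder = URL(fileURLWithPath: "/path/to/captured/images", isDirectory: true)
    let outputFile  = URL(fileURLWithPath: "/path/to/face.usdz")

    var configuration = PhotogrammetrySession.Configuration()
    configuration.sampleOrdering = .sequential     // frames were captured in order
    configuration.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: inputFolder,
                                            configuration: configuration)

    // Listen for progress and completion messages.
    Task {
        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                print("Done.")
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            default:
                break
            }
        }
    }

    // Ask for a full-detail USDZ model.
    try session.process(requests: [.modelFile(url: outputFile, detail: .full)])
    // (The real sample keeps the process alive with a run loop until processing finishes.)
}
```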

I noticed the meshes have a consistent size between them: for example, after creating a mesh of my face and a mesh of my nose, the nose mesh fits perfectly on top of the nose of the face mesh. Great!

BUT when I open the meshes in Maya, for example, they are really, really tiny!

I was expecting to see the objects at the proper scale, and hopefully even be able to take measurements in Maya to see if they would match the real measurements of the scanned object, but they don't seem to come in at the right size at all. I tried setting Maya to metres, centimetres, and millimetres, but it always imports the meshes really tiny. I have to apply a scale of 100 just to be able to see the meshes, but then they don't measure correctly. By trial and error, I found that scaling the meshes by 86 makes them match the real-world scale in centimetres.
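One way I can sanity-check the exported scale independently of Maya's or Blender's unit settings is to read the USDZ's bounding box directly with Model I/O and compare the extents against a ruler (a rough sketch; the file path is a placeholder, and it presumes the exported USDZ is authored in metres):

```swift
import Foundation
import ModelIO

// Model I/O reports the bounding box in the file's own units (presumably metres
// for these Object Capture USDZ files), so the extents can be compared directly
// against real-world measurements of the scanned object.
let asset = MDLAsset(url: URL(fileURLWithPath: "/path/to/face.usdz"))
let box = asset.boundingBox
let extent = box.maxBounds - box.minBounds
print("Extent in metres: \(extent)  (= \(extent * 100) cm)")
```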

Is there a proper space conversion that needs to be applied to the mesh to convert it to real-world scale?

Could the problem be that I'm using the TrueDepth camera instead of the back camera, and the depth map values are coming in at a different scale than what HelloPhotogrammetry expects?

Strange... have you tried opening your USDZ files in another program like Blender? It's possible that this is a bug in Maya. One other thing you could try is to open the model using AR Quick Look and see if it matches up to a real-world object.

I've tried Blender as well and got the same result as in Maya. The AR Quick Look method won't give me an accurate result. The goal here is to evaluate how well this new photogrammetry API in RealityKit can actually re-create a mesh at real-world scale without any markers.

Apple claims it uses the depth maps from iOS devices to "capture" the real-world scale, and we want to find out if that really works and how precise it can be!

The fact that a few simple tests, opening the model in two different 3D packages, don't bring it in at the correct real-world scale is not a good start.

But it could just be a simple missing transformation that needs to be applied to the depth maps captured by the CaptureSample app so that the scale comes out correct.
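For example (and this is only a guess at what that transformation might be, not a confirmed fix), TrueDepth captures are often delivered as disparity rather than depth, so one thing worth checking is whether the map is converted to 32-bit linear depth in metres before it is written out as a TIF:

```swift
import AVFoundation

// Sketch of the kind of transformation meant above: if the TrueDepth data arrives
// as disparity (1/m) or as 16-bit floats, convert it to 32-bit depth in metres
// before saving the TIF that HelloPhotogrammetry will consume.
func depthInMetres(from photo: AVCapturePhoto) -> AVDepthData? {
    guard var depthData = photo.depthData else { return nil }

    if depthData.depthDataType != kCVPixelFormatType_DepthFloat32 {
        depthData = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    }
    return depthData
}
```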

I'm going to build the original CaptureSample code (since this result used the front TrueDepth camera instead of the back camera in the original code) and redo the photos and photogrammetry to see if the meshes come out scaled correctly.

If the meshes are correctly scaled using the original code, at least I'll know the problem is my TrueDepth capture producing the wrong scale!

Hi, did you manage to solve that issue? Thanks.
