I've previously built an app with ARKit 3.5.
With [configuration.sceneReconstruction = .mesh],
I turn all the mesh anchors into 3D models.
Am I able to filter these mesh anchors by confidence,
and add color data from the camera feed?
Or, starting from the MetalKit demo code, how could I convert the point cloud into a 3D model?
Hello HeoJin!
Confidence information is exposed in ARDepthData's confidenceMap property as part of the new depth API in ARKit 4. You can use it to filter the pixels of the depth map provided in the sceneDepth property.
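As a minimal sketch of that filtering, the snippet below walks the depth map and keeps only pixels whose confidence meets a threshold. It assumes a session running with `frameSemantics = .sceneDepth`; the function name and the choice to return a flat array of depth values are my own, not part of any ARKit API.

```swift
import ARKit

// Sketch: filter the scene depth map by confidence (ARKit 4 depth API).
// Assumes the session's configuration has frameSemantics = .sceneDepth.
func filteredDepthValues(from frame: ARFrame,
                         minimumConfidence: ARConfidenceLevel = .medium) -> [Float] {
    guard let sceneDepth = frame.sceneDepth,
          let confidenceMap = sceneDepth.confidenceMap else { return [] }

    let depthMap = sceneDepth.depthMap
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer {
        CVPixelBufferUnlockBaseAddress(depthMap, .readOnly)
        CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly)
    }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    // depthMap is 32-bit float meters; confidenceMap is one 8-bit value per
    // pixel matching ARConfidenceLevel raw values (.low = 0, .medium = 1, .high = 2).
    let depthBase = CVPixelBufferGetBaseAddress(depthMap)!
        .assumingMemoryBound(to: Float32.self)
    let confBase = CVPixelBufferGetBaseAddress(confidenceMap)!
        .assumingMemoryBound(to: UInt8.self)
    let depthStride = CVPixelBufferGetBytesPerRow(depthMap) / MemoryLayout<Float32>.stride
    let confStride = CVPixelBufferGetBytesPerRow(confidenceMap)

    var result: [Float] = []
    for y in 0..<height {
        for x in 0..<width {
            if confBase[y * confStride + x] >= UInt8(minimumConfidence.rawValue) {
                result.append(depthBase[y * depthStride + x])
            }
        }
    }
    return result
}
```

Note that both pixel buffers can have row padding, which is why the code indexes via bytes-per-row rather than width.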
Scene reconstruction, introduced with ARKit 3.5, leverages the same depth data, but ARKit already provides an optimized mesh via the ARMeshAnchors, so you don't have to deal with confidence yourself.
If you want to color the mesh based on the camera feed, you could do so manually, for example by projecting between the camera image and 3D space and coloring the corresponding mesh face with the pixel's color. However, keep in mind that ARMeshAnchors are constantly updated, so you might want to first scan the entire area you're interested in, then stop scene reconstruction, and do the coloring in a subsequent step.
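One way to sketch that mapping is to go in the opposite direction: read a vertex out of the mesh anchor's geometry buffer, transform it to world space, and project it into the camera image with `ARCamera.projectPoint`, which gives you the pixel to sample from `frame.capturedImage`. The function below returns that pixel coordinate; the actual color sampling (e.g. via CoreImage or vImage) is left out, and the function name and landscape-right orientation are assumptions for illustration.

```swift
import ARKit

// Sketch: find the camera-image pixel that corresponds to one mesh vertex
// by projecting the vertex into the current frame.
func imagePixel(forVertex vertexIndex: Int,
                in meshAnchor: ARMeshAnchor,
                using frame: ARFrame) -> CGPoint? {
    let vertices = meshAnchor.geometry.vertices
    // Read the vertex position out of the geometry's Metal buffer.
    let pointer = vertices.buffer.contents()
        .advanced(by: vertices.offset + vertices.stride * vertexIndex)
    let localPosition = pointer.assumingMemoryBound(to: SIMD3<Float>.self).pointee

    // Anchor-local -> world space.
    let world4 = meshAnchor.transform * SIMD4<Float>(localPosition, 1)
    let worldPosition = SIMD3<Float>(world4.x, world4.y, world4.z)

    // World space -> camera-image pixel coordinates.
    let resolution = frame.camera.imageResolution
    let pixel = frame.camera.projectPoint(worldPosition,
                                          orientation: .landscapeRight,
                                          viewportSize: resolution)
    // Vertices outside the current camera view can't be colored from this frame.
    guard pixel.x >= 0, pixel.x < resolution.width,
          pixel.y >= 0, pixel.y < resolution.height else { return nil }
    return pixel // sample frame.capturedImage at this coordinate
}
```

Because each frame only sees part of the mesh, you'd typically accumulate colors across frames (or pick the frame with the most frontal view of each face) after scanning is finished, as suggested above.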