I am making use of personSegmentationWithDepth in my app, and the actual occlusion in my RealityKit implementation works very well. But when I retrieve values from estimatedDepthData for a particular point, it has trouble beyond a depth of about 1.5–1.75 meters: past that distance it returns a depth of 0 meters most of the time. That said, when it does return a value, that value appears to be accurate.
I am working with a LiDAR-equipped iPad on a tripod running iOS 14 beta 2. Could the stillness of the tripod be limiting the data returned? And since estimatedDepthData predates iOS 13.5, I'm also wondering whether it takes advantage of the newer hardware.
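For context, here is a simplified sketch of the kind of sampling I'm describing (the point is assumed to already be in the estimatedDepthData buffer's pixel coordinates, and the buffer is treated as 32-bit float depth in meters):

```swift
import ARKit

// Simplified sketch: read the depth value at a pixel from estimatedDepthData.
// `point` is assumed to already be in the depth buffer's pixel coordinates.
func estimatedDepth(at point: CGPoint, in frame: ARFrame) -> Float32? {
    guard let depthBuffer = frame.estimatedDepthData else { return nil }

    CVPixelBufferLockBaseAddress(depthBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthBuffer, .readOnly) }

    let x = Int(point.x), y = Int(point.y)
    guard x >= 0, x < CVPixelBufferGetWidth(depthBuffer),
          y >= 0, y < CVPixelBufferGetHeight(depthBuffer),
          let base = CVPixelBufferGetBaseAddress(depthBuffer) else { return nil }

    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthBuffer)
    let row = base.advanced(by: y * bytesPerRow)
    // Beyond roughly 1.5–1.75 m this value comes back as 0 most of the time.
    return row.assumingMemoryBound(to: Float32.self)[x]
}
```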
You will benefit from the LiDAR hardware, but estimatedDepthData returns depth values only for pixels that have been segmented as a person (that is, where the corresponding pixel in the segmentationBuffer is nonzero).
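As an illustration (a minimal sketch, assuming the point is already in the segmentationBuffer's pixel coordinates), you could check the segmentation value before trusting the depth lookup:

```swift
import ARKit

// Minimal sketch: only pixels classified as a person in segmentationBuffer
// carry a meaningful value in estimatedDepthData; elsewhere the depth is 0.
func isPersonPixel(at point: CGPoint, in frame: ARFrame) -> Bool {
    guard let segmentation = frame.segmentationBuffer else { return false }

    CVPixelBufferLockBaseAddress(segmentation, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(segmentation, .readOnly) }

    let x = Int(point.x), y = Int(point.y)
    guard x >= 0, x < CVPixelBufferGetWidth(segmentation),
          y >= 0, y < CVPixelBufferGetHeight(segmentation),
          let base = CVPixelBufferGetBaseAddress(segmentation) else { return false }

    // The segmentation buffer is 8 bits per pixel; a nonzero value marks a person.
    let bytesPerRow = CVPixelBufferGetBytesPerRow(segmentation)
    let value = base.advanced(by: y * bytesPerRow + x)
        .assumingMemoryBound(to: UInt8.self).pointee
    return value != 0
}
```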
To obtain depth values for the entire scene, you can add .sceneDepth to your configuration's frame semantics and look at the sceneDepth property instead: https://developer.apple.com/documentation/arkit/arframe/3566299-scenedepth.
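For example (a minimal sketch, assuming an existing RealityKit ARView named arView):

```swift
import ARKit
import RealityKit

// Minimal sketch: add .sceneDepth to the frame semantics so each ARFrame
// also carries a full-scene depth map in frame.sceneDepth.
func runSessionWithSceneDepth(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    var semantics: ARConfiguration.FrameSemantics = [.personSegmentationWithDepth]
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        semantics.insert(.sceneDepth)   // requires a LiDAR-equipped device
    }
    configuration.frameSemantics = semantics
    arView.session.run(configuration)
}

// Per frame, the full-scene depth map (32-bit float values, in meters) is then:
// let depthMap = arView.session.currentFrame?.sceneDepth?.depthMap
```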