Does estimatedDepthData take advantage of LiDAR?

I am using personSegmentationWithDepth in my app, and the occlusion in my RealityKit implementation works very well. But when I retrieve values from estimatedDepthData for a particular point, it has trouble beyond a depth of about 1.5–1.75 meters: past that distance it returns a depth of 0 meters most of the time. That said, when it does return a nonzero value, the value appears to be accurate.
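For context, the per-pixel lookup I'm describing is essentially the following (a simplified sketch: the function name is mine, and it assumes the point has already been converted into the depth buffer's pixel coordinates):

```swift
import ARKit

// Simplified sketch of reading a single depth value (in meters) from
// estimatedDepthData. The point is assumed to already be in the depth
// buffer's pixel coordinates.
func personDepth(at point: CGPoint, in frame: ARFrame) -> Float? {
    guard let depthBuffer = frame.estimatedDepthData else { return nil }

    CVPixelBufferLockBaseAddress(depthBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthBuffer, .readOnly) }

    let width = CVPixelBufferGetWidth(depthBuffer)
    let height = CVPixelBufferGetHeight(depthBuffer)
    let x = Int(point.x), y = Int(point.y)
    guard x >= 0, x < width, y >= 0, y < height,
          let base = CVPixelBufferGetBaseAddress(depthBuffer) else { return nil }

    // estimatedDepthData is a 32-bit float buffer; 0 means "no depth here".
    let rowBytes = CVPixelBufferGetBytesPerRow(depthBuffer)
    let row = base.advanced(by: y * rowBytes)
    return row.assumingMemoryBound(to: Float32.self)[x]
}
```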

I am working with a LiDAR-equipped iPad running iOS 14 beta 2, mounted on a tripod. Could the stillness of the tripod be limiting the data I get back? And since estimatedDepthData predates iOS 13.5, I'm also wondering whether it takes advantage of the newer hardware at all.

Accepted Reply

You will benefit from LiDAR hardware, but estimatedDepthData will return depth values only for pixels that have been segmented as a person (that is, where the segmentationBuffer has a value other than 0 for the corresponding pixel).
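For example, you can check the segmentation mask before trusting a depth value (a minimal sketch; the function name is illustrative and it assumes (x, y) is already in the buffer's pixel coordinates):

```swift
import ARKit

// Sketch: check the segmentation buffer before trusting estimatedDepthData.
func isPersonPixel(x: Int, y: Int, frame: ARFrame) -> Bool {
    guard let segmentation = frame.segmentationBuffer else { return false }
    CVPixelBufferLockBaseAddress(segmentation, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(segmentation, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(segmentation) else { return false }

    // segmentationBuffer holds one 8-bit value per pixel; 0 means "not a person",
    // so estimatedDepthData will also be 0 there.
    let row = base.advanced(by: y * CVPixelBufferGetBytesPerRow(segmentation))
    return row.assumingMemoryBound(to: UInt8.self)[x] != 0
}
```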

In order to obtain depth values for the entire scene, you can add .sceneDepth to your configuration’s frame semantics and look at the sceneDepth property instead: https://developer.apple.com/documentation/arkit/arframe/3566299-scenedepth.
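For instance, a minimal sketch of that setup (assuming an existing RealityKit ARView named `arView` and a session delegate for reading frames):

```swift
import ARKit
import RealityKit

// Minimal sketch: enable full-scene depth alongside person occlusion.
// `arView` is assumed to be your existing RealityKit ARView.
func runSessionWithSceneDepth(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    var semantics: ARConfiguration.FrameSemantics = [.personSegmentationWithDepth]
    // .sceneDepth is only supported on LiDAR devices.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        semantics.insert(.sceneDepth)
    }
    configuration.frameSemantics = semantics
    arView.session.run(configuration)
}

// Then, on each ARFrame (e.g. in ARSessionDelegate.session(_:didUpdate:)):
// frame.sceneDepth?.depthMap      // 32-bit float depth in meters for the whole image
// frame.sceneDepth?.confidenceMap // per-pixel confidence you can filter on
```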

Replies

Yes, thanks, I recognize that the estimated depth data is specific to people in the segmentation buffer. I had some strange experiences yesterday: I turned on mesh scene reconstruction and the showSceneUnderstanding flag on my iPad running the iOS 14 beta, and all of a sudden I got a continuous stream of estimated depth data out to around 4–5 meters. But when I tried again with a new build, I got literally no estimated depth data (i.e., all depth values were 0). I restarted my device and Xcode, again got a steady stream for a single build, and then it again showed no data. Once I get a better handle on it I will file a bug report, but it did strike me that turning on one or both of those flags seemed to "wake up" the LiDAR for the estimated depth data.

One other important note: in another test, I had someone hold the iPad instead of using a tripod, and that also seemed to improve the estimated depth data, again suggesting that the LiDAR is not, at least by default, activated for estimated depth data.
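For reference, the flags I'm referring to are enabled roughly like this (a sketch; `arView` stands in for my actual ARView):

```swift
import ARKit
import RealityKit

// Sketch of the two flags mentioned above; `arView` stands in for the
// app's actual RealityKit ARView.
func enableMeshReconstruction(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.frameSemantics = [.personSegmentationWithDepth]

    // Mesh scene reconstruction (LiDAR devices only).
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }

    // Visualize the reconstructed mesh.
    arView.debugOptions.insert(.showSceneUnderstanding)

    arView.session.run(configuration)
}
```
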
After further testing, I think my strange experiences may have been unrelated to scene reconstruction. When I tested for person depth at a point nearer the middle of the body, as opposed to at the feet where I had been testing, the data quality was much more consistent.