iPad Pro 2020 - LiDAR - Depth Frame Data Access

The iPad Pro 2020 includes a LiDAR sensor that promises precise distance measurements in the form of "depth points" [Apple - Newsroom - Press Release - 2020-03-18 - Apple unveils new iPad Pro with breakthrough LiDAR Scanner and brings trackpad support to iPadOS].

I am looking for direct, real-time access to the depth data ("depth points") from the LiDAR sensor at full frame rate, in the same way depth data is provided by the TrueDepthCamera on the front of the iPad Pro and other iOS and iPadOS devices.

However, I have not been able to determine an efficient method for retrieving the LiDAR depth data. As a result, I am reaching out for clarification: is the LiDAR depth data directly available, and if so, how can it be accessed?


Apple Developer Resources for LiDAR
The following sample code, Visualizing and Interacting with a Reconstructed Scene, appears to be the only developer resource currently provided for working with the LiDAR depth data.

In the VisualizingSceneSemantics example, the session(_:didUpdate:) delegate callback in the ViewController provides access to the current ARFrame. The ARFrame is also accessible through the currentFrame property on the ARSession.
An ARFrame exposes an AVDepthData property, capturedDepthData; however, this depth data appears to always be nil (or limited to specific configurations) and is not consistently available.
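For reference, here is a minimal sketch of those two access paths, assuming a plain ARSessionDelegate setup (the class name and pollCurrentFrame helper are placeholders, not the sample's code):

    import ARKit
    import UIKit

    class DepthViewController: UIViewController, ARSessionDelegate {
        let session = ARSession()

        override func viewDidLoad() {
            super.viewDidLoad()
            session.delegate = self
            session.run(ARWorldTrackingConfiguration())
        }

        // Delegate callback: fires once per rendered frame.
        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // capturedDepthData is AVDepthData?, but in a world-tracking session
            // it appears to be nil; it seems to be populated only for face tracking.
            if let depth = frame.capturedDepthData {
                print("capturedDepthData:", depth.depthDataMap)
            }
        }

        // Alternative: poll the session's currentFrame property on demand.
        func pollCurrentFrame() -> ARFrame? {
            return session.currentFrame
        }
    }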

The raycast function on the ARView can be used to find the world position for an x/y coordinate in the frame, but raycasting appears to be computationally expensive and is not practical for accessing more than a few points per frame.
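For context, that per-point lookup looks roughly like this (a sketch assuming a RealityKit ARView; the function name and screenPoint parameter are placeholders):

    import ARKit
    import RealityKit

    func worldPosition(at screenPoint: CGPoint, in arView: ARView) -> SIMD3<Float>? {
        // Each call performs a full raycast against the estimated scene geometry,
        // which is why sampling more than a handful of points per frame is impractical.
        let results = arView.raycast(from: screenPoint,
                                     allowing: .estimatedPlane,
                                     alignment: .any)
        guard let hit = results.first else { return nil }
        // Translation column of the hit's world transform.
        let t = hit.worldTransform.columns.3
        return SIMD3<Float>(t.x, t.y, t.z)
    }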

In addition, the color frame from the ARFrame is not in the pixel format I need, so an expensive conversion is required on every frame.
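For example, the capturedImage on the ARFrame arrives as a bi-planar YCbCr pixel buffer, so getting BGRA out of it takes a per-frame conversion along these lines (a Core Image sketch; buffer pooling and error handling omitted):

    import ARKit
    import CoreImage

    let ciContext = CIContext()  // reuse across frames; creating one per frame is costly

    func bgraPixelBuffer(from frame: ARFrame) -> CVPixelBuffer? {
        // capturedImage is typically kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.
        let source = frame.capturedImage
        var output: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault,
                            CVPixelBufferGetWidth(source),
                            CVPixelBufferGetHeight(source),
                            kCVPixelFormatType_32BGRA,
                            nil,
                            &output)
        guard let destination = output else { return nil }
        // Rendering the YCbCr image into the BGRA buffer is the per-frame cost.
        ciContext.render(CIImage(cvPixelBuffer: source), to: destination)
        return destination
    }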


AVFoundation - AVCaptureDevice

TrueDepthCamera, DualWideCamera, TimeOfFlightCamera?

AVFoundation exposes methods for handling camera sensor color and depth data.
The following sample code, Streaming Depth Data from the TrueDepth Camera, demonstrates the depth data provided by the TrueDepthCamera and the DualWideCamera. However, the LiDAR sensor's depth data is notably absent from the AVFoundation APIs for managing sensor data.
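For comparison, the AVFoundation depth path that sample relies on looks roughly like this (a condensed sketch; configuration checks and error handling omitted):

    import AVFoundation

    final class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
        let session = AVCaptureSession()
        let depthOutput = AVCaptureDepthDataOutput()

        func configure() throws {
            // The TrueDepth camera advertises depth formats; there is no equivalent
            // AVCaptureDevice.DeviceType for the LiDAR sensor as of iPadOS 13.5.
            guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                       for: .video,
                                                       position: .front) else { return }
            session.beginConfiguration()
            session.addInput(try AVCaptureDeviceInput(device: device))
            session.addOutput(depthOutput)
            depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth.queue"))
            session.commitConfiguration()
            session.startRunning()
        }

        // Called for every depth frame the device delivers.
        func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                             didOutput depthData: AVDepthData,
                             timestamp: CMTime,
                             connection: AVCaptureConnection) {
            let depthMap = depthData.depthDataMap  // CVPixelBuffer of depth/disparity values
            _ = depthMap
        }
    }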

Strangely, the Streaming Depth Data from the TrueDepth Camera sample works on the iPad Pro 2020 for the front camera, but when the AVCaptureDevice is switched to the back DualWideCamera, no depth data is offered. The same code works on the iPhone 11, where the DualWideCamera does provide depth data.
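One way to see the difference is to check whether the back device reports any depth formats at all (a quick diagnostic sketch; the device-specific results described above are what I observe, not documented behavior):

    import AVFoundation

    if let backCamera = AVCaptureDevice.default(.builtInDualWideCamera,
                                                for: .video,
                                                position: .back) {
        // On an iPhone 11 this list is non-empty; on the 2020 iPad Pro
        // (iPadOS 13.5) it appears to come back empty.
        let depthFormats = backCamera.activeFormat.supportedDepthDataFormats
        print("Back DualWideCamera depth formats:", depthFormats.count)
    }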


Interestingly, the debug log of the Visualizing and Interacting with a Reconstructed Scene sample reveals the following: "ARDepthSensor (AVCaptureDeviceTypeBuiltInTimeOfFlightCamera - Back)". The form of AVCaptureDeviceTypeBuiltInTimeOfFlightCamera matches the naming of other AVCaptureDevice device types, such as AVCaptureDeviceTypeBuiltInDualWideCamera and AVCaptureDeviceTypeBuiltInTrueDepthCamera. However, AVCaptureDevice.DeviceType does not appear to include a builtInTimeOfFlightCamera case as of iPadOS 13.5.
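Since there is no builtInTimeOfFlightCamera case to pass to a discovery session, the closest one can do today is enumerate the device types that do exist (sketch):

    import AVFoundation

    // AVCaptureDevice.DeviceType offers no .builtInTimeOfFlightCamera case as of
    // iPadOS 13.5, so the LiDAR sensor cannot be requested this way.
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTrueDepthCamera, .builtInDualWideCamera, .builtInWideAngleCamera],
        mediaType: .video,
        position: .unspecified
    )
    for device in discovery.devices {
        print(device.localizedName, "-", device.deviceType.rawValue)
    }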


Conclusion

Ideally, AVCaptureDevice.DeviceType in AVFoundation would expose a builtInTimeOfFlightCamera device type that provides access to the LiDAR depth data in the same way as the TrueDepthCamera and the DualWideCamera. Is this being considered, or is there a better method?

Accepted Reply

ARKit 4 introduces ARDepthData, providing the depth data from the LiDAR Scanner in a CVPixelBuffer on each ARFrame.
For more detail, see the WWDC20 session Explore ARKit 4, starting at 15:42.
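In practice, reading it looks roughly like this once the frame semantic is enabled (a sketch assuming iPadOS 14 on a LiDAR-equipped device):

    import ARKit

    let session = ARSession()
    let configuration = ARWorldTrackingConfiguration()

    // Opt in to LiDAR depth; check support before enabling.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }
    session.run(configuration)

    // Later, e.g. inside session(_:didUpdate:) or by polling currentFrame:
    if let sceneDepth = session.currentFrame?.sceneDepth {
        let depthMap = sceneDepth.depthMap            // CVPixelBuffer, Float32 depth in meters
        let confidenceMap = sceneDepth.confidenceMap  // CVPixelBuffer? of ARConfidenceLevel values
        _ = (depthMap, confidenceMap)
    }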

As of WWDC 20, the depth data is still not available through AVFoundation; there is still no builtInTimeOfFlightCamera AVCaptureDevice.DeviceType.

While ARKit is a fantastic tool, it includes some overhead that is not ideal for all applications.
The depth data appears to originate from AVFoundation before being sent to ARKit.
Ideally, AVFoundation would provide the LiDAR depth data in the same way it already provides depth data from the TrueDepth Camera.
In addition, it would be nice to have the confidenceMap that ARKit exposes available for all depth data sources in AVFoundation, such as the TrueDepthCamera and the LiDAR Scanner.

If you are reading this, please join me in filing an enhancement request with Apple Feedback Assistant for more depth data from AVFoundation.

Thanks!

Replies

Hello,

Please see this thread, https://forums.developer.apple.com/message/412576, which clarifies what is and is not available.
