Developer information about the new LiDAR

Hello fellow devs,


First post. I've searched the docs and forums but haven't found any useful specs on the new LiDAR capability. Needless to say, this is an amazing and revolutionary development. How and when will it be available for us as developers to start working with?


We have an existing CAD Viewer app that would benefit tremendously from the increased precision and stability this might bring to ARKit.


Any pointers on where to start looking would be most appreciated! Cheers!

All I can find is that it's called the "Scene Geometry API". I've searched the Xcode beta, but it has no reference to it.

When Xcode 11.4 GM is available, check the ARKit developer documentation > "World Tracking" section for a new article.


UPDATE: See Visualizing and Interacting with a Reconstructed Scene
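
For anyone landing here later: once you're on Xcode 11.4 with a LiDAR-equipped device, turning on the scene mesh is essentially a one-line configuration change. A minimal sketch (it assumes RealityKit's ARView, but the same configuration works with a plain ARSession):

```swift
import ARKit
import RealityKit

func startSceneReconstruction(on arView: ARView) {
    // Scene reconstruction is only supported on LiDAR-equipped devices.
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) else {
        print("Scene reconstruction is not supported on this device")
        return
    }

    let configuration = ARWorldTrackingConfiguration()
    // .mesh gives geometry only; .meshWithClassification also labels each
    // face of the ARMeshAnchors (wall, floor, table, seat, ...).
    configuration.sceneReconstruction = .meshWithClassification

    arView.session.run(configuration)
}
```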

Some discussions on Twitter indicate that the LiDAR depth data will not be accessible. Can you say anything about this?

Correct. The depth data that ARKit uses to create the scene mesh (ARMeshAnchors) is not available as of Xcode 11.4.
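
What you do get is the reconstructed geometry itself: the session delivers ARMeshAnchors that you can inspect and that keep refining as more of the scene is observed. A rough sketch of receiving them in an ARSessionDelegate (note there's no raw depth buffer anywhere in this path):

```swift
import ARKit

final class MeshReceiver: NSObject, ARSessionDelegate {

    // New mesh chunks arrive as ARMeshAnchors while ARKit scans the scene.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for meshAnchor in anchors.compactMap({ $0 as? ARMeshAnchor }) {
            let geometry = meshAnchor.geometry
            print("New mesh chunk:",
                  geometry.vertices.count, "vertices,",
                  geometry.faces.count, "faces")
        }
    }

    // Existing anchors are updated over time as the reconstruction refines.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        let meshAnchors = anchors.compactMap { $0 as? ARMeshAnchor }
        if !meshAnchors.isEmpty {
            print("Updated \(meshAnchors.count) mesh anchors")
        }
    }
}
```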

Will the ARMeshAnchor mesh that's generated be in full color, like a 3D scan?

Will the LiDAR depth data be accessible in the future?

Unfortunately, Apple can't promise future features, so I recommend checking future Xcode releases and documentation for the features you're hoping for.

ARKit provides the scene mesh (ARMeshAnchors) with classifications only. ARView's debug visualization of the scene mesh uses a full-color spectrum based on depth; however, neither ARKit nor RealityKit exposes any mesh color information. So you cannot, for example, read any RGB data associated with the scene mesh.
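
To make that concrete: the depth-colored view is just ARView's debug overlay, and the per-face ARMeshClassification is the only semantic data you can read back. A sketch along the lines of the "Visualizing and Interacting with a Reconstructed Scene" sample (the one-UInt8-per-face buffer layout is an assumption taken from that sample):

```swift
import ARKit
import RealityKit

// The depth-colored mesh you see is only a debug overlay; it exposes no RGB data.
func showMeshOverlay(in arView: ARView) {
    arView.debugOptions.insert(.showSceneUnderstanding)
}

// Reads the classification of a single face from an ARMeshGeometry.
// Assumes the classification source stores one UInt8 raw value per face.
func classification(of faceIndex: Int,
                    in geometry: ARMeshGeometry) -> ARMeshClassification {
    guard let source = geometry.classification,
          faceIndex < source.count else { return .none }

    let address = source.buffer.contents()
        .advanced(by: source.offset + source.stride * faceIndex)
    let rawValue = address.assumingMemoryBound(to: UInt8.self).pointee
    return ARMeshClassification(rawValue: Int(rawValue)) ?? .none
}
```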

Apple claims that its LiDAR sensor can scan a room and objects up to 5 meters away. Do we know how close objects can be to the sensor? I didn't see any info on the dev site.

Curious about the use case. What are the close-up objects, and how big are they (and their features)?

This is for face scanning, ideally from 20-30 cm for our use case. We can't use the front-facing sensor because the face scanning is performed by someone else.

There is some gap-filling that comes into play at that close range, but you'll need to step back an additional 30 cm or so for ARKit to sculpt confident geometry that close to the subject.

Is there a plan to add face-tracking capability to the rear-facing camera by using the LiDAR sensor?

Like lelouisdeville, I would like to use the rear camera to determine the face mesh and/or orientation.

I can't promise or talk about future stuff, but if you submit this inquiry using Feedback Assistant, it'll cast a vote in favor of something like this. Apple considers frequently-requested features!

Thank you, Bob.

I've tried looking elsewhere, but is Face Tracking currently the only way to get the x/y/z axes of the face? Or is there somewhere else I could look?
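
For reference, this is roughly what I'm doing with the front camera today; the face's position and x/y/z axes come straight out of the ARFaceAnchor's transform (a sketch, not production code):

```swift
import ARKit
import simd

final class FaceTracker: NSObject, ARSessionDelegate {

    func start(with session: ARSession) {
        // Face Tracking requires the TrueDepth (front) camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for faceAnchor in anchors.compactMap({ $0 as? ARFaceAnchor }) {
            // Columns 0-2 of the anchor transform are the face's x/y/z axes,
            // column 3 is its position, all in world coordinates.
            let transform = faceAnchor.transform
            let xAxis = simd_make_float3(transform.columns.0)
            let yAxis = simd_make_float3(transform.columns.1)
            let zAxis = simd_make_float3(transform.columns.2)
            let position = simd_make_float3(transform.columns.3)
            print("face axes:", xAxis, yAxis, zAxis, "position:", position)
        }
    }
}
```

As far as I can tell, that only works with the front camera, which is why I'm asking about the rear camera.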
