Access to Raw Lidar point cloud

Is it possible to access the raw LiDAR measurements before the sceneDepth calculation combines the LiDAR measurements with visual data? In low-light environments the LiDAR scanner should still work and provide depth information, but I cannot figure out how to access those pure LiDAR depth measurements. I am currently using:

        guard let frame = arView.session.currentFrame,
              let depthData = frame.sceneDepth?.depthMap else {
            print("Depth data is unavailable.")
            return
        }

but this is the depth data produced after sensor fusion, and it fails in low-light conditions.
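For reference, the session is configured roughly like this (a minimal sketch; sceneDepth is only populated when the .sceneDepth frame semantic is requested):

    let configuration = ARWorldTrackingConfiguration()
    // sceneDepth requires a LiDAR-equipped device; check support before requesting it.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }
    arView.session.run(configuration)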

I don't think there's any API for that, but I'm not entirely sure (so don't count on this). Either way, the sensor fusion algorithms are there to improve the depth data, so I'm not sure you would get better results from the raw measurements.

and it fails in low-light conditions

Do you think raw depth data would be better than the sensor fusion-processed data?

Feedback FB15735753 is filed.

Any data processing (in hardware or software) irreversibly loses some of the original information.

The data processing steps:

  1. Acquisition of ‘sparse’ 576 raw LiDAR distance points, even in dark lighting (no API; R1 chip inside?).
  2. Interpolation of the 576 distance points with the RGB image, producing a ‘dense’ 256x192 depthMap image at 60 Hz (API in iOS; see the sketch after this list).
  3. Generating and updating a ‘sparse’ MeshAnchor at about 2 Hz from the depthMap (API in iOS and visionOS).
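For step 2, the iOS API only exposes the already-fused result. A minimal sketch (the function name is illustrative) of reading the dense Float32 depth values out of sceneDepth.depthMap; 256x192 is the typical size, so the code queries the buffer dimensions instead of assuming them:

    import ARKit

    /// Reads the fused depth values (meters from the device) out of an ARFrame's sceneDepth map.
    /// The buffer format is kCVPixelFormatType_DepthFloat32.
    func depthValues(from frame: ARFrame) -> [Float]? {
        guard let depthMap = frame.sceneDepth?.depthMap else { return nil }

        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

        let width = CVPixelBufferGetWidth(depthMap)        // typically 256
        let height = CVPixelBufferGetHeight(depthMap)      // typically 192
        let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
        guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }

        var values = [Float]()
        values.reserveCapacity(width * height)
        for y in 0..<height {
            let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
            for x in 0..<width {
                values.append(row[x])    // depth from the device, in meters
            }
        }
        return values
    }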

Review of the data processing:

  • The 576 raw LiDAR distance points are the original data.
  • Object edges and textures cause artefacts in the depthMap image.
  • Low lighting conditions cause loss of the original information (see the sketch after this list).
  • Data density goes sparse -> dense -> sparse.
  • In summary, the 576 raw LiDAR distance points are preferable to MeshAnchor.
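On the low-lighting point: ARKit ships a per-pixel confidenceMap alongside the fused depthMap, which is the closest the public API gets to showing how much of the original information survived. A minimal sketch (the helper name is illustrative) that keeps only the high-confidence depth pixels; in low light their share drops noticeably:

    import ARKit

    /// Returns the depth samples whose ARKit confidence is .high, with their pixel coordinates.
    /// Illustrative helper: in low light the surviving fraction shrinks noticeably.
    func highConfidenceDepth(from depth: ARDepthData) -> [(x: Int, y: Int, meters: Float)] {
        let depthMap = depth.depthMap
        guard let confidenceMap = depth.confidenceMap else { return [] }

        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
        defer {
            CVPixelBufferUnlockBaseAddress(depthMap, .readOnly)
            CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly)
        }

        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        let depthRowBytes = CVPixelBufferGetBytesPerRow(depthMap)
        let confRowBytes = CVPixelBufferGetBytesPerRow(confidenceMap)
        guard let depthBase = CVPixelBufferGetBaseAddress(depthMap),
              let confBase = CVPixelBufferGetBaseAddress(confidenceMap) else { return [] }

        var samples: [(x: Int, y: Int, meters: Float)] = []
        for y in 0..<height {
            let depthRow = depthBase.advanced(by: y * depthRowBytes).assumingMemoryBound(to: Float32.self)
            let confRow = confBase.advanced(by: y * confRowBytes).assumingMemoryBound(to: UInt8.self)
            for x in 0..<width where confRow[x] == UInt8(ARConfidenceLevel.high.rawValue) {
                samples.append((x: x, y: y, meters: depthRow[x]))
            }
        }
        return samples
    }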

The numbers 576, 256x192, 60 Hz, ..., and the data processing steps are described in:

  • WWDC 2020 Video, Scene Geometry (15m42s):

https://developer.apple.com/kr/videos/play/wwdc2020/10611/

  • What kind of LiDAR cameras does Apple use:

https://developer.apple.com/forums/thread/692724?answerId=692054022#692054022

The steps of data processing:

  1. The 576 laser distance points are the originals.
  2. Interpolation of the 576 points with the RGB image into the depthMap.
  3. MeshAnchor from the depthMap.

We expect a measurement accuracy of 2-3 mm for the 576 points, but 10-15 mm for the vertex points of MeshAnchor.

We prefer the 576 points to the vertex points of MeshAnchor. Both are sparse.
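For comparison, the MeshAnchor vertex points mentioned above are what the current API does expose. A minimal sketch of pulling them out of an iOS ARMeshAnchor (the visionOS MeshAnchor type is similar but not identical); the packed float3 layout follows the ARMeshGeometry documentation:

    import ARKit
    import Metal
    import simd

    /// Extracts vertex positions, in world coordinates, from an ARMeshAnchor (iOS).
    /// The vertex buffer is packed float3, i.e. 12 bytes per vertex.
    func worldVertices(of meshAnchor: ARMeshAnchor) -> [SIMD3<Float>] {
        let vertices = meshAnchor.geometry.vertices
        assert(vertices.format == MTLVertexFormat.float3, "Expected packed float3 vertex data.")

        var result: [SIMD3<Float>] = []
        result.reserveCapacity(vertices.count)
        for index in 0..<vertices.count {
            let pointer = vertices.buffer.contents()
                .advanced(by: vertices.offset + vertices.stride * index)
            let (x, y, z) = pointer.assumingMemoryBound(to: (Float, Float, Float).self).pointee
            // Vertices are in anchor-local coordinates; transform into world space.
            let world = meshAnchor.transform * SIMD4<Float>(x, y, z, 1)
            result.append(SIMD3<Float>(world.x, world.y, world.z))
        }
        return result
    }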

The source code of visionOS apps that process the vertex points of MeshAnchor is available on GitHub (CurvSurf):

https://github.com/CurvSurf/FindSurface-visionOS
