Acquisition of specific point[pixel] LiDAR depth information

Hello!

I have recently begun exploring the "Capturing depth using the LiDAR camera" documentation for AVFoundation, and I intend to acquire the depth information of specific points based on touch.

I have two main questions and would be grateful for any clarification.

  1. How, and in which format, can I access a specific point/pixel's depth information, and how can I make sure it tracks/displays exactly the point acquired from the touch gesture? [Basically, how are the pixels tagged to their specific data?]
  2. What is the unit of measure for the LiDAR depth data? Also, over what range is data accuracy guaranteed?

It would be great if you could point me in the right direction in my search for answers. [I have gone through the documentation in depth.]

Thanks in advance.

Hello, I hope I can help you.

The overall process will be: collect the LiDAR data -> touch a screen point -> find the LiDAR point closest to the touched screen point.

The unit of measurement for LiDAR depth data is meters, i.e. [m]. The relatively reliable distance range for which data accuracy is guaranteed is between about 0.3 and 4 meters.
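If you only need the depth value at the touched pixel, you can also sample the depth map directly. The sketch below is my own rough example, not Apple's sample code; it assumes you receive AVDepthData from an AVCaptureDepthDataOutput and show the camera feed through an AVCaptureVideoPreviewLayer:

```swift
import AVFoundation
import UIKit

// A minimal sketch: read the depth value (in meters) at a touched screen point
// from an AVDepthData delivered by AVCaptureDepthDataOutput.
func depth(at touchPoint: CGPoint,            // point in the preview view's coordinates
           in previewLayer: AVCaptureVideoPreviewLayer,
           from depthData: AVDepthData) -> Float? {
    // Convert to DepthFloat32 so each pixel is one Float32 value in meters.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let depthMap = converted.depthDataMap

    // Map the touch point to normalized capture-device coordinates (0...1),
    // then to pixel coordinates in the (lower-resolution) depth map.
    let devicePoint = previewLayer.captureDevicePointConverted(fromLayerPoint: touchPoint)
    let width  = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let x = Int((devicePoint.x * CGFloat(width)).rounded())
    let y = Int((devicePoint.y * CGFloat(height)).rounded())
    guard (0..<width).contains(x), (0..<height).contains(y) else { return nil }

    // Read the Float32 depth value at that row/column.
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)
    let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: Float32.self)
    let value = row[x]
    return value.isFinite ? value : nil
}
```

The depth map has a lower resolution than the video, so the touched screen point maps to the nearest depth sample rather than to an exact video pixel; that is essentially how the pixels are "tagged" to their data: by their row/column position in the depthDataMap buffer.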

For finding the 3D point closest to the touched point, see the function _pickPoint() from GitHub/CurvSurf.

Hello JoonAhn!

Thank you for your swift response. It is helpful.

I understand the overall process involved in acquiring the depth data for a single point, but so far I haven't been able to pinpoint the touched point's specific LiDAR data.

I visited the GitHub link you provided and am trying to understand the pickPoint function and how it can be used. Although it is a little tricky, I understand that I need to fix a probeRadius with a unit radius length, then use certain length and cosine equations, and assign pickIdx to the idx of the acquired point.

Please let me know if you have any more suggestions or advice while I try to edit the code and display a specific pixel's LiDAR data.

Thanks for the help once again.

Hi TSHKS,

I'll summarize picking a 3D point on the 2D screen below (a code sketch follows the list).

How to Pick a 3D Point in 2D Screen (https://github.com/CurvSurf/ARKitDepthFindSurfaceWeb):

  • You aim your device at the object surface. Your touch point is at the screen center.
  • The probe radius [pixels] on screen is converted to a physical radius at unit distance from ray_pos. unitRadius represents the half vertex angle of the “view cone”.
  • Squared slope length at unit distance: UR_SQ_PLUS_ONE = unitRadius * unitRadius + 1.0 .
  • len1 = distance of list[idx] from ray_pos along the direction of ray_dir, i.e. dot(list[idx] - ray_pos, ray_dir).
  • The squared slope length at distance len1 is UR_SQ_PLUS_ONE * (len1 * len1).
  • If the squared distance of list[idx] from ray_pos is smaller than the squared slope length at len1, the point list[idx] lies inside the “view cone”.
  • An orthographic version, the “view cylinder (pipe)”, is used by https://developers.curvsurf.com/WebDemo/ .
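Putting those steps together, a simplified Swift sketch of the picking could look like the following. This is my own paraphrase of the idea, not the exact _pickPoint() source; it assumes ray_dir is normalized and that list[] is in the same coordinate frame as ray_pos:

```swift
import simd

// View-cone picking: return the index of the point inside the cone that is
// nearest along the ray, or nil if no point falls inside the cone.
func pickPoint(rayPosition ray_pos: simd_float3,
               rayDirection ray_dir: simd_float3,   // assumed normalized
               points list: [simd_float3],
               unitRadius: Float) -> Int? {
    // Squared slope length at unit distance, as described above.
    let UR_SQ_PLUS_ONE = unitRadius * unitRadius + 1.0
    var minLen = Float.greatestFiniteMagnitude
    var pickIdx: Int? = nil

    for idx in list.indices {
        let toPoint = list[idx] - ray_pos
        let len1 = simd_dot(ray_dir, toPoint)         // distance along the ray
        guard len1 > 0 else { continue }              // ignore points behind the camera

        let sqDistance = simd_length_squared(toPoint) // squared distance from ray_pos
        let sqSlopeLen = UR_SQ_PLUS_ONE * (len1 * len1)

        // Inside the view cone, and nearer along the ray than the current best?
        if sqDistance < sqSlopeLen && len1 < minLen {
            minLen = len1
            pickIdx = idx
        }
    }
    return pickIdx
}
```

Among the candidates inside the cone, this sketch simply returns the one nearest along the ray; the original _pickPoint() may use a different selection rule.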

Hope this helps....

The set of points list[] is a collection of 3D points from:

  • ARPointCloud: https://youtu.be/4U4FlavRKa4 (Apple ARKit: AR Based on Curved Object Surfaces).
  • vertices of ARMeshAnchor: https://youtu.be/JSNXB3zc4mo (Fast Cylinder Fitting - Apple iPad Pro LiDAR).
  • ARDepthData: https://youtu.be/zc6GQOtgS7M (Real Time Ball Tracking in depthMap - Apple iPad Pro LiDAR). See the sketch below for this route.
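If you go the ARDepthData route, one way to build list[] is to unproject every depth-map pixel into a camera-space 3D point using the camera intrinsics scaled to the depth-map resolution. The sketch below is my own rough illustration of that idea under those assumptions, not the code used in the videos, and the helper name cameraSpacePoints is made up:

```swift
import ARKit
import simd

// Unproject every pixel of ARDepthData's depthMap into a 3D point.
func cameraSpacePoints(from frame: ARFrame) -> [simd_float3] {
    guard let depthMap = frame.sceneDepth?.depthMap else { return [] }

    let width  = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)

    // The intrinsics are given at the capturedImage resolution;
    // scale them down to the depth-map resolution.
    let imageSize = frame.camera.imageResolution
    let scaleX = Float(width)  / Float(imageSize.width)
    let scaleY = Float(height) / Float(imageSize.height)
    let K = frame.camera.intrinsics
    let fx = K[0][0] * scaleX, fy = K[1][1] * scaleY
    let cx = K[2][0] * scaleX, cy = K[2][1] * scaleY

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return [] }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)

    var points: [simd_float3] = []
    points.reserveCapacity(width * height)
    for y in 0..<height {
        let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            let z = row[x]                     // depth in meters
            guard z.isFinite, z > 0 else { continue }
            // Pinhole unprojection: ((x - cx)/fx * z, (y - cy)/fy * z, z).
            // Note: this is the plain pinhole convention (x right, y down, z forward);
            // convert to ARKit's camera/world coordinates as needed.
            points.append(simd_float3((Float(x) - cx) / fx * z,
                                      (Float(y) - cy) / fy * z,
                                      z))
        }
    }
    return points
}
```

Before comparing these points against ray_pos and ray_dir, make sure everything is expressed in the same coordinate frame, e.g. by transforming with frame.camera.transform into world space.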