Hello!
I have recently begun exploring the "Capturing depth using the LiDAR camera" documentation, which uses AVFoundation, and I intend to read the depth value at specific points selected by touch.
I have two main questions and would be grateful for any clarification:
- In what format can I access per-pixel depth information, and how can I make sure the value I read corresponds to the exact point I acquire from the touch gesture? (Essentially, how are pixels mapped to their depth values?)
- What is the unit of measure for the LiDAR depth data, and over what range is the data's accuracy guaranteed?
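For context, here is roughly what I am attempting. This is only a sketch based on my current understanding, assuming the depth map can be converted to `kCVPixelFormatType_DepthFloat32` (which I believe stores distances in meters) and that the touch point has already been converted to a normalized capture-device point, e.g. via `AVCaptureVideoPreviewLayer.captureDevicePointConverted(fromLayerPoint:)`. Please correct me if any of this is wrong:

```swift
import AVFoundation
import CoreGraphics

// Hypothetical helper: read the depth value at a normalized capture-device
// point (0...1 in both axes) from an AVDepthData instance.
func depthInMeters(at devicePoint: CGPoint, from depthData: AVDepthData) -> Float {
    // Convert so every pixel is a plain Float32; my assumption is that
    // this value is a distance in meters.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let depthMap = converted.depthDataMap

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)

    // Map the normalized point to a pixel coordinate in the (typically
    // lower-resolution) depth map, clamping to the valid range.
    let x = min(max(Int(devicePoint.x * CGFloat(width)), 0), width - 1)
    let y = min(max(Int(devicePoint.y * CGFloat(height)), 0), height - 1)

    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    let base = CVPixelBufferGetBaseAddress(depthMap)!
    let row = (base + y * rowBytes).assumingMemoryBound(to: Float32.self)
    return row[x]
}
```

In particular, I am unsure whether indexing the depth map this way is the correct way to tie a touched pixel to its depth value, or whether additional alignment between the video frame and the depth map is needed.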
It would be great if you could point me in the right direction. (I have already gone through the documentation thoroughly.)
Thanks in advance.