The current number of raw laser points is 576 (24x24; physically 4x16x3x3) on the iPad Pro LiDAR, iPhone 12 Pro, and iPhone 12 Pro Max.
Apple's ML algorithm interpolates the 576 depth points together with the RGB image to generate the 256x192 depth map, a compromise.
The numbers 576 and 256x192 may increase in the future.
For now, you can enjoy a 256x192 depth map at 60 Hz.
Super fast real-time detection of planes / spheres / cylinders / cones / tori in DepthMap
The rawFeaturePoints generated without LiDAR are relatively inaccurate, unreliable, and unstable.
Stable and accurate real-time detection of planes, spheres, cylinders, cones, and tori is possible by using LiDAR.
https://github.com/CurvSurf/FindSurface-GUIDemo-iOS
FindSurface-GUIDemo-iOS (Swift)
Hi,
It may be worth testing "real-time recognition and measurement of planes / spheres / cylinders / cones / tori in a point cloud generated with LiDAR".
The ToF (time-of-flight) 3D camera of the iPad Pro 2020/2021, iPhone 12/13 Pro, and iPhone 12/13 Pro Max physically has 64 VCSELs (vertical-cavity surface-emitting lasers), arranged as 16 stacks of rods of 4 cells. The 64 laser pulses are multiplied by 3x3 to 576 by a DOE (diffractive optical element). The 576 laser pulses rebounding from object surfaces are detected, and their individual times of flight measured, by a SPAD (single-photon avalanche diode) image sensor. The 576 depth points are interpolated with the RGB images into the 256x192 depthMap at 60 Hz. Apple has released API access to the 256x192 depthMap, but not to the 576 raw depth points.
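The interpolation step can be illustrated with a naive bilinear upsampling sketch. Apple's actual pipeline uses a proprietary RGB-guided ML model, so this is only a toy illustration of the resolution gap, assuming the 576 samples form a regular 24x24 grid:

```swift
import Foundation

/// Naive bilinear upsampling of a coarse depth grid (e.g. 24x24 = 576 samples)
/// to a finer resolution (e.g. 256x192). This is NOT Apple's algorithm --
/// Apple fuses the sparse depth with the RGB image in an ML model -- it only
/// illustrates filling a 256x192 grid from 576 sparse samples.
func upsampleBilinear(_ src: [[Float]], toWidth w: Int, toHeight h: Int) -> [[Float]] {
    let sh = src.count, sw = src[0].count
    var dst = [[Float]](repeating: [Float](repeating: 0, count: w), count: h)
    for y in 0..<h {
        for x in 0..<w {
            // Map the destination pixel back into source grid coordinates.
            let fx = Float(x) * Float(sw - 1) / Float(max(w - 1, 1))
            let fy = Float(y) * Float(sh - 1) / Float(max(h - 1, 1))
            let x0 = Int(fx), y0 = Int(fy)
            let x1 = min(x0 + 1, sw - 1), y1 = min(y0 + 1, sh - 1)
            let tx = fx - Float(x0), ty = fy - Float(y0)
            // Blend the four neighboring coarse samples.
            let top = src[y0][x0] * (1 - tx) + src[y0][x1] * tx
            let bot = src[y1][x0] * (1 - tx) + src[y1][x1] * tx
            dst[y][x] = top * (1 - ty) + bot * ty
        }
    }
    return dst
}
```

In the real app the coarse grid would come from the sensor; here a 24x24 `[[Float]]` stands in for it, upsampled with `upsampleBilinear(grid, toWidth: 256, toHeight: 192)`.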
The slope of an object surface can be calculated as below:
- Fit a plane to the LiDAR measurement points of the surface.
- The slope is the ratio of the horizontal to the vertical component of the plane normal.
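Assuming the y-axis is the vertical (gravitational) direction, as in ARKit world space, the slope computation can be sketched as:

```swift
import Foundation

/// Slope of a fitted plane from its normal (nx, ny, nz), assuming the y-axis
/// is the vertical direction. Slope = horizontal component of the normal
/// divided by its vertical component, i.e. the tangent of the tilt angle
/// between the plane and the horizontal. Being a ratio, it works for
/// unnormalized normals too; a vertical wall (ny == 0) yields infinity.
func slope(normalX nx: Float, normalY ny: Float, normalZ nz: Float) -> Float {
    let horizontal = (nx * nx + nz * nz).squareRoot()
    let vertical = abs(ny)
    return horizontal / vertical
}
```

Multiplying by 100 gives the grade in percent; e.g. a plane tilted 45 degrees has a normal with equal horizontal and vertical components, i.e. slope 1.0 (a 100 % grade).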
The slope sign as shown in the attached picture is rendered based on:
- A gazing point on the object surface
- The surface normal at the gazing point.
The source code for detecting and measuring the geometric primitives (planes, spheres, cylinders, cones, tori) from LiDAR points is available.
The slope sign as shown in the previous answer is rendered based on:
- A gazing point on the object surface
- The surface normal at the gazing point
- The vertical (gravitational) direction.
Once the shape, size, position, and orientation of an object surface are known, a variety of applications become possible.
YouTube CurvSurf,
GitHub CurvSurf,
FindSurface Web Demo.
2022-01-07
Added the third-person view and the accuracy controls.
Instead of fixed values for measurement accuracy and mean point distance, the following formulas are used:
- Measurement accuracy = base + increment * (object distance)
- Mean point distance = increment * (object distance).
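The two distance-adaptive formulas can be sketched as below; the `base` and `increment` values used here are placeholders, not the demo's actual constants:

```swift
import Foundation

/// Distance-adaptive accuracy parameters, replacing fixed values.
/// The numeric constants passed in are assumed for illustration only.
struct AccuracyModel {
    var base: Float        // noise floor in meters (assumed value)
    var increment: Float   // extra error per meter of distance (assumed value)

    /// Measurement accuracy = base + increment * (object distance)
    func measurementAccuracy(at distance: Float) -> Float {
        return base + increment * distance
    }

    /// Mean point distance = increment * (object distance)
    func meanPointDistance(at distance: Float) -> Float {
        return increment * distance
    }
}
```

For example, with an assumed `base` of 5 mm and `increment` of 2 mm/m, an object 2 m away would get a measurement accuracy of 9 mm instead of a fixed value.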
It's a really valuable contribution to the AR world.
It may be possible only if you know the shape, size, position, and orientation of your target object surface, e.g., floor, wall, ceiling, column, etc.
https://github.com/CurvSurf/FindSurface-SceneKit-ARDemo-iOS
The source code for AR overlaying/rendering of virtual images/videos around/on the real geometric primitives extracted from the LiDAR point cloud is available.
#spatialcomputing #realtime #automation #robotics #ar #computervision #surface #fitting #pointcloud #curvature #differentialgeometry #linearalgebra #leastsquares #odf #orthogonaldistancefitting
How to accurately estimate, in real time, the shape, size, position, and rotation of an object surface from a point cloud (measurement points) has been a Holy Grail of computer vision.
The run-time library is now available as middleware with a file size of about 300 KB.
Locally differentiable surfaces can be classified into one of the 4 surface types:
- planar
- parabolic
- elliptic
- hyperbolic.
Most man-made object surfaces are composed of planes, spheres, cylinders, cones, and tori:
- A plane is planar
- A sphere is elliptic
- A cylinder is parabolic
- A cone is parabolic
- A torus is locally elliptic, hyperbolic, or (seldom) parabolic.
Then, through local curvature analysis of the measured point cloud, we can assume the local shape of the measurement object:
- Planar --> plane
- Parabolic --> cylinder or cone
- Elliptic --> sphere or torus
- Hyperbolic --> torus.
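This curvature-based assumption can be sketched from the two principal curvatures k1 and k2; the zero-curvature tolerance `eps` is an assumed value:

```swift
import Foundation

/// Classify the local surface type from the principal curvatures k1, k2,
/// and return the candidate primitives that type suggests.
/// `eps` is an assumed tolerance for treating a curvature as zero.
func localShapeCandidates(k1: Float, k2: Float, eps: Float = 1e-4) -> (type: String, candidates: [String]) {
    let zero1 = abs(k1) < eps
    let zero2 = abs(k2) < eps
    if zero1 && zero2 { return ("planar", ["plane"]) }          // both curvatures ~ 0
    if zero1 != zero2 { return ("parabolic", ["cylinder", "cone"]) } // exactly one ~ 0
    if k1 * k2 > 0 { return ("elliptic", ["sphere", "torus"]) } // same sign
    return ("hyperbolic", ["torus"])                            // opposite signs
}
```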
By investigating the shape parameters of the cone (vertex angle) and torus (mean radius and tube radius) fitted to the measured point cloud, we can refine the object shape type among sphere, cylinder, cone, and torus.
After a successful cone fitting, we can discriminate between cylinder and cone by investigating the vertex angle of the fitted cone.
After a successful torus fitting, we can discriminate between sphere, cylinder, and torus by investigating the tube radius and mean radius of the fitted torus.
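A sketch of this refinement step; the tolerances and the degenerate-torus ratio are assumed values for illustration:

```swift
import Foundation

/// A fitted cone whose vertex angle is ~ 0 is effectively a cylinder.
/// `eps` (radians) is an assumed tolerance.
func refineCone(vertexAngleRadians: Float, eps: Float = 0.02) -> String {
    return abs(vertexAngleRadians) < eps ? "cylinder" : "cone"
}

/// A fitted torus degenerates to a sphere when its mean radius is ~ 0,
/// and to a cylinder when its mean radius is much larger than its tube
/// radius (the tube becomes locally straight). `eps` and `ratio` are
/// assumed thresholds.
func refineTorus(meanRadius: Float, tubeRadius: Float,
                 eps: Float = 1e-3, ratio: Float = 100) -> String {
    if meanRadius < eps { return "sphere" }
    if meanRadius > ratio * tubeRadius { return "cylinder" }
    return "torus"
}
```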
Hi James,
The source code of an ARKit demo App from us may be helpful.
Once you understand and can access the depthMap, you may like to pick a point around the screen center.
The red circle at the screen center defines the search area for the point you are looking for. The radius of the red circle is given in pixels (of the camera image) and converted into the vertex angle of a search cone.
func pickPoint(rayDirection ray_dir: simd_float3, rayPosition ray_pos: simd_float3, vertices list: UnsafePointer<simd_float4>, count: Int, _ unitRadius: Float) -> Int {
    let UR_SQ_PLUS_ONE = unitRadius * unitRadius + 1.0
    var minLen: Float = Float.greatestFiniteMagnitude
    var maxCos: Float = -Float.greatestFiniteMagnitude
    var pickIdx: Int = -1    // closest point inside the search cone
    var pickIdxExt: Int = -1 // best candidate outside the search cone
    for idx in 0..<count {
        let sub = simd_make_float3(list[idx]) - ray_pos
        let len1 = simd_dot(ray_dir, sub)
        if len1 < Float.ulpOfOne { continue } // Float.ulpOfOne == FLT_EPSILON; point is behind the camera
        // 1. Inside the probe radius (picking-cone radius):
        //    |sub|^2 < (1 + r^2) * len1^2  <=>  perpendicular distance < r * len1
        if simd_length_squared(sub) < UR_SQ_PLUS_ONE * (len1 * len1) {
            if len1 < minLen { // find the point closest to the camera (in ray-direction distance)
                minLen = len1
                pickIdx = idx
            }
        }
        // 2. Outside the probe radius
        else {
            let cosine = len1 / simd_length(sub)
            if cosine > maxCos { // find the point closest to the cone boundary
                maxCos = cosine
                pickIdxExt = idx
            }
        }
    }
    return pickIdx < 0 ? pickIdxExt : pickIdx
}
There are 3 cases:
- If there is at least one depthMap point inside the view cone, the point closest to the camera's COP (center of projection) will be chosen.
- Else, if there is at least one depthMap point outside the view cone, the point closest to the red circle on screen will be chosen.
- Otherwise, the depthMap contains no points (empty depthMap).
By adjusting the radius of the red circle, you can control the precision of picking a point.
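Assuming a pinhole camera model, the conversion from the red circle's pixel radius to the `unitRadius` parameter of `pickPoint` can be sketched as:

```swift
import Foundation

/// Converts the red circle's radius in camera-image pixels into the
/// `unitRadius` parameter of `pickPoint`, assuming a pinhole camera model.
/// `unitRadius` is the radius of the search cone at unit distance along the
/// ray, i.e. tan(half the vertex angle) = pixelRadius / focalLengthInPixels.
func unitRadius(pixelRadius: Float, focalLengthInPixels: Float) -> Float {
    return pixelRadius / focalLengthInPixels
}

/// The full vertex angle of the search cone, in radians.
func searchConeVertexAngle(pixelRadius: Float, focalLengthInPixels: Float) -> Float {
    return 2 * atan(unitRadius(pixelRadius: pixelRadius,
                               focalLengthInPixels: focalLengthInPixels))
}
```

In ARKit, the focal length in pixels can be read from the camera intrinsics of the current frame; e.g. a 30-pixel circle with a focal length of about 1500 pixels gives a unitRadius of 0.02.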
CurvSurf