Reply to ARKit Lidar more accurate than AVFoundation LiDAR
As far as I have observed, Apple has been improving the software performance of LiDAR and ARKit since their releases in 2020 and 2017, respectively. The overall performance of the combination of Apple's LiDAR and ARKit can be checked with CurvSurf's FindSurface runtime library:

- 3D measurement accuracy of LiDAR
- Robustness of the depth map against darkness
- Accuracy of ARKit's motion tracking
- Robustness of ARKit's motion tracking against darkness and device shaking

CurvSurf's FindSurface runtime library determines the shape, size, position, and orientation of an object surface by processing the 3D measurement points according to orthogonal distance fitting. The accuracy of the orthogonal distance fitting algorithms adopted by FindSurface was certified by the German PTB according to ISO 10360-6 (see my PhD thesis, ISBN 3540239669, 2004). If the overall performance of Apple's LiDAR and ARKit falls short in any of the aspects above, we will observe misalignment between the real object surfaces and the virtual ads. In the same way, we can check the overall '3D measurement and motion tracking' performance of Google ARCore and Microsoft HoloLens.

Virtual Ads inside Chungmuro station Line 3 - iPhone 12 Pro: YouTube Video: BmKNmZCiMkw

The source code of the app producing the above video is available on GitHub CurvSurf: FindSurface-SceneKit-ARDemo-iOS
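To illustrate the orthogonal-distance idea behind the fit quality mentioned above, here is a toy sketch in plain Swift. It only shows the residual computation for a sphere (the shortest, perpendicular distance from each point to the surface) and the RMS of those residuals; the actual FindSurface solver that minimizes this quantity is not shown, and all names here are illustrative.

```swift
// Toy sphere type for the sketch (not the FindSurface API).
struct Sphere {
    var center: (x: Double, y: Double, z: Double)
    var radius: Double
}

// Orthogonal (shortest, perpendicular) distance of a point to a sphere:
// |p - c| - r. Negative inside the sphere, positive outside.
func orthogonalDistance(_ p: (x: Double, y: Double, z: Double), to s: Sphere) -> Double {
    let dx = p.x - s.center.x, dy = p.y - s.center.y, dz = p.z - s.center.z
    return (dx*dx + dy*dy + dz*dz).squareRoot() - s.radius
}

// RMS of the orthogonal distances: the quantity an orthogonal distance
// fitting algorithm minimizes, and a natural measure of fit accuracy.
func rmsError(_ points: [(x: Double, y: Double, z: Double)], _ s: Sphere) -> Double {
    let sum = points.reduce(0.0) { acc, p in
        let d = orthogonalDistance(p, to: s)
        return acc + d * d
    }
    return (sum / Double(points.count)).squareRoot()
}
```

Points lying exactly on the fitted sphere give an RMS error of zero; noisy LiDAR points give a small positive value that directly reflects the 3D measurement accuracy.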
Feb ’23
Reply to how to align 3d model in real world
Another example of aligning 3D models to real object surfaces. YouTube: BmKNmZCiMkw

Once we know the shape, size, position, and orientation of an object surface, we can render virtual object models/videos on/around it.

Tools used:

- iPhone 12 Pro
- Apple ARKit
- Apple SceneKit
- Apple Metal API
- CurvSurf FindSurface runtime library
Feb ’23
Reply to how to align 3d model in real world
It’s a difficult problem of robot vision, computer vision, machine vision, computer graphics, … You have made a model. It has a shape, size, position, and orientation. You would like to attach it to a real object surface. The real object surface also has a shape, size, position, and orientation. How to align your model to the real object surface in front of you is a problem that has not been solved cleverly in the 60 years since the invention of the robot. Last night, I captured a video. Virtual cylindrical models are attached to real object surfaces. YouTube: X2CPx9zuxZ0

The source code of the app is available on GitHub: CurvSurf FindSurface-SceneKit-ARDemo-iOS (Swift)
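Once a detector has reported the surface's position and axis direction, posing the model reduces to a rotation plus a translation. The following is a minimal sketch in plain Swift, with illustrative names (not the FindSurface or SceneKit API): Rodrigues' formula builds the rotation that carries the model's +Y axis onto the detected axis.

```swift
// Plain 3-vector for the sketch.
typealias Vec3 = (x: Double, y: Double, z: Double)

func cross(_ a: Vec3, _ b: Vec3) -> Vec3 {
    (x: a.y*b.z - a.z*b.y, y: a.z*b.x - a.x*b.z, z: a.x*b.y - a.y*b.x)
}
func dot(_ a: Vec3, _ b: Vec3) -> Double { a.x*b.x + a.y*b.y + a.z*b.z }

// 3x3 rotation matrix (row-major) that takes the model's +Y axis onto the
// detected unit axis, via Rodrigues' formula. Combined with a translation
// to the detected surface position, this poses the virtual cylinder on the
// real one. Undefined when axis == -Y (c == -1); handle that case separately.
func rotationFromYAxis(to axis: Vec3) -> [[Double]] {
    let y: Vec3 = (x: 0, y: 1, z: 0)
    let v = cross(y, axis)        // rotation axis (unnormalized)
    let c = dot(y, axis)          // cosine of the rotation angle
    let k = 1.0 / (1.0 + c)
    return [
        [c + v.x*v.x*k,   v.x*v.y*k - v.z, v.x*v.z*k + v.y],
        [v.y*v.x*k + v.z, c + v.y*v.y*k,   v.y*v.z*k - v.x],
        [v.z*v.x*k - v.y, v.z*v.y*k + v.x, c + v.z*v.z*k]
    ]
}
```

In an actual SceneKit app, the resulting rotation and translation would be packed into the node's transform before adding it to the scene.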
Jan ’23
Reply to Arkit 4.0 and Body measurement
Apple's Measure app must be based on:

- The device's 6DoF pose
- The floor plane
- The outline contour of the person in the image
- The LiDAR data of the person

Then, according to the principle of triangulation, we could determine the height of a person from the floor to the top of the head. Apple should open the source code of Measure.

Joon
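The triangulation step described above can be sketched as follows. This is a hedged illustration of the principle, not Apple's actual Measure implementation (which is closed source); it assumes the floor is the plane y = 0, the 6DoF pose gives the camera's height above that plane, and LiDAR gives the range along the viewing ray to the head-top found on the person's outline contour.

```swift
// Person height = camera height above the floor plus the vertical
// component of the ray from the camera to the head-top, scaled by
// the LiDAR range. All parameter names are illustrative.
func personHeight(cameraHeight: Double,                           // from the 6DoF pose vs. the floor plane
                  rayToHeadTop: (x: Double, y: Double, z: Double), // unit ray from the outline contour
                  lidarRange: Double) -> Double {                  // LiDAR range to the head-top
    return cameraHeight + lidarRange * rayToHeadTop.y
}
```

For example, a camera held 1.5 m above the floor, looking at a head-top 1.0 m away along a ray whose vertical component is 0.2, gives a height of 1.7 m.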
Jan ’23
Reply to Accessing and displaying LiDAR depth data
Hi James,

We are afraid we couldn't reproduce what you have experienced. It would help us help you if you provided more details, such as error messages. Do you see any error messages?

Meanwhile, here is a checklist that might help you deal with your issue:

1. Download the released library framework (https://github.com/CurvSurf/FindSurface-iOS/releases).
2. Unzip the downloaded file. In the unzipped content, what you need is a directory named "FindSurfaceFramework.framework" containing files and folders such as "Modules" (folder), "Info.plist", "Headers" (folder), and "FindSurfaceFramework" (no extension). The directory itself acts as the framework package file.
3. Import the framework into your project by following the instructions (https://github.com/CurvSurf/FindSurface-iOS/blob/master/How-to-import-FindSurface-Framework-to-your-project.md):
   3.1 In Step 2, after clicking "Add Files…": select the directory "FindSurfaceFramework.framework" and click "Open". Remember, the directory itself acts as the framework file.
   3.2 In Step 3: add the path where the framework directory exists. It may look like some paths are already in the list, but you must add the path manually if you haven't. For example, if the path is "/Users/username/Downloads/FindSurfaceFramework.framework", then you have to set it to "/Users/username/Downloads/".
4. Just in case, check the following documents if they are relevant to your issue:
   4.1 https://developer.apple.com/documentation/xcode/enabling-developer-mode-on-a-device
   4.2 https://support.apple.com/en-us/HT204460
Dec ’22
Reply to Accessing and displaying LiDAR depth data
Hi James,

The source code of an ARKit demo app from us may be helpful. Once you understand it and can access the depthMap, you may want to pick a point around the screen center. The red circle at the screen center defines the search area for the point you are looking for. The radius of the red circle is given in pixels (of the camera image) and is converted to the vertex angle of a search cone.

func pickPoint(rayDirection ray_dir: simd_float3, rayPosition ray_pos: simd_float3,
               vertices list: UnsafePointer<simd_float4>, count: Int,
               _ unitRadius: Float) -> Int {
    let UR_SQ_PLUS_ONE = unitRadius * unitRadius + 1.0
    var minLen: Float = Float.greatestFiniteMagnitude
    var maxCos: Float = -Float.greatestFiniteMagnitude
    var pickIdx: Int = -1
    var pickIdxExt: Int = -1
    for idx in 0..<count {
        let sub = simd_make_float3(list[idx]) - ray_pos
        let len1 = simd_dot(ray_dir, sub)
        if len1 < Float.ulpOfOne { continue } // Float.ulpOfOne == FLT_EPSILON
        // 1. Inside the probe radius (picking cylinder radius)
        if simd_length_squared(sub) < UR_SQ_PLUS_ONE * (len1 * len1) {
            if len1 < minLen { // find the point closest to the camera (in z-direction distance)
                minLen = len1
                pickIdx = idx
            }
        }
        // 2. Outside the probe radius
        else {
            let cosine = len1 / simd_length(sub)
            if cosine > maxCos { // find the point closest to the probe radius
                maxCos = cosine
                pickIdxExt = idx
            }
        }
    }
    return pickIdx < 0 ? pickIdxExt : pickIdx
}

There are three cases:

- If there is at least one depthMap point inside the view cone, the point closest to the camera's center of projection will be chosen.
- Else, if there is at least one depthMap point outside the view cone, the point closest to the red circle on screen will be chosen.
- Otherwise, there is no point from the depthMap (the depthMap is empty).

By adjusting the radius of the red circle, you can control the precision of picking a point.

CurvSurf
Dec ’22
Reply to Lidar
After a successful cone fitting, we can distinguish between a cylinder and a cone by inspecting the vertex angle of the fitted cone. After a successful torus fitting, we can distinguish between a sphere, a cylinder, and a torus by inspecting the tube radius and the mean radius of the fitted torus.
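The refinement rules above can be sketched in a few lines of plain Swift. This is only an illustration of the idea, not the FindSurface API, and the degeneracy tolerances are made up for the example: a cone whose vertex angle is (near) zero is really a cylinder, a torus whose mean radius vanishes is really a sphere, and a torus whose mean radius is effectively infinite is really a cylinder.

```swift
// Candidate primitive types (illustrative, not the FindSurface API).
enum Shape { case sphere, cylinder, cone, torus }

// A fitted cone with a (near-)zero vertex angle degenerates to a cylinder.
func refineCone(vertexAngle: Double, tolerance: Double = 1e-3) -> Shape {
    vertexAngle < tolerance ? .cylinder : .cone
}

// A fitted torus degenerates to a sphere (of radius tubeRadius) when its
// mean radius vanishes, and to a cylinder when its mean radius is
// effectively infinite relative to the tolerance.
func refineTorus(meanRadius: Double, tubeRadius: Double,
                 tolerance: Double = 1e-3) -> Shape {
    if meanRadius < tolerance { return .sphere }
    if meanRadius > 1.0 / tolerance { return .cylinder }
    return .torus
}
```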
Oct ’22
Reply to Lidar
#spatialcomputing #realtime #automation #robotics #ar #computervision #surface #fitting #pointcloud #curvature #differentialgeometry #linearalgebra #leastsquares #odf #orthogonaldistancefitting

How to accurately estimate, in real time, the shape, size, position, and rotation of an object surface from a point cloud (measurement points) has been the Holy Grail of computer vision. The runtime library is now available as middleware with a file size of about 300 KB.

Locally differentiable surfaces can be classified into one of four surface types:

- planar
- parabolic
- elliptic
- hyperbolic

Most man-made object surfaces are composed of planes, spheres, cylinders, cones, and tori:

- Plane is planar
- Sphere is elliptic
- Cylinder is parabolic
- Cone is parabolic
- Torus is locally elliptic, hyperbolic, or (seldom) parabolic

Then, through local curvature analysis of the measured point cloud, we can assume the local shape of the measured object:

- Planar --> plane
- Parabolic --> cylinder or cone
- Elliptic --> sphere or torus
- Hyperbolic --> torus

By inspecting the shape parameters of the cone (vertex angle) and torus (mean and tube radii) fitted to the measured point cloud, we can refine the object's shape type among sphere, cylinder, cone, and torus.
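The decision table above can be written down as a small classifier in plain Swift. This is a sketch of the logic only; the real pipeline estimates the two principal curvatures from the point cloud first, which is not shown here, and the epsilon threshold is illustrative.

```swift
// The four local surface types from the table above.
enum CurvatureClass { case planar, parabolic, elliptic, hyperbolic }

// Classify the local surface from its two principal curvatures k1, k2:
// both (near) zero -> planar; exactly one (near) zero -> parabolic;
// same sign -> elliptic; opposite signs -> hyperbolic.
func curvatureClass(k1: Double, k2: Double, eps: Double = 1e-6) -> CurvatureClass {
    let flat1 = abs(k1) < eps, flat2 = abs(k2) < eps
    if flat1 && flat2 { return .planar }
    if flat1 || flat2 { return .parabolic }
    return k1 * k2 > 0 ? .elliptic : .hyperbolic
}

// Candidate primitive types for each class, per the table above.
func candidates(for c: CurvatureClass) -> [String] {
    switch c {
    case .planar:     return ["plane"]
    case .parabolic:  return ["cylinder", "cone"]
    case .elliptic:   return ["sphere", "torus"]
    case .hyperbolic: return ["torus"]
    }
}
```

The candidate list is then narrowed to a single type by the shape-parameter checks on the fitted cone and torus.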
Oct ’22
Reply to Lidar
The source code for AR overlaying/rendering of virtual images/videos on or around the real geometric object primitives extracted from the LiDAR point cloud is available.
Sep ’22