How about trying:
Web Demo:
https://developers.curvsurf.com/WebDemo/
iOS app (iPhone Pro, iPad Pro):
https://github.com/CurvSurf/FindSurface-iOS
visionOS app (Vision Pro):
https://github.com/CurvSurf/FindSurface-visionOS
The source code of our visionOS apps, which process the vertex points of MeshAnchor, is available on GitHub CurvSurf.
https://github.com/CurvSurf/FindSurface-visionOS
The numbers 576, 256x192, 60 Hz, ..., and the steps of data processing:
WWDC 2020 Video, Scene Geometry (15m42s):
https://developer.apple.com/kr/videos/play/wwdc2020/10611/
What kind of LiDAR cameras does Apple use:
https://developer.apple.com/forums/thread/692724?answerId=692054022#692054022
The steps of data processing:
The 576 laser distance points are the originals.
The 576 points are interpolated with the RGB image into the depthMap.
The MeshAnchor is generated from the depthMap.
We expect a measurement accuracy of 2-3 mm from the 576 points, but 10-15 mm from the vertex points of MeshAnchor.
We prefer the 576 points to the vertex points of MeshAnchor. Both are sparse.
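There is no public API for the 576 raw points; on iOS, the nearest accessible data is the depthMap via ARKit's sceneDepth. A minimal sketch of reading it per frame (illustrative only, not from our apps):

```swift
import ARKit

/// A minimal sketch (illustrative only): reading the 256x192 depthMap per frame
/// with ARKit's sceneDepth on a LiDAR device. The 576 raw points themselves
/// have no public API.
final class DepthReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics.insert(.sceneDepth)   // interpolated depth image, 60 Hz
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

        let width = CVPixelBufferGetWidth(depthMap)      // 256
        let height = CVPixelBufferGetHeight(depthMap)    // 192
        let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
        guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return }

        // Depth in meters at the image center (pixels are Float32).
        let row = base.advanced(by: (height / 2) * rowBytes).assumingMemoryBound(to: Float32.self)
        print("center depth: \(row[width / 2]) m from a \(width)x\(height) depthMap")
    }
}
```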
We are interested in real-time detection and measurement of object geometries in 3D points.
We hope that interface issues will be solved at the iOS or visionOS level.
User intent changes quickly and is sometimes misinterpreted, e.g., by mouse, eye, device, or hand tracking. Recently, we added an ad-hoc solution with a confirmation dialog to our visionOS app.
https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Real-Time
However, the better solution would be to consider the speed and acceleration of the mouse, eye, device or hand tracking.
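As a rough illustration of that idea (an assumption, not what the app currently does), one could gate selections on the angular speed of the user-intention ray; the threshold value here is arbitrary:

```swift
import Foundation
import simd
import QuartzCore

/// Hypothetical intent gate (an assumption, not the shipped app's logic):
/// accept the user-intention ray only while it is nearly stationary,
/// estimated from its angular speed between successive updates.
struct RayIntentGate {
    private var lastDirection: SIMD3<Float>?
    private var lastTime: CFTimeInterval?
    var maxAngularSpeed: Float = 0.3   // rad/s; threshold chosen arbitrarily

    mutating func accepts(direction: SIMD3<Float>,
                          at time: CFTimeInterval = CACurrentMediaTime()) -> Bool {
        let dir = simd_normalize(direction)
        defer { lastDirection = dir; lastTime = time }
        guard let prev = lastDirection, let prevTime = lastTime, time > prevTime else {
            return false
        }
        let cosAngle = simd_clamp(simd_dot(prev, dir), -1, 1)
        let angularSpeed = Float(acos(Double(cosAngle))) / Float(time - prevTime)
        return angularSpeed < maxAngularSpeed
    }
}
```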
Feedback FB15735753 is filed.
Any data processing (through HW or SW) irreversibly loses some of the original information.
The data processing steps:
Acquisition of ‘sparse’ 576 raw LiDAR distance points even in dark lighting (No API. R1 chip inside?)
Interpolation of the 576 distance points with the RGB image, producing a ‘dense’ 256x192 depthMap image at 60 Hz (API in iOS)
Generating and updating a ‘sparse’ MeshAnchor at about 2 Hz from the depthMap (API in iOS and visionOS).
Review of the data processing:
The 576 raw LiDAR distance points are the originals.
Object edges and textures cause artefacts in the depthMap image.
Low lighting conditions cause loss of the original information.
Data density: sparse -> dense -> sparse.
In summary, 576 raw LiDAR distance points are preferable to MeshAnchor.
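Even so, MeshAnchor vertices are what the public visionOS API exposes. A minimal sketch (not the repository code) of collecting them as a world-space point set, assuming world-sensing authorization:

```swift
import ARKit

/// A minimal sketch (not the repository code) of collecting MeshAnchor vertex
/// points as a world-space point set on visionOS. Requires world-sensing
/// authorization and a running immersive space.
func collectMeshVertices() async throws {
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()
    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        guard update.event == .added || update.event == .updated else { continue }
        let anchor = update.anchor
        let vertices = anchor.geometry.vertices           // packed float3 positions
        let transform = anchor.originFromAnchorTransform  // anchor -> world

        var worldPoints: [SIMD3<Float>] = []
        worldPoints.reserveCapacity(vertices.count)
        let base = vertices.buffer.contents().advanced(by: vertices.offset)
        for i in 0..<vertices.count {
            let v = base.advanced(by: i * vertices.stride)
                .assumingMemoryBound(to: (Float, Float, Float).self).pointee
            let w = transform * SIMD4<Float>(v.0, v.1, v.2, 1)
            worldPoints.append(SIMD3<Float>(w.x, w.y, w.z))
        }
        // worldPoints is the sparse point set that surface detection works from.
        print("\(anchor.id): \(worldPoints.count) vertex points")
    }
}
```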
We are currently developing a set of visionOS apps that detect and measure object surface geometries from MeshAnchor.
https://github.com/CurvSurf/FindSurface-visionOS
FindSurfaceST (Spatial Tap): object surface detection by spatial tap
FindSurfaceRR (Response-to-Request): autonomous object surface detection
FindSurfaceRT (Real-Time): real-time object surface detection
FindSurfaceAD (Ads): rendering photos/videos on detected object surfaces. The corresponding iOS app is here: https://github.com/CurvSurf/FindSurface-SceneKit-ARDemo-iOS
The source code of the FindSurfaceAD app is planned to be released in December 2024. Photos/videos are planned to be selectable from the Photos app.
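For illustration only (the FindSurfaceAD source is not yet released), selecting a photo could use the standard SwiftUI PhotosPicker; the type and property names below are hypothetical:

```swift
import SwiftUI
import PhotosUI

/// Illustrative only (not the FindSurfaceAD source): picking a photo with the
/// standard PhotosPicker; the selected data would then become a texture for the
/// detected object surface. Type and property names here are hypothetical.
struct SurfaceImagePicker: View {
    @State private var item: PhotosPickerItem?
    @State private var imageData: Data?

    var body: some View {
        PhotosPicker("Choose photo", selection: $item, matching: .images)
            .onChange(of: item) { _, newItem in
                Task {
                    // Load the selection as raw data for texturing.
                    imageData = try? await newItem?.loadTransferable(type: Data.self)
                }
            }
    }
}
```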
Please keep GitHub CurvSurf running.
The source code below includes 'Eye tracking (spatial tap)', 'Device tracking', and 'Hand tracking'.
https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Response-to-Request
It's basically a ray-casting problem.
The user-intention ray can come from:
Eye tracking
Device tracking
Hand tracking.
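For example, with device tracking the ray can be taken from DeviceAnchor: its origin is the device position and its direction the device's forward (-Z) axis. A minimal sketch (illustrative, not the repository code):

```swift
import ARKit
import QuartzCore
import simd

/// A minimal sketch (not the repository code): the device-tracking variant of the
/// user-intention ray, taken from DeviceAnchor. Origin is the device position;
/// direction is the device's forward (-Z) axis.
func deviceRay(worldTracking: WorldTrackingProvider) -> (origin: SIMD3<Float>, direction: SIMD3<Float>)? {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    let m = device.originFromAnchorTransform
    let origin = SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)
    // Column 2 is the device's +Z axis; the looking direction is -Z.
    let forward = -SIMD3<Float>(m.columns.2.x, m.columns.2.y, m.columns.2.z)
    return (origin, simd_normalize(forward))
}
```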
The source code of an example app using device tracking:
https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Real-Time
The source code of the real-time app is available:
https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Real-Time
This is a minimized and optimized version of https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Response-to-Request
The speed of object surface extraction and measurement:
Plane: 600 Hz
Sphere/cylinder: 300 Hz
Cone/torus: 100 Hz.
The FindSurfaceFramework for iOS basically requires a point set generated by a scanner (e.g. LiDAR) or even manually.
How to collect rawFeaturePoints from ARKit:
https://github.com/CurvSurf/ARKitPointCloudRecorder
https://developer.apple.com/documentation/arkit/arframe/2887449-rawfeaturepoints
Once a point set is prepared, it can be fed to FindSurfaceFramework.
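A minimal sketch of such accumulation (illustrative only; the actual FindSurfaceFramework call is omitted, see the repositories above):

```swift
import ARKit

/// A minimal sketch (illustrative only): accumulating rawFeaturePoints per frame
/// into a single point set. Feeding it to FindSurfaceFramework is omitted here.
final class FeaturePointCollector: NSObject, ARSessionDelegate {
    private(set) var points: [SIMD3<Float>] = []

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let cloud = frame.rawFeaturePoints else { return }
        // rawFeaturePoints are already in world coordinates (meters).
        points.append(contentsOf: cloud.points)
    }
}
```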
The following videos demonstrate object occlusion by detecting and measuring object geometry from the provided point set:
3-D Augmented Reality - Apple ARKit (2018), https://youtu.be/FzdrxtPQzfA
Lee JaeHyo Gallery Ball Park - ARKit (2019),
https://youtu.be/QhBtGHmfBOg
Apple ARKit: Occlusion Tree Trunk (2019),
https://youtu.be/rGW-FtA6P1Q
Apple ARKit: Augmented Reality Based on Curved Object Surfaces (2019),
https://youtu.be/4U4FlavRKa4
Tools used:
Apple iPhone XS Max
Apple ARKit 2.0
Apple Metal API
CurvSurf FindSurface.
The source code is available:
https://github.com/CurvSurf/FindSurface-iOS
The Iterative Closest Point (ICP) point cloud registration algorithm could be the solution. ICP determines the relative 6DoF poses of multiple users by comparing the vertices of their MeshAnchors.
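A heavily simplified sketch of the idea, estimating only the translation between two vertex sets (a full ICP would also estimate rotation, e.g. from an SVD of the matched pairs' cross-covariance, and would use a k-d tree for the matching):

```swift
import simd

/// A heavily simplified ICP sketch: nearest-neighbor correspondences plus a
/// translation-only update. A full ICP also estimates rotation and uses a
/// spatial index instead of brute-force matching.
func icpTranslation(source: [SIMD3<Float>], target: [SIMD3<Float>],
                    iterations: Int = 20) -> SIMD3<Float> {
    var offset = SIMD3<Float>(repeating: 0)
    guard !source.isEmpty, !target.isEmpty else { return offset }

    for _ in 0..<iterations {
        var delta = SIMD3<Float>(repeating: 0)
        for p in source {
            let moved = p + offset
            // Brute-force nearest neighbor in the target vertex set.
            let nearest = target.min {
                simd_length_squared($0 - moved) < simd_length_squared($1 - moved)
            }!
            delta += nearest - moved
        }
        offset += delta / Float(source.count)
    }
    return offset   // estimated translation from source to target
}
```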
Given: the width/height ratio of the portal; the PlaneAnchors of the wall, floor, and ceiling.
Goal: attaching the portal to the wall.
Methods:
Taking the ray (6DoF pose) from DeviceAnchor, HandAnchor, or eye tracking via Spatial Tap.
Ray-casting the ray onto the PlaneAnchor of the wall.
The initial position of the portal is the ray-casting point.
The normal vector of the portal is the normal vector of the wall.
Moving the center position of the portal to the midpoint on the wall between the floor and ceiling.
Adjusting the height of the portal to the vertical distance between the floor and ceiling.
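A geometric sketch of these steps, with the plane anchors reduced to point/normal pairs and the floor/ceiling assumed horizontal (names are illustrative):

```swift
import simd

/// A geometric sketch of the steps above, with the plane anchors reduced to
/// point/normal pairs and the floor/ceiling assumed horizontal. Names are illustrative.
struct PortalPlacement {
    let center: SIMD3<Float>
    let normal: SIMD3<Float>
    let height: Float
}

func placePortal(rayOrigin: SIMD3<Float>, rayDirection: SIMD3<Float>,
                 wallPoint: SIMD3<Float>, wallNormal: SIMD3<Float>,
                 floorY: Float, ceilingY: Float) -> PortalPlacement? {
    // Ray-cast the user-intention ray onto the wall plane.
    let denom = simd_dot(rayDirection, wallNormal)
    guard abs(denom) > 1e-6 else { return nil }            // ray parallel to the wall
    let t = simd_dot(wallPoint - rayOrigin, wallNormal) / denom
    guard t > 0 else { return nil }
    var center = rayOrigin + t * rayDirection              // initial portal position

    // Move the portal center to mid-height between floor and ceiling,
    // and size its height to the floor-to-ceiling distance.
    center.y = (floorY + ceilingY) / 2
    return PortalPlacement(center: center,
                           normal: wallNormal,             // portal normal = wall normal
                           height: ceilingY - floorY)
}
```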
Hope this is helpful.
There was a delay in rendering the mesh triangle in the line of sight (ray-casting), so we tried the "LowLevelMesh" API of visionOS 2.0, which solved the problem. We plan to apply "LowLevelMesh" to all mesh rendering, such as the view triangle, MeshAnchors, and object surface meshes.
Current speed of object surface extraction:
Plane: 400 Hz
Sphere/cylinder/cone/torus: 200 Hz.
The source code of the app is available:
https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Response-to-Request