Safety and privacy must be the top priority with AVP, as there are many unknown vulnerabilities.
The 10' x 10' safe area zone is the area within which AVP's motion tracking and environmental mapping ensure a minimum level of safety.
Check out these AVP Swift apps (revealing the opportunities and dangers beyond the 10' x 10' safe area zone, especially in darkness):
FindSurface 1d - Apple Vision Pro
https://youtu.be/_JxFbf6lXZw
FindSurface RR Demo - Apple Vision Pro
https://youtu.be/Mep1w5nvu0Y
The source code of the AVP app is available on GitHub:
https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Response-to-Request
The only way would be to use the DepthMap, which is not available on visionOS:
HandAnchor provides the 6DoF pose of the wrist.
The distance between the wrist origin and the DepthMap points along the +Y axis of the wrist is the offset that places the bottom of the watch above the wrist.
6DoF pose of the bottom of the watch = 6DoF pose of the wrist + Y axis offset.
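A minimal Swift sketch of that composition (HandAnchor and originFromAnchorTransform are real visionOS ARKit API; the depthOffsetY value is a hypothetical input that would have to come from the unavailable DepthMap):

```swift
import ARKit
import simd

// A sketch only: `depthOffsetY` (meters) is assumed to be measured from
// DepthMap points along the wrist's +Y axis, data visionOS does not expose.
func watchBottomPose(of handAnchor: HandAnchor, depthOffsetY: Float) -> simd_float4x4 {
    // 6DoF pose of the wrist in the world (origin) coordinate system.
    let wristPose = handAnchor.originFromAnchorTransform

    // Pure translation along the wrist's local +Y axis.
    var yOffset = matrix_identity_float4x4
    yOffset.columns.3.y = depthOffsetY

    // 6DoF pose of the bottom of the watch = wrist pose composed with the offset.
    return wristPose * yOffset
}
```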
CurvSurf also wants to gain access to the 576 laser distance points. CurvSurf's FindSurface framework could determine the shape (plane, sphere, cylinder, cone, or ring), size, and 6DoF pose of the object in front of an iPhone/iPad/Vision Pro in real time at up to 300 Hz from the 576 points, even in total darkness.
It's up to Apple.
We developers use the functionality that Apple visionOS ARKit offers.
YouTube:
https://youtu.be/Mep1w5nvu0Y
GitHub:
https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Response-to-Request
Apple Vision Pro
Object planes, spheres, cylinders, cones, and tori can now be effortlessly detected and measured in real time at up to 120 fps (finds per second).
Check out the source code of the AVP app on GitHub CurvSurf for more details:
GitHub Link
Once you know the shapes, sizes, and 6DoF poses of the physical object surfaces (e.g. floors, walls, pillars, etc.) around you, a set of permanent, stable ObjectAnchors can be established by exchanging and comparing them.
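As a hedged illustration of that comparison (the types and tolerances below are illustrative assumptions, not FindSurface API): two sessions that each detect a plane can decide whether it is the same physical surface by comparing normals and offsets.

```swift
import simd

// Illustrative type: a detected plane n·x = d.
struct DetectedPlane {
    var normal: simd_float3   // unit normal n
    var distance: Float       // offset d
}

// Treat two detections as the same physical surface when their normals and
// offsets agree within tolerance (assumed values: ~1 degree, 1 cm).
func isSameSurface(_ a: DetectedPlane, _ b: DetectedPlane,
                   cosTolerance: Float = 0.9998,
                   distTolerance: Float = 0.01) -> Bool {
    let cosAngle = simd_dot(simd_normalize(a.normal), simd_normalize(b.normal))
    return cosAngle > cosTolerance && abs(a.distance - b.distance) < distTolerance
}
```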
AVP is a spatial computer that seamlessly blends digital content with the physical space around you. For spatial computing, the physical space must be defined accurately and quickly. FindSurface determines the shapes, sizes, and 6DoF poses of your physical space from 3D measurement points, including the MeshAnchors of AVP. Once the shapes, sizes, and 6DoF poses of your physical space are determined, you can do anything, e.g., view photos and videos wrapped around your physical surfaces.
FindSurface Real-Time 1 - Apple Vision Pro
https://youtu.be/2aSMBrPTEtg
FindSurface Real-Time Preview - Apple Vision Pro
https://youtu.be/CGjhfKxjpUU
There is no way for us developers to access the building of MeshAnchors. Apple's MeshAnchor generation is a complicated system function; it is not perfect and generates unwanted, unexpected MeshAnchors. We have to accept the current behavior and wait until it is fixed. Apple engineers are aware of the issue.
The source code of the AVP app will be available on GitHub CurvSurf in the week of Sept. 2, 2024.
FindSurface Real-Time 1 - Apple Vision Pro
https://youtu.be/2aSMBrPTEtg
Apple Vision Pro (visionOS ARKit) does not give you access to the LiDAR sensor data, only MeshAnchors.
The data processing chain of the LiDAR sensor is:
576 laser points (Apple internal data, undisclosed)
256x192 depthMap (iPadOS/iOS ARKit), generated from the 576 laser points and the RGB image
MeshAnchor (iPadOS/iOS/visionOS ARKit), generated from the 256x192 depthMap
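For reference, a minimal visionOS sketch of the last stage, receiving MeshAnchors through ARKitSession (real API; authorization and error handling omitted):

```swift
import ARKit

// visionOS exposes only MeshAnchors, the end of the LiDAR processing chain.
let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

func runMeshUpdates() async throws {
    try await session.run([sceneReconstruction])
    for await update in sceneReconstruction.anchorUpdates {
        let mesh: MeshAnchor = update.anchor
        // Mesh vertices are the only LiDAR-derived data available to developers.
        print(mesh.id, mesh.geometry.vertices.count)
    }
}
```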
Real-time preview of object geometry detection and measurement from MeshAnchors. Object planes, spheres, cylinders, cones, and tori can be detected and measured in real time at up to 90 Hz.
The source code of the app will be available on GitHub CurvSurf.
FindSurface Real-Time Preview - Apple Vision Pro
https://youtu.be/CGjhfKxjpUU
The FindSurface demo app for Apple Vision Pro (visionOS) is now stabilized:
https://github.com/CurvSurf/FindSurface-RealityKit-visionOS
Cone2Cylinder, Torus2Cylinder, and Torus2Sphere conversions are implemented. For example, if you try to get a cone from a real cylindrical object (with a diameter larger than 1 meter), FindSurface will convert the resulting cone to a cylinder.
Cones and tori are trimmed to the boundaries of the inlier points.
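A hypothetical sketch of the Cone2Cylinder idea (illustrative types, not the actual FindSurface API): if a fitted truncated cone comes out nearly parallel-sided, report it as a cylinder instead.

```swift
// Illustrative result types for the sketch.
struct Cone { var topRadius: Float; var bottomRadius: Float; var height: Float }
struct Cylinder { var radius: Float; var height: Float }

// If the two radii agree within a relative tolerance (assumed 5%), the lateral
// surface is effectively parallel to the axis, so convert the cone to a cylinder.
func cone2Cylinder(_ cone: Cone, relativeTolerance: Float = 0.05) -> Cylinder? {
    let meanRadius = (cone.topRadius + cone.bottomRadius) / 2
    guard abs(cone.topRadius - cone.bottomRadius) / meanRadius < relativeTolerance
    else { return nil }
    return Cylinder(radius: meanRadius, height: cone.height)
}
```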
Once object measurement points (point cloud, mesh vertices, etc.) are collected, the measurement accuracy and average distance between points are fixed.
CurvSurf FindSurface works based on those values and requires additional parameter values for region growing to enlarge the length/width and increase the radius of the estimated object geometry.
After a series of experiments, CurvSurf found the optimal parameter value set for MeshAnchor generated by Apple Vision Pro:
Accuracy: 1.5 cm,
Average Distance: 10 cm,
Lat. Ext.: 10,
Rad. Exp.: 5,
Touch Radius: 1/4 - 1/2 of object diameter/width [cm].
As an Apple Vision Pro user, you need to adjust the Touch Radius in proportion to the approximate diameter or width of your object; it is a matter of object scaling over a range of 1 - 20 meters.
If you would like to get a relatively small plane or a short cylinder/cone, set the Lat. Ext. to less than 5.
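A hypothetical configuration sketch of the parameter set above (the type and property names are assumptions, not the actual FindSurface API; see the GitHub repo for the real interface):

```swift
// Illustrative parameter container; names mirror the list above, not the API.
struct FindSurfaceParameters {
    var measurementAccuracy: Float   // [m] sensor noise level
    var meanDistance: Float          // [m] average spacing between points
    var lateralExtension: Int        // region-growing level along length/width
    var radialExpansion: Int         // region-growing level in radius
    var touchRadius: Float           // [m] seed region size
}

// The experimentally found values for Apple Vision Pro MeshAnchors.
let avpDefaults = FindSurfaceParameters(
    measurementAccuracy: 0.015,  // 1.5 cm
    meanDistance: 0.10,          // 10 cm
    lateralExtension: 10,
    radialExpansion: 5,
    touchRadius: 0.25            // 1/4 - 1/2 of the object diameter/width
)
```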
Voice commands:
“Tap” – Spatial tap (gazing & pinching). Invokes FindSurface.
“Tap plane” – Plane selection.
“Tap sphere” or “Tap ball” – Sphere selection.
“Tap cylinder” – Cylinder selection.
“Tap cone” – Cone selection.
“Tap torus” or “Tap donut” – Torus selection.
“Tap accuracy” or “Tap measurement accuracy” – Accuracy selection.
“Tap mean distance”, “Tap average distance”, or “Tap distance” – Avg. Distance selection.
“Tap touch radius” or “Tap seed radius” – Touch Radius selection.
“Tap Inlier” – “Show inlier points” toggle.
“Tap outline” – “Show geometry outline” toggle.
“Tap clear” – “Clear Scene” click.
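A hypothetical dispatch sketch for the shape-selection commands above (the phrases mirror the list; the enum and closures are illustrative, not the app's actual code):

```swift
// Illustrative feature types matching the voice commands.
enum FeatureType { case plane, sphere, cylinder, cone, torus }

var selectedFeature: FeatureType = .plane

// Map each recognized phrase to its action.
let voiceCommands: [String: () -> Void] = [
    "Tap plane":    { selectedFeature = .plane },
    "Tap sphere":   { selectedFeature = .sphere },
    "Tap ball":     { selectedFeature = .sphere },
    "Tap cylinder": { selectedFeature = .cylinder },
    "Tap cone":     { selectedFeature = .cone },
    "Tap torus":    { selectedFeature = .torus },
    "Tap donut":    { selectedFeature = .torus },
]

func handle(_ phrase: String) {
    voiceCommands[phrase]?()
}
```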
A solution would be:
Align the virtual object horizontally and vertically to the real object from your point of view.
Move to a position orthogonal to your original line of sight.
Align the virtual object to the real object while letting the virtual object move only along your original line of sight (see the sketch below).
Repeat if necessary.
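A minimal geometric sketch of that constrained step (plain simd math; the function name and parameters are illustrative): the object's requested position is projected onto the original sight ray, so side-on adjustments only slide it nearer or farther along that line.

```swift
import simd

// Project a requested position onto the original line of sight, a ray from
// `eye` through `target`; the object can then only slide along that ray.
func constrainToLineOfSight(position: simd_float3,
                            eye: simd_float3,
                            target: simd_float3) -> simd_float3 {
    let direction = simd_normalize(target - eye)
    let t = simd_dot(position - eye, direction)   // signed distance along the ray
    return eye + max(0, t) * direction            // clamp behind-the-eye motion
}
```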