If the shape, size, location, and orientation of a real object's surface can be accurately estimated in real time by processing the vertices of an ARMeshAnchor, a wide range of AR applications become possible. Accurate real-time estimation means such real objects can even serve as moving anchors for virtual objects.
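For example, here is a minimal sketch of reading an ARMeshAnchor's vertices as world-space points, using the standard iOS ARKit API (the helper name worldVertices(of:) is ours, not Apple's):

```swift
import ARKit
import simd

// A minimal sketch (iOS ARKit): extract the vertices of an ARMeshAnchor
// as world-space points. ARGeometrySource stores the vertices in a Metal
// buffer in anchor-local coordinates; we read them out and transform
// each one by the anchor's transform.
func worldVertices(of meshAnchor: ARMeshAnchor) -> [simd_float3] {
    let vertices = meshAnchor.geometry.vertices   // ARGeometrySource
    var result: [simd_float3] = []
    result.reserveCapacity(vertices.count)

    let buffer = vertices.buffer.contents()
    for i in 0..<vertices.count {
        // Each vertex is three Float32 values at offset + stride * i.
        let pointer = buffer.advanced(by: vertices.offset + vertices.stride * i)
        let local = pointer.assumingMemoryBound(to: (Float, Float, Float).self).pointee
        let world = meshAnchor.transform * simd_float4(local.0, local.1, local.2, 1)
        result.append(simd_float3(world.x, world.y, world.z))
    }
    return result
}
```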
Furthermore, if depth values for the 576 LiDAR points were provided, AR on another level would be possible. Even in totally dark environments, where the RGB images required for motion tracking cannot be obtained, the 576 LiDAR points are still measured. By processing those 576 points in real time, you could pinpoint the shape, size, location, and orientation of a real object's surface in front of you, even in total darkness. The 576 LiDAR points raise no privacy concerns, yet they would enable many positive uses.
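Note that ARKit does not expose those raw 576 LiDAR points; what it does expose is the processed depth map (ARDepthData via ARFrame.sceneDepth). A minimal sketch of reading those depth values:

```swift
import ARKit

// A minimal sketch: ARKit provides a processed depth map (ARDepthData)
// when .sceneDepth frame semantics are enabled. The depthMap is a
// CVPixelBuffer of Float32 depth values in meters (typically 256 x 192).
func depthValues(from frame: ARFrame) -> [Float]? {
    guard let depthMap = frame.sceneDepth?.depthMap else { return nil }

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }

    var values: [Float] = []
    values.reserveCapacity(width * height)
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            values.append(row[x])   // depth in meters at pixel (x, y)
        }
    }
    return values
}

// Enabling scene depth on a world-tracking session:
// let config = ARWorldTrackingConfiguration()
// config.frameSemantics.insert(.sceneDepth)
```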
We have developed a software library (FindSurface SDK) that processes 3D measurement points to accurately estimate the shape, size, position, and orientation of the workpiece in real time.
Its accuracy is the highest rated according to ISO 10360-6 (better than 1 micrometer in length and 1 microradian in angle within a volume of 1 m^3), and its processing speed exceeds 50 objects/sec on an iPhone 14 Pro (Max) with the LiDAR 3D camera.
The input 3D measurement points can be a point cloud (or the vertices of a mesh). The recognizable and measurable shapes are planes, spheres, cylinders, cones, and tori.
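FindSurface's own algorithm and API are not reproduced here; as a simplified illustration of the general idea (fitting a geometric primitive to measured points), the sketch below fits a least-squares plane, the simplest of the five shapes, to a point cloud. The function fitPlane(to:) is hypothetical and is not part of the SDK.

```swift
import simd

// A simplified illustration of primitive fitting (not the FindSurface
// algorithm or API): fit a least-squares plane to a point cloud.
// The plane passes through the centroid; its normal is the eigenvector
// of the covariance matrix with the smallest eigenvalue, found here by
// power iteration on (trace(C) * I - C).
func fitPlane(to points: [simd_float3]) -> (center: simd_float3, normal: simd_float3)? {
    guard points.count >= 3 else { return nil }

    let center = points.reduce(simd_float3(), +) / Float(points.count)

    // 3x3 covariance of the points around the centroid.
    var c = simd_float3x3()
    for p in points {
        let d = p - center
        c += simd_float3x3(columns: (d * d.x, d * d.y, d * d.z))
    }

    // The largest eigenvector of (trace * I - C) is the smallest eigenvector of C.
    let trace = c[0][0] + c[1][1] + c[2][2]
    let m = simd_float3x3(diagonal: simd_float3(repeating: trace)) - c
    var n = simd_float3(1, 1, 1)
    for _ in 0..<64 {                       // power iteration
        n = simd_normalize(m * n)
    }
    return (center, n)
}
```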
For the working principle and demos of FindSurface, please search the web for the following:
GitHub CurvSurf/FindSurface
YouTube CurvSurf
FindSurface web demo.
FindSurface may not be what you are looking for.
If you have a question, please contact the email address listed on GitHub CurvSurf/FindSurface.
ARKit in visionOS provides information (ARPlaneAnchor) about the position, orientation and size of a real horizontal or vertical plane.
https://developer.apple.com/documentation/arkit/arplaneanchor
In addition, ARKit processes the LiDAR 3D measurement information (ARDepthData) internally to provide mesh information (ARMeshAnchor).
https://developer.apple.com/documentation/arkit/armeshanchor
So, in theory, app developers can render 2D graphics, such as photos and videos, onto the surfaces described by ARPlaneAnchor and ARMeshAnchor.
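As a sketch of how an app subscribes to that plane information on visionOS (where the corresponding types are ARKitSession, PlaneDetectionProvider, and PlaneAnchor rather than iOS's ARPlaneAnchor):

```swift
import ARKit

// A minimal sketch (visionOS): receive plane anchors via ARKitSession.
// Each PlaneAnchor carries a world transform and geometry that an app
// can use to place 2D content such as photos or videos.
// Note: plane detection requires worldSensing authorization at runtime.
func trackPlanes() async throws {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
    try await session.run([planes])

    for await update in planes.anchorUpdates {
        let anchor = update.anchor
        // World transform of the plane and its classification (wall, floor, ...).
        print(anchor.originFromAnchorTransform, anchor.classification)
    }
}
```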
In addition, by analyzing ARDepthData or ARMeshAnchor to accurately determine the shape, size, position, and orientation of real curved surfaces in real time, various AR applications become possible.
YouTube BmKNmZCiMkw
YouTube 9QkSPkLIfWU
It is logical that scene understanding is missing from the visionOS simulator. Scene understanding requires sensor data about the physical environment, and the simulator, having no physical sensors, has no sensor data.
Additionally, it should be noted that although the Apple Vision Pro (AVP) includes cameras and LiDAR, the sensor data is not shared with developers for privacy reasons. What is made available to developers is the horizontal/vertical plane information and the mesh information (ARMeshAnchor) that ARKit generates by internally processing the sensor data.
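As a sketch of how a visionOS app receives that mesh information (assuming the visionOS ARKit API, where SceneReconstructionProvider delivers MeshAnchor updates):

```swift
import ARKit

// A minimal sketch (visionOS): receive the scene mesh via ARKitSession.
// SceneReconstructionProvider delivers MeshAnchor updates; the raw
// camera and LiDAR data themselves are never exposed to the app.
func trackSceneMesh() async throws {
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()
    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        let anchor = update.anchor              // MeshAnchor
        let geometry = anchor.geometry          // MeshAnchor.Geometry
        // geometry.vertices and geometry.faces describe the mesh in the
        // anchor's local coordinates (anchor.originFromAnchorTransform).
        print(update.event, geometry.vertices.count)
    }
}
```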