Updates in the Integration of Motion Tracking into a visionOS App
Lately I've been looking for ways to perform motion tracking inside my visionOS app. The best approach I've found so far is to use an ARView in my current iOS scanner app, which lets me use the device's integrated LiDAR to measure the distance from the camera to the detected device, giving me world coordinates as well as a precise distance metric. I also integrated a CoreML object detection model so I can use its bounding box data, taking the coordinates of the point closest to the center of the bounding box. The resulting coordinates are then sent to the Vision Pro headset via Multipeer Connectivity, as sketched below.
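Here is a minimal sketch of that pipeline on the iOS side. It assumes the detection's bounding-box center has already been converted into ARView screen coordinates and that an MCSession to the headset is already connected; the DetectedDevicePosition type and both function names are illustrative, not the app's actual code. For simplicity the sketch raycasts from the bounding-box center (which is backed by scene depth on LiDAR devices) rather than searching the depth points near the center, which a production version might do by sampling ARFrame.sceneDepth directly.

```swift
import Foundation
import simd
import ARKit
import RealityKit
import MultipeerConnectivity

/// Payload sent to the Vision Pro (illustrative name); Codable so it
/// can be serialized with JSONEncoder for Multipeer Connectivity.
struct DetectedDevicePosition: Codable {
    let position: SIMD3<Float>
    let distanceFromCamera: Float
}

/// Raycasts from the center of the detected bounding box (already in
/// ARView screen coordinates) and returns the hit's world position
/// plus its distance from the camera.
func worldPosition(atScreenPoint point: CGPoint,
                   in arView: ARView) -> DetectedDevicePosition? {
    guard let result = arView.raycast(from: point,
                                      allowing: .estimatedPlane,
                                      alignment: .any).first else { return nil }
    let hit = SIMD3<Float>(result.worldTransform.columns.3.x,
                           result.worldTransform.columns.3.y,
                           result.worldTransform.columns.3.z)
    // Distance from the current camera pose to the hit point.
    let camera = arView.cameraTransform.translation
    return DetectedDevicePosition(position: hit,
                                  distanceFromCamera: simd_distance(hit, camera))
}

/// Sends a detected position to every connected peer (the headset).
func send(_ detection: DetectedDevicePosition, over session: MCSession) {
    guard !session.connectedPeers.isEmpty,
          let data = try? JSONEncoder().encode(detection) else { return }
    try? session.send(data, toPeers: session.connectedPeers, with: .reliable)
}
```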
visionOS App Adjustments
To be able to place entities at the detected device's position, I created a function that places a world-anchored entity wherever the user performs a tap gesture, so the user can set a reference point at the scanning device's position (see the sketch after this paragraph). I may replace this with an image anchor later.
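On the visionOS side, the tap-to-place behavior could look roughly like the following sketch. It assumes the scene already contains an entity with collision and input-target components for the spatial tap to land on, and it uses a plain AnchorEntity(world:) rather than a persisted ARKit WorldAnchor; the view name and marker styling are illustrative.

```swift
import SwiftUI
import RealityKit

/// Illustrative view: a spatial tap drops a world-anchored sphere that
/// serves as the reference point at the scanning device's position.
struct ReferencePointView: View {
    @State private var tappedPosition: SIMD3<Float>?

    var body: some View {
        RealityView { content in
            // Scene setup goes here. Spatial taps only register on
            // entities that have CollisionComponent and
            // InputTargetComponent.
        } update: { content in
            // Once a tap has produced a position, anchor a marker there.
            if let position = tappedPosition {
                let anchor = AnchorEntity(world: position)
                let marker = ModelEntity(
                    mesh: .generateSphere(radius: 0.02),
                    materials: [SimpleMaterial(color: .red, isMetallic: false)])
                anchor.addChild(marker)
                content.add(anchor)
                // Clear the pending position so later update passes
                // don't add duplicate markers.
                DispatchQueue.main.async { tappedPosition = nil }
            }
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Convert the tap's 3D location into scene coordinates.
                    tappedPosition = value.convert(value.location3D,
                                                   from: .local, to: .scene)
                }
        )
    }
}
```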
The app will then use the placed entity's coordinates as a reference to place a new entity at each received position.
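Turning a received coordinate into an entity could then be a single space conversion relative to the reference entity, as in the sketch below. This treats the tapped reference entity as the origin of the iPhone's coordinate space and assumes both devices' axes are gravity-aligned and roughly share a heading; the function name and marker are illustrative, and a real implementation would also need to reconcile yaw between the two sessions.

```swift
import RealityKit

/// Places a marker at a position received from the scanner, expressed
/// as an offset in the reference entity's coordinate space.
func placeDetectedEntity(at received: SIMD3<Float>,
                         relativeTo reference: Entity,
                         under root: Entity) {
    let marker = ModelEntity(
        mesh: .generateSphere(radius: 0.02),
        materials: [SimpleMaterial(color: .green, isMetallic: false)])
    // Convert the received offset from the reference entity's space
    // into the root's space, then attach the marker there.
    marker.position = reference.convert(position: received, to: root)
    root.addChild(marker)
}
```

On the receiving end, the MCSessionDelegate callback session(_:didReceive:fromPeer:) would decode the same Codable payload before calling a function like this on the main actor.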