
Are there other ways to train the model? Training the visionOS tracking model with Create ML on the Mac takes too long each time.
I am developing for Apple Vision Pro and implementing object tracking, but every model has to go through Create ML for training, and the training takes a very long time. Is there another way to shorten training while still producing reference files in the same format? Also, can object-tracking latency be optimized further? Even though the refresh rate has been optimized, there is still a noticeable delay.
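For context, this is roughly how we consume the trained file at runtime; a minimal sketch, assuming a `.referenceobject` file exported from Create ML is bundled with the app ("MyObject" and the error handling are placeholders):

```swift
import ARKit

// Minimal sketch: load a Create ML-exported .referenceobject file and
// start object tracking with it. "MyObject" is a placeholder name for
// whatever reference file the training run produced.
func runObjectTracking() async throws {
    guard let url = Bundle.main.url(forResource: "MyObject",
                                    withExtension: "referenceobject") else {
        fatalError("Reference object not found in the app bundle")
    }

    // Load the reference object produced by the Create ML training run.
    let referenceObject = try await ReferenceObject(from: url)

    let session = ARKitSession()
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([provider])

    // Each update carries the tracked object's latest transform.
    for await update in provider.anchorUpdates {
        print(update.anchor.originFromAnchorTransform)
    }
}
```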
1 reply · 0 boosts · 265 views · 1w
Distance calculation: I want to implement the same Unity distance-calculation functionality in SwiftUI on visionOS
I use the code below in Unity to measure how far my model penetrates another model: I placed collision markers at the tip and base of the model and raycast between them. However, Unity currently does not support object tracking on visionOS, so I plan to develop natively with SwiftUI. In Reality Composer Pro I haven't found a collision-editing feature like Unity's: I can only set the size of a collision body, not manually adjust or visualize its shape. I want to achieve the same functionality in SwiftUI, i.e. calculate and display how far my model A (a needle or a ruler, say) penetrates into another model or into a physical object. Is there an equivalent feature, or another coding approach that achieves this?

```csharp
void CalculateLengthInsideOrgan()
{
    // Direction from the base of the probe to the tip
    Vector3 direction = probeTip.position - probeBase.position;
    float probeLength = direction.magnitude;

    // Raycast along the probe, limited to the organ layer
    RaycastHit[] hits = Physics.RaycastAll(probeBase.position, direction, probeLength, organLayerMask);

    if (hits.Length > 0)
    {
        // RaycastAll does not guarantee ordering, so sort by distance
        // before taking the first (nearest) hit.
        System.Array.Sort(hits, (a, b) => a.distance.CompareTo(b.distance));

        // Length of the probe past the first surface it crossed
        float distanceToFirstHit = hits[0].distance;
        lengthInsideOrgan = probeLength - distanceToFirstHit;
    }
    else
    {
        lengthInsideOrgan = 0f;
    }
}
```
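For reference, a rough RealityKit equivalent of this might look like the sketch below; `probeBase`, `probeTip`, and the organ entity are assumed names from my scene, and the organ entity needs a `CollisionComponent` for the cast to hit it:

```swift
import RealityKit
import simd

// Sketch of the same penetration-depth calculation with RealityKit's
// collision raycast. probeBase/probeTip are hypothetical marker entities
// parented to the probe model; the organ model needs a CollisionComponent.
func lengthInsideOrgan(scene: RealityKit.Scene,
                       probeBase: Entity,
                       probeTip: Entity) -> Float {
    let start = probeBase.position(relativeTo: nil)  // world-space base
    let end = probeTip.position(relativeTo: nil)     // world-space tip
    let probeLength = length(end - start)

    // Cast from base to tip and take the nearest collision hit.
    let hits = scene.raycast(from: start, to: end, query: .nearest, mask: .all)

    guard let firstHit = hits.first else { return 0 }
    // Depth of the tip beyond the first surface the ray crossed.
    return probeLength - firstHit.distance
}
```

On the collision-shape question: in code, RealityKit at least lets you replace the default shape, e.g. with `ShapeResource.generateConvex(from:)` built from the model's mesh, which tends to follow the visual geometry more closely than the default box.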
1 reply · 0 boosts · 254 views · 1w
Regarding real-time object tracking and real-time image recognition
We use real-time object tracking, and with enterprise permissions we can raise the rate to 30 Hz, but there are still noticeable delays. First, we would like to know where this delay comes from: is it a performance trade-off? By comparison, we found that hand tracking has very low latency. We also suspected the complexity of the 3D objects, so we tried image tracking instead, but image tracking and QR-code tracking turn out to lag even more, and we hope this can be optimized. Currently the recognition rate for tracked images appears to be about one frame per second, and we would like to raise it, since object recognition and tracking are very smooth on other Apple platforms such as iOS. Separately, could interfaces for depth data be considered? We want to know what accuracy Vision Pro can achieve when measuring the physical world, what accuracy it achieves when rendering to the screen, and whether this is tied to hardware such as the LiDAR scanner. Also, what accuracy can we achieve when tracking how far an object has moved?
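To put numbers on the delay we are describing, we log the interval between consecutive anchor updates; a rough sketch, assuming `provider` is an `ObjectTrackingProvider` already running in an `ARKitSession` (the session setup is omitted):

```swift
import ARKit
import QuartzCore

// Sketch: measure the effective update rate of object tracking by logging
// the wall-clock interval between consecutive anchor updates.
func logUpdateIntervals(provider: ObjectTrackingProvider) async {
    var lastUpdate: CFTimeInterval?

    for await update in provider.anchorUpdates where update.event == .updated {
        let now = CACurrentMediaTime()
        if let last = lastUpdate {
            // ~33 ms between updates would correspond to the 30 Hz ceiling.
            print(String(format: "Δ %.1f ms", (now - last) * 1000))
        }
        lastUpdate = now
    }
}
```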
1 reply · 0 boosts · 164 views · 1w