Dear all,
I was exploring the visionOS example project "Hello World" below:
https://developer.apple.com/documentation/visionos/world
And there is some strange behavior that looks like a bug to me. One issue is in the Orbit Module View: the .dragRotation closure only works for the "Satellite" and "Telescope" tabs, but not the "Moon" tab. Also, the moon model somehow sticks out in the z-depth direction, which blocks the view of the tabs. Snapshot below:
I finally figured out that it was the ".pi" constant in OrbitModule.swift that was causing the issue. When I change it to a literal floating-point number (even to the .pi-equivalent value of 3.14159), the .dragRotation closure starts working for the "Moon" tab and the moon model returns to its normal z depth as well.
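For reference, the workaround is roughly the one-line change below (an illustrative sketch; the actual variable names in OrbitModule.swift differ):

import RealityKit

// Original, which triggers the bug in the simulator:
let brokenAngle: Float = .pi          // .dragRotation stops working for "Moon"

// Workaround: replace the .pi constant with a literal:
let workingAngle: Float = 3.14159     // .dragRotation works; z depth is back to normal

let rotation = simd_quatf(angle: workingAngle, axis: SIMD3<Float>(0, 1, 0))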
Any idea if this is a known issue with certain Xcode or Vision Pro simulator versions?
I was using Xcode 15.0 beta 8 and the visionOS 1.0 simulator.
Hi all,
I'm exploring a visionOS project. Let's say I'd like to detect a vertical plane and place a poster on it using the sample code below:
let wallAnchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: SIMD2<Float>(0.6, 0.6)))
My understanding is that the plane/wall detection will always start from the center of the user's view. So I want to keep updating the wallAnchor until the user is satisfied with where the poster is located, and I'm simply doing something like this:
wallAnchor.reanchor(.plane(.vertical, classification: .wall, minimumBounds: SIMD2<Float>(0.6, 0.6)))
Ideally I want to keep doing the above every frame so the poster updates continuously, but to begin with I'm just doing it once per second.
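The periodic update looks roughly like this (a simplified sketch of my setup; the timer scheduling and entity loading details are omitted):

import Foundation
import RealityKit

// Re-run plane detection once per second so the poster follows the view
Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
    wallAnchor.reanchor(.plane(.vertical,
                               classification: .wall,
                               minimumBounds: SIMD2<Float>(0.6, 0.6)))
}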
However, the issue is that reanchor doesn't seem to work properly. For one, it only successfully detects a plane/wall about every 3 seconds (even without moving the camera view), so the poster only shows up about a third of the time. Second, after a slight move followed by holding steady, the poster keeps shifting and shrinking even though the camera view isn't moving.
It seems like the wallAnchor only works as expected the first time, when it is initialized, but not with the reanchor function.
Could you please let me know what I'm doing wrong here? And what is the proper way to update the AnchorEntity smoothly? Thanks a lot!
Hi all, so we know that Vision Pro can track hands using its downward-facing cameras. I'm wondering: does ARKit also report hand tracking data from the downward-facing cameras?
From my testing, it looks like ARKit only reports hand tracking data from the front-facing cameras, but not the downward-facing ones. Is this expected? If so, I assume downward-facing hand tracking data is only available to visionOS itself, but not to apps through ARKit?
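For context, I'm reading hand data through the standard ARKit provider, roughly like this (a simplified sketch; this runs in an async task inside an immersive space, and error handling is omitted):

import ARKit

let session = ARKitSession()
let handTracking = HandTrackingProvider()
try await session.run([handTracking])

for await update in handTracking.anchorUpdates {
    let hand = update.anchor   // HandAnchor with joint transforms
    print(hand.chirality, hand.originFromAnchorTransform)
}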
Dear Community,
I'm wondering how one can generate a vision code, and how to work with the vision code to either bring users to some endpoint or directly show some AR content. Is there a tutorial/example to start with?
If it is not a public feature yet, what are the best current alternatives for scanning a code to trigger actions/operations on visionOS?
Thanks,
Dear community,
So I was trying to use the tracked image feature to track a marker image and spawn some AR content. When the marker image moves, the spawned content is expected to move along with it continuously/smoothly.
This works perfectly on iOS devices, but the same setup doesn't seem to work on Vision Pro. The spawned object doesn't move continuously, and only updates to a new position every few seconds... Is this expected? Or are there any specific settings needed for visionOS?
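For reference, my visionOS setup is roughly the following (a simplified sketch; "ARMarkers" is an illustrative resource-group name, and error handling is omitted):

import ARKit

let session = ARKitSession()

// Load marker images from an AR resource group in the asset catalog
let images = ReferenceImage.loadReferenceImages(inGroupNamed: "ARMarkers")
let imageTracking = ImageTrackingProvider(referenceImages: images)
try await session.run([imageTracking])

for await update in imageTracking.anchorUpdates {
    // On Vision Pro these updates only arrive every few seconds,
    // while the equivalent iOS setup updates every frame
    print(update.anchor.originFromAnchorTransform)
}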
Thanks a lot!
Dear developers, now that we have played with Vision Pro for 3 months, I am wondering why some features are missing on Vision Pro, especially since some of them seem very basic/fundamental. I would like to see if you know more about the reasons, or correct me if I'm wrong! You are also welcome to share features that you think are fundamental but missing on Vision Pro.
My list goes below:
(1) GPS/Compass: cost? heat? battery?
(2) Moving image tracking: is the surrounding-environment processing already too compute-intensive?
(3) 3D object tracking: looks like it is only supported on iOS and iPadOS, but not visionOS
(4) No application focus/pause callback: maybe I'm wrong? But we were not able to detect when an app is sent to the background or brought to the foreground via a callback (the standard scenePhase approach is sketched below)
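For item (4), this is roughly the standard SwiftUI scenePhase approach one would normally use to detect those transitions (a minimal sketch; the app and view names are illustrative):

import SwiftUI

@main
struct SampleApp: App {
    @Environment(\.scenePhase) private var scenePhase

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .onChange(of: scenePhase) { _, newPhase in
            // Expected to fire on background/foreground transitions
            switch newPhase {
            case .active:     print("foreground")
            case .background: print("background")
            case .inactive:   print("inactive")
            @unknown default: break
            }
        }
    }
}

struct ContentView: View {
    var body: some View { Text("Hello") }
}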