Posts

Post not yet marked as solved
1 Replies
AFAIK, a stereoscopic material shows a separate image to the left and right eye; there isn't really a depth buffer involved. Perhaps you can conditionally swap between the stereoscopic image data and left-eye-only image data based on some condition in the shader graph.
Post not yet marked as solved
1 Replies
For loops are not a valid means of constructing a repeated sequence of views. You'll want to use the SwiftUI view named "ForEach". https://developer.apple.com/documentation/swiftui/foreach
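A minimal sketch of ForEach iterating a small array; the item names and view contents here are placeholders:

import SwiftUI

struct FruitList: View {
    let fruits = ["Apple", "Banana", "Cherry"]

    var body: some View {
        VStack {
            // ForEach builds one Text view per element; `id: \.self`
            // uses each string itself as the identity.
            ForEach(fruits, id: \.self) { fruit in
                Text(fruit)
            }
        }
    }
}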
Post marked as solved
3 Replies
You can deploy and debug wirelessly. The dongle is intended for developers whose apps have large assets, or who are working with unstable Wi-Fi or restrictive corporate network policies.
Post not yet marked as solved
2 Replies
You can anchor to the head-targeted AnchorEntity, which tracks the user's head. However, some users rely on head movements to interact with items for accessibility reasons, so you should provide an alternative when the relevant accessibility settings are enabled. A rough sketch of head anchoring is below.
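A minimal sketch, assuming RealityKit's head anchoring target on visionOS; the box content and 1 m offset are placeholders:

import RealityKit
import SwiftUI

struct HeadAnchoredView: View {
    var body: some View {
        RealityView { content in
            // Anchor that follows the user's head.
            let headAnchor = AnchorEntity(.head)

            // Placeholder content, pushed 1 m in front of the head.
            let box = ModelEntity(mesh: .generateBox(size: 0.1))
            box.position = [0, 0, -1]
            headAnchor.addChild(box)

            content.add(headAnchor)
        }
    }
}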
Post marked as Apple Recommended
@eskimo This is likely to remain a more annoying problem on Vision Pro where binary sizes could be larger and there is no hardware connection on the consumer device. What's the right feedback categorization for requesting a purchasable connector?
Post marked as solved
4 Replies
I would also like the option to purchase a cable-connector for Vision Pro so I can avoid problems associated with wireless debugging.
Post not yet marked as solved
1 Replies
I believe the update closure of your RealityView will fire when your entity appears in the scene. You could check the .scene property of your entity at that time and observe that it is non-nil.
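A rough sketch of that check, assuming the entity is created up front and added in the make closure; the entity itself is a placeholder:

import RealityKit
import SwiftUI

struct SceneCheckView: View {
    let myEntity = ModelEntity(mesh: .generateSphere(radius: 0.05))

    var body: some View {
        RealityView { content in
            content.add(myEntity)
        } update: { _ in
            // A non-nil .scene indicates the entity has been added to a scene.
            if myEntity.scene != nil {
                print("Entity is now part of a scene")
            }
        }
    }
}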
Post not yet marked as solved
1 Replies
visionOS does not expose where the user is looking through any API.
Post marked as solved
7 Replies
Yeah, I get the same behavior, and there are a few posts here from others seeing it too. I think you can only get location information relative to an AnchorEntity's origin from that entity and its children. In addition, AnchorEntities and their children can't participate in physics-based interactions outside of that anchor's hierarchy, and that includes ray casting. I've been trying to avoid AnchorEntity entirely because of these limitations. Thankfully the plane detection provider that works on device doesn't require those entities, or else I would be sunk. A rough sketch of that approach is below.
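A minimal sketch of the plane-provider approach, assuming a visionOS app with world-sensing authorization; error handling and entity creation are omitted:

import ARKit
import RealityKit

func runPlaneDetection() async throws {
    let session = ARKitSession()
    let planeData = PlaneDetectionProvider(alignments: [.horizontal])

    try await session.run([planeData])

    for await update in planeData.anchorUpdates {
        // Each PlaneAnchor carries a transform relative to the world origin,
        // so its position can be used directly in your own entity hierarchy.
        let anchor = update.anchor
        print("Plane \(anchor.id) at \(anchor.originFromAnchorTransform)")
    }
}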
Post not yet marked as solved
2 Replies
Same problem. Would love to know the right way to implement this pattern with @Observable.
Post not yet marked as solved
1 Replies
All I know is that you won't be able to use "hover" states to detect whether the user's eyes are actually resting on ad content. Other than that, I haven't heard of any limitations. I'm looking forward to apps that can put black squares over billboards in real life. :D
Post not yet marked as solved
3 Replies
I believe you can generate a collision ShapeResource from a mesh. That's the recommendation for creating entities when using a SceneReconstructionProvider, for example. See the ray-casting portion of the ARKit and spatial computing WWDC video for slightly more detail. A rough sketch of the idea is below.
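A minimal sketch, assuming mesh anchors are delivered by a SceneReconstructionProvider; the helper function name is mine:

import ARKit
import RealityKit

// Generates a static collision shape from the anchor's mesh so the
// resulting entity can participate in ray casts.
func makeCollisionEntity(for meshAnchor: MeshAnchor) async throws -> Entity {
    let shape = try await ShapeResource.generateStaticMesh(from: meshAnchor)

    let entity = Entity()
    entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
    entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
    return entity
}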
Post not yet marked as solved
2 Replies
As visionOS is not yet a released OS, its SDK is not available in the non-beta versions of Xcode. As the other poster said, you need to open a beta version of Xcode, as the release notes describe.
Post not yet marked as solved
1 Replies
Correct. You can place your own plane, give it a collision shape, and use that to test for the time being.
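For example, something along these lines; the sizes, material, and placement are placeholders:

import RealityKit

// A hand-placed floor plane with a collision shape, usable as a
// ray-cast target in the simulator for the time being.
func makeTestFloor() -> ModelEntity {
    let floor = ModelEntity(
        mesh: .generatePlane(width: 2, depth: 2),
        materials: [SimpleMaterial(color: .gray, isMetallic: false)]
    )
    floor.position = [0, 0, -1]

    // A thin box approximates the plane for collision and ray-cast purposes.
    let shape = ShapeResource.generateBox(width: 2, height: 0.01, depth: 2)
    floor.components.set(CollisionComponent(shapes: [shape], isStatic: true))
    return floor
}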
Post not yet marked as solved
2 Replies
One more: what if the user is holding something far away and then rotates their torso away from it? Will it follow along, or stay static?