@DTS Engineer Please also file a feedback. I don't get paid to do so, and my batting average for "suggestion" feedbacks is dismal.
AFAIK a stereoscopic material shows a separate image to the left and right eye; there's not really a depth buffer involved.
Perhaps you can conditionally swap between using the stereoscopic image data or left-eye-only image data based on some conditions in the shader graph.
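If you go the shader-graph route, here's a minimal sketch of driving that swap from Swift, assuming your Reality Composer Pro graph exposes a Bool input (the parameter name "useLeftEyeOnly" is hypothetical):

```swift
import RealityKit

// Flip a (hypothetical) Bool parameter on a ShaderGraphMaterial to switch
// between the stereo image pair and the left-eye-only image.
func setLeftEyeOnly(_ leftEyeOnly: Bool, on entity: ModelEntity) throws {
    guard var material = entity.model?.materials.first as? ShaderGraphMaterial else { return }
    try material.setParameter(name: "useLeftEyeOnly", value: .bool(leftEyeOnly))
    entity.model?.materials = [material]
}
```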
A for loop isn't a valid way to construct a repeated sequence of views inside a view builder. You'll want the SwiftUI view named "ForEach".
https://developer.apple.com/documentation/swiftui/foreach
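For example, a minimal sketch with hypothetical sample data:

```swift
import SwiftUI

// ForEach builds one view per element; a plain for loop won't compile
// inside a view builder.
struct NamesList: View {
    let names = ["Alice", "Bob", "Carol"] // hypothetical sample data

    var body: some View {
        VStack {
            ForEach(names, id: \.self) { name in
                Text(name)
            }
        }
    }
}
```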
You can deploy and debug wirelessly. The dongle is aimed at developers whose apps have large assets, or who have unstable Wi-Fi or corporate network policies that get in the way.
You can anchor content to a head-targeted AnchorEntity, which tracks the user's head.
However, some users rely on head movements to interact with items for accessibility reasons. You should provide an alternative when the relevant accessibility settings are enabled.
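A minimal sketch of the head-anchoring part, with a placeholder sphere as content:

```swift
import SwiftUI
import RealityKit

// A head-targeted AnchorEntity keeps its children positioned relative to
// the user's head. The sphere here is just placeholder content.
struct HeadAnchoredView: View {
    var body: some View {
        RealityView { content in
            let headAnchor = AnchorEntity(.head)
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05))
            sphere.position = [0, 0, -0.5] // half a meter in front of the head
            headAnchor.addChild(sphere)
            content.add(headAnchor)
        }
    }
}
```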
@eskimo
This is likely to remain a more annoying problem on Vision Pro, where binary sizes can be larger and the consumer device has no hardware connection. What's the right feedback categorization for requesting a purchasable connector?
I would also like the option to purchase a cable connector for Vision Pro so I can avoid the problems associated with wireless debugging.
I believe the "update" closure of your RealityView will fire when your entity appears in the scene.
You could check the .scene property of your entity at that time and observe that it's non-nil.
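A minimal sketch of that check, assuming a simple box entity:

```swift
import SwiftUI
import RealityKit

// Entity.scene becomes non-nil once the entity is part of the
// RealityView's scene; the update closure is a place to observe that.
struct SceneCheckView: View {
    let box = ModelEntity(mesh: .generateBox(size: 0.1))

    var body: some View {
        RealityView { content in
            content.add(box)
        } update: { _ in
            if box.scene != nil {
                print("Entity is now part of the scene")
            }
        }
    }
}
```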
I could be wrong, but if something is positioned between the limb and the headset, I don't think it should be occluded. I never heard back about the details, though, so I don't know for sure.
visionOS does not expose where the user is looking through any API.
Yeah, I get the same behavior. There are a few posts here from others seeing the same thing.
I think you can only get location information relative to the AnchorEntity's origin, and only from that entity and its children.
In addition, AnchorEntities and their children cannot participate in physics-based interactions outside of that anchor's hierarchy, and that includes ray casting.
I've been endeavoring to never use an AnchorEntity because of these limitations. Thankfully the plane detection provider that works on device doesn't require those entities, or else I would be sunk.
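For example, a minimal sketch of querying a pose relative to the anchor's origin:

```swift
import RealityKit

// Within an anchor's hierarchy you can still query poses relative to the
// anchor (or to any other entity in that same hierarchy).
func logPose(of child: Entity, in anchor: AnchorEntity) {
    let localPosition = child.position(relativeTo: anchor)
    print("Position relative to the anchor's origin: \(localPosition)")
}
```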
Same problem. Would love to know the right way to implement this pattern with @Observable.
I think searching for "skybox" may help you find your answer, but I've never personally done this.
All I know is that you won't be able to use "hover" states to detect whether the user's eyes are actually resting on ad content.
Other than that I have not heard of any limitations.
I'm looking forward to apps that can put black squares over billboards in real life. :D
I believe you can generate a collision shape from a mesh. That's the recommendation for creating entities when using a SceneReconstructionProvider, for example.
See the ray-casting portion of the ARKit and spatial computing WWDC video for slightly more detail.
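A minimal sketch of that recommendation, assuming a SceneReconstructionProvider is feeding you MeshAnchor updates:

```swift
import ARKit
import RealityKit

// Generate a static collision shape from a reconstructed mesh anchor so
// the entity can participate in ray casts and physics.
func makeCollisionEntity(for meshAnchor: MeshAnchor) async throws -> Entity {
    let shape = try await ShapeResource.generateStaticMesh(from: meshAnchor)
    let entity = Entity()
    entity.components.set(CollisionComponent(shapes: [shape]))
    entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
    return entity
}
```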