So, realistically, this platform is in its infancy. What's best supported at the moment is 2D windows in 3D space, and even then window management is absent.
Apple's platform is not billed as VR or AR; it's spatial computing. As such, it lacks many of the VR and AR features that other MR platforms have, because its opinionated design decisions put spatial computing at the center.
You could explore this from a Unity game-engine angle, or seek out game developers or those familiar with Meta’s VR platforms.
However, starting with development is likely the wrong move. While developers could build a prototype or tell you what is and is not supported, the appropriate starting point is with strategy/UX and product design.
Drown in the human interface guidelines of the various VR platforms, cultivate a product strategy, and consult with engineers on technical feasibility.
Once you’ve done that you can start prototyping and building the thing.
Lol yeah. I have this problem too. I hope an Apple Dev files a feedback for it so it gets fixed.
The visionOS sample code projects are not linked at the bottom of the relevant WWDC videos.
I think this is the one that involves interacting with a satellite?
https://developer.apple.com/documentation/visionos/world/
Also filed this. I've about run out of things I can develop and feel I need to begin coding interactions with planes and the persistence of world anchors.
https://feedbackassistant.apple.com/feedback/12639395
Hope the simulator gets this and I don't have to wait several years to continue.
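For reference, this is roughly the code that's blocked on real hardware right now. A minimal sketch, assuming an open immersive space and world-sensing authorization; the function name is just for illustration:

```swift
import ARKit
import simd

// Rough sketch only: detect horizontal planes and persist a world anchor.
// Assumes the app has already opened an immersive space and been granted
// world-sensing authorization.
func runPlaneAndAnchorDemo() async throws {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])
    let worldTracking = WorldTrackingProvider()

    try await session.run([planeDetection, worldTracking])

    // Persist a world anchor one meter in front of the world origin;
    // WorldTrackingProvider is what restores it across launches.
    var transform = matrix_identity_float4x4
    transform.columns.3.z = -1.0
    try await worldTracking.addAnchor(WorldAnchor(originFromAnchorTransform: transform))

    // Watch plane detection results as they arrive.
    for await update in planeDetection.anchorUpdates {
        print("Plane \(update.event): \(update.anchor.classification)")
    }
}
```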
As of now, visionOS does not appear to support synchronization of ARKit entities.
There are "SharePlay" WWDC23 videos about sharing Shared Space and Immersive Space experiences, so you may want to see if that can solve your use case.
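If it helps, the SharePlay side of that is fairly small. A minimal sketch with hypothetical names; this only gets you a group session, and syncing the actual entity state over it is still up to you:

```swift
import GroupActivities

// Hypothetical activity describing the shared experience.
struct InspectModelTogether: GroupActivity {
    static let activityIdentifier = "com.example.inspect-model"

    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Inspect Model Together"
        meta.type = .generic
        return meta
    }
}

// Activate it from a button or menu item, e.g.:
// Task { _ = try? await InspectModelTogether().activate() }
```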
Given Apple’s routine lack of transparency for many things, I don’t think we’ll ever know their criteria. Maybe they just had more applicants for that day that looked “better”?
Who knows. I’m hoping we get world tracking support in the simulator so I don’t need to worry about needing real hardware for a few years.
So, it turns out that models have internal properties that dictate how image-based materials apply to them. You set these properties by "UV-unwrapping" the model, which defines what area of the texture maps to what surface area on the model.
For one reason or another I thought these UV properties were part of the material itself, but that would mean you couldn't use a material on multiple models, and that would be silly.
I fixed up the UV Mapping of my model with Blender's "Project from View (bounds)" tool, and it now looks the way I expect.
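For anyone following along, the RealityKit side of this is small, since the UV data lives in the mesh rather than the material. A minimal sketch, assuming a hypothetical texture named "GridTexture" in the app bundle and a model whose mesh already has sensible UVs:

```swift
import RealityKit

// Sketch only: apply an image-based material to an existing ModelEntity.
// How the image lands on the surface is decided entirely by the mesh's
// UV mapping, not by the material itself.
func applyImageTexture(to model: ModelEntity) throws {
    // "GridTexture" is a hypothetical image in the app bundle.
    let texture = try TextureResource.load(named: "GridTexture")

    // Simple unlit material that samples the texture via the model's UVs.
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))

    model.model?.materials = [material]
}
```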
I'm looking into this more in Reality Composer Pro, and I'm seeing these distortions in RCP with the model that's programmatically receiving the texture.
I guess this could be a modeling issue now, as this texture applies fine to the sphere in Reality Composer Pro.
Correct. Apps running in the shared space can display windows and volumes but not immersive scenes.
Only when an app is "immersive" can it get access to world tracking information required for a RealityKit character to navigate the space.
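Roughly, the app structure looks like this. A minimal sketch with hypothetical names ("Walker", etc.); in a real app the session and provider would live in an observable model rather than in the view:

```swift
import SwiftUI
import RealityKit
import ARKit

@main
struct WalkerApp: App {
    var body: some SwiftUI.Scene {
        // Windows and volumes work in the shared space...
        WindowGroup {
            Text("Launcher")
        }

        // ...but ARKit world tracking requires a dedicated immersive space.
        ImmersiveSpace(id: "Walker") {
            ImmersiveView()
        }
    }
}

struct ImmersiveView: View {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    var body: some View {
        RealityView { content in
            // Add the character entity here.
        }
        .task {
            // Running the provider outside an immersive space fails.
            try? await session.run([worldTracking])
        }
    }
}
```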
I would add this in a gesture modifier to the RealityView:
https://developer.apple.com/documentation/swiftui/rotategesture3d
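Something like this, roughly. The box entity and state names are placeholders; the entity needs an InputTargetComponent and collision shapes to receive the gesture:

```swift
import SwiftUI
import RealityKit

struct RotatableModelView: View {
    @State private var baseOrientation = simd_quatf(angle: 0, axis: [0, 1, 0])

    var body: some View {
        RealityView { content in
            // Placeholder entity; swap in your own model.
            let box = ModelEntity(mesh: .generateBox(size: 0.2))
            box.components.set(InputTargetComponent())
            box.generateCollisionShapes(recursive: true)
            content.add(box)
        }
        .gesture(
            RotateGesture3D()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the Spatial rotation to a RealityKit quaternion
                    // and apply it on top of the orientation at gesture start.
                    let angle = Float(value.rotation.angle.radians)
                    let axis = value.rotation.axis
                    let delta = simd_quatf(
                        angle: angle,
                        axis: [Float(axis.x), Float(axis.y), Float(axis.z)]
                    )
                    value.entity.orientation = delta * baseOrientation
                }
                .onEnded { value in
                    baseOrientation = value.entity.orientation
                }
        )
    }
}
```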
What happens if you change the visibility in a “withAnimation” block?
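Something like this is what I had in mind (hypothetical names); the point is to wrap the state change that drives visibility in withAnimation so SwiftUI animates the transition:

```swift
import SwiftUI

struct DetailToggleView: View {
    @State private var isDetailVisible = false

    var body: some View {
        VStack {
            Button("Toggle detail") {
                // Animate the visibility change instead of snapping.
                withAnimation(.easeInOut(duration: 0.3)) {
                    isDetailVisible.toggle()
                }
            }

            if isDetailVisible {
                Text("Detail content")
                    .transition(.opacity)
            }
        }
    }
}
```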
Widget extensions are not supported on visionOS, but you can spawn windows and set their size.
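A minimal sketch of the window route, with hypothetical identifiers: declare an auxiliary WindowGroup with a default size, then open it with the openWindow environment action.

```swift
import SwiftUI

// Hypothetical auxiliary window scene; add it to the App's body.
struct StatusPanelScene: Scene {
    var body: some Scene {
        WindowGroup(id: "status-panel") {
            Text("Status")
        }
        .defaultSize(width: 400, height: 300)
    }
}

struct OpenPanelButton: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Show status panel") {
            openWindow(id: "status-panel")
        }
    }
}
```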
Almost like you need to reverse the direction of the mesh triangles. I wonder if there’s a convenient way to do that…
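There isn't a convenience I know of, so here's a manual sketch: flip the winding order of each triangle in a MeshDescriptor and rebuild the mesh from it.

```swift
import RealityKit

// Sketch only: reverse the winding of every triangle so the faces point the
// other way. Handles the .triangles primitive case; other cases are left alone.
func reversedWinding(of descriptor: MeshDescriptor) -> MeshDescriptor {
    var result = descriptor
    if case .triangles(let indices)? = descriptor.primitives {
        var flipped: [UInt32] = []
        flipped.reserveCapacity(indices.count)
        // Swap the second and third index of each triangle: (a, b, c) -> (a, c, b).
        for i in stride(from: 0, to: indices.count, by: 3) {
            flipped.append(indices[i])
            flipped.append(indices[i + 2])
            flipped.append(indices[i + 1])
        }
        result.primitives = .triangles(flipped)
    }
    return result
}
```

You'd then regenerate the resource with MeshResource.generate(from:) and, if the mesh is lit, flip the normals as well.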
2D iOS/iPadOS/macOS software developer.
I wish there were a sensible framework for creating and manipulating 3D content built on top of RealityKit. Every summer I think we'll get one, and we don't. It still feels like there's a piece missing to me.
Same question.