Reply to Can't center entity on AnchorEntity(.plane)
I felt like this should be obvious because it's such an important use case for dropping RealityKit scenes into the world without user interaction, but I tried a few things with translation that failed. For some reason, what worked for me was calling box.setPosition(...) relative to nil. I'm not sure why it works, given that the documentation says nil means "world space", when it appears to behave as if nil means "parent space" in this case?

```swift
import SwiftUI
import RealityKit

class Model: ObservableObject {
    var wall: AnchorEntity?
    var child: ModelEntity?
}

struct ImmersiveView: View {
    @StateObject var model = Model()

    var body: some View {
        RealityView { content in
            let wall = AnchorEntity(
                .plane(.vertical, classification: .wall, minimumBounds: [2.0, 1.5]),
                trackingMode: .continuous
            )
            model.wall = wall

            let mesh = MeshResource.generateBox(size: 0.3)
            let box = ModelEntity(
                mesh: mesh,
                materials: [SimpleMaterial(color: .green, isMetallic: false)]
            )
            model.child = box

            wall.addChild(box, preservingWorldTransform: false)
            content.add(wall)
            box.setPosition([0, 0, 0], relativeTo: wall)
        } update: { content in
            if let box = model.child, let wall = model.wall {
                // box.setPosition([0, 0, 0], relativeTo: wall) // <---- DOES NOT WORK
                box.setPosition([0, 0, 0], relativeTo: nil) // <---- DOES WORK even though nil means "world space"????
            }
        }
    }
}
```
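For anyone puzzled by why relativeTo: nil and relativeTo: parent would normally give different results, here's a toy illustration (plain Swift, not RealityKit; ToyEntity and its members are hypothetical names for this sketch). For a parent whose transform is a pure translation, a child's world position is the parent's position plus the child's local position, so a position expressed in world space must be converted before being stored as a local position:

```swift
import Foundation

// Toy model of an entity parented under a translation-only parent.
// A world-space position must be converted into the parent's space
// before being stored as the local position.
struct ToyEntity {
    var localPosition: SIMD3<Float>
    var parentPosition: SIMD3<Float> // world position of the parent

    var worldPosition: SIMD3<Float> { parentPosition + localPosition }

    // Mimics setPosition(_:relativeTo:) for the translation-only case:
    // relativeToWorld == true plays the role of relativeTo: nil.
    mutating func setPosition(_ p: SIMD3<Float>, relativeToWorld: Bool) {
        if relativeToWorld {
            localPosition = p - parentPosition // convert world -> parent space
        } else {
            localPosition = p // already in parent space
        }
    }
}

var box = ToyEntity(localPosition: [1, 0, 0], parentPosition: [0, 2, 0])

box.setPosition([0, 0, 0], relativeToWorld: false)
// parent-space zero: the box sits wherever the parent is, world (0, 2, 0)

box.setPosition([0, 0, 0], relativeToWorld: true)
// world-space zero: the box ends up at the world origin, (0, 0, 0)
```

Given this, the observed behavior in the reply above (nil behaving like "parent space") really does look like the anchor's world transform isn't what the documentation would lead you to expect, which is consistent with plane anchors not exposing an accurate world transform in the simulator at the time.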
Sep ’23
Reply to Dragging coordinates issue in VisionOS
You're probably having a moment of "What in the world? That wasn't mentioned anywhere!" And yeah, a lot of the demonstrations use an AnchorEntity of type .plane to insert a Reality Composer scene into the RealityView at a spot in the world that meets the size criteria, or just call content.add(entity) when the RealityView loads for an immersive scene. It's important to note these are not world-tracked entities, and they will not give you accurate location3D values for interactions.

We can use rotate and magnify gestures in these situations, because those gestures report changes relative to their initial value. Tap gestures can even be used as a sort of tap boolean, but the location of the tap is not reliable.

You're also probably asking, "How can I make anything interactive enough to feel immersive this way???" And yeah, I don't have a clue. Maybe if we pay $4,000 or get lucky with a developer kit we can figure it out. We can't ask anyone with a developer kit, because they're banned from telling us.
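The "relative to initial value" pattern mentioned above can be sketched in plain Swift. This is a hypothetical helper (ScaleSession is my name, not an API): it remembers the entity's scale when a magnify gesture begins and applies only the gesture's relative change, so no absolute 3D location is ever needed:

```swift
import Foundation

// Hypothetical helper for the "relative to initial value" pattern:
// capture the scale at gesture start, then multiply by the gesture's
// magnification (which is 1.0 when the gesture begins).
struct ScaleSession {
    private var baseScale: Float?

    // Returns the scale to apply to the entity for this gesture update.
    mutating func update(currentScale: Float, magnification: Float) -> Float {
        let base = baseScale ?? currentScale // first update captures the base
        baseScale = base
        return base * magnification
    }

    // Call on gesture end so the next gesture captures a fresh base.
    mutating func end() { baseScale = nil }
}

var session = ScaleSession()
let s1 = session.update(currentScale: 2.0, magnification: 1.0) // 2.0 (unchanged)
let s2 = session.update(currentScale: s1, magnification: 1.5)  // 3.0 (2.0 * 1.5)
session.end()
```

In a real visionOS app you'd drive this from a SwiftUI MagnifyGesture's onChanged/onEnded callbacks and write the result into the entity's scale; the rotate gesture works the same way with an accumulated base rotation.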
Sep ’23
Reply to Is it possible to place content on the plane detected in the visionOS simulator?
I am not seeing any changes in visionOS beta 3 with respect to placing content where a user taps on a plane:

- Plane detection providers are still not available
- WorldTrackingProvider still doesn't return after asking it to track an anchor
- The scene reconstruction provider is still missing
- Taps on a scene AnchorEntity of type plane still report inaccurate positions
- Taps on a plane placed at a world AnchorEntity still report inaccurate positions

Feedbacks filed with no response: FB13034747, FB13034803, FB12952565, FB12639395

Maybe beta 4?
Aug ’23
Reply to RealityView attachments do not show up in Vision Pro simulator
Attachments definitely work. You "nominate" an attachment SwiftUI view like so:

```swift
attachments: {
    Text("hello")
        .glassBackgroundEffect()
        .tag("panel") // <---------- NOTE THE TAG
}
```

This closure can return different results as the state of your scene changes. So if you want an attachment to disappear, just stop returning it from here.

After an attachment is nominated, it needs to be added to the scene in the update method of RealityView. First, see if RealityKit has synthesized an entity for the attachment you provided:

```swift
update: { content, attachments in
    let panelEntity = attachments.entity(for: "panel") // <------- NOTE THAT IT MATCHES THE NOMINATED TAG NAME
    // [...]
}
```

Once you have that entity you can transform it, add it as a child of another entity, or add it straight into the content itself:

```swift
content.add(panelEntity)
```
Aug ’23