Not possible per Apple reply here: Link
This feels like it should be obvious, since dropping RealityKit scenes into the world without user interaction is such an important use case, but everything I tried with translation failed.
For some reason, what worked for me was calling box.setPosition(_:relativeTo:) with nil. I'm not sure why that works, given that the documentation says nil means "world space"; it appears to behave as if nil means "parent space" in this case.
import SwiftUI
import RealityKit

class Model: ObservableObject {
    var wall: AnchorEntity?
    var child: ModelEntity?
}

struct ImmersiveView: View {
    @StateObject var model = Model()

    var body: some View {
        RealityView { content in
            // Anchor to the first vertical wall that is at least 2.0 m x 1.5 m.
            let wall = AnchorEntity(
                .plane(.vertical, classification: .wall, minimumBounds: [2.0, 1.5]),
                trackingMode: .continuous
            )
            model.wall = wall

            let mesh = MeshResource.generateBox(size: 0.3)
            let box = ModelEntity(mesh: mesh, materials: [SimpleMaterial(color: .green, isMetallic: false)])
            model.child = box

            wall.addChild(box, preservingWorldTransform: false)
            content.add(wall)
            box.setPosition([0, 0, 0], relativeTo: wall)
        } update: { content in
            if let box = model.child, let wall = model.wall {
                // box.setPosition([0, 0, 0], relativeTo: wall) // <---- DOES NOT WORK
                box.setPosition([0, 0, 0], relativeTo: nil) // <---- DOES WORK even though nil means "world space"????
            }
        }
    }
}
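For what it's worth, Entity's position property is documented as being relative to the parent, so setting it directly might express the same intent without the relativeTo ambiguity. A minimal sketch of that alternative, untested against the beta quirk above:

// Entity.position is expressed in the parent's coordinate space,
// so this should match setPosition([0, 0, 0], relativeTo: wall)
// when wall is the parent. Untested against the beta behavior above.
box.position = [0, 0, 0]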
You're probably going through a moment of "What in the world? That wasn't mentioned anywhere!".
And yeah, a lot of the demonstrations use an AnchorEntity of type plane to insert a Reality Composer scene into the RealityView at a spot in the world that meets the size criteria, or just call content.add(_:) when the RealityView loads for an immersive scene.
It's important to note that these are not "world tracked" entities, and they will not give you accurate location3D values for interactions.
We can use rotate and magnify gestures in these situations, since those gestures report changes relative to their initial value. Tap gestures can even be used as a sort of boolean trigger, but the reported tap location is not reliable; see the sketch below.
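Here's a minimal sketch (mine, not from the original post) of using a tap purely as a trigger, reading which entity was hit and ignoring the unreliable location:

// Sketch: treat the tap as a boolean trigger and ignore its location.
// Assumes the entities already have CollisionComponent and
// InputTargetComponent set (see further down in the thread).
RealityView { content in
    // ... scene setup ...
}
.gesture(
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            // value.entity tells us what was tapped; we deliberately
            // avoid using the tap's reported position.
            value.entity.isEnabled.toggle()
        }
)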
You're also probably asking "How can I make anything interactive enough to feel immersive this way???" And yeah, I don't have a clue.
Maybe if we pay $4000 or get lucky with a developer kit we can figure it out. We can't ask anyone with a developer kit because they're banned from telling us.
Please file a feedback on this to increase pressure to get it added. I've done so on an adjacent issue related to non-"world targeting" entities.
The lack of PlaneDetectionProvider and SceneReconstructionProvider support in the simulator is felt more and more as we run into these issues.
I am not seeing any changes in visionOS beta 3 with respect to placing content where a user taps on a plane.
PlaneDetectionProvider is still not available
WorldTrackingProvider still never returns after being asked to track an anchor
SceneReconstructionProvider is still missing
Taps on a scene AnchorEntity of type plane still report inaccurate positions
Taps on a plane placed at a world AnchorEntity still report inaccurate positions
Feedbacks filed with no response:
FB13034747
FB13034803
FB12952565
FB12639395
Maybe beta 4?
Here are some related topics and posts I've made while trying to get this to work:
https://developer.apple.com/forums/thread/735900
https://developer.apple.com/forums/thread/735558
https://developer.apple.com/forums/thread/735537
https://developer.apple.com/forums/thread/735305
It is not currently possible in the simulator. Fingers crossed for beta 3.
Convert the 2D magnify gesture to a 3D one by modifying it with a "targeted to entity" modifier.
There are a few of these modifiers; the following one enables the gesture on all entities in the scene.
MagnifyGesture().targetedToAnyEntity()
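As a sketch of how that might look in context (the scale bookkeeping here is my own assumption, not from the original reply):

// Sketch: scale the targeted entity with a magnify gesture.
// Assumes the entity has CollisionComponent + InputTargetComponent.
struct MagnifyExample: View {
    @State private var baseScale: SIMD3<Float>?

    var body: some View {
        RealityView { content in
            // ... add entities with collision + input target components ...
        }
        .gesture(
            MagnifyGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Magnification is relative to the gesture's start,
                    // so capture the starting scale once per gesture.
                    if baseScale == nil { baseScale = value.entity.scale }
                    value.entity.scale = (baseScale ?? .one) * Float(value.magnification)
                }
                .onEnded { _ in baseScale = nil }
        )
    }
}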
Try adding a CollisionComponent and an InputTargetComponent to your entity:
let collisionComponent = CollisionComponent(shapes: [ShapeResource.generateBox(width: 2.0, height: 2.0, depth: 0.02)])
interactionEntity.components.set(collisionComponent)
interactionEntity.components.set(InputTargetComponent())
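If your entity is a ModelEntity, you can also let RealityKit derive the collision shape from the mesh instead of hand-building one (assuming the mesh bounds are what you want to hit-test):

// Alternative: derive collision shapes from the model's mesh.
interactionEntity.generateCollisionShapes(recursive: true)
interactionEntity.components.set(InputTargetComponent())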
Attachments definitely work.
You "nominate" an attachment SwiftUI like so:
attachments: {
Text("hello")
.glassBackgroundEffect()
.tag("panel") // <---------- NOTE THE TAG
}
This closure can return different results as the state of your scene changes. So if you want an attachment to disappear, just stop returning it from here.
After an attachment is nominated, it needs to be added to the scene in the update method of RealityView.
First see if RealityKit has synthesized an entity for the attachment you provided:
update: { content, attachments in
let panelEntity = attachments.entity(for: "panel") // <------- NOTE THAT IT MATCHES THE NOMINATED TAG NAME.
// [...]
}
Once you have that entity you can transform it, plop it onto another entity, or add it straight into the content itself.
content.add(panelEntity)
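Putting the whole flow together, a sketch (the struct name and position offset are illustrative, not from the original posts):

struct AttachmentExample: View {
    var body: some View {
        RealityView { content, attachments in
            // Initial setup; the attachment entity may not be
            // synthesized yet on the very first pass.
            if let panel = attachments.entity(for: "panel") {
                content.add(panel)
            }
        } update: { content, attachments in
            // Re-check on each update; to hide the attachment,
            // stop returning it from the attachments closure.
            if let panel = attachments.entity(for: "panel") {
                panel.position = [0, 0.25, 0] // illustrative offset
                content.add(panel)
            }
        } attachments: {
            Text("hello")
                .glassBackgroundEffect()
                .tag("panel")
        }
    }
}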
Looks like an AppleInsider writer got access to someone's DevKit, so I guess they've gone out.
As far as I know, there is a Unity closed beta at the moment.
My guess is that applications built with Flutter and Xamarin will be accepted, provided they function well.
As usual, I don’t expect any communication from Apple until they reject apps during app review.
No response here, either.
Have you added Apple Vision as a run destination for your app's target? This is done automatically for new projects, but you'll need to add "Apple Vision (Designed for iPad)" yourself if you want to run a preexisting codebase in compatibility mode.