I have the following piece of code:
@State var root = Entity()

var body: some View {
    RealityView { content, _ in
        do {
            let _root = try await Entity(named: "Immersive", in: realityKitContentBundle)
            content.add(_root)
            // root = _root   <-- this doesn't trigger the update closure
            Task {
                root = _root  // <-- this does
            }
        } catch {
            print("Error in RealityView's make: \(error)")
        }
    } update: { content, attachments in
        // NOTE: update is not called when root is modified,
        // unless the modification is wrapped in a Task
        print(root)
        // the intent is to use root for positioning attachments
    } attachments: {
        Text("Preview")
            .font(.system(size: 100))
            .background(.pink)
            .tag("initial_text")
    }
} // end body
If I change the root state in the make closure by simply assigning it another entity, the update closure is not called: print(root) prints two empty entities. If I instead wrap the assignment in a Task, the update closure is called and the correct root entity is printed.
Any idea why this is the case?
In general, I'm unsure of the order in which the make, update, and attachments closures are executed. Is there more guidance on what order to expect, and on what we should typically do in each closure?
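For context, this is roughly what I intend to do with root in the update closure once it reflects the loaded scene; a sketch only, where the target entity name "Rocket" and the offset are placeholders:

} update: { content, attachments in
    // Look up the view attachment created from .tag("initial_text") above
    // and position it relative to an entity inside the loaded scene.
    if let preview = attachments.entity(for: "initial_text"),
       let target = root.findEntity(named: "Rocket") {                // placeholder name
        preview.position = target.position + SIMD3<Float>(0, 0.3, 0)  // arbitrary offset
        root.addChild(preview)
    }
}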
Hi,
I'm trying to replicate the grounding shadow shown in this video. However, I couldn't get it to work in the simulator.
My scene, which is rendered as an immersive space, looks like the following:
The rocket object has the grounding shadow component with "cast shadow" set to true:
but I couldn't see any shadow on the plane beneath it.
Things I tried:
adding the grounding shadow component in code (see the sketch right after this list), which didn't work
re-using the IBL from the Hello World sample project to get some lighting on the objects; although the IBL worked, I still couldn't see the shadow (my IBL setup is sketched further below)
adding a DirectionalLight, but I got an error saying that directional lights are not supported in visionOS (despite the docs saying the opposite)
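For the first attempt, this is roughly the code I used; a minimal sketch, where "Immersive" and "Rocket" are placeholders for my scene and the rocket entity:

import SwiftUI
import RealityKit
import RealityKitContent

// Sketch: add the grounding shadow in code inside RealityView's make closure.
struct RocketShadowView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
                if let rocket = scene.findEntity(named: "Rocket") {  // placeholder entity name
                    rocket.components.set(GroundingShadowComponent(castsShadow: true))
                }
            }
        }
    }
}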
A related question on lighting: I can see that the simulator definitely applies some scene lighting to objects, but it doesn't seem to do it perfectly. For example, in the screenshot above I placed the objects under a transparent ceiling that should let in a lot of light, yet everything is still quite dark.
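For reference, this is roughly how I re-used the IBL, following the Hello World sample; a sketch that assumes the environment resource is named "Sunlight" as in that sample:

import RealityKit

// Sketch: attach an image-based light to the scene entity and make it receive that light.
// "Sunlight" is the EnvironmentResource name used by the Hello World sample.
func applyIBL(to scene: Entity) async {
    guard let resource = try? await EnvironmentResource(named: "Sunlight") else { return }
    let ibl = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
    scene.components.set(ibl)
    scene.components.set(ImageBasedLightReceiverComponent(imageBasedLight: scene))
}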
I came to the conclusion that managing screens will be very hard in Vision Pro because:
it's up to the user to arrange multiple screens, and things get messy as soon as you open three windows imo
apps can't control the placement of windows (see the bottom of https://developer.apple.com/documentation/visionos/creating-your-first-visionos-app)
there isn't an easy way for apps to create "anchored" SwiftUI views, because RealityKit doesn't render SwiftUI elements; you'd have to rely on third-party packages like this one (https://github.com/maxxfrazer/RealityUI)
My thoughts are confirmed by this YouTube video: https://youtu.be/GKmXqPG8u-o?t=523
Tell me where I'm wrong. I hope that the first killer app of AVP isn't a screen manager...