GeometryReader3D doesn't work
Hi, I'm trying to place my 3D content relative to my SwiftUI window, but GeometryReader3D doesn't work as I expect. Or maybe I don't fully understand it? When I tap a 3D model in the RealityKit scene, I expect it to align with the center of the SwiftUI window, not with the immersive space origin (0, 0, 0). I also found that GeometryProxy3D.transform(in:) always returns the identity matrix, no matter which coordinate space I pass. The example in the docs pairs GeometryReader3D with a ZStack, which I tried as well, and it still failed. Could someone clarify this a little? Below is a brief layout of my app:

    import SwiftUI
    import RealityKit
    import RealityKitContent

    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }

and the body of my immersive view:

    var body: some View {
        GeometryReader3D { proxy in
            ZStack {
                RealityView { content in
                    if let scene = try? await Entity(named: "ImmersiveScene",
                                                     in: realityKitContentBundle) {
                        content.add(scene)
                    }
                }
                .gesture(TapGesture().targetedToAnyEntity().onEnded { value in
                    // Move the tapped entity to this view's origin,
                    // expressed in immersive-space coordinates.
                    if let trans = proxy.transform(in: .immersiveSpace)?.translation {
                        value.entity.position = SIMD3<Float>(trans.vector)
                    }
                })
            }
        }
    }
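One workaround that seems to sidestep the identity transform: ignore proxy.transform(in:) and convert a SwiftUI point into RealityKit scene space through the gesture value itself. This is only a sketch, and I'm not sure it's the intended pattern, but if EntityTargetValue.convert(_:from:to:) behaves as documented, it should move the tapped entity to the center of the view's bounds:

    .gesture(TapGesture().targetedToAnyEntity().onEnded { value in
        // Center of this view's local bounds, as a Point3D.
        let center = proxy.frame(in: .local).center
        // Map the SwiftUI point into RealityKit scene coordinates,
        // bypassing proxy.transform(in:) entirely.
        value.entity.position = value.convert(center, from: .local, to: .scene)
    })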
1 reply · 0 boosts · 989 views · Jul ’23
How to display stereo images in Apple Vision Pro?
Hi community, I have a pair of stereo images, one for each eye. How should I render them on visionOS? I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode, but I couldn't find any docs about stereo still images. My question can be put more generally: is there any way to render different content for each eye? That would also be helpful for someone who has sight in only one eye.
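One direction that looks plausible: author a shader graph material in Reality Composer Pro that routes a different texture to each eye with a Camera Index Switch node, then set the two images as material parameters from code. A rough sketch, where "StereoScene.usda", "StereoMaterial", "LeftImage", and "RightImage" are hypothetical asset and parameter names:

    import RealityKit
    import RealityKitContent

    // Build a plane that shows one image per eye, assuming a Reality Composer
    // Pro shader graph material with a Camera Index Switch node and two
    // texture parameters. All asset and parameter names are hypothetical.
    func makeStereoImageEntity() async throws -> ModelEntity {
        var material = try await ShaderGraphMaterial(
            named: "/Root/StereoMaterial",
            from: "StereoScene.usda",
            in: realityKitContentBundle
        )
        let left = try TextureResource.load(named: "left_eye", in: realityKitContentBundle)
        let right = try TextureResource.load(named: "right_eye", in: realityKitContentBundle)
        try material.setParameter(name: "LeftImage", value: .textureResource(left))
        try material.setParameter(name: "RightImage", value: .textureResource(right))

        // A simple plane to host the stereo image; size as needed.
        let mesh = MeshResource.generatePlane(width: 0.9, height: 0.6)
        return ModelEntity(mesh: mesh, materials: [material])
    }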
8 replies · 0 boosts · 3.7k views · Jul ’23