
iOS: NavigationStack.navigationDestination breaks ARKit Session used in RealityKit ARView
The code below does the following (this relates to a RealityKit ARView, although I am accessing the underlying ARKit session, as you will see):

1. The app opens to a .nonAR ARView (FirstARView). This shows a red background with a blue box, with a single face of the box viewed straight on.
2. There is some red text that says "Press the button". When you press it, the app uses NavigationStack's .navigationDestination to open a second RealityKit ARView (MagicARView).
3. Before MagicARView opens, the following is done to the ARKit session of the .nonAR FirstARView:
   1. pause()
   2. change the camera mode to .ar
   3. change the background to .cameraFeed()
   4. run()
   5. pause()

Note on why steps 1 to 5 are needed: I previously had an issue when switching from a .nonAR view to an .ar view where the camera feed on the .ar view was just black. The only way to overcome this seems to be running the steps above on the underlying ARKit session before switching to the second .ar view.

When the second (.ar) MagicARView opens, a large blue box appears near the world origin.

Expected behaviour: you can move around the room (e.g. move behind a partial wall) and the box will be occluded by the wall.

Actual behaviour: you can move around the room (e.g. move behind a partial wall) and the box IS NOT occluded by the wall; occlusion is broken.

My assumption: even though you see the .nonAR FirstARView disappear, and even though I go to great lengths to set both the arView in the FirstViewController and the FirstViewController itself to nil, something is being held onto incorrectly in the underlying ARKit session (apparently there is only one per app...).

My solution and question: see the second code block for the solution. If you remove .navigationDestination and use conditionals in ContentView instead, the issue appears to be resolved. Does anyone understand the interaction between .navigationDestination and the ARKit session, and why this is the case? Note that you obviously do not even need to pause() and change the camera mode when doing it this way.

```swift
import SwiftUI
import RealityKit
import ARKit

let configuration: ARWorldTrackingConfiguration = {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }
    config.planeDetection = [.vertical, .horizontal]
    return config
}()

class FirstViewController {
    weak var arView: ARView?
}

class FirstViewControllerWrapper {
    var firstController: FirstViewController?

    init() {
        firstController = FirstViewController()
    }
}

struct ContentView: View {
    @State var loadSecondView: Bool = false
    var firstViewControllerWrapper: FirstViewControllerWrapper?

    init() {
        firstViewControllerWrapper = FirstViewControllerWrapper()
    }

    var body: some View {
        NavigationStack {
            ZStack {
                FirstARView(firstViewController: firstViewControllerWrapper!.firstController!)
                    .onDisappear {
                        print("First view is disappearing")
                    }
                Button(action: {
                    firstViewControllerWrapper?.firstController!.arView?.session.pause()
                    firstViewControllerWrapper?.firstController!.arView?.cameraMode = .ar
                    firstViewControllerWrapper?.firstController!.arView?.environment.background = .cameraFeed()
                    firstViewControllerWrapper?.firstController!.arView?.session.run(configuration)
                    firstViewControllerWrapper?.firstController!.arView?.session.pause()
                    firstViewControllerWrapper?.firstController!.arView = nil
                    firstViewControllerWrapper?.firstController = nil
                    loadSecondView = true
                }) {
                    Text("Press the button").background(.red)
                }
            }
            .navigationDestination(isPresented: $loadSecondView) {
                MagicARView()
            }
        }
    }
}

struct MagicARView: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero, cameraMode: .ar, automaticallyConfigureSession: true)
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
        let boxMesh = MeshResource.generateBox(size: 0.5)
        let boxMaterial = SimpleMaterial(color: .blue, isMetallic: false)
        let model = ModelEntity(mesh: boxMesh, materials: [boxMaterial])
        let modelAnchor = AnchorEntity(world: [0.2, 0.2, 0.2])
        modelAnchor.addChild(model)
        arView.scene.addAnchor(modelAnchor)
        arView.session.run(configuration)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

struct FirstARView: UIViewRepresentable {
    weak var firstViewController: FirstViewController?

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero, cameraMode: .nonAR, automaticallyConfigureSession: true)
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
        let boxMesh = MeshResource.generateBox(size: 0.5)
        let boxMaterial = SimpleMaterial(color: .blue, isMetallic: false)
        let model = ModelEntity(mesh: boxMesh, materials: [boxMaterial])
        let modelAnchor = AnchorEntity(world: [0.2, 0.2, 0.2])
        modelAnchor.addChild(model)
        arView.scene.addAnchor(modelAnchor)
        arView.session.run(configuration)
        arView.environment.background = .color(.red)
        firstViewController!.arView = arView
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
```

Solution (note: ContentView is the only view that changed):

```swift
struct ContentView: View {
    @State var loadSecondView: Bool = false
    var firstViewControllerWrapper: FirstViewControllerWrapper?

    init() {
        firstViewControllerWrapper = FirstViewControllerWrapper()
    }

    var body: some View {
        NavigationStack {
            ZStack {
                if !loadSecondView {
                    FirstARView(firstViewController: firstViewControllerWrapper!.firstController!)
                        .onDisappear {
                            print("First view is disappearing")
                        }
                }
                if loadSecondView {
                    MagicARView()
                }
                Button(action: {
                    loadSecondView = true
                }) {
                    Text("Press the button").background(.red)
                }
            }
        }
    }
}
```
0 replies · 0 boosts · 558 views · Jan ’24
RealityKit - mixing EntityTranslationGestureRecognizer and UIPanGestureRecognizer
Is it possible to have both an EntityTranslationGestureRecognizer and a UIPanGestureRecognizer on the same entity? Once I install an EntityTranslationGestureRecognizer on an entity, the UIPanGestureRecognizer no longer works on that ModelEntity (with collision set up). The usual APIs for managing multiple gestures in a view, such as the delegate method that lets two gesture recognizers recognize a gesture simultaneously,

func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool

do not seem to work with EntityTranslationGestureRecognizer. Thank you.
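For reference, a minimal sketch of the setup being described, assuming the usual RealityKit gesture installation; the coordinator type and helper function are illustrative, not the poster's actual code:

```swift
import UIKit
import RealityKit

final class GestureCoordinator: NSObject, UIGestureRecognizerDelegate {
    // The standard UIKit hook for allowing two recognizers to run at once.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        true
    }

    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
        print("pan translation:", recognizer.translation(in: recognizer.view))
    }
}

func installBothGestures(on arView: ARView, entity: ModelEntity, coordinator: GestureCoordinator) {
    // RealityKit's built-in translation gesture; the entity needs a CollisionComponent.
    entity.generateCollisionShapes(recursive: true)
    let recognizers = arView.installGestures(.translation, for: entity)
    recognizers.first?.delegate = coordinator

    // A plain UIKit pan recognizer on the same ARView. Per the post, wiring a
    // shared delegate like this does not appear to make the two coexist.
    let pan = UIPanGestureRecognizer(target: coordinator,
                                     action: #selector(GestureCoordinator.handlePan(_:)))
    pan.delegate = coordinator
    arView.addGestureRecognizer(pan)
}
```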
0 replies · 0 boosts · 392 views · Jan ’24
iOS 17 Object Capture breaks RealityKit occlusion
I believe that ObjectCaptureView from iOS 17 is causing an issue that breaks RealityKit occlusion. The attached ContentView and App code form a minimal reproducible example of the issue:

1. Load the app, and notice that the yellow box is occluded by the real environment as you move your phone around.
2. Press the Switch button in the center of the screen to go to the ObjectCaptureView.
3. The ObjectCaptureView opens (notice the square in the middle of the screen).
4. Press the Switch button again to go back to the RealityKit view.
5. Notice that the box is no longer occluded by the environment as you move around.

I cannot understand why this is happening. I do nothing except start() and cancel() the ObjectCaptureSession, but even if you remove both start() and cancel() the issue still presents. The issue seems to occur only when you call ObjectCaptureView() somewhere in your code. I have logged a bug with Apple but would really appreciate any feedback; thanks.
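A condensed sketch of the kind of toggle being described, assuming the iOS 17 ObjectCaptureSession API; SwitchableView, OccludedBoxARView, and the scans directory are illustrative names, not the attached code:

```swift
import SwiftUI
import RealityKit

struct SwitchableView: View {
    @State private var showCapture = false
    @State private var session = ObjectCaptureSession()

    var body: some View {
        ZStack {
            if showCapture {
                // Per the post, merely presenting ObjectCaptureView anywhere in the
                // app seems to be enough to break occlusion afterwards.
                ObjectCaptureView(session: session)
            } else {
                OccludedBoxARView()
            }
            Button("Switch") {
                if showCapture {
                    session.cancel()
                } else {
                    // Per the post, the bug reproduces even without start()/cancel().
                    session.start(imagesDirectory: FileManager.default.temporaryDirectory
                        .appendingPathComponent("Scans", isDirectory: true))
                }
                showCapture.toggle()
            }
        }
    }
}

// Hypothetical stand-in for the RealityKit view with occlusion enabled.
struct OccludedBoxARView: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
        let box = ModelEntity(mesh: .generateBox(size: 0.3),
                              materials: [SimpleMaterial(color: .yellow, isMetallic: false)])
        let anchor = AnchorEntity(world: [0, 0, -0.5])
        anchor.addChild(box)
        arView.scene.addAnchor(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
```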
0 replies · 0 boosts · 513 views · Jan ’24
SceneKit rotation: rotating around the X and Y axes only causes Z rotation
I am trying to control the orientation of a box in SceneKit (iOS) using gestures. I am using the translation in x and y to update the x and y rotation of the SCNNode. After a long search I have realised that x and y rotation will always lead to z rotation, thanks to this excellent post: https://gamedev.stackexchange.com/questions/136174/im-rotating-an-object-on-two-axes-so-why-does-it-keep-twisting-around-the-thir?newreg=130c66c673f848a7be2873bf675573a9

So I am trying to work out the z rotation that this causes and then remove it from my object by applying the inverse quaternion. However, when I rotate the object 90 degrees around x and then 90 degrees around y, it behaves VERY weirdly: almost as if it were in gimbal lock, though I did not think that using quaternions the way I am would cause gimbal lock like this. I am sure there is something I am missing, or perhaps I simply cannot remove the z rotation this way. Thanks!

I have added a video of the strange behaviour here: https://github.com/marcusraty/RotationExample/blob/main/Example.MP4 and the code example is here: https://github.com/marcusraty/RotationExample
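Not the poster's code, but for comparison: a common way to avoid the induced z twist is to apply each drag increment as a rotation about the fixed world axes, pre-multiplying the node's orientation instead of accumulating x/y angles; a minimal sketch, with the sensitivity constant as a tuning assumption:

```swift
import SceneKit
import simd

// Applies a pan delta as yaw about the world y axis and pitch about the world x axis.
// Pre-multiplying simdWorldOrientation keeps the rotation axes fixed in world space,
// so no roll about z accumulates the way chained local x/y rotations do.
func applyPan(deltaX: Float, deltaY: Float, to node: SCNNode) {
    let sensitivity: Float = 0.01 // radians per point of gesture translation (assumed)
    let yaw = simd_quatf(angle: deltaX * sensitivity, axis: SIMD3<Float>(0, 1, 0))
    let pitch = simd_quatf(angle: deltaY * sensitivity, axis: SIMD3<Float>(1, 0, 0))
    node.simdWorldOrientation = simd_normalize(yaw * pitch * node.simdWorldOrientation)
}
```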
0 replies · 0 boosts · 1k views · Dec ’23
iOS 17 ObjectCaptureSession beginNewScanPassAfterFlip
I am trying to implement Object Capture using ObjectCaptureSession in iOS 17. I have been following the example supplied by Apple, but I cannot get the session object into the correct state for ObjectCaptureSession.beginNewScanPassAfterFlip() to be called. When I call session.beginNewScanPassAfterFlip() I get the following error:

Can't beginNewScanPassAfterFlip() from state == capturing Must be .paused from .capturing

To start with, there is no state of ObjectCaptureSession called .paused, so is this referring to .isPaused?

- I have tried using session.pause() and confirmed that the session is paused via .isPaused, but I get the same error as above.
- I have checked the output of session.state and confirmed that it is .capturing.
- I have put print statements in the example and confirmed that, immediately before session.beginNewScanPassAfterFlip() is called at line 104 of OnboardingButtonView, the state is .capturing.

This goes against the documentation on this page: https://developer.apple.com/documentation/realitykit/objectcapturesession/beginnewscanpassafterflip()

Note: I have also tried pausing the session and then calling beginNewScanPassAfterFlip(), but this results in the warning shown above. I am hoping for some clarification in case there is something I am missing.
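For concreteness, a condensed sketch of the sequence described above; it assumes an ObjectCaptureSession that is already capturing, with all setup elided:

```swift
import RealityKit

// `session` is an ObjectCaptureSession currently in the .capturing state.
@MainActor
func attemptFlip(session: ObjectCaptureSession) {
    print(session.state)      // .capturing, per the post
    session.pause()
    print(session.isPaused)   // true, yet the call below still fails
    session.beginNewScanPassAfterFlip()
    // -> "Can't beginNewScanPassAfterFlip() from state == capturing"
}
```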
0 replies · 0 boosts · 617 views · Nov ’23
Save ARKit-created and rendered geometry and align it later in SceneKit
I am exploring ARKit and SceneKit, but I am not sure whether what I want to do is possible.

In App1:

- Run an ARKit session using configuration.sceneReconstruction = .mesh.
- Render the mesh in func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) and func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor); the mesh appears correct.
- A button does the following:
  1. Capture the mesh to an .obj file (based off the excellent answer here).
  2. Capture a snapshot (SCNView snapshot).
  3. Store the AR camera transform.
  4. Store the AR SCN view point-of-view projection transform.

The transforms are stored in a CSV, taking care to restore them in column-major order.

In App2:

- Load the geometry from the .obj file into an SCNNode, apply no transform to it, and apply a wireframe so I can visualise it.
- Add a camera and apply the two saved transforms (the AR camera transform, then the AR SCN view point-of-view projection transform).
- Set the background of the scene to the snapshot image.

I was expecting the mesh, as visualised by the wireframe in App2, to match the rendering captured at a point in time in App1; however, I cannot get them to match. I have two specific questions:

1. Have I missed something fundamental in my understanding of what is possible? Should this be possible, and could I be missing a step? The codebase is large, but I have put the basic outline here.
2. What is the difference between the SCNCamera projection transform and the ARCamera projection matrix? They appear different in App1, which I did not expect.

Thanks very much.
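For illustration, a minimal sketch of the capture-and-restore step under the setup above; the function names are hypothetical, and the CSV round trip is elided:

```swift
import ARKit
import SceneKit

// App1: grab the two transforms at the moment of capture.
@MainActor
func captureCameraState(from sceneView: ARSCNView) -> (camera: simd_float4x4, projection: SCNMatrix4)? {
    guard let frame = sceneView.session.currentFrame,
          let pov = sceneView.pointOfView, let cam = pov.camera else { return nil }
    return (frame.camera.transform,   // world transform of the AR camera (column-major)
            cam.projectionTransform)  // projection used by the view's point of view
}

// App2: rebuild a camera node from the saved values.
func makeCameraNode(camera: simd_float4x4, projection: SCNMatrix4) -> SCNNode {
    let node = SCNNode()
    node.camera = SCNCamera()
    node.simdTransform = camera                   // restore in column-major order
    node.camera?.projectionTransform = projection // overrides fov/near/far settings
    return node
}
```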
0 replies · 0 boosts · 538 views · Oct ’23