I am exploring ARKit and SceneKit, but I am not sure if what I want to do is possible.
In App1:
- Run an ARKit session using configuration.sceneReconstruction = .mesh
- I am rendering the mesh in func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) and func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor), and the mesh appears correct (a trimmed-down sketch of this setup is below the list)
- I have set up a button that does the following:
- Capture the mesh to an .obj file (based on the excellent answer here)
- Capture a snapshot (SCNView snapshot)
- Store the ARCamera transform
- Store the projection transform of the ARSCNView's pointOfView camera
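For context, here is a trimmed-down version of the App1 session and delegate code. The class and helper names are simplified, and SCNGeometry(from:) is a placeholder for my own helper that converts ARMeshGeometry into SCNGeometry sources/elements:

```swift
import ARKit
import SceneKit
import UIKit

class ScanViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            configuration.sceneReconstruction = .mesh
        }
        sceneView.session.run(configuration)
    }

    // Build a geometry for each ARMeshAnchor.
    // SCNGeometry(from:) is my own conversion helper (placeholder name).
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let meshAnchor = anchor as? ARMeshAnchor else { return }
        node.geometry = SCNGeometry(from: meshAnchor.geometry)
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let meshAnchor = anchor as? ARMeshAnchor else { return }
        node.geometry = SCNGeometry(from: meshAnchor.geometry)
    }
}
```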
I store the transforms in a CSV file, taking care to write and restore them in column-major order.
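The capture button, inside the same view controller, roughly does this (writeCSVRow(_:), saveSnapshot(_:) and exportMeshToOBJ() are placeholders for my own code; the matrices are flattened column by column):

```swift
// Continues the view controller above. Each simd_float4x4 is flattened
// column-major (c0.x, c0.y, c0.z, c0.w, c1.x, ...) before being written.
@IBAction func captureTapped(_ sender: Any) {
    guard let frame = sceneView.session.currentFrame,
          let scnCamera = sceneView.pointOfView?.camera else { return }

    // 1. Snapshot of what the ARSCNView is currently rendering
    let snapshot: UIImage = sceneView.snapshot()

    // 2. ARCamera transform (device camera pose in world space)
    let cameraTransform: simd_float4x4 = frame.camera.transform

    // 3. Projection transform of the SCNCamera driving the ARSCNView
    let projection: simd_float4x4 = SCNMatrix4ToMat4(scnCamera.projectionTransform)

    func flatten(_ m: simd_float4x4) -> [Float] {
        [m.columns.0, m.columns.1, m.columns.2, m.columns.3]
            .flatMap { [$0.x, $0.y, $0.z, $0.w] }
    }

    let row = (flatten(cameraTransform) + flatten(projection))
        .map { String($0) }
        .joined(separator: ",")

    writeCSVRow(row)        // placeholder for my CSV writing
    saveSnapshot(snapshot)  // placeholder
    exportMeshToOBJ()       // follows the linked answer (Model I/O export)
}
```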
In App2:
- Load the geometry from the .obj file into an SCNNode, apply no transform to it, and render it as a wireframe so I can visualise it
- Add a camera and apply the two saved transforms (the ARCamera transform as the camera node's transform, then the saved projection transform on the SCNCamera)
- Set the background of the scene to be the image from the snapshot (a stripped-down sketch of this setup follows)
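The App2 side, reduced to the relevant parts (the CSV reading is omitted, the wireframe here is done with fillMode, and the function signature is just for illustration):

```swift
import SceneKit
import UIKit

// Rebuilds the scene in App2 from the captured artefacts.
func buildScene(objURL: URL,
                snapshot: UIImage,
                cameraTransform: simd_float4x4,      // saved ARCamera transform
                projectionTransform: simd_float4x4,  // saved projection transform
                in scnView: SCNView) throws {
    let scene = SCNScene()

    // Mesh from the .obj file, no transform applied, shown as a wireframe
    let meshScene = try SCNScene(url: objURL, options: nil)
    for child in meshScene.rootNode.childNodes {
        child.geometry?.firstMaterial?.fillMode = .lines
        scene.rootNode.addChildNode(child)
    }

    // Camera posed with the saved ARCamera transform and the saved projection
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.simdTransform = cameraTransform
    cameraNode.camera?.projectionTransform = SCNMatrix4FromMat4(projectionTransform)
    scene.rootNode.addChildNode(cameraNode)

    // Snapshot from App1 as the scene background
    scene.background.contents = snapshot

    scnView.scene = scene
    scnView.pointOfView = cameraNode
}
```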
I was expecting that the mesh, as visualised by the wireframe in App2, would match the rendering captured at that point in time in App1, but I cannot get it to match.
I have two specific questions:
- Have I missed something fundamental in my understanding of what is possible? Should this work, and am I missing a step? The codebase is large, but I have put the basic outline here.
- What is the difference between SCNCamera's projectionTransform and ARCamera's projectionMatrix? They appear different in App1, which I did not expect (comparison snippet below).
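For reference, this is roughly how I am comparing the two in App1 at the moment of capture:

```swift
// Logged in App1 when the capture button is pressed
if let frame = sceneView.session.currentFrame,
   let scnCamera = sceneView.pointOfView?.camera {
    let arProjection = frame.camera.projectionMatrix                     // simd_float4x4
    let scnProjection = SCNMatrix4ToMat4(scnCamera.projectionTransform)  // simd_float4x4
    print("ARCamera.projectionMatrix:\n\(arProjection)")
    print("SCNCamera.projectionTransform:\n\(scnProjection)")
}
```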
Thanks very much