Anchor points from a 2D texture via vertex shader

I'm trying to map the positions of points from an image onto the location of an SCNNode, using SCNNodeRendererDelegate during an ARSession.

I have the node in the scene, and I am sending uniforms to the renderer using the arguments from `renderNode(_ node: SCNNode, renderer: SCNRenderer, arguments: [String : Any])`:

    let modelTransform = arguments["kModelTransform"] as! SCNMatrix4
    let viewTransform = arguments["kViewTransform"] as! SCNMatrix4
    let modelViewTransform = arguments["kModelViewTransform"] as! SCNMatrix4
    let modelViewProjectionTransform = arguments["kModelViewProjectionTransform"] as! SCNMatrix4
    let projectionTransform = arguments["kProjectionTransform"] as! SCNMatrix4
    let normalsTransform = arguments["kNormalTransform"] as! SCNMatrix4
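For context, I then copy those into my own struct and write it into a Metal buffer, roughly like this (`NodeUniforms` and `uniformsBuffer` are my own names, not SceneKit API):

```swift
// NodeUniforms is my own struct; it mirrors the layout of the
// constant buffer my vertex shader reads.
struct NodeUniforms {
    var modelTransform: simd_float4x4
    var viewTransform: simd_float4x4
    var modelViewProjectionTransform: simd_float4x4
    var projectionTransform: simd_float4x4
}

func renderNode(_ node: SCNNode, renderer: SCNRenderer, arguments: [String: Any]) {
    var uniforms = NodeUniforms(
        modelTransform: SCNMatrix4ToMat4(arguments["kModelTransform"] as! SCNMatrix4),
        viewTransform: SCNMatrix4ToMat4(arguments["kViewTransform"] as! SCNMatrix4),
        modelViewProjectionTransform: SCNMatrix4ToMat4(arguments["kModelViewProjectionTransform"] as! SCNMatrix4),
        projectionTransform: SCNMatrix4ToMat4(arguments["kProjectionTransform"] as! SCNMatrix4)
    )
    // uniformsBuffer is an MTLBuffer I allocated elsewhere
    memcpy(uniformsBuffer.contents(), &uniforms, MemoryLayout<NodeUniforms>.stride)
}
```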

In the vertex shader, I unproject each depth pixel to a camera-space position using the camera intrinsics (`intrinsics = session.currentFrame!.camera.intrinsics`):

    uint2 pos; // specified in pixel coordinates, normalizing?
    pos.y = vertexID / depthTexture.get_width();
    pos.x = vertexID % depthTexture.get_width();

    float depthMultiplier = 100.0f;
    float depth = depthTexture.read(pos).x * depthMultiplier;

    // pinhole unprojection: cameraIntrinsics[2] holds the principal point,
    // [0][0] and [1][1] hold the focal lengths
    float xrw = (pos.x - cameraIntrinsics[2][0]) * depth / cameraIntrinsics[0][0];
    float yrw = (pos.y - cameraIntrinsics[2][1]) * depth / cameraIntrinsics[1][1];

    float4 xyzw = { xrw, yrw, depth, 1.f };

My goal is to calculate the clip-space position for each vertex using the node uniforms. I've multiplied by the model-view-projection matrix a number of ways, but almost every time the points are either skewed on the image plane or, if projected properly, don't adhere to the position of the modelTransform I pass in (i.e. I raycast out to get a transform, set the node there, and then use the node's renderer callback to grab its transform and pass it into the vertex shader).
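The placement step I mentioned looks roughly like this (`sceneView` is my ARSCNView, and the query options are just what I happened to pick):

```swift
// Raycast from a screen point, place the node at the hit, and later let
// the node renderer delegate hand back that node's transforms.
if let query = sceneView.raycastQuery(from: screenPoint,
                                      allowing: .estimatedPlane,
                                      alignment: .any),
   let result = sceneView.session.raycast(query).first {
    node.simdWorldTransform = result.worldTransform
    sceneView.scene.rootNode.addChildNode(node)
}
```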

Which transform matrices should I multiply the vertex by? I'm currently using `session.currentFrame?.camera` to get `camera.viewMatrix(for: .portrait)`, but should I use the node's viewTransform instead? Likewise, I get the projection matrix from `camera.projectionMatrix(for: .portrait, viewportSize: renderer.currentViewport.size, zNear: CGFloat(znear), zFar: CGFloat(zfar))`, but should I use the node's projectionTransform, or its modelViewProjectionTransform? Could I just compute `nodeUniforms.modelViewProjectionTransform * xyzw` in the shader?
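Concretely, the two variants I've been trying at the end of the vertex function look like this (`cameraProjection` here is the `camera.projectionMatrix(...)` result uploaded from the Swift side):

```metal
// Variant 1: SceneKit's combined matrix from the delegate callback.
// But xyzw is already a camera-space point, so does this wrongly
// apply model and view a second time?
float4 clipA = nodeUniforms.modelViewProjectionTransform * xyzw;

// Variant 2: treat xyzw as camera-space and apply only the ARCamera
// projection, skipping model and view entirely. Do I still need a
// sign flip, since pixel y grows downward but camera-space y is up?
float4 clipB = cameraProjection * xyzw;
```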

If you need more clarification about what I am trying to do, let me know!

Thanks
