Hi there!
I have accomplished rendering the point cloud via a Metal texture with depth. I plugged in gestures to manipulate the object on top of the camera feed, and I can investigate it up close like any volumetric point cloud. However, I am now trying to anchor it to an ARAnchor so that I can move around my physical space and inspect the stationary cloud. I have an ARSession running, as well as a custom Renderer that handles Metal. I think it comes down to computing the final view matrix, which is then set on the renderEncoder:
renderEncoder.setVertexBytes(&finalViewMatrix, length: MemoryLayout.size(ofValue: finalViewMatrix), index: 0)
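To be concrete, this is the kind of per-frame derivation I have in mind (just a sketch; orientation and viewportSize are placeholders for whatever the renderer already tracks):

import ARKit
import simd

// Sketch: pull the camera's view and projection matrices for the current frame.
func cameraMatrices(for frame: ARFrame,
                    orientation: UIInterfaceOrientation,
                    viewportSize: CGSize) -> (view: simd_float4x4, projection: simd_float4x4) {
    let view = frame.camera.viewMatrix(for: orientation)              // world -> camera
    let projection = frame.camera.projectionMatrix(for: orientation,
                                                   viewportSize: viewportSize,
                                                   zNear: 0.001,
                                                   zFar: 1000)        // camera -> clip
    return (view, projection)
}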
I believe I can solve the anchoring issue with the correct matrix math. My guess is that I need the world transform of the space the anchor sits in, plus the anchor's own local model matrix, which I would multiply with the point cloud's model matrix, effectively parenting the cloud to the anchor. Then I could multiply the projection and view matrices with that combined model matrix.
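In code, here is a rough sketch of what I mean (encodeAnchoredCloud and pointCloudLocalTransform are placeholder names I made up, not existing API). The idea is that vertices go anchor-local space → world (via anchor.transform) → camera (view) → clip (projection):

import ARKit
import Metal
import simd

// Sketch: compose the matrix handed to the vertex shader so the cloud is
// parented to the anchor. pointCloudLocalTransform is the cloud's transform
// relative to the anchor (identity if the vertices are already anchor-local).
func encodeAnchoredCloud(renderEncoder: MTLRenderCommandEncoder,
                         anchor: ARAnchor,
                         view: simd_float4x4,
                         projection: simd_float4x4,
                         pointCloudLocalTransform: simd_float4x4) {
    // anchor.transform maps anchor-local space into ARKit world space, so the
    // cloud should stay fixed in the world while the view matrix follows the device.
    let model = anchor.transform * pointCloudLocalTransform
    var finalViewMatrix = projection * view * model
    renderEncoder.setVertexBytes(&finalViewMatrix,
                                 length: MemoryLayout<simd_float4x4>.stride,
                                 index: 0)
}

I would then call something like this each frame, with the matrices from the current ARFrame and the anchor I previously added to the session.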
Does this sound like a reasonable way to go about it? I have already tried several approaches and haven't quite achieved it; in particular, when I move the physical device forwards and backwards, the cloud moves with the device.
Thank you!