Much of this question is adapted from the idea of building an SCNGeometry from an ARMeshGeometry, as described in this very helpful post by @gchiste.
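For context, the conversion I am working from looks roughly like the following. This is a minimal sketch along the lines of that post, wrapping the mesh's Metal buffers directly; it assumes the geometry comes from an ARMeshAnchor, and note that no texture coordinate source exists at this point:

```swift
import ARKit
import SceneKit

extension SCNGeometry {
    // Bridge an ARMeshGeometry into an SCNGeometry (vertices, normals, faces).
    convenience init(arGeometry: ARMeshGeometry) {
        let vertices = SCNGeometrySource(arGeometry.vertices, semantic: .vertex)
        let normals = SCNGeometrySource(arGeometry.normals, semantic: .normal)
        let faces = SCNGeometryElement(arGeometry.faces)
        self.init(sources: [vertices, normals], elements: [faces])
    }
}

extension SCNGeometrySource {
    // Wrap an ARGeometrySource's Metal buffer without copying.
    convenience init(_ source: ARGeometrySource, semantic: Semantic) {
        self.init(buffer: source.buffer,
                  vertexFormat: source.format,
                  semantic: semantic,
                  vertexCount: source.count,
                  dataOffset: source.offset,
                  dataStride: source.stride)
    }
}

extension SCNGeometryElement {
    // Copy the face index buffer out of Metal memory into Data.
    convenience init(_ source: ARGeometryElement) {
        let byteCount = source.count * source.indexCountPerPrimitive * source.bytesPerIndex
        let data = Data(bytes: source.buffer.contents(), count: byteCount)
        self.init(data: data,
                  primitiveType: .triangles, // ARKit scene meshes are triangle lists
                  primitiveCount: source.count,
                  bytesPerIndex: source.bytesPerIndex)
    }
}
```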
In my app, I am creating an SCNScene with my scanned ARMeshGeometry built as SCNGeometry, and would like to apply a "texture" to the scene, replicating what the camera saw as each mesh was built. The end goal is to create a 3D model somewhat representative of the scanned environment.
My understanding of texturing (and UV maps) is quite limited, but my general thought is that I would need to generate texture coordinates for each mesh's vertices, then sample the corresponding ARFrame's capturedImage and apply it to the mesh as a material. A sketch of what I have in mind follows below.
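Concretely, I imagine the projection working something like this. Assuming access to the ARMeshAnchor and the ARFrame that was current when the anchor updated, each vertex could be moved into world space and projected into the captured image via ARCamera's projectPoint(_:orientation:viewportSize:). The helper name textureCoordinates(for:in:) is purely illustrative, and this ignores occlusion and vertices that fall outside the frame:

```swift
import ARKit
import SceneKit

// Illustrative helper: one UV per vertex, obtained by projecting the
// world-space vertex into the frame's captured image.
func textureCoordinates(for meshAnchor: ARMeshAnchor, in frame: ARFrame) -> SCNGeometrySource {
    let camera = frame.camera
    let imageSize = camera.imageResolution  // pixel size of frame.capturedImage
    let vertices = meshAnchor.geometry.vertices
    var uvs = [CGPoint]()
    uvs.reserveCapacity(vertices.count)

    for index in 0..<vertices.count {
        // Read the vertex out of the Metal buffer (format is .float3).
        let pointer = vertices.buffer.contents()
            .advanced(by: vertices.offset + vertices.stride * index)
        let local = pointer.assumingMemoryBound(to: SIMD3<Float>.self).pointee

        // Local mesh space -> world space via the anchor's transform.
        let world4 = meshAnchor.transform * SIMD4<Float>(local.x, local.y, local.z, 1)
        let world = SIMD3<Float>(world4.x, world4.y, world4.z)

        // World space -> pixel coordinates in the captured image
        // (capturedImage is natively landscape-right).
        let pixel = camera.projectPoint(world,
                                        orientation: .landscapeRight,
                                        viewportSize: imageSize)

        // Normalize to [0, 1] texture coordinates.
        uvs.append(CGPoint(x: pixel.x / imageSize.width,
                           y: pixel.y / imageSize.height))
    }
    return SCNGeometrySource(textureCoordinates: uvs)
}
```

The resulting source would presumably be appended to the geometry's sources alongside the vertex and normal sources, with the captured image set as the material's diffuse contents, but I am unsure whether this per-frame projection is the right overall approach.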
Is there any particular documentation or general guidance anyone could provide on creating such an output?