Hi All,
I want to know the exact definition of the world and camera coordinate systems used in ARKit positional tracking.
I have already read the "Understanding World Tracking" article in the ARKit documentation.
That article states: "In all AR experiences, ARKit uses world and camera coordinate systems following a right-handed convention: the y-axis points upward, and (when relevant) the z-axis points toward the viewer and the x-axis points toward the viewer's right."
But when I obtain and visualize the ARKit 4x4 transformation matrix with "let T_gc = frame.camera.transform",
the camera coordinate frame attached to the iPhone XS looks very different from Apple's explanation.
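For reference, here is the snippet I use to inspect the transform. This is a minimal sketch (it assumes an active ARSession delivering frames; the helper name printCameraFrame is my own): the columns of frame.camera.transform give the camera frame's axes and origin expressed in ARKit world coordinates.

```swift
import ARKit
import simd

// Minimal sketch, assuming an active ARSession is delivering ARFrames.
// frame.camera.transform is a simd_float4x4 mapping camera -> world:
// columns 0..2 are the camera's x/y/z axes in world coordinates,
// column 3 is the camera's position in world coordinates.
func printCameraFrame(_ frame: ARFrame) {
    let T = frame.camera.transform
    let xAxis = simd_make_float3(T.columns.0)     // camera x-axis in world coords
    let yAxis = simd_make_float3(T.columns.1)     // camera y-axis in world coords
    let zAxis = simd_make_float3(T.columns.2)     // camera z-axis in world coords
    let position = simd_make_float3(T.columns.3)  // camera origin in world coords

    print("x-axis:", xAxis)
    print("y-axis:", yAxis)
    print("z-axis:", zAxis)
    print("position:", position)

    // Right-handedness check: x cross y should equal z (up to numerical noise).
    print("x cross y:", simd_cross(xAxis, yAxis))
}
```

One thing I noticed while experimenting (please correct me if I am wrong): the camera's own axes appear to be fixed relative to the device in landscape orientation rather than following the UI orientation, which may be part of why the visualized frame looks different from the documentation figure.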
Here is my guess of the world and camera coordinate systems used in ARKit positional tracking.
https://github.com/PyojinKim/ARKit-Data-Logger/blob/master/screenshot.png
Can somebody tell me whether this guess is correct?
Also, could you point me to the exact documentation that explains the coordinate systems used in ARKit?
Thanks.