Posts

Post not yet marked as solved
2 Replies
1.5k Views
Hi all,

In the Core Motion framework, the system provides not only the raw values recorded by the hardware (IMU) but also a processed version of those values. The Apple documentation says that "Processed values do not include forms of bias that might adversely affect how you use that data."

What kinds of sensor fusion algorithms are applied internally to obtain the processed (unbiased) acceleration and gyro values? Is there any documentation explaining these sensor fusion algorithms, such as an EKF or a complementary filter?

Thanks,
Pyojin Kim.
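For concreteness, here is a minimal sketch of how the raw and processed values are read side by side through CMMotionManager; the update rate, queue, and logging are illustrative choices, not Apple recommendations, and the internal fusion algorithm behind the processed values is exactly what the question asks about.

```swift
import CoreMotion

let motionManager = CMMotionManager()

if motionManager.isGyroAvailable && motionManager.isDeviceMotionAvailable {
    // 100 Hz is an illustrative rate, not a recommendation.
    motionManager.gyroUpdateInterval = 1.0 / 100.0
    motionManager.deviceMotionUpdateInterval = 1.0 / 100.0

    // Raw gyroscope samples: straight from the hardware, bias included.
    motionManager.startGyroUpdates(to: .main) { data, _ in
        guard let rate = data?.rotationRate else { return }
        print("raw gyro:", rate.x, rate.y, rate.z)
    }

    // Processed values from CMDeviceMotion: the rotation rate is bias-compensated,
    // and the acceleration is already separated into gravity and userAcceleration.
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let motion = motion else { return }
        print("processed gyro:", motion.rotationRate.x, motion.rotationRate.y, motion.rotationRate.z)
        print("user accel:", motion.userAcceleration.x, motion.userAcceleration.y, motion.userAcceleration.z)
    }
}
```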
Post marked as solved
2 Replies
3.6k Views
Hi All,

I want to know the exact definition of the world and camera coordinate systems used in ARKit positional tracking. Of course, I have already read the "Understanding World Tracking" article in the ARKit documentation. There it is written: "In all AR experiences, ARKit uses world and camera coordinate systems following a right-handed convention: the y-axis points upward, and (when relevant) the z-axis points toward the viewer and the x-axis points toward the viewer's right."

But when I obtain and visualize the ARKit 4x4 transformation matrix with "let T_gc = frame.camera.transform", the camera coordinate frame attached to the iPhone Xs looks very different from Apple's explanation. Here is my guess at the world and camera coordinate systems used in ARKit positional tracking:

https://github.com/PyojinKim/ARKit-Data-Logger/blob/master/screenshot.png

Can somebody tell me whether this guess is right or wrong? Also, could you point me to the exact documentation explaining the coordinate systems used in ARKit?

Thanks.
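For reference, a minimal sketch of how the matrix in question can be obtained and decomposed is below. Reading the columns as the camera axes expressed in the world frame follows the usual simd column-major convention; that interpretation is an assumption here, not something the quoted documentation states.

```swift
import ARKit
import simd

// Minimal sketch: decompose the camera-to-world transform from an ARFrame.
func logCameraPose(_ frame: ARFrame) {
    // T_gc maps points from the camera coordinate frame to the world (global) frame.
    let T_gc = frame.camera.transform

    // Assumption: simd matrices are column-major, so columns 0-2 are the camera's
    // x/y/z axes expressed in world coordinates and column 3 is the camera position.
    let xAxis = simd_make_float3(T_gc.columns.0)
    let yAxis = simd_make_float3(T_gc.columns.1)
    let zAxis = simd_make_float3(T_gc.columns.2)
    let position = simd_make_float3(T_gc.columns.3)

    print("camera x-axis in world:", xAxis)
    print("camera y-axis in world:", yAxis)
    print("camera z-axis in world:", zAxis)
    print("camera position in world:", position)
}
```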