Posts

14 Replies · 3.1k Views
I'm developing an app that requires the front-facing (TrueDepth) face-tracking capability inside an ARKit world-tracking session configuration. This functionality (using the front and back cameras simultaneously in ARKit) has worked wonderfully since it was introduced with ARKit 3 last summer...on my iPhones with TrueDepth cameras. I received my new iPad Pro 2020 today, fired up my app, and was shocked to discover that ARWorldTrackingConfiguration.supportsUserFaceTracking returns false. This almost cannot be the case, as it dramatically defeats the purpose of the device, which is to enable next-generation AR experiences. I've verified this with Apple's own demo app "Combining User Face-Tracking and World Tracking", which doesn't run on the iPad Pro. I have also verified that face tracking does work in ARFaceTrackingConfiguration.

Can anyone explain this or offer a workaround or solution? It seems impossible that this crucial functionality isn't supported on this particular device.
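For reference, this is a minimal sketch of the runtime check involved (the fallback to a plain ARFaceTrackingConfiguration here is only illustrative, not what my app actually wants to do):

```swift
import ARKit

func makeConfiguration() -> ARConfiguration {
    if ARWorldTrackingConfiguration.supportsUserFaceTracking {
        // World tracking with the user's face tracked simultaneously by the TrueDepth camera.
        let config = ARWorldTrackingConfiguration()
        config.userFaceTrackingEnabled = true
        return config
    } else if ARFaceTrackingConfiguration.isSupported {
        // Fallback: face tracking only — this is the branch the 2020 iPad Pro ends up in.
        return ARFaceTrackingConfiguration()
    } else {
        return ARWorldTrackingConfiguration()
    }
}

// Usage, e.g. in viewDidLoad:
// sceneView.session.run(makeConfiguration())
```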
Posted by jseymour.
3 Replies · 2k Views
My Swift ARKit app needs the position and orientation of the face relative to the front-facing camera. If I set ARConfiguration.worldAlignment = .camera, all I need to do is read faceAnchor.transform, which works perfectly; but my app needs to run in the default worldAlignment = .gravity so it can work properly with virtual content. In this mode I can get faceAnchor.transform and camera.transform, which are both supplied in world coordinates.

How can I use those transforms, or other data/methods, to get the face anchor in camera coordinates? (Specifically, I need a transform that comes out the same as the direct result from worldAlignment = .camera.) I've tried multiplying them together as well as multiplying one by the other's inverse, in all four order combinations, but none of these results works. I think I am missing something basic. Help?!?!
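For context, the textbook change of basis I expected to work is sketched below. It is one of the combinations I already tried, so the mismatch may come from ARKit's interface-orientation or front-camera mirroring conventions rather than the math itself:

```swift
import ARKit
import simd

// Express the face anchor in the camera's coordinate space, given that both
// faceAnchor.transform and camera.transform are world-space (.gravity) transforms:
//   T_camera_face = inverse(T_world_camera) * T_world_face
func faceTransformInCameraSpace(faceAnchor: ARFaceAnchor,
                                camera: ARCamera) -> simd_float4x4 {
    return simd_mul(camera.transform.inverse, faceAnchor.transform)
}
```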
Posted by jseymour.