How are the blendshape coefficients obtained in face tracking with ARKit?

I have run the face tracking app and I am very interested in it. There are some questions I do not understand, and I hope someone can help me answer them. In face tracking with ARKit, we can get more than 50 unique blendshape coefficients. How are these coefficients calculated? Does ARKit automatically split the face into multiple blendshape targets, and how are those targets divided? Another question: if I want to use the iPhone X to drive the expressions of a 3D model, like Animoji, do I need to make blendshapes for the 3D model in advance?
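For context, this is roughly how I am starting the session (a minimal sketch; the sceneView name is just a placeholder for whatever ARSCNView the app uses):

import ARKit

// Minimal face-tracking session setup. Face tracking needs the TrueDepth
// camera, so check for support before running the configuration.
func startFaceTracking(in sceneView: ARSCNView) {
    guard ARFaceTrackingConfiguration.isSupported else {
        print("Face tracking requires a TrueDepth camera (iPhone X).")
        return
    }
    let configuration = ARFaceTrackingConfiguration()
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}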

Replies

The coefficients (around 51 normalized values) are something Apple gives you for free in the framework. If I am not wrong, it is some kind of FaceShift technology that simply scans the user's face, wraps a generic Face AR mesh around it, and weights it to match and follow the user's face, producing the blendshape values in real time at 60 fps.
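You can watch those values arrive every frame in the ARSCNViewDelegate callback. A rough sketch (the two keys printed here are just examples out of the ~51 coefficients):

import ARKit
import SceneKit

// Each updated ARFaceAnchor carries the current blendshape coefficients as
// a dictionary of named locations to normalized values in 0.0...1.0.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    let blendShapes = faceAnchor.blendShapes  // [ARFaceAnchor.BlendShapeLocation: NSNumber]
    let jawOpen = blendShapes[.jawOpen]?.floatValue ?? 0
    let eyeBlinkLeft = blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
    print("jawOpen: \(jawOpen), eyeBlinkLeft: \(eyeBlinkLeft)")
}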


3D artists and developers can then use the values to further match and tweak their own 3D head mesh creations (these can be creatures, etc.), or even use the blendshape coefficients for other purposes, as shown in the sketch below.
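For the second question, the usual approach is to author the morph targets (blendshapes) on the 3D model in advance and then map the coefficients onto them each frame. A sketch, assuming the mesh's morph targets are named after ARKit's blendshape keys (that naming convention is an assumption about your asset, not something ARKit requires):

import ARKit
import SceneKit

// Copy each ARKit blendshape coefficient onto the matching morph target of
// a custom head node. headNode and its target names come from your own asset.
func apply(_ faceAnchor: ARFaceAnchor, to headNode: SCNNode) {
    guard let morpher = headNode.morpher else { return }
    for (location, weight) in faceAnchor.blendShapes {
        // location.rawValue is e.g. "jawOpen" or "eyeBlinkLeft".
        morpher.setWeight(CGFloat(weight.floatValue), forTargetNamed: location.rawValue)
    }
}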


I am hoping that at WWDC 2018 Apple unveils more exciting trackers, like maybe virtual 3D hands, etc.