I have run the face tracking app and I am very interested in it. There are some questions I do not understand, and I hope someone can help me answer them. In face tracking with ARKit, we can get more than 50 unique blendshape coefficients. How are these coefficients calculated? Does ARKit automatically split the face into multiple blendshape targets, and how are those targets divided? Another question: if I want to use the iPhone X to drive the expressions of a 3D model, like Animoji, do I need to make blendshapes for the 3D model in advance?
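For context, the coefficients I am referring to are the ones ARKit exposes on `ARFaceAnchor.blendShapes`. A minimal sketch of how I am reading them in a session delegate (the class name `FaceTracker` is my own; the `.jawOpen` key is just one of the ~52 `BlendShapeLocation` entries):

```swift
import ARKit

final class FaceTracker: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // blendShapes maps each BlendShapeLocation (e.g. .jawOpen,
            // .eyeBlinkLeft) to a coefficient in [0, 1] for the current frame.
            if let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue {
                print("jawOpen: \(jawOpen)")
            }
        }
    }
}
```

My understanding is that each coefficient could then be forwarded to a matching morph target on the 3D model, which is why I am asking whether those targets have to be authored in advance.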
How are the blendshape coefficients obtained in face tracking with ARKit