I’m embarking on a new project that will involve animating 3D faces and mouths. I’m looking at using ARFaceAnchor and its blendShapes to capture the data that will drive the models’ facial expressions.
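Roughly what I have in mind for the capture side (a minimal sketch; `applyToModel(...)` is a placeholder for my own rig-update code, not an ARKit API):

```swift
import UIKit
import ARKit

class FaceCaptureViewController: UIViewController, ARSessionDelegate {
    let session = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()

        // Face tracking needs a TrueDepth camera; bail out otherwise.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // blendShapes is [ARFaceAnchor.BlendShapeLocation : NSNumber],
            // each coefficient in 0.0...1.0, refreshed every frame.
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
            let smileLeft = faceAnchor.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
            applyToModel(jawOpen: jawOpen, smileLeft: smileLeft)
        }
    }

    // Placeholder for my own animation code that drives the rigged model.
    func applyToModel(jawOpen: Float, smileLeft: Float) { }
}
```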
I have a few basic questions:
(1) As far as I can tell, Apple has not supported exporting Memojis to rigged 3D models. Is this still the case?
(2) I did find one website claiming that Apple’s AvatarKit is now public, but everywhere else I’ve checked it is still a private framework (and Xcode complains when I try to import it). Is AvatarKit still private?
(3) It looks like all 52 blendShapes for an ARFaceAnchor are updated every frame, at 60 frames per second, which is 52 × 60 = 3,120 data points per second. Are there any best-practice guides for reducing the data? For example, “These 10 blendShapes capture the most important features for animating a face.” (I’ve included a rough filtering sketch after this list.)
(4) It appears that visionOS does not support ARFaceAnchor. If I want to present a remote user as a Memoji (or other rigged model) in a shared experience, is there any way to do that at the current time? (My current fallback idea is sketched after this list as well.)
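For (3), here is roughly what I’m considering. The key-shape list and the change threshold are purely my guesses, not an Apple recommendation, which is exactly why I’m asking whether a best-practice subset exists:

```swift
import ARKit

/// Blend shapes I'm guessing carry most of the expressive signal
/// (jaw, mouth, eyes, brows). This list is my assumption, not a
/// documented "top 10" from Apple.
let keyShapes: [ARFaceAnchor.BlendShapeLocation] = [
    .jawOpen, .mouthSmileLeft, .mouthSmileRight,
    .mouthFrownLeft, .mouthFrownRight, .mouthPucker,
    .eyeBlinkLeft, .eyeBlinkRight,
    .browDownLeft, .browInnerUp
]

/// Keep only the key shapes, and only when a value has changed enough
/// since the previous frame, to cut down the data rate.
func reducedShapes(
    from anchor: ARFaceAnchor,
    previous: [ARFaceAnchor.BlendShapeLocation: Float],
    threshold: Float = 0.02
) -> [ARFaceAnchor.BlendShapeLocation: Float] {
    var result: [ARFaceAnchor.BlendShapeLocation: Float] = [:]
    for shape in keyShapes {
        let value = anchor.blendShapes[shape]?.floatValue ?? 0
        if abs(value - (previous[shape] ?? 0)) >= threshold {
            result[shape] = value
        }
    }
    return result
}
```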
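For (4), if there is no direct support, my fallback idea is to capture blendShapes on an iPhone and stream a small payload to the visionOS side, then drive a rigged model there myself. A very rough sketch of the transport piece, assuming MultipeerConnectivity; the `FacePayload` struct and `send` helper are mine, not from any Apple sample:

```swift
import Foundation
import ARKit
import MultipeerConnectivity

/// Compact payload: blend-shape raw names mapped to values.
/// String keys so it encodes cleanly with JSONEncoder.
struct FacePayload: Codable {
    var timestamp: TimeInterval
    var shapes: [String: Float]
}

func send(_ shapes: [ARFaceAnchor.BlendShapeLocation: Float],
          over session: MCSession) {
    let payload = FacePayload(
        timestamp: Date().timeIntervalSince1970,
        shapes: Dictionary(uniqueKeysWithValues:
            shapes.map { ($0.key.rawValue, $0.value) })
    )
    guard let data = try? JSONEncoder().encode(payload) else { return }
    // Unreliable delivery seems fine here: a dropped frame is
    // quickly replaced by the next one.
    try? session.send(data, toPeers: session.connectedPeers, with: .unreliable)
}
```

On the receiving side I would decode the payload and apply the values to the model’s morph targets, but I’d much rather use a supported Memoji/shared-experience path if one exists.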