I am currently working on my Master's thesis, and I have a case in which a user wears a VR headset that covers most of the upper face (an HTC Vive). While the user is wearing the Vive, I am trying to track his/her mouth movement. Of course, ARKit can't recognise the face since it is mostly covered, and because of that it isn't able to track the mouth either.
The first workaround I'd try is to attach a cut-out of the upper half of a face to the Vive so that ARKit might still recognise a face. This could work as long as ARKit's facial recognition isn't built on machine learning that would reject such a fake face.
Since that first workaround is pretty crude, I was also thinking about telling ARKit where a face is supposed to be (e.g. placing some kind of marker dot on the Vive that gets recognised as the face position). To put this plan to the test, I would need more information on how ARKit's facial recognition works and in what ways it can be tweaked.
Does anyone have experience with a case like this, and know how ARKit recognises faces and whether/how that can be tweaked? (Is there perhaps some kind of mouth-only tracking I haven't heard of?)
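For context, this is the kind of data I'd like to keep getting — a minimal sketch (not my actual project code) of reading ARKit's mouth-related blend shapes from an `ARFaceAnchor`. The problem is that this whole pipeline depends on ARKit detecting a face in the first place, which is exactly what fails with the Vive on:

```swift
import ARKit

// Sketch: read mouth-related blend shapes whenever ARKit updates a face anchor.
// Requires a TrueDepth-capable device; fails to produce anchors if no face
// is detected (e.g. when the upper face is covered by a headset).
class FaceTracker: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            let shapes = faceAnchor.blendShapes  // [BlendShapeLocation: NSNumber], 0.0...1.0
            let jawOpen = shapes[.jawOpen]?.floatValue ?? 0
            let mouthFunnel = shapes[.mouthFunnel]?.floatValue ?? 0
            let smileLeft = shapes[.mouthSmileLeft]?.floatValue ?? 0
            print("jawOpen: \(jawOpen), funnel: \(mouthFunnel), smileL: \(smileLeft)")
        }
    }
}

// Usage:
// let session = ARSession()
// let tracker = FaceTracker()
// session.delegate = tracker
// session.run(ARFaceTrackingConfiguration())
```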
Any help is welcome.