Hi,
I want to control a hand model via hand motion capture.
I know there is a sample project and the "Rigging a Model for Motion Capture" article in the ARKit documentation, but that solution is tightly encapsulated in BodyTrackedEntity. I can't find an appropriate Entity for controlling just a hand model.
By using VNDetectHumanHandPoseRequest from the Vision framework, I can get hand joint information, but I don't know how to use that information in RealityKit to control a 3D hand model.
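One possible direction is to read the skeleton of the rigged hand model through ModelEntity's `jointNames`/`jointTransforms` (RealityKit 2, iOS 15+) and write rotations derived from the Vision joints into it. The sketch below assumes the model loads as a ModelEntity with a skeleton, and the `visionToModelJoint` name map matching your rig is hypothetical — you would fill it in for your asset. Vision returns normalized 2D image points, so a real solution still needs to lift them to 3D (depth data or an IK solver); the sketch only shows the plumbing:

```swift
import ARKit
import RealityKit
import Vision

/// Sketch: drive a rigged hand ModelEntity from Vision's 2D hand-pose joints.
final class HandPoseDriver {
    private let handPoseRequest = VNDetectHumanHandPoseRequest()
    private let handModel: ModelEntity
    // Hypothetical mapping from Vision joint names to your rig's joint names.
    private let visionToModelJoint: [VNHumanHandPoseObservation.JointName: String]

    init(handModel: ModelEntity,
         jointMap: [VNHumanHandPoseObservation.JointName: String]) {
        self.handModel = handModel
        self.visionToModelJoint = jointMap
        handPoseRequest.maximumHandCount = 1
    }

    /// Call once per frame, e.g. from session(_:didUpdate:).
    func update(with frame: ARFrame) {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .up)
        try? handler.perform([handPoseRequest])
        guard let observation = handPoseRequest.results?.first,
              let points = try? observation.recognizedPoints(.all) else { return }
        apply(points)
    }

    private func apply(_ points: [VNHumanHandPoseObservation.JointName: VNRecognizedPoint]) {
        for (visionJoint, modelJointName) in visionToModelJoint {
            guard let point = points[visionJoint], point.confidence > 0.3,
                  let index = handModel.jointNames.firstIndex(where: {
                      $0.hasSuffix(modelJointName)
                  }) else { continue }
            var transform = handModel.jointTransforms[index]
            // Placeholder: a real implementation would compute bend angles
            // between adjacent joints instead of using a raw 2D coordinate.
            let bend = Float(1 - point.location.y) * .pi / 2
            transform.rotation = simd_quatf(angle: bend, axis: [0, 0, 1])
            handModel.jointTransforms[index] = transform
        }
    }
}
```

This keeps RealityKit as the renderer and uses Vision purely as the tracking source, sidestepping BodyTrackedEntity entirely.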
Do you know how to do that, or do you have any ideas on how it should be implemented?
Thanks
Hi,
Is it possible to implement a variant of ARWorldTrackingConfiguration that returns a custom hand anchor from the front camera, so that we get the back-camera feed together with a CustomizedHandAnchor (instead of an ARFaceAnchor)?
I see there are existing questions about ARKit accessing both cameras, which is similar to my use case:
ARKit access both cameras depth
Is it possible to use ARSession to display both the front camera and the rear camera at the same time?
I am not sure if implementing a customized ARWorldTracking(WithHandAnchor)Configuration and a CustomizedHandAnchor is the right direction for this use case.
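For what it's worth, ARKit's configuration classes are not designed to be subclassed to emit new anchor types, but ARAnchor itself can be subclassed and added to a world-tracking session manually. A hypothetical HandAnchor along those lines (the class name and its joint payload are my assumptions, not an existing API) might look like:

```swift
import ARKit

// Sketch: a custom anchor carrying hand-joint data, added to a standard
// ARWorldTrackingConfiguration session by your own code rather than by ARKit.
class HandAnchor: ARAnchor {
    let jointTransforms: [simd_float4x4]

    init(jointTransforms: [simd_float4x4], transform: simd_float4x4) {
        self.jointTransforms = jointTransforms
        super.init(name: "hand", transform: transform)
    }

    // ARAnchorCopying requirement: ARKit copies anchors between updates.
    required init(anchor: ARAnchor) {
        self.jointTransforms = (anchor as? HandAnchor)?.jointTransforms ?? []
        super.init(anchor: anchor)
    }

    required init?(coder: NSCoder) {
        fatalError("World-map persistence not supported in this sketch")
    }

    override class var supportsSecureCoding: Bool { true }
}

// Usage: compute the hand pose yourself (e.g. with Vision on the camera
// frames), then hand the anchor to the session:
// session.add(anchor: HandAnchor(jointTransforms: joints, transform: wristPose))
```

The tracking itself would still have to come from your own pipeline (e.g. Vision), since ARKit only updates the anchors it natively produces.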
Thanks