What settings can be adjusted in the Apple Vision framework for hand gestures?

I can run the example given by Apple (https://developer.apple.com/documentation/vision/vndetecthumanhandposerequest). However, I am not sure what options there are to fine-tune its behaviour. I know you can filter by confidence level; are there other ways to control the detected points to make the detection more consistent?
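
For context, here is a minimal sketch of the only knobs I have found so far: `maximumHandCount`, pinning the request `revision`, and filtering the recognized points by confidence. Note that `minimumConfidence` below is just my own constant, not a Vision setting, and `cgImage` is assumed to be an image you already have. Is there anything beyond these?

```swift
import Vision
import CoreGraphics

// A minimal sketch of the tunable settings I have found so far.
func detectHandPose(in cgImage: CGImage) throws {
    let request = VNDetectHumanHandPoseRequest()

    // Limit how many hands Vision looks for; fewer hands is cheaper
    // and avoids spurious extra detections.
    request.maximumHandCount = 1

    // Pin the algorithm revision so behaviour stays stable across OS updates.
    request.revision = VNDetectHumanHandPoseRequestRevision1

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // My own threshold, not a Vision property: drop joints whose
    // confidence falls below it.
    let minimumConfidence: VNConfidence = 0.5

    for observation in request.results ?? [] {
        let points = try observation.recognizedPoints(.all)
        for (joint, point) in points where point.confidence >= minimumConfidence {
            print(joint, point.location, point.confidence)
        }
    }
}
```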
