Performance impact of running Vision w/ML and motion capture

I currently use motion capture in an app, and I'm intrigued by the new Action Classifiers as a way to detect behaviors, either as a signal to start or end something, or to score the user's performance. How realistic is it to run the Vision framework, executing a machine learning model, simultaneously with ARKit running motion capture?
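For reference, here is roughly the setup I'm asking about (a minimal sketch, assuming ARKit's default main-queue delegate; the class name, queue label, and frame-dropping flag are my own illustrative choices, and an action classifier would consume a windowed sequence of the pose results rather than single observations):

```swift
import ARKit
import Vision

// Sketch: ARKit body tracking (motion capture) running while Vision
// performs a human body-pose request on the same camera frames.
final class CaptureCoordinator: NSObject, ARSessionDelegate {
    let session = ARSession()
    private let visionQueue = DispatchQueue(label: "com.example.visionPose") // hypothetical label
    private var isAnalyzing = false // drops frames while Vision is busy

    func start() {
        guard ARBodyTrackingConfiguration.isSupported else { return } // A12+ devices only
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    // ARKit delivers frames here (main queue by default); ARBodyAnchor
    // updates for motion capture arrive through this same session.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard !isAnalyzing else { return } // skip this frame if Vision hasn't finished
        isAnalyzing = true
        let pixelBuffer = frame.capturedImage // retain only the buffer, not the ARFrame

        visionQueue.async { [weak self] in
            let request = VNDetectHumanBodyPoseRequest()
            // .right assumes a portrait app with the usual landscape sensor orientation.
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                orientation: .right,
                                                options: [:])
            do {
                try handler.perform([request])
                // `results` is typed [VNHumanBodyPoseObservation]? on iOS 15+ SDKs.
                if let pose = request.results?.first {
                    // An action classifier would consume a sliding window of
                    // these keypoints rather than a single observation.
                    _ = pose
                }
            } catch {
                print("Vision request failed: \(error)")
            }
            DispatchQueue.main.async { self?.isAnalyzing = false }
        }
    }
}
```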