Activity Classification with front facing camera

Hi, does the functionality within "Build an Action Classifier with Create ML" work with a front facing camera? I recall that ARKit body tracking only works with the rear facing camera, and I am wondering whether the same restriction applies to Vision.
In the video from this session at 19:34, she is using the front facing camera, so it looks like it should work in that configuration.
You're in luck! One of the great things about Vision is that it has no limitations on which camera you use with a task like Action Classification. Your live camera feed can come from the rear or front camera on iOS, a built-in or external camera on macOS, or not from a camera at all, as Vision works equally well with offline video. Since the action classifier just needs the pose data (as an MLMultiArray), it has no specific dependency on where the video came from, which means you have no end of possibilities here.
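For example, a minimal Swift sketch of that pipeline might look like the following. The `FitnessClassifier` model name and the 60-frame prediction window are placeholders, not something from the session; swap in your own Create ML action classifier and its expected window length.

```swift
import CoreML
import Vision

// Sketch: extract body-pose keypoints with Vision and accumulate them into the
// MLMultiArray window a Create ML action classifier expects. The frame source
// does not matter: front camera, rear camera, or frames decoded from a file.
final class ActionClassifierPipeline {
    private let poseRequest = VNDetectHumanBodyPoseRequest()
    private var poseWindow: [MLMultiArray] = []
    private let windowSize = 60   // frames per prediction; depends on your model

    func process(pixelBuffer: CVPixelBuffer,
                 orientation: CGImagePropertyOrientation) throws {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                            orientation: orientation,
                                            options: [:])
        try handler.perform([poseRequest])

        guard let observation = poseRequest.results?.first else { return }

        // The classifier only needs the pose data as an MLMultiArray.
        poseWindow.append(try observation.keypointsMultiArray())

        if poseWindow.count == windowSize {
            try classify(window: poseWindow)
            poseWindow.removeAll()
        }
    }

    private func classify(window: [MLMultiArray]) throws {
        // Stack the per-frame arrays into the single input the model expects.
        let input = MLMultiArray(concatenating: window, axis: 0, dataType: .float32)
        // Placeholder model; replace with your generated Create ML class:
        // let prediction = try FitnessClassifier().prediction(poses: input)
        // print(prediction.label, prediction.labelProbabilities)
        _ = input
    }
}
```

Because the classifier only ever sees the stacked pose arrays, the same code path works whether the pixel buffers come from AVCaptureSession or from AVAssetReader reading a movie file.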
You can find out the differences between ARKit's body pose estimation and the Vision pose estimator in the "Detect Body and Hand Pose with Vision" presentation:

https://developer.apple.com/wwdc20/10653

Fast forward to 17:15, where they cover this topic and mention that while ARKit supports the rear facing camera only, the Vision API can use either, or even process video frames from other sources.
Yeah! Body pose from Vision can be used with both front and rear cameras. You may need to pay a bit of attention to the orientation when switching between front and back cameras, depending on your use case.
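In case it helps, here is a minimal sketch of that orientation handling, assuming a portrait-only app; the exact mapping depends on how you configure your capture session, so treat these values as a starting point rather than the definitive answer.

```swift
import AVFoundation
import ImageIO
import Vision

// Map the active camera position to the orientation Vision should use.
// Assumes the device is held in portrait; adjust for your own use case.
func visionOrientation(for position: AVCaptureDevice.Position) -> CGImagePropertyOrientation {
    switch position {
    case .front:
        // Front-camera frames are mirrored relative to the rear camera.
        return .leftMirrored
    default:
        return .right
    }
}

func detectBodyPose(in pixelBuffer: CVPixelBuffer,
                    from position: AVCaptureDevice.Position) throws -> VNHumanBodyPoseObservation? {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: visionOrientation(for: position),
                                        options: [:])
    try handler.perform([request])
    return request.results?.first
}
```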
The fitness demo in "Build an Action Classifier with Create ML" is actually on front camera.
Your question about the differences between ARKit body pose and Vision body pose is largely answered by this WWDC session, "Detect Body and Hand Pose with Vision" (see the slide at 17:48): https://developer.apple.com/videos/play/wwdc2020/10653/

