Action Classifier code

Is the code for the simple jumping jacks, lunges, squats app available to download?

thanks,
Dale
Thanks for the support. This code has not been released. The main logic for action classifier prediction is demoed in Xcode in the WWDC session, at about 16:30: https://developer.apple.com/videos/play/wwdc2020/10043/. The rest of the demo app is standard code to get the camera stream, set up an AVCaptureSession, and so on. If you are interested in that, see https://developer.apple.com/documentation/avfoundation/avcapturesession.
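For anyone looking for a starting point on the camera-stream side, here is a minimal sketch of the standard AVCaptureSession setup the reply refers to. This is not the demo app's actual code; the class name and error handling are my own, and the delegate callback is where you would feed frames into the pose pipeline.

```swift
import AVFoundation

/// Minimal sketch: an AVCaptureSession that delivers camera frames
/// to a sample-buffer delegate (hypothetical helper, not the demo's code).
final class CameraStream: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera.frames")

    func start() throws {
        session.beginConfiguration()
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .front),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input), session.canAddOutput(output)
        else { throw NSError(domain: "CameraStream", code: -1) }
        session.addInput(input)
        output.setSampleBufferDelegate(self, queue: queue)
        session.addOutput(output)
        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Hand each CMSampleBuffer to the pose-extraction code here.
    }
}
```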

I had the same question, but there's one specific part I don't know how to build. There's a line in the demo code like this:

Code Block
/// Extracts poses from a frame.
  func processFrame(_ samplebuffer: CMSampleBuffer) throws -> [MLMultiArray] {
    // Perform Vision body pose request
    let framePoses = extractPoses(from: samplebuffer) <---- this
....


I tried to perform the Vision body pose request as follows:

Code Block
  func extractPoses(from sampleBuffer: CMSampleBuffer) -> [MLMultiArray] {
    let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer, orientation: .down)
    let request = VNDetectHumanBodyPoseRequest(completionHandler: bodyPoseHandler)
    do {
      // Perform the body pose-detection request.
      try requestHandler.perform([request])
    } catch {
      print("Unable to perform the request: \(error).")
    }
  }

How can I return a VNRecognizedPointsObservation given that the request is asynchronous? I could use VNDetectHumanBodyPoseRequest() without the completion handler, but then there's no guarantee the result will be available.
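For what it's worth, VNImageRequestHandler.perform(_:) runs its requests synchronously on the calling thread, so you can drop the completion handler and read request.results after perform returns. A minimal sketch of that pattern (the keypointsMultiArray() conversion is my assumption about the multiarray the classifier expects):

```swift
import Vision
import CoreMedia
import CoreML

/// Sketch: `perform(_:)` blocks until the request finishes, so the
/// observations can be read synchronously from `request.results`.
func extractPoses(from sampleBuffer: CMSampleBuffer) -> [MLMultiArray] {
    let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer,
                                               orientation: .down)
    // No completion handler needed for synchronous use.
    let request = VNDetectHumanBodyPoseRequest()
    do {
        try requestHandler.perform([request])
    } catch {
        print("Unable to perform the request: \(error).")
        return []
    }
    // Convert each pose observation to an MLMultiArray for the classifier.
    let observations = request.results as? [VNHumanBodyPoseObservation] ?? []
    return observations.compactMap { try? $0.keypointsMultiArray() }
}
```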

Hope you can help me out!
I also wonder how the animated skeleton in the WWDC video is so smooth and accurate - I tried to use the same code as described here: https://developer.apple.com/forums/thread/651683, but the jitter is noticeable and there is lag too. Thanks for the help!
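On the jitter question: one common trick (a sketch of a general technique, not taken from the WWDC demo) is to low-pass filter each joint's position with an exponential moving average, trading a little lag for smoothness. The `PointSmoother` helper below is hypothetical:

```swift
import Foundation

/// Sketch: smooths per-joint 2D positions with an exponential moving
/// average. `alpha` in (0, 1]: higher means less smoothing and less lag.
struct PointSmoother {
    let alpha: Double
    private var state: [String: (x: Double, y: Double)] = [:]

    mutating func smooth(joint: String, x: Double, y: Double) -> (x: Double, y: Double) {
        guard let prev = state[joint] else {
            // First sample for this joint: pass it through unchanged.
            state[joint] = (x, y)
            return (x, y)
        }
        let smoothed = (x: alpha * x + (1 - alpha) * prev.x,
                        y: alpha * y + (1 - alpha) * prev.y)
        state[joint] = smoothed
        return smoothed
    }
}
```

Vision also reports a confidence per recognized point, so dropping or down-weighting low-confidence joints before smoothing can further reduce flicker.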