How to transform Vision body tracking coordinates

I am using Apple's new Vision VNDetectHumanBodyPoseRequest, but I can't work out why the coordinates I map from it are incorrect when using the front-facing camera.

My drawing code works, since I can see the nodes, but they appear in completely the wrong positions. I tried setting different orientations in the VNImageRequestHandler:
Code Block
VNImageRequestHandler(cmSampleBuffer: sampleBuffer, orientation: .left)
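For context, a minimal sketch of the kind of point conversion involved, assuming an AVCaptureVideoPreviewLayer named previewLayer (Vision's points are normalized with a lower-left origin, so y is flipped before converting):
Code Block
import Vision
import AVFoundation

// Convert one recognized point from Vision's normalized, lower-left-origin
// space into the preview layer's coordinate space for drawing.
func layerPoint(for point: VNRecognizedPoint,
                in previewLayer: AVCaptureVideoPreviewLayer) -> CGPoint? {
    // Ignore points the request has no confidence in.
    guard point.confidence > 0 else { return nil }
    // Flip y: Vision's origin is bottom-left, while the capture-device
    // point space used by the preview layer has a top-left origin.
    let devicePoint = CGPoint(x: point.location.x, y: 1 - point.location.y)
    return previewLayer.layerPointConverted(fromCaptureDevicePoint: devicePoint)
}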

The file is attached.


Thanks for the help!

Accepted Reply

I figured it out! The body needs to be further away from the camera: tracking performs very well at distances greater than about 5 m, even with horizontal movement or partially obscured joints.

Replies

I fixed the skeleton and it is now visible, but there is considerable lag and jerkiness in its movement.
Code Block
// Lock the capture connection to portrait so buffers match the UI orientation.
captureConnection?.videoOrientation = .portrait
...
// .down was the orientation that made the front-camera points line up here.
let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer, orientation: .down)
...
// Merge the per-group point dictionaries into one, keeping the existing
// value if the same joint key appears twice.
var recognizedPoints = recFacePoints.merging(recTorsoPoints) { (current, _) in current }
recognizedPoints = recognizedPoints.merging(recLeftArmPoints) { (current, _) in current }
recognizedPoints = recognizedPoints.merging(recRightArmPoints) { (current, _) in current }
recognizedPoints = recognizedPoints.merging(recLeftLegPoints) { (current, _) in current }
recognizedPoints = recognizedPoints.merging(recRightLegPoints) { (current, _) in current }

Those were the changes I made.
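For reference, a sketch of where those per-group dictionaries would come from, assuming a VNHumanBodyPoseObservation named observation taken from the request's results:
Code Block
let recFacePoints = try observation.recognizedPoints(.face)
let recTorsoPoints = try observation.recognizedPoints(.torso)
let recLeftArmPoints = try observation.recognizedPoints(.leftArm)
let recRightArmPoints = try observation.recognizedPoints(.rightArm)
let recLeftLegPoints = try observation.recognizedPoints(.leftLeg)
let recRightLegPoints = try observation.recognizedPoints(.rightLeg)

// Alternatively, a single call returns every joint and avoids the merging:
let allPoints = try observation.recognizedPoints(.all)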
Are there any ways to reduce this jerkiness and improve performance? Thanks in advance.
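A minimal sketch of one way to damp frame-to-frame jitter, assuming a simple exponential moving average over each joint's normalized position (the alpha value is an arbitrary starting point):
Code Block
import Vision
import CoreGraphics

// Smooths each joint's position across frames with an exponential moving average.
// alpha near 1 follows the raw points closely (more jitter); alpha near 0
// is smoother but adds visible lag.
final class JointSmoother {
    private var smoothed: [VNHumanBodyPoseObservation.JointName: CGPoint] = [:]
    private let alpha: CGFloat = 0.5

    func smooth(_ points: [VNHumanBodyPoseObservation.JointName: VNRecognizedPoint])
        -> [VNHumanBodyPoseObservation.JointName: CGPoint] {
        for (joint, point) in points where point.confidence > 0 {
            let new = point.location
            if let old = smoothed[joint] {
                smoothed[joint] = CGPoint(x: alpha * new.x + (1 - alpha) * old.x,
                                          y: alpha * new.y + (1 - alpha) * old.y)
            } else {
                smoothed[joint] = new
            }
        }
        return smoothed
    }
}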