I'm running a VNDetectHumanBodyPoseRequest on each frame via a VNImageRequestHandler (created with empty options) to perform skeleton tracking. This works fine on 30fps live video from the camera.
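For reference, this is roughly my per-frame setup (a minimal sketch; the `processFrame` name and the result handling are illustrative, not my exact code):

```swift
import Vision
import AVFoundation

// Minimal sketch of the per-frame setup described above.
func processFrame(_ sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])

    do {
        try handler.perform([request])
        if let observation = request.results?.first {
            // Use the full skeleton for tracking.
            let points = try observation.recognizedPoints(.all)
            _ = points
        }
    } catch {
        print("Body pose request failed: \(error)")
    }
}
```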
My application now requires 60fps analysis, but the time taken to perform a VNDetectHumanBodyPoseRequest (around 40ms on an iPhone 11) limits the maximum frame rate to 45fps.
I've tried creating two separate dispatch queues and sending alternate frames to each, hoping to parallelise the work, but this does not seem to increase throughput.
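This is roughly what I tried for the alternating queues (a sketch; the class and queue names are placeholders):

```swift
import Vision
import AVFoundation

// Sketch of the alternating-queue attempt: even frames go to one serial
// queue, odd frames to the other, hoping the two requests overlap.
final class PosePipeline {
    private let poseQueueA = DispatchQueue(label: "pose.queue.a")
    private let poseQueueB = DispatchQueue(label: "pose.queue.b")
    private var frameIndex = 0

    // Called from the capture output delegate for every frame.
    func enqueue(_ sampleBuffer: CMSampleBuffer) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let queue = (frameIndex % 2 == 0) ? poseQueueA : poseQueueB
        frameIndex += 1

        queue.async {
            let request = VNDetectHumanBodyPoseRequest()
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            try? handler.perform([request])
            // ... hand request.results back to a known queue ...
        }
    }
}
```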
I've also tried reducing the resolution of the incoming video, but this has not helped either.
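The resolution reduction was done roughly like this (a sketch; the original and reduced presets are illustrative):

```swift
import AVFoundation

// Sketch: lowering the capture preset to reduce the frame size handed to Vision.
let session = AVCaptureSession()
if session.canSetSessionPreset(.vga640x480) {
    session.sessionPreset = .vga640x480   // previously a 1080p preset
}
```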
Is there anything else I can do that might reduce the time taken to perform a VNDetectHumanBodyPoseRequest? I don't need all the points to be recognised, just one side of the body; however, I can't see a way to restrict the request like that with the existing API, and perhaps it wouldn't help anyway.
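As far as I can tell, the API only lets me read a subset of joints after the full detection has already run, e.g. something like this (sketch):

```swift
import Vision

// The closest I can see in the API: read just one side's joints from an
// existing observation; this filters the output but (I assume) does not
// reduce the work the request itself performs.
func leftSidePoints(from observation: VNHumanBodyPoseObservation) throws
        -> [VNHumanBodyPoseObservation.JointName: VNRecognizedPoint] {
    var points = try observation.recognizedPoints(.leftArm)
    try observation.recognizedPoints(.leftLeg).forEach { points[$0.key] = $0.value }
    return points
}
```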
Is there any real-time pre-processing I could do to each CMSampleBuffer (for example, converting it to grayscale) that would improve performance?
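Something like the following is the kind of pre-processing I mean (a sketch using Core Image; the filter choice and pixel format are illustrative, and whether Vision would actually run faster on the result is exactly my question):

```swift
import CoreImage
import CoreVideo

// Sketch: convert each frame to grayscale with Core Image before handing
// it to the VNImageRequestHandler.
let ciContext = CIContext()

func grayscaleCopy(of pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let gray = CIImage(cvPixelBuffer: pixelBuffer)
        .applyingFilter("CIPhotoEffectMono")

    var output: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(pixelBuffer),
                        CVPixelBufferGetHeight(pixelBuffer),
                        kCVPixelFormatType_32BGRA,
                        nil,
                        &output)
    guard let output = output else { return nil }
    ciContext.render(gray, to: output)
    return output
}
```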