Hello,
I've been able to run the body tracking code example with the skeleton tracking a person's movement. I would like to add People Occlusion to this scenario. The code example depends on the ARBodyTrackingConfiguration subclass of ARConfiguration.
After calling ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentation) or ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth), I got the value false for both.
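For reference, this is the check trimmed to the relevant lines (the inline results just reflect what I see on my device):

import ARKit

let canSegment = ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentation)                  // false
let canSegmentWithDepth = ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) // false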
To double-check, I tried to turn on People Occlusion anyway by setting the frameSemantics of the configuration:
let config = ARBodyTrackingConfiguration()
config.frameSemantics.insert(.personSegmentation)
// or: config.frameSemantics.insert(.personSegmentationWithDepth)
But this leads to a run-time exception complaining about the frameSemantics options I've set.
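To avoid the crash, I also tried guarding the insert behind the support check, roughly like this sketch (the session is run elsewhere). It doesn't raise the exception, but on my device it simply means occlusion never gets enabled:

import ARKit

let config = ARBodyTrackingConfiguration()
// Only enable occlusion when the configuration reports support for it,
// so the unsupported-frameSemantics exception can't be triggered.
if ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    config.frameSemantics.insert(.personSegmentationWithDepth)
} else if ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentation) {
    config.frameSemantics.insert(.personSegmentation)
}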
-----
I've seen that ARWorldTrackingConfiguration supports .personSegmentation and .bodyDetection (according to the supportsFrameSemantics() method), so I tried to achieve body tracking + People Occlusion that way. I've noticed these two frameSemantics options cannot be turned on at the same time with an ARWorldTrackingConfiguration (doing so causes another run-time exception), even though supportsFrameSemantics() returns true for both .personSegmentation and .bodyDetection individually.
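This is roughly what that attempt looked like (a cut-down sketch; the function name is just for illustration, and the session comes from whatever view is hosting the AR content):

import ARKit

func runWorldTrackingWithOcclusionAndBodyDetection(on session: ARSession) {
    print(ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentation)) // prints true
    print(ARWorldTrackingConfiguration.supportsFrameSemantics(.bodyDetection))      // prints true
    let config = ARWorldTrackingConfiguration()
    // Turning both on together is what triggers the second run-time exception:
    config.frameSemantics = [.personSegmentation, .bodyDetection]
    session.run(config)
}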
If I use ARWorldTrackingConfiguration and only turn on the .bodyDetection frame semantic, there are no run-time exceptions, but the session isn't returning any ARBodyAnchors the way it does in the original 3D body tracking example (see the documentation quote below).
"When ARKit identifies a person in the back camera feed, it calls
session:didAddAnchors:
, passing you an
ARBodyAnchor
you can use to track the body's movement."
Source: https://developer.apple.com/documentation/arkit/arbodytrackingconfiguration
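For completeness, my anchor check is roughly the following (a sketch; BodyAnchorWatcher is just a placeholder for whatever object is set as the session's delegate). It fires as expected when I run ARBodyTrackingConfiguration, but never with ARWorldTrackingConfiguration + .bodyDetection:

import ARKit

final class BodyAnchorWatcher: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
            // Fires with ARBodyTrackingConfiguration, but never arrives
            // with ARWorldTrackingConfiguration + .bodyDetection in my tests.
            print("Body anchor with \(bodyAnchor.skeleton.jointLocalTransforms.count) joints")
        }
    }
}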
-----------------------------
Am I missing something obvious? Is it possible to somehow do People Occlusion and Body Tracking at the same time?
If I want to achieve body tracking, must I use the ARBodyTrackingConfiguration subclass, or is there some other way to turn on the .bodyDetection frameSemantics option using a different subclass of ARConfiguration?
EDIT: If it is not currently possible, is this something Apple intends to support in the future?