After a little more checking I see that FrameSemantics is only about 2D information related to the frame. So probably what I want is to create an ARBodyTrackingConfiguration and then add the .personSegmentationWithDepth semantic.
However, this appears to be unsupported. Can anyone confirm or provide more info on this?
I'd like to know more as well.
I'm still in early prototyping/learning mode, and I wanted to use both .personSegmentationWithDepth & .bodyDetection.
This doesn't work, since the two features belong to different configuration types.
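A minimal sketch of what I tried (names and structure are mine, not an official sample). The point is that `.personSegmentationWithDepth` is accepted by `ARWorldTrackingConfiguration` but not by `ARBodyTrackingConfiguration`, and setting an unsupported frame semantic raises a runtime exception, so the safe pattern is to check `supportsFrameSemantics(_:)` first:

```swift
import ARKit

// The combination I was hoping for: 3D body tracking plus people occlusion.
let bodyConfig = ARBodyTrackingConfiguration()

// Guard before inserting the semantic: assigning an unsupported frame
// semantic raises an exception at runtime. On current betas this check
// returns false for ARBodyTrackingConfiguration.
if ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    bodyConfig.frameSemantics.insert(.personSegmentationWithDepth)
}

// By contrast, the world-tracking configuration does accept segmentation
// (on A12+ devices)...
let worldConfig = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    worldConfig.frameSemantics.insert(.personSegmentationWithDepth)
}
// ...but ARWorldTrackingConfiguration offers no 3D body tracking, so the
// two features can't be combined in a single session.
```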
I think you know this, but FYI the PeopleOcclusion demo on the Apple developer site uses .personSegmentationWithDepth.
I'd like to know more as well.
ARBodyTrackingConfiguration and .personSegmentationWithDepth seem like natural complements – the combination would enable lots of interesting interactions – but I am also seeing that ARBodyTrackingConfiguration (presently) supports neither .personSegmentationWithDepth nor .personSegmentation.
Is this expected behavior? Any chance this will change in an upcoming beta?
Seems like .bodyDetection and .personSegmentationWithDepth still cannot be used in conjunction. I am using Xcode 11 beta 5 and the iOS public beta 4. If anyone has any updates on this, that would be awesome to know about!
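For anyone else tracking this across betas, here is the quick device-only check I run to see whether support has appeared yet (device required; the simulator reports false for everything):

```swift
import ARKit

// Query which frame semantics ARBodyTrackingConfiguration currently
// supports. supportsFrameSemantics(_:) is a class method on
// ARConfiguration, so no session needs to be running.
let wanted: [ARConfiguration.FrameSemantics] = [
    .bodyDetection,
    .personSegmentation,
    .personSegmentationWithDepth,
]

for semantic in wanted {
    let supported = ARBodyTrackingConfiguration.supportsFrameSemantics(semantic)
    print("ARBodyTrackingConfiguration supports \(semantic.rawValue): \(supported)")
}
```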
I am also very interested in this.
The current Unity ARFoundation implementation does not allow for this either, and it would be great to get an official statement.
(The UI in the Unity ARKit SDK does let you enable pose + segmentation together, which gives me hope...)