Body Detection & Person Segmentation Simultaneously?

I've been checking out the ARKit 3 beta examples and am specifically interested in tracking a person's body while ALSO using person segmentation. (Use case: a person moves among virtual objects, and those objects react to approximated collisions with the user's body. Body Detection would be used to generate approximate collision shapes for the user's body, and Person Segmentation would reinforce the sense of the user moving between objects.)


However, my attempts so far to create an AR configuration that allows this have not been successful. Here is just the code that creates a frame-semantics value with the desired features:


  import ARKit

  // Combine the two frame semantics we want to use together.
  var semantics = ARConfiguration.FrameSemantics()
  semantics.insert(.bodyDetection)
  semantics.insert(.personSegmentationWithDepth)

  // Verify that world tracking supports this combination.
  guard ARWorldTrackingConfiguration.supportsFrameSemantics(semantics) else {
      fatalError("Desired semantics not available.")
  }


This example hits the fatalError. But if either body detection OR person segmentation is left out, it does not.


Similarly, this returns false:

ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth)
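
To sanity-check, here is a quick runtime probe of which combinations each configuration class reports as supported (a minimal sketch; the labels and option list are just mine, for logging):

  import ARKit

  // Candidate frame semantics to probe (the ARKit 3 options).
  let candidates: [(String, ARConfiguration.FrameSemantics)] = [
      ("bodyDetection", .bodyDetection),
      ("personSegmentation", .personSegmentation),
      ("personSegmentationWithDepth", .personSegmentationWithDepth),
      ("bodyDetection + personSegmentationWithDepth",
       [.bodyDetection, .personSegmentationWithDepth])
  ]

  for (name, semantics) in candidates {
      let world = ARWorldTrackingConfiguration.supportsFrameSemantics(semantics)
      let body = ARBodyTrackingConfiguration.supportsFrameSemantics(semantics)
      print("\(name): world=\(world), body=\(body)")
  }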


So:

  • Am I just doing this wrong? (Very possible, as I'm new to Swift and native iOS in general; I usually use Unity and C#.)
  • Or is this simply not supported? And if so, is this likely to change by the time of release?
  • Is there a document somewhere which clearly indicates which ARKit features can be used in conjunction with one another? (I could not find one.)

Replies

After a little more checking I see that FrameSemantics is only about 2D information related to the frame. So probably what I want is to create an ARBodyTrackingConfiguration and then add the .personSegmentationWithDepth semantic.


However, this appears to be unsupported. Can anyone confirm or provide more info on this?
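
For reference, here is roughly the shape of that attempt (a sketch; as I understand it, ARKit raises a runtime exception if you set frame semantics a configuration doesn't support, hence the check):

  import ARKit

  func runBodyTrackingWithSegmentation(on session: ARSession) {
      let config = ARBodyTrackingConfiguration()

      // Check support first; setting unsupported frame semantics
      // raises a runtime exception.
      if ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
          config.frameSemantics.insert(.personSegmentationWithDepth)
      } else {
          // On the current betas this branch is taken.
          print("personSegmentationWithDepth is not supported with body tracking")
      }

      session.run(config)
  }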

I'd like to know more as well.


I'm still in early prototyping/learning mode, and I wanted to use both .personSegmentationWithDepth & .bodyDetection.


This doesn't work, since the two features are tied to different configuration types: ARWorldTrackingConfiguration vs. ARBodyTrackingConfiguration.


I think you know this, but FYI the PeopleOcclusion demo on the Apple developer site uses .personSegmentationWithDepth.
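
For comparison, the supported world-tracking combination looks something like this (a sketch in the spirit of that sample, not its exact code):

  import ARKit

  func runWorldTrackingWithOcclusion(on session: ARSession) {
      let config = ARWorldTrackingConfiguration()

      // People occlusion requires an A12 or later device.
      guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else {
          fatalError("People occlusion is not supported on this device.")
      }
      config.frameSemantics.insert(.personSegmentationWithDepth)
      session.run(config)
  }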

I'd like to know more as well.


ARBodyTrackingConfiguration and .personSegmentationWithDepth seem like natural complements, in that the combination would enable lots of interesting interactions. But I am also seeing that ARBodyTrackingConfiguration (presently) supports neither .personSegmentationWithDepth nor .personSegmentation.


Is this expected behavior? Any chance this will change in an upcoming beta?

It seems like .bodyDetection and .personSegmentationWithDepth still cannot be used in conjunction. I am using Xcode 11 beta 5 and iOS public beta 4. If anyone has any updates on this, that would be awesome to know about!

I am also very interested in this.


The current Unity ARFoundation implementation does not allow for this either, and it would be great to get an official statement.


(The UI in the Unity ARKit SDK also allows enabling pose + segmentation, which gives me hope...)