I've been checking out the ARKit 3 beta examples and am specifically interested in tracking a person's body while ALSO using person segmentation. (Use case: a person moves among virtual objects, and those objects react to approximated collisions with the user's body. Body Detection used to generate approximate collision for the user's body, and Person Segmentation used to enforce the sense of the user moving between objects.)
However, my attempts so far to create an AR configuration that allows this have not been successful. Here is a minimal example that simply builds a frame semantics value with the desired features and checks support:
// Combine the two frame semantics I want.
var semantics = ARConfiguration.FrameSemantics()
semantics.insert(.bodyDetection)
semantics.insert(.personSegmentationWithDepth)

// Check whether world tracking supports the combined set.
guard ARWorldTrackingConfiguration.supportsFrameSemantics(semantics) else {
    fatalError("Desired semantics not available.")
}
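Worth noting for anyone less familiar with Swift: FrameSemantics is an OptionSet, and supportsFrameSemantics evaluates the whole set at once, so a combined set can be unsupported even when each member is supported on its own. Here is an ARKit-free sketch of that behavior, using a hypothetical Semantics option set and a stand-in supports function (both are illustrations, not ARKit API):

```swift
// Hypothetical stand-in for ARConfiguration.FrameSemantics.
struct Semantics: OptionSet {
    let rawValue: Int
    static let bodyDetection               = Semantics(rawValue: 1 << 0)
    static let personSegmentationWithDepth = Semantics(rawValue: 1 << 1)
}

// Stand-in for supportsFrameSemantics: each option is supported alone,
// but the combination is not — mirroring the behavior described above.
func supports(_ s: Semantics) -> Bool {
    let supportedSets: [Semantics] = [.bodyDetection, .personSegmentationWithDepth]
    return s.isEmpty || supportedSets.contains { $0 == s }
}

print(supports(.bodyDetection))                                  // true
print(supports(.personSegmentationWithDepth))                    // true
print(supports([.bodyDetection, .personSegmentationWithDepth]))  // false
```

This is why checking the two semantics separately is not enough: the combined set has to be passed to supportsFrameSemantics as one value, exactly as the code above does.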
This example trips the fatalError: supportsFrameSemantics returns false for the combined set. But if either body detection or person segmentation with depth is removed, the check passes.
Similarly, this returns false:
ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth)
So:
- Am I just doing this wrong? (Very possible, as I'm new to Swift and native iOS in general; I usually work in Unity and C#.)
- Or is this combination simply not supported? If so, is that likely to change by the time of release?
- Is there a document somewhere that clearly indicates which ARKit features can be used in conjunction with one another? (I could not find one.)