Posts

Post not yet marked as solved
4 Replies
2k Views
Hello,

I've been able to run the body tracking code example, with the skeleton tracking a person's movement. I would like to add People Occlusion to this scenario. The code example depends on the ARBodyTrackingConfiguration subclass of ARConfiguration. After calling ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentation) or ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth), I got the value 'false' for both. To double-check, I tried to turn on People Occlusion anyway by setting the frameSemantics of the configuration:

var config = ARBodyTrackingConfiguration()
config.frameSemantics.insert(.personSegmentation)
// or
config.frameSemantics.insert(.personSegmentationWithDepth)

But this leads to a runtime exception complaining about the frameSemantics options I've set.

I've seen that ARWorldTrackingConfiguration supports .personSegmentation and .bodyDetection (according to the supportsFrameSemantics() method), so I tried to achieve body tracking + people occlusion that way. I've noticed these two frameSemantics options cannot be turned on at the same time with an ARWorldTrackingConfiguration (it causes another runtime exception), even though supportsFrameSemantics() returns true for each of .personSegmentation and .bodyDetection individually. If I use ARWorldTrackingConfiguration and only turn on the .bodyDetection frame semantic, there are no runtime exceptions, but the session isn't returning any ARBodyAnchors the way it does in the original 3D body tracking example:

"When ARKit identifies a person in the back camera feed, it calls session:didAddAnchors:, passing you an ARBodyAnchor you can use to track the body's movement."
Source: https://developer.apple.com/documentation/arkit/arbodytrackingconfiguration

Am I missing something obvious? Is it possible to somehow do People Occlusion and Body Tracking at the same time? If I want to achieve body tracking, must I use the ARBodyTrackingConfiguration subclass, or is there some other way to turn on the .bodyDetection frame semantic using a different subclass of ARConfiguration?

EDIT: If it is not currently possible, is this something Apple intends to support in the future?
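For reference, here is roughly what my setup looks like (a simplified sketch, not my exact project code; the handler class name is just a placeholder):

import ARKit

// Simplified sketch of the session setup described above.
final class BodyTrackingSessionHandler: NSObject, ARSessionDelegate {

    func makeConfiguration() -> ARBodyTrackingConfiguration {
        let config = ARBodyTrackingConfiguration()
        // Only insert the occlusion semantics when the configuration reports support;
        // on my device both checks return false, so neither branch is ever taken.
        if ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            config.frameSemantics.insert(.personSegmentationWithDepth)
        } else if ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentation) {
            config.frameSemantics.insert(.personSegmentation)
        }
        return config
    }

    // The session(_:didAdd:) callback the documentation quote refers to. With
    // ARBodyTrackingConfiguration I receive ARBodyAnchors here; with
    // ARWorldTrackingConfiguration + .bodyDetection I never do.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            print("Body anchor at:", bodyAnchor.transform)
        }
    }
}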
Posted by xand. Last updated.
Post not yet marked as solved
0 Replies
412 Views
Hello,

I want to build a demo app in ARKit, and I have some questions about what is currently possible with the beta. The demo app should do the following:

1) identify an object or image in the real environment, and create an anchor there
2) render a virtual model attached to the anchor
3) have the virtual model presented with occlusion
4) have the virtual model move along with the anchored image / object

The way I understand it, items 1) and 2) are possible with ARKit 2.0. Item 3) is advertised as possible with the beta, but I see little to no documentation.

Is this possible to do in the latest beta? If so, what is the best approach? If not, are there any workarounds, like mixing the old and new APIs? Thank you in advance.
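For context, the kind of approach I had in mind looks roughly like this (just a sketch; the "DemoImages" resource group name and the delegate class are placeholders, and the box stands in for the real model):

import ARKit
import SceneKit

// Sketch of the pipeline described above (items 1-4), using SceneKit rendering.
final class DemoARDelegate: NSObject, ARSCNViewDelegate {

    // 1) + 3) Configure image detection and people occlusion (when supported), then run the session.
    func run(on sceneView: ARSCNView) {
        sceneView.delegate = self
        let config = ARWorldTrackingConfiguration()
        // "DemoImages" is a placeholder AR resource group name.
        config.detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "DemoImages", bundle: nil) ?? []
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            config.frameSemantics.insert(.personSegmentationWithDepth)
        }
        sceneView.session.run(config)
    }

    // 2) + 4) Attach the virtual content to the node ARKit creates for the image anchor,
    // so it moves along with the detected image.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        // A small box standing in for the real virtual model.
        let box = SCNNode(geometry: SCNBox(width: 0.05, height: 0.05, length: 0.05, chamferRadius: 0))
        node.addChildNode(box)
    }
}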
Posted by xand. Last updated.
Post not yet marked as solved
0 Replies
381 Views
Hi,

I might be incorrect in my current understanding, so if I am, please let me know. The way I understand things currently, there is still a lot of work being done on RealityKit. There are features that were part of ARKit 2.0 (using SceneKit), such as object recognition and image recognition, that don't seem to have counterparts in RealityKit yet. Am I correct in that?

In reading about the new Scene, Entity, Component, and Resource data types, I have seen references to anchors and targets of different kinds. For example, while reading about the AnchorEntity and AnchoringComponent.Target types (see the link below), we see that there is a property called anchorIdentifier. The explanation for this property is: 'The identifier of the AR anchor with which the anchor entity is associated, or nil if it isn't currently anchored.'

https://developer.apple.com/documentation/realitykit/anchoringcomponent

If I want to identify an image or object in the real environment and then use that as an anchor to render other things, has this functionality been ported to or included in RealityKit yet? Presumably I would have to get an AR anchor from somewhere 'under the hood' in order to render content that is not simply placed relative to the camera origin?

Thank you in advance. I'm not so much looking for spoon-fed code, but want to make sure I understand the current state of RealityKit.
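To make the question concrete, this is the kind of thing I'm hoping is (or will be) possible in RealityKit (a rough sketch; the "DemoImages" group and "poster" image names are placeholders):

import RealityKit

// Rough sketch: anchor RealityKit content to a detected image.
func placeModel(in arView: ARView) {
    // AnchoringComponent.Target.image asks RealityKit to anchor the entity to a detected image.
    // "DemoImages" / "poster" are placeholder AR resource names.
    let imageAnchor = AnchorEntity(.image(group: "DemoImages", name: "poster"))

    // A simple box standing in for the content I actually want to render.
    let box = ModelEntity(mesh: .generateBox(size: 0.1))
    imageAnchor.addChild(box)

    arView.scene.addAnchor(imageAnchor)
    // Once the image is tracked, imageAnchor.anchorIdentifier should report the
    // underlying ARAnchor's identifier (nil before it is anchored).
}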
Posted by xand. Last updated.