Hiya,

Perhaps the copy on the product page isn't as plain as it could be. My understanding from testing the APIs and watching the WWDC sessions is this:

- People Occlusion requires an A12 or A12X Bionic chip (or later).
- Motion Capture requires an A12 or A12X Bionic chip (or later).
- Multiple face tracking requires an A12 or A12X Bionic chip AND a TrueDepth camera.
- Simultaneous front and back camera usage requires an A12 or A12X Bionic chip AND a TrueDepth camera (the camera is needed for the face-tracking element of the simultaneous usage).

ANE is the "Apple Neural Engine", a co-processor in the system-on-a-chip that Apple uses for on-device machine-learning inference (presumably a highly optimised matrix co-processor).

Motion Capture (as a skeleton) works using the outward (rear) facing camera. Face tracking requires the TrueDepth camera and uses the 'selfie' camera.

If you look at the API for RealityKit, all the samples are there: https://developer.apple.com/documentation/realitykit

Hope that helps.
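Rather than reasoning about chip generations, you can also just ask ARKit at runtime whether each feature is available on the current device. A minimal sketch using the iOS 13 / ARKit 3 capability checks (the function name `logARCapabilities` is mine, the configuration APIs are from the ARKit framework):

```swift
import ARKit

// Query ARKit's capability flags for each of the features above,
// instead of hard-coding device or chip names.
func logARCapabilities() {
    // People Occlusion: a frame semantic on world tracking (A12 or later).
    let peopleOcclusion = ARWorldTrackingConfiguration
        .supportsFrameSemantics(.personSegmentationWithDepth)
    print("People Occlusion supported: \(peopleOcclusion)")

    // Motion Capture: body tracking via the rear camera (A12 or later).
    print("Motion Capture supported: \(ARBodyTrackingConfiguration.isSupported)")

    // Face tracking needs the TrueDepth camera; on A12-class devices
    // ARKit 3 can track more than one face at a time.
    if ARFaceTrackingConfiguration.isSupported {
        let maxFaces = ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
        print("Face tracking supported, up to \(maxFaces) face(s)")
    } else {
        print("Face tracking not supported (no TrueDepth camera)")
    }

    // Simultaneous front and back cameras: world tracking that also
    // tracks the user's face on the front camera.
    let simultaneous = ARWorldTrackingConfiguration.supportsUserFaceTracking
    print("Simultaneous front/back camera supported: \(simultaneous)")
}
```

Checking these flags (rather than sniffing the device model) is the approach Apple recommends, since it keeps working as new hardware ships.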