Inputs: Updates to inputs on Apple Vision Pro let you decide if you want the user's hands to appear in front of or behind the digital content.
I'm trying to understand why this is being introduced. Why would anyone compromise the spatial experience by forcing the hands to appear entirely in front of or behind digital content? Won't this be confusing to users? It seems like it should be a natural mixed reality experience where occlusion occurs as needed: if your physical hand is in front of a virtual object, it remains visible, and if you move it behind the object, it disappears (rather than a semi-transparent view of your hand through the model).
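For context, here's a minimal sketch of how I'd expect to express a hand-visibility preference today, assuming the existing SwiftUI `upperLimbVisibility(_:)` scene modifier is the relevant knob; the exact API that the new "in front of or behind" behavior maps to in visionOS 2 may well be different, so treat this as an assumption, not the announced mechanism.

```swift
import SwiftUI
import RealityKit

// Sketch only: an immersive scene with one virtual object, plus a request
// that the system keep the user's passthrough hands visible.
// `upperLimbVisibility(_:)` controls whether the hands are shown at all;
// whether it also governs the new front/behind rendering is an assumption.
@main
struct HandsDemoApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "demo") {
            RealityView { content in
                // A simple box the user can reach toward, to see how their
                // hand is composited against virtual content.
                let box = ModelEntity(mesh: .generateBox(size: 0.2))
                box.position = [0, 1.2, -0.6] // roughly chest height, 0.6 m ahead
                content.add(box)
            }
        }
        // Prefer keeping the user's hands visible in passthrough.
        .upperLimbVisibility(.visible)
    }
}
```

My expectation was that with hands visible, the system would handle occlusion per pixel based on depth, rather than making the developer pick "always in front" or "always behind" wholesale.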