Hello @peggers123, thank you for your question!
You are right that AnchorEntities and ARSessions can both track a user's hand, but each technology is suited to different use cases.
The Happy Beam sample is a good example of when to use ARKit and an ARSession for hand tracking. When playing Happy Beam, players make a heart shape with their hand to shoot a rainbow beam at storm clouds. To detect a heart shape, you need to track individual hand joints and do some matrix math to calculate their orientation relative to each other. If your app needs to detect very specific hand movements, you will need to use ARKit and not AnchorEntities.
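To give a concrete picture of the joint-level access ARKit provides, here is a minimal sketch (assuming visionOS, where the session class is ARKitSession and hand data comes from HandTrackingProvider). The gesture check is a hypothetical placeholder, not code from the Happy Beam sample:

```swift
import ARKit

let session = ARKitSession()
let handTracking = HandTrackingProvider()

func trackHands() async throws {
    // Start hand tracking; this is what triggers the permission prompt.
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let hand = update.anchor
        guard hand.isTracked, let skeleton = hand.handSkeleton else { continue }

        // Joint transforms are relative to the hand anchor, so multiply by the
        // anchor's origin transform to get a world-space transform.
        let indexTip = skeleton.joint(.indexFingerTip)
        let indexTipWorld = hand.originFromAnchorTransform * indexTip.anchorFromJointTransform

        // Compare joint transforms across both hands here to decide whether the
        // player is making the gesture (hypothetical helper, not from the sample).
        // detectHeartShape(using: indexTipWorld)
        _ = indexTipWorld
    }
}
```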
In general, anchor entities are intended to be fixed points in space that you anchor your content to. When you create an anchor entity, you can fix it to a specific point in the world, and RealityKit handles tracking and updating its position for you, so any entities you attach to it stay in place in the user's environment. You can also anchor it to a general location on the user's body, like their hand or head, but you don't get access to every individual joint (a specific knuckle, for example) the way you do with ARKit hand tracking.
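For comparison, here is a minimal sketch (assuming visionOS and RealityKit inside a RealityView) of anchoring content to the user's hand with an AnchorEntity; RealityKit keeps it positioned for you, with no session to run:

```swift
import RealityKit

// Anchor a small sphere to the user's left palm; RealityKit tracks it for you.
let palmAnchor = AnchorEntity(.hand(.left, location: .palm))
let sphere = ModelEntity(mesh: .generateSphere(radius: 0.02),
                         materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
palmAnchor.addChild(sphere)

// Inside a RealityView's make closure:
// content.add(palmAnchor)
```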
Additionally, you don't need any extra permissions to create an anchor entity anchored to the user's body; however, you do need to ask the user for permission to track their hands with an ARSession. This is because anchor entities don't actually expose their transform data to your code: if you try to read an AnchorEntity's transform, you'll notice visionOS obscures it. If you need the exact transforms of the user's hands, you will need to start an ARSession with ARKit.
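If you do go the ARKit route, the authorization request looks roughly like this (a sketch, assuming visionOS; remember to add an NSHandsTrackingUsageDescription string to your Info.plist):

```swift
import ARKit

let session = ARKitSession()

// Ask the user for hand-tracking permission before running the provider.
func handTrackingAllowed() async -> Bool {
    let results = await session.requestAuthorization(for: [.handTracking])
    return results[.handTracking] == .allowed
}
```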
Let me know if this helps!