Hi again, and
thank you for your answer. You are outlining solutions for registration and the algorithms involved.
However, our problem actually starts one step earlier.
Let me clarify my problem:
We want to MANUALLY align a virtual object with its real counterpart on a table by simply moving the virtual object until the two match.
We wanted to test how precise you can get. (Ideally you can intuitively align the two objects to within 1 millimeter just using the visualization.) In the process you walk around the object and check from all sides whether they really match precisely.
However, what we found is: if you try to match the objects based purely on the visualization, reality is displayed with a (to us) non-deterministic distortion as you walk around the object. The virtual object seems to stay rock-solid, while reality appears to wobble by around 2 cm.
So you basically can never finish the task, because once your object matches from one perspective, it no longer matches from another.
Performing this use case manually, or simply verifying the match from different perspectives using only the visualization, is very interesting to us.
We know this problem from other headsets as well. The Quest Pro, for example, performed very well here. (Not talking about image quality, just the "wobbling".) We are pretty surprised that the Vision Pro performs rather poorly in this regard.
We currently use Unity and a shared environment. Next we want to try a pure native implementation, but I doubt that this will change much. (Although I hope there is something we can do on our side to improve the situation; see the sketch below.)
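For context, the kind of thing we would try on the native side looks roughly like this: pin the manually aligned object to an ARKit WorldAnchor instead of a static scene transform, in the hope that re-localizing the content against the device's world map reduces the apparent wobble. This is only a minimal sketch assuming a visionOS immersive space with world-sensing permission; `pinToWorldAnchor`, `alignedEntity`, and `initialTransform` are placeholder names for our own content, not part of any existing project.

```swift
import ARKit
import RealityKit

/// Minimal sketch: keep a manually aligned entity pinned to a WorldAnchor
/// so it is continuously re-localized against the device's world map.
/// `alignedEntity` and `initialTransform` are placeholders for our own content.
func pinToWorldAnchor(alignedEntity: Entity,
                      initialTransform: simd_float4x4) async throws {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    // Requires an active immersive space and world-sensing authorization.
    try await session.run([worldTracking])

    // Create the anchor at the pose we manually aligned the object to.
    let anchor = WorldAnchor(originFromAnchorTransform: initialTransform)
    try await worldTracking.addAnchor(anchor)

    // Follow anchor updates and keep the entity glued to the anchor pose.
    for await update in worldTracking.anchorUpdates where update.anchor.id == anchor.id {
        switch update.event {
        case .added, .updated:
            alignedEntity.setTransformMatrix(update.anchor.originFromAnchorTransform,
                                             relativeTo: nil)
        default:
            break
        }
    }
}
```

We are not sure whether a world anchor actually behaves differently from a static transform in this scenario, but it seems like the most obvious thing to try once we go native.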