In ARKit, I captured a few color CVPixelBuffers and depth CVPixelBuffers, then ran a PhotogrammetrySession with PhotogrammetrySamples.
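For reference, here is a simplified sketch of how I feed the samples in. The capture arrays (`capturedColorBuffers`, `capturedDepthBuffers`) are placeholders for my ARKit capture code; only the RealityKit calls are the actual API.

```swift
import Foundation
import CoreVideo
import RealityKit

// Placeholders standing in for the buffers I captured with ARKit.
let capturedColorBuffers: [CVPixelBuffer] = []  // color frames
let capturedDepthBuffers: [CVPixelBuffer] = []  // matching depth maps

// Build one PhotogrammetrySample per color frame and attach its depth map.
// My understanding is that depthDataMap should be a linear depth/disparity
// buffer (e.g. kCVPixelFormatType_DepthFloat32) for the session to recover
// metric scale from it -- that's my assumption, please correct me if wrong.
var samples: [PhotogrammetrySample] = []
for (index, color) in capturedColorBuffers.enumerated() {
    var sample = PhotogrammetrySample(id: index, image: color)
    sample.depthDataMap = capturedDepthBuffers[index]
    samples.append(sample)
}

// Run the session over the sample sequence and request a full-detail model.
let outputURL = URL(fileURLWithPath: "/tmp/model.usdz")
do {
    let session = try PhotogrammetrySession(input: samples,
                                            configuration: .init())
    try session.process(requests: [.modelFile(url: outputURL, detail: .full)])
    // In the real app I iterate session.outputs (an AsyncSequence) to await
    // progress and completion; omitted here for brevity.
} catch {
    print("Photogrammetry failed: \(error)")
}
```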
In my service, precise real-world scale is important, so I tried to figure out which factors determine whether the generated model comes out at real scale.
I ran some experiments with controlled variables: the same number of images (10), the same object, the same shot angles, and fixed distances to the object (30 cm, 50 cm, 100 cm). But even with these variables held constant, the session sometimes generates a real-scale model and sometimes does not.
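In case the measurement method matters: I judge "real scale" by loading the generated USDZ and comparing its bounds against the object's tape-measured size. A rough sketch (`outputURL` is the file produced above):

```swift
import RealityKit

do {
    // Load the reconstructed model and read its axis-aligned bounds,
    // which RealityKit reports in meters.
    let model = try ModelEntity.loadModel(contentsOf: outputURL)
    let extents = model.visualBounds(relativeTo: nil).extents
    print("model extents in meters: \(extents)")
} catch {
    print("Failed to load model: \(error)")
}
```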
Since I can't access the photogrammetry source code or see how it works internally, I wonder what I'm missing and how I can get a real-scale model every time, if that's possible.