Understanding ideal overlaps in Object Capture

Hi, I’ve been reading and watching quite a lot about Object Capture over the past few months and how it works, and I’m wondering how I’m supposed to interpret the ideal overlap between sequential photos. Apple says it should be at least 70% between each photo, but since overlap isn’t a tangible concept, how am I supposed to take it into account when I take the photos? Any help appreciated. By the way, here’s the article that mentions the overlap, near the end: https://developer.apple.com/documentation/realitykit/capturing-photographs-for-realitykit-object-capture

Yes, but what do you want to accomplish with this? What’s the goal? If you’re looking for object recognition of a real-world object inside an ARKit scene, look here. Just download the app, load it onto your iPhone via Xcode, and start scanning. Later you can set an anchor when composing your scene in Reality Converter. Hope that helps.
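
As a rough way to reason about the 70% figure: it means each new photo should share about 70% of its content with the previous one, so you only advance the camera by the remaining ~30% between shots. Below is a minimal sketch of that arithmetic, assuming each shot’s useful coverage is roughly the camera’s horizontal field of view and that you orbit the object at a fixed distance; the helper name `photosPerOrbit` and the 60° field of view are illustrative assumptions, not anything from Apple’s docs.

```swift
import Foundation

/// Crude estimate of how many photos one full orbit around the object needs,
/// assuming consecutive shots should share `overlap` of their framed content
/// and that each shot's useful coverage spans roughly the horizontal field of view.
func photosPerOrbit(overlap: Double, horizontalFOVDegrees: Double) -> Int {
    // Advance the camera by only the non-overlapping portion of the field of view.
    let stepDegrees = (1.0 - overlap) * horizontalFOVDegrees
    return Int((360.0 / stepDegrees).rounded(.up))
}

// Example: 70% overlap with a ~60° field of view suggests about 18° between
// shots, i.e. roughly 20 photos for a single orbit at one height.
print(photosPerOrbit(overlap: 0.7, horizontalFOVDegrees: 60))
```

In practice you would repeat such an orbit at two or three heights, which matches the general photogrammetry practice of taking many slightly shifted photos rather than a few widely spaced ones.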
