Definitely you can. Whole city blocks and cathedrals are captured and reconstructed in 3D by photogrammetry software; that's how entire real cities end up in games (e.g., Ubisoft titles). Archaeologists and historians use it too. But you need tons of images: 100 is very optimistic, think 400 and more. You might capture video of the building from a drone (or a satellite, if you have one), then convert the video into an image sequence and feed it into the photogrammetry session. There are a few very expensive software packages for this, but now we have a native photogrammetry engine with the full power of Apple's ML and Metal.
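To give a sense of that engine, here's a minimal sketch of driving a RealityKit PhotogrammetrySession on macOS from a folder of stills; the paths and detail level are placeholders:

```swift
import RealityKit
import Foundation

// Folder of overlapping photos in, USDZ model out (paths are placeholders).
let images = URL(fileURLWithPath: "/tmp/building-photos", isDirectory: true)
let output = URL(fileURLWithPath: "/tmp/building.usdz")

let session = try PhotogrammetrySession(input: images)

// Listen for progress and completion before kicking off processing.
Task {
    for try await message in session.outputs {
        switch message {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .processingComplete:
            print("Model written to \(output.path)")
        case .requestError(_, let error):
            print("Failed: \(error)")
        default:
            break
        }
    }
}

try session.process(requests: [.modelFile(url: output, detail: .medium)])
RunLoop.main.run() // keep the command-line tool alive while processing
```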
It seems more logical to capture buildings with LiDAR in ARKit, just like the "3D Scanner" and "Capture" apps do. But I doubt you're brave enough to attach your $1000 iPhone Pro Max to a drone, and I guess it's not possible to handle such a huge amount of data in one AR session anyway. That's why scientists and CG artists use LiDAR to scan interiors and photogrammetry to reconstruct exterior landscapes.
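For reference, this is roughly how those LiDAR scanner apps turn scanning on in ARKit; a minimal sketch, assuming a LiDAR-equipped device:

```swift
import ARKit

let session = ARSession()
let config = ARWorldTrackingConfiguration()

// Scene reconstruction is only available on LiDAR-equipped devices.
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh
}
session.run(config)

// The session delegate then receives ARMeshAnchor updates containing
// the reconstructed geometry (vertices, normals, faces) of the space.
```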
Photogrammetry uses still images (and additional data like gravity/depth) to find corresponding regions across several images simultaneously and reconstruct a 3D object from them. It cannot be done on a video stream because you just have no next "image", so to speak... A video stream usually contains only a few full keyframes plus a stream of "changed pixels" used to build each next frame from the previous keyframe and the deltas. That's why you sometimes see "stuck" portions of video when the connection is weak or the file is damaged: frozen portions of the screen get "overwritten" by the next frames with a nice "glitch" effect, until the next keyframe arrives with a full picture. That's why a photogrammetry session cannot compare several frames of a raw video stream, detect corresponding regions, reconstruct a 3D point cloud, and build a mesh from it.
You might convert your video into an image sequence (with command-line utilities or any video editing software) and feed the output into a photogrammetry session (e.g. the command-line HelloPhotogrammetry example), as in the sketch below.
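If you'd rather stay in Swift than reach for a separate tool, something like AVAssetImageGenerator can do the extraction; a sketch with a made-up two-frames-per-second sampling rate and placeholder paths:

```swift
import AVFoundation
import AppKit

let videoURL = URL(fileURLWithPath: "/tmp/drone-flight.mov")            // placeholder
let framesDir = URL(fileURLWithPath: "/tmp/frames", isDirectory: true)  // placeholder
try? FileManager.default.createDirectory(at: framesDir, withIntermediateDirectories: true)

let asset = AVAsset(url: videoURL)
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true
// Ask for exact frames, not the nearest keyframe.
generator.requestedTimeToleranceBefore = .zero
generator.requestedTimeToleranceAfter = .zero

// Sample two frames per second across the whole clip.
let times = stride(from: 0.0, to: asset.duration.seconds, by: 0.5).map {
    NSValue(time: CMTime(seconds: $0, preferredTimescale: 600))
}

generator.generateCGImagesAsynchronously(forTimes: times) { requested, cgImage, _, _, _ in
    guard let cgImage = cgImage,
          let png = NSBitmapImageRep(cgImage: cgImage)
              .representation(using: .png, properties: [:]) else { return }
    let name = String(format: "frame_%07.2f.png", requested.seconds)
    try? png.write(to: framesDir.appendingPathComponent(name))
}
```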
It would be nice to have an iOS app capturing video along with the ARKit scene, so each frame would ALREADY know its position in space. That would significantly simplify the reconstruction process (and improve quality). There IS such an app for recording the camera path (in AR) along with video (for use in CG/compositing), but combining both approaches is a bit tricky. It will definitely be done in the future; it's obvious that we need a synergy between ARKit and RealityKit (Object Capture / photogrammetry). But for this very moment the Photogrammetry API is macOS only while ARKit is iOS only... (I guess). LiDAR + photogrammetry would give us astonishing results.
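A hypothetical sketch of the iOS half: an ARSession delegate that keeps each frame's camera pose, which such a future pipeline could hand over along with the matching images (PoseRecorder is a made-up name, not a shipping API):

```swift
import ARKit

// Hypothetical recorder: stores one camera-to-world transform per ARFrame.
final class PoseRecorder: NSObject, ARSessionDelegate {
    private(set) var poses: [simd_float4x4] = []

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // frame.capturedImage is the video frame (CVPixelBuffer);
        // frame.camera.transform is its pose in world space.
        poses.append(frame.camera.transform)
    }
}

let recorder = PoseRecorder()
let session = ARSession()
session.delegate = recorder
session.run(ARWorldTrackingConfiguration())
```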
I have to mention that HelloPhotogrammetry already gives me astonishing results compared to alternative apps/services and expensive CG software. I've got the cleanest and most detailed result I've ever seen (as a professional CG artist) from just 40+ photos. Industrial 3D scanners will probably do better, but we have zero of them in our pockets and on our tables :)
Could anyone please tell me WHERE the CaptureSample app is? How can I get it?