Best Apple device for Object Capture

I have tested Object Capture with the iOS sample app and the command-line tool on macOS. I'm wondering which Apple device gives the best quality (geometry and texture), since the various camera configurations may not produce the same results.
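For reference, the macOS command-line side is driven by RealityKit's `PhotogrammetrySession`. A minimal sketch (paths are placeholders; this mirrors the WWDC21 sample, not any one official binary):

```swift
import RealityKit

// Minimal sketch of the macOS Object Capture API (RealityKit, macOS 12+).
// The folder and output paths below are placeholders.
let inputFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

let session = try PhotogrammetrySession(input: inputFolder)

Task {
    // Listen for progress and completion messages from the session.
    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished.")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}

// Ask for a full-detail USDZ model from the image folder.
try session.process(requests: [
    .modelFile(url: outputFile, detail: .full)
])
RunLoop.main.run()
```

If the input folder's images contain embedded depth and gravity data (as the iOS capture app produces), the session uses it automatically; otherwise it reconstructs from the images alone.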

I have installed iOS 15 on an iPhone 11 Pro Max. The iOS app outputs some depth data.

Which cameras are used to compute depth? Does it use three cameras or two? If it uses only two, which pair does it use?

In theory, if only two cameras are used, the best pairing is telephoto and wide. I'm afraid that a wide + ultra-wide configuration will give less accurate results.
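Which pairs a given device actually exposes for depth can be enumerated with AVFoundation. This sketch only shows what the hardware offers; which pair Object Capture itself picks is not documented:

```swift
import AVFoundation

// Sketch: list the virtual (multi-camera) back devices and whether each
// can deliver depth. Device types map to camera pairs:
//   .builtInDualCamera     = wide + telephoto
//   .builtInDualWideCamera = ultra-wide + wide
//   .builtInTripleCamera   = all three
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInDualCamera, .builtInDualWideCamera, .builtInTripleCamera],
    mediaType: .video,
    position: .back)

for device in discovery.devices {
    let supportsDepth = !device.activeFormat.supportedDepthDataFormats.isEmpty
    print("\(device.localizedName): depth supported = \(supportsDepth)")
}
```

Running this on each candidate device (iPhone 12 Pro vs. iPad Pro) would at least show which stereo baselines are available to capture apps.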

In short, can we get the same accuracy with an iPhone 12 as with an iPad Pro? The iPad seems more ergonomic than the iPhone for scanning an object.

Can the LiDAR on the iPhone 12 Pro / iPad Pro also be used to improve the results?

Replies

A device with a LiDAR Scanner will be able to get the most accurate data, since it can directly measure depth. However, you can still get great results without LiDAR by taking lots of pictures of the object.
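Whether LiDAR-based depth is available to capture code on a given device can be checked via ARKit (a sketch; how Object Capture itself consumes LiDAR data is not documented):

```swift
import ARKit

// Sketch: check whether this device can supply LiDAR scene depth
// (true on iPhone 12 Pro and LiDAR-equipped iPad Pro models).
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    let config = ARWorldTrackingConfiguration()
    config.frameSemantics = [.sceneDepth]
    // Running an ARSession with this configuration would populate
    // ARFrame.sceneDepth with a per-frame LiDAR depth map.
    print("LiDAR scene depth supported on this device.")
} else {
    print("No LiDAR scene depth on this device.")
}
```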

  • Can you expand on what you mean by "lots of pictures"?

    The Capture app instructions say 20-200 pictures should be good. Will the result be more accurate the more pictures I take?


Thanks, good to know. By the way, the results are impressive.

My personal phone is a 12 Pro Max, but I won't install iOS 15 on it for the moment. I have already experimented with stereo configurations on a 12 Pro using my own software, and the best quality was achieved with the telephoto + wide pair.

Can I expect the same results from a 12 Pro and an iPad Pro M1? Since the iPad Pro M1 has only wide and ultra-wide cameras, I'm afraid the quality will be less accurate. Which do you recommend, the 12 Pro or the iPad Pro M1?

I tested this app with my iPhone 11, which has only a dual rear camera and no LiDAR, and the quality of the results varies. My output folder contains depth maps, and I assume those were generated by the two cameras (wide and ultra-wide). Output quality really depends on a lot of things: light, movement, lens, overlap percentage, object texture, object size, surface, background, etc. I think LiDAR will definitely improve output quality, but I'm not sure whether it actually works alongside the cameras when taking pictures. If you're using an iPhone 12 Pro, how do you know whether the depth data comes from the dual cameras or from the LiDAR?
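The file metadata won't say which sensor produced the depth, but you can at least verify that a captured HEIC embeds disparity data and inspect its format. A sketch (the file path is a placeholder):

```swift
import ImageIO
import AVFoundation

// Sketch: check whether a captured HEIC embeds depth/disparity data.
// The path below is a placeholder for one of the app's output images.
let url = URL(fileURLWithPath: "/path/to/IMG_0001.HEIC") as CFURL

guard let source = CGImageSourceCreateWithURL(url, nil) else {
    fatalError("Cannot open image")
}

// Disparity is stored as auxiliary data alongside the main image.
if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
        source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any],
   let depth = try? AVDepthData(fromDictionaryRepresentation: info) {
    print("Embedded depth found, format: \(depth.depthDataType)")
    print("Quality: \(depth.depthDataQuality == .high ? "high" : "low")")
} else {
    print("No disparity data in this file.")
}
```

Comparing the depth map resolution and quality flags between a LiDAR device and a camera-only device is one practical way to see what each sensor setup actually delivers.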