Hi,
I've been playing around with Object Capture API and am impressed by the results.
However, one thing that I've been struggling with is the orientation of the output model - I often find that the model comes out the wrong way around and requires manual rotation to correct.
I've been capturing in portrait, so the images carry Orientation EXIF metadata. I've tried converting the images to PNG, which strips the orientation metadata and bakes the rotation into the pixels so they're stored the "right way around". Sometimes this produces better results, but not always.
I've also tried feeding in gravity vectors via PhotogrammetrySample. However, this seems to have no effect.
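For reference, this is roughly how I'm attaching the gravity vector (a simplified sketch - `pixelBuffer` is the captured frame, and `gravity` is the value I recorded from CMMotionManager at capture time):

```swift
import RealityKit
import CoreVideo
import simd

/// Builds a PhotogrammetrySample with a per-image gravity vector attached.
func makeSample(id: Int, pixelBuffer: CVPixelBuffer, gravity: simd_float3) -> PhotogrammetrySample {
    var sample = PhotogrammetrySample(id: id, image: pixelBuffer)
    // Gravity direction in the image's coordinate space,
    // as recorded by Core Motion when the photo was taken.
    sample.gravity = gravity
    return sample
}
```

I then feed these samples to the `PhotogrammetrySession` via its sequence-based initializer instead of pointing it at a folder, but the output orientation doesn't seem to change whether or not `gravity` is set.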
Does the Object Capture API use gravity vectors to determine the output model's orientation? Likewise, does it use the Orientation EXIF metadata?
Thanks
(Note: I'm currently using macOS 12.5)