Hi,
I built mine with ARKit. Interestingly, my depth is correct and my scale is correct, but my orientation is wrong. I tried feeding in gravity data via the MakerApple dictionary using key "8", which is where Apple quietly stores its gravity metadata.
I also tried writing intrinsics and extrinsics into the metadata, but I don't know if Object Capture is actually using them.
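For anyone else trying this, here is a minimal sketch of how I attempted the MakerApple write with ImageIO. To be clear, the meaning of key "8" and the value layout it expects are undocumented assumptions; only `kCGImagePropertyMakerAppleDictionary` itself is a documented constant.

```swift
import Foundation
import CoreGraphics
import ImageIO
import UniformTypeIdentifiers

/// Sketch: write a CGImage out as HEIC with a MakerApple dictionary attached.
/// ASSUMPTION: key "8" holds a 3-element gravity vector; Apple does not
/// publish the MakerApple schema, so this layout is a guess.
func writeHEIC(_ image: CGImage, to url: URL, gravity: [Double]) -> Bool {
    guard let dest = CGImageDestinationCreateWithURL(
        url as CFURL, UTType.heic.identifier as CFString, 1, nil
    ) else { return false }

    let properties: [CFString: Any] = [
        kCGImagePropertyMakerAppleDictionary: ["8": gravity]  // assumed layout
    ]
    CGImageDestinationAddImage(dest, image, properties as CFDictionary)
    return CGImageDestinationFinalize(dest)
}
```

Even after doing this, I could not confirm that Object Capture reads the value back, so treat it as an experiment rather than a working recipe.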
regards,
peter
Hi HeoJin,
I think there might be some information here... https://developer.apple.com/documentation/imageio/image_i_o_constants
I noticed that since iOS 16 Apple has added these extra constants to ImageIO:
kIIOCameraExtrinsics_CoordinateSystemID: CFString
kIIOCameraExtrinsics_Position: CFString
kIIOCameraExtrinsics_Rotation: CFString
kIIOCameraModelType_GenericPinhole: CFString
kIIOCameraModelType_SimplifiedPinhole: CFString
kIIOCameraModel_Intrinsics: CFString
kIIOCameraModel_ModelType: CFString
kIIOMetadata_CameraExtrinsicsKey: CFString
kIIOMetadata_CameraModelKey: CFString
I think these are the same names that appear in the image metadata when you view one of the images from the guided capture sample app in Photos on macOS...
I cannot figure out how to write these fields into the metadata... There doesn't seem to be a dictionary for them like there is for EXIF or GPS...
Apple engineers... help us!
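The best I have come up with is to try passing the kIIO* constants as top-level keys in the properties dictionary given to `CGImageDestinationAddImage`, mirroring how the EXIF and GPS dictionaries work. Whether CGImageDestination actually serializes these keys is unverified, and the nesting and value layouts (float counts, row- vs. column-major order) are assumptions inferred from the constant names:

```swift
import CoreGraphics
import ImageIO

/// Sketch of one ATTEMPT at embedding the iOS 16 camera-model keys.
/// The nesting below is an assumption: kIIOMetadata_CameraModelKey is
/// guessed to group the kIIOCameraModel_* entries, and
/// kIIOMetadata_CameraExtrinsicsKey the kIIOCameraExtrinsics_* entries.
func cameraProperties(intrinsics: [Float],  // 9 values, 3x3 (order assumed)
                      rotation: [Float],    // 9 values, 3x3 (order assumed)
                      position: [Float]) -> CFDictionary {
    let properties: [CFString: Any] = [
        kIIOMetadata_CameraModelKey: [
            kIIOCameraModel_ModelType: kIIOCameraModelType_SimplifiedPinhole,
            kIIOCameraModel_Intrinsics: intrinsics,
        ],
        kIIOMetadata_CameraExtrinsicsKey: [
            kIIOCameraExtrinsics_Rotation: rotation,
            kIIOCameraExtrinsics_Position: position,
        ],
    ]
    return properties as CFDictionary
}
// Usage: pass the result as the options dictionary, e.g.
// CGImageDestinationAddImage(destination, image, cameraProperties(...))
```

If anyone has confirmed whether these keys round-trip into the HEIC file, please share.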
Hi - Did you find out how to overcome the issue?
I noticed that the Object Capture API embeds rotation, position, and intrinsics information into the HEIC metadata...
I know it is possible to get this information if you use SceneKit and ARKit on iOS, so what I am actually interested in is: how do you insert this metadata under the correct metadata dictionary key so it gets embedded in the HEIC image?
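Getting the values out of ARKit is the documented half of the problem. Per frame, `ARCamera` exposes both the 3x3 intrinsic matrix and the camera's world-space pose:

```swift
import ARKit

/// Pull per-frame camera geometry from an ARFrame (documented ARKit API).
func cameraGeometry(from frame: ARFrame)
    -> (intrinsics: simd_float3x3, extrinsics: simd_float4x4) {
    // intrinsics: fx/fy on the diagonal, principal point in the last column
    let K = frame.camera.intrinsics
    // transform: camera pose in world coordinates (rotation + translation)
    let pose = frame.camera.transform
    return (K, pose)
}
```

The open question is only the write side, i.e. which metadata key these values need to land under.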
ARKit is amazing... it lets you access other things like LiDAR depth maps (not disparity; actual float data).
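For reference, this is roughly how you read that float depth data: enable `.sceneDepth` frame semantics on a LiDAR-equipped device, then pull the `Float32` (meters) pixel buffer off each frame. A sketch:

```swift
import ARKit

/// Copy the LiDAR scene-depth map (Float32 meters per pixel) out of an
/// ARFrame. Requires an ARWorldTrackingConfiguration with
/// frameSemantics containing .sceneDepth, on a LiDAR device.
func depthValues(from frame: ARFrame) -> [Float]? {
    guard let depthMap = frame.sceneDepth?.depthMap else { return nil }

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width  = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap)?
        .assumingMemoryBound(to: Float32.self) else { return nil }

    // Rows may be padded, so index by the row stride, not the width.
    let rowStride = CVPixelBufferGetBytesPerRow(depthMap)
        / MemoryLayout<Float32>.stride

    var values: [Float] = []
    values.reserveCapacity(width * height)
    for y in 0..<height {
        for x in 0..<width {
            values.append(base[y * rowStride + x])
        }
    }
    return values
}
```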