This would be especially useful for Object Capture photogrammetry models as well.
Super helpful thread. How could we enable this programmatically via Swift?
Noticed Reality Composer Pro has these settings we can enable for an asset.
We are also facing this issue and filed a bug for it a long time ago (FB13157298), but have yet to receive a response from Apple. It has issues both with images previously captured with their sample app and with images captured with their own native Camera app.
There is no size limit, but there is a limit of 1,000 images, so anything larger than an elephant is probably a no-go at good quality.
You could request that Apple support scale bar detection. That way you could define a scale from that bar and get very good accuracy with photogrammetry. Even LiDAR currently has a ~3 cm variance, which is too much if you are after high accuracy.
Not supported in ObjectCaptureSession, as confirmed by Apple directly. You can continue to use the macOS APIs for that, though.
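For reference, a minimal sketch of the macOS route with PhotogrammetrySession, assuming a folder of images on disk; the paths are placeholders and error handling is trimmed:

```swift
import Foundation
import RealityKit

// Placeholder paths: a folder of source images and the output model file.
let inputFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/path/to/model.usdz")

var config = PhotogrammetrySession.Configuration()
config.featureSensitivity = .normal   // use .high for low-texture subjects

let session = try PhotogrammetrySession(input: inputFolder, configuration: config)

// Observe progress and completion on the session's async output stream.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(fraction)")
        case .processingComplete:
            print("Done, model written to \(outputModel.path)")
        case .requestError(_, let error):
            print("Failed: \(error)")
        default:
            break
        }
    }
}

// Kick off reconstruction at full detail.
try session.process(requests: [.modelFile(url: outputModel, detail: .full)])
```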
You could capture with an iPhone side by side with your current setup and then apply the scale from the iPhone model to the full model. Technically, the way scaling is done in most photogrammetry software is via scale bars that the software detects: you define your bounds, and it then scales the model based on a few of these points in space that have a known distance between them. Sadly there is no support for this in PhotogrammetrySession yet; maybe file a suggestion.
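A rough sketch of the scale-transfer idea, assuming you can identify the same two reference points in both captures; the distances, asset name, and helper function are hypothetical:

```swift
import RealityKit

// Hypothetical helper: derive a uniform scale factor from the distance between
// two reference points as measured in the unscaled full-rig model versus the
// real-world distance taken from the iPhone capture of the same points.
func scaleFactor(measuredDistance: Float, trueDistance: Float) -> Float {
    trueDistance / measuredDistance
}

// Placeholder values: the markers are 0.82 units apart in the full-rig model
// but 0.75 m apart in the iPhone model.
let factor = scaleFactor(measuredDistance: 0.82, trueDistance: 0.75)

// Apply the factor uniformly to the loaded model (asset name is a placeholder).
let model = try Entity.load(named: "FullRigCapture")
model.scale *= SIMD3<Float>(repeating: factor)
```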
Thanks @brandonK212. I wish there were a notification in place for when these samples get updated. It might be good to file a feedback request for it.
In past betas there were issues with Metal that caused a lot of crashes for us; I was hoping that would be fixed in this release. I have to say the rollout of these new features has been rather disappointing so far. First it took a month for the sample code to land, then there were multiple issues with memory usage and performance in the early betas, and then they switched entirely to the Observable framework without any notice and without updating their sample code.
Yeah, we are also getting a lot of random crashes; most seem to be related to the Metal framework. I have reverted to Beta 2 for now, which seems far more stable but sadly produces much inferior results in our testing.
Still waiting for it as well. We were told it would arrive by the end of this month.
It hasn't been released as of today, but I am sure once it is, you will be able to find it under https://developer.apple.com/sample-code/wwdc/2023/ (I hope :))
I do hope they will revise that limitation over time, as it would open up a massive use case for real-time object capture. I understand their privacy concerns around camera access in general, but they could just limit it to certain APIs.
It's 2023, and that this is still not on Apple's radar now that they have a spatial headset is beyond me...
It's there under File > New > Object Capture Model.