Is there a way to lock down the scale of a model in AR Quick Look?
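One possible approach (a sketch, assuming a local .usdz file; ModelPreviewDataSource is a hypothetical helper class): present the model through QLPreviewController with an ARQuickLookPreviewItem and set allowsContentScaling to false, which disables pinch-to-scale so the model stays at its authored real-world size.

```swift
import ARKit
import QuickLook

// Sketch: a data source that hands QLPreviewController an
// ARQuickLookPreviewItem with content scaling disabled.
final class ModelPreviewDataSource: NSObject, QLPreviewControllerDataSource {
    let modelURL: URL  // assumed to point at a local .usdz file

    init(modelURL: URL) { self.modelURL = modelURL }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        let item = ARQuickLookPreviewItem(fileAt: modelURL)
        item.allowsContentScaling = false  // lock the scale in AR Quick Look
        return item
    }
}
```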
Hi,

USD added support for Draco in 20.02. When can we expect to have this working on Apple devices as well?

Cheers,
Markus
Hi,

I've been trying to use the command-line usdzconvert script to convert straight from OBJ + textures to USDZ, but the scaling is completely off: even scaled up to 1000% in AR view, the model is still much smaller than its actual size. The same thing happens when we convert from the glTF file instead of the OBJ. It doesn't happen, however, when using the Reality Converter app.

Has anyone else run into this? Can anyone from Apple reproduce it?

Cheers,
Markus
USD added Draco compression support in 19.11.
Is this something Apple is also considering adopting?
Will you make the GUI sample app that was used during the session available as well?
Thanks!
Are both TIFF and DNG (Apple's ProRAW format) currently not supported?
Is there any developer documentation on how we can make use of this new 2x mode on the iPhone 14 Pro?
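From the WWDC22 camera-capture material, the 2x mode appears to be exposed through AVFoundation rather than as a separate camera. A sketch (selectTwoTimesZoom is a hypothetical helper of ours) using the secondaryNativeResolutionZoomFactors property added in iOS 16:

```swift
import AVFoundation

// Sketch: the 2x "optical-quality" zoom on iPhone 14 Pro is advertised as a
// secondary native resolution zoom factor on the device format; selecting
// such a format and setting videoZoomFactor = 2.0 uses the sensor crop.
func selectTwoTimesZoom(on device: AVCaptureDevice) throws {
    guard let format = device.formats.first(where: { format in
        format.secondaryNativeResolutionZoomFactors.contains(2.0)
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = format
    device.videoZoomFactor = 2.0  // 2x crop from the 48MP sensor
    device.unlockForConfiguration()
}
```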
We are trying to save a scene to USDZ using the scene?.write method, which worked as expected until iOS 17.
On iOS 17 we get the error Thread 1: "*** -[NSPathStore2 stringByAppendingPathExtension:]: nil argument", which seems to be a SceneKit issue. Attaching a stack-trace screenshot for reference.
We call scene?.write(to: url, delegate: nil), where url has been generated using the newer .appending(path:) method.
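If the failure really is a missing path extension, one possible workaround (a sketch only; exportUSDZ is a hypothetical helper and the temporary directory is just an example destination) is to build the URL with an explicit .usdz extension:

```swift
import SceneKit

// Hypothetical helper: build the destination URL with an explicit .usdz
// extension instead of .appending(path:), which may leave SceneKit's
// exporter without a path extension on iOS 17.
func exportUSDZ(_ scene: SCNScene, named name: String) -> Bool {
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent(name, isDirectory: false)
        .appendingPathExtension("usdz")
    return scene.write(to: url, options: nil, delegate: nil, progressHandler: nil)
}
```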
We have implemented all the recent additions Apple made on the iOS side for guided capture using LiDAR and image data via ObjectCaptureSession.
After the capture finishes, we send our images to PhotogrammetrySession on macOS to reconstruct models at higher quality (Medium) than the Preview quality currently supported on iOS.
We have now done a few side-by-side captures using the new ObjectCaptureSession versus the traditional capture via the AVFoundation framework, but have not seen the improvements that were claimed during the session Apple hosted at WWDC.
In fact, we feel the results are actually worse, because the images obtained through the new ObjectCaptureSession aren't as high quality as the images we get from AVFoundation.
Are we missing something here? Is PhotogrammetrySession on macOS not using the new additional LiDAR data, or have the improvements been overstated? From the documentation it is not at all clear how the new LiDAR data gets stored and how it transfers.
We are using iOS 17 beta 4 and macOS Sonoma beta 4 in our testing. Both codebases were compiled with Xcode 15 beta 5.
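For reference, a minimal sketch of the macOS-side reconstruction step as we understand the RealityKit Object Capture API (reconstructModel is a hypothetical helper, the paths are placeholders, and the comment about depth pickup is our assumption rather than documented behavior):

```swift
import RealityKit

// Minimal sketch of reconstructing at .medium detail on macOS.
// `imagesURL` stands in for the Images/ folder produced on device.
func reconstructModel() async throws {
    let imagesURL = URL(fileURLWithPath: "/path/to/Images", isDirectory: true)
    let outputURL = URL(fileURLWithPath: "/path/to/model.usdz")

    // Assumption: any depth/gravity metadata that ObjectCaptureSession
    // embedded in the captured images is read automatically; there is no
    // explicit "use LiDAR" switch in PhotogrammetrySession.Configuration.
    let configuration = PhotogrammetrySession.Configuration()
    let session = try PhotogrammetrySession(input: imagesURL,
                                            configuration: configuration)

    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished: \(outputURL.path)")
        case .requestError(_, let error):
            print("Request failed: \(error)")
        default:
            break
        }
    }
}
```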
Has anybody noticed a pivot issue in models constructed through Object Capture?
Ideally the pivot of the object should be at the centre of its bounding box, but with the new macOS changes the pivot is now at (0, 0, 0), below the bounding box.
Here is a quick comparison: old vs. new.
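As a stopgap (a workaround sketch, not an official fix; recenterPivot is a hypothetical helper), the pivot can be recentred after loading by computing the bounding-box centre in SceneKit:

```swift
import SceneKit

// Workaround sketch: move a loaded node's pivot to the centre of its
// bounding box, compensating for models whose pivot sits at (0, 0, 0).
func recenterPivot(of node: SCNNode) {
    let (min, max) = node.boundingBox
    let center = SCNVector3(
        (min.x + max.x) / 2,
        (min.y + max.y) / 2,
        (min.z + max.z) / 2
    )
    node.pivot = SCNMatrix4MakeTranslation(center.x, center.y, center.z)
}
```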