Cole from Scandy wrote a nice blog article about this a number of years ago. I don't think his mesh was textured though, so the method described may not yield the results you're looking for.
https://www.scandy.co/blog/how-to-export-simple-3d-objects-as-usdz-on-ios
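For what it's worth, SceneKit can also write .usdz directly (iOS 12+), which may be an option. A minimal untested sketch, where mesh stands in for whatever SCNGeometry you already have (I haven't verified how textures come through):

import SceneKit

let scene = SCNScene()
scene.rootNode.addChildNode(SCNNode(geometry: mesh)) // `mesh` is your geometry

// The .usdz extension selects the USDZ exporter; write(to:) returns false on failure.
let exportUrl = FileManager.default.temporaryDirectory.appendingPathComponent("model.usdz")
if scene.write(to: exportUrl, options: nil, delegate: nil, progressHandler: nil) {
    print("Exported to \(exportUrl.path)")
}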
Re: @sducouedic's question, configuration changes aren't immediate. Add this delegate method to confirm that the sceneDepth semantic is enabled.

public func captureSession(_ session: RoomCaptureSession, didAdd room: CapturedRoom) {
    // Optional chaining makes this a Bool?, so `!= nil` would be true whenever
    // a configuration exists, regardless of the semantic. Compare to `true`.
    if session.arSession.configuration?.frameSemantics.contains(.sceneDepth) == true {
        print("sceneDepth is enabled")
    }
}
I'm seeing the same behavior and haven't found a solution.
Interesting. My previous test did all the things you mentioned, but the captureHighResolutionFrame call that failed was made immediately after calling session.run. It works when moved to a tap handler.
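In case it helps anyone else, a minimal sketch of the arrangement that works for me (iOS 16+; arView and the rest of the setup are from my own project, not an Apple sample):

import ARKit
import UIKit

final class CaptureViewController: UIViewController {
    let arView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        let config = ARWorldTrackingConfiguration()
        // One of the "things you mentioned": opt in to the high-res video format.
        if let format = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
            config.videoFormat = format
        }
        arView.session.run(config)

        arView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap)))
    }

    @objc func handleTap() {
        // Capturing here, after the session has been running, succeeds;
        // the same call made immediately after session.run failed for me.
        arView.session.captureHighResolutionFrame { frame, error in
            if let frame = frame {
                print("High-res frame: \(frame.camera.imageResolution)")
            } else if let error = error {
                print("Capture failed: \(error)")
            }
        }
    }
}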
Thanks
Always happens here :-/
Sweet, thanks for digging into that!
Thanks for the reply! I don't see anything related to export in the UIDocumentPickerViewController class definition though, so how should the following be done in iOS 14?
UIDocumentPickerViewController(url: exportUrl, in: .exportToService)
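The closest replacement I can find is the new forExporting initializer; is something like this the intended migration path? An untested sketch (my reading is that .exportToService corresponds to asCopy: true, i.e. copy and leave the original in place):

// iOS 14+: init(url:in:) is deprecated; exportUrl is from my snippet above.
let picker = UIDocumentPickerViewController(forExporting: [exportUrl], asCopy: true)
present(picker, animated: true)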
Thanks again,
Jim
Thanks. Looking forward to the evolution of the RealityKit ecosystem!

Best regards,
Jim
One thing that's a blocker with RealityKit currently is the lack of support for custom geometry. We needed to show a shaded mesh during a reconstruction session, so we were driven back to ARSCNView, which worked great but of course requires abandoning ARView.

Around 2:45 in this recently posted tech talk there's a comment: "We're overlaying the ARFrame image with a mesh being generated by ARKit using the LiDAR sensor... and the colors are based on a classification of what the mesh overlays." https://developer.apple.com/videos/play/tech-talks/609/

It would be extremely helpful to know if this demo used ARSCNView! Could you tell us how this was done?

Best regards,
Jim Selikoff
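For context, here's roughly what our ARSCNView fallback looks like. A minimal sketch (the class structure and the flat cyan material are ours, and this isn't necessarily how the tech talk demo was built), wrapping each ARMeshAnchor's geometry in SceneKit types via the Metal-buffer initializers:

import ARKit
import SceneKit
import UIKit

final class MeshViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self

        // Scene reconstruction requires a LiDAR device.
        let config = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        sceneView.session.run(config)
    }

    // Build an SCNGeometry directly from the ARMeshGeometry's Metal buffers.
    func geometry(from meshAnchor: ARMeshAnchor) -> SCNGeometry {
        let mesh = meshAnchor.geometry
        let vertexSource = SCNGeometrySource(
            buffer: mesh.vertices.buffer,
            vertexFormat: mesh.vertices.format,
            semantic: .vertex,
            vertexCount: mesh.vertices.count,
            dataOffset: mesh.vertices.offset,
            dataStride: mesh.vertices.stride)
        let faces = mesh.faces
        let indexData = Data(
            bytes: faces.buffer.contents(),
            count: faces.count * faces.indexCountPerPrimitive * faces.bytesPerIndex)
        let element = SCNGeometryElement(
            data: indexData,
            primitiveType: .triangles,
            primitiveCount: faces.count,
            bytesPerIndex: faces.bytesPerIndex)
        let geometry = SCNGeometry(sources: [vertexSource], elements: [element])
        geometry.firstMaterial?.diffuse.contents = UIColor.cyan
        return geometry
    }

    // ARKit adds and updates one ARMeshAnchor per chunk of the reconstruction.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let meshAnchor = anchor as? ARMeshAnchor else { return }
        node.geometry = geometry(from: meshAnchor)
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let meshAnchor = anchor as? ARMeshAnchor else { return }
        node.geometry = geometry(from: meshAnchor)
    }
}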
Not sure if this is the only problem, but I think that indexCount should be faces.count * faces.indexCountPerPrimitive.
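To illustrate, assuming faces is the ARGeometryElement from an ARMeshAnchor (i.e. meshAnchor.geometry.faces):

// `faces.count` is the number of primitives (triangles), and each one
// contributes `indexCountPerPrimitive` indices, so the total index count is:
let indexCount = faces.count * faces.indexCountPerPrimitive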
Some discussions on Twitter indicate that the LiDAR depth data will not be accessible. Can you say anything about this?