Posts

Post not yet marked as solved · 2 Replies · 837 Views
The sample application and code do not seem to be available in the WWDC app or in the documentation alongside the other Object Capture samples. Where and when will this be released?
Posted by sam598.
Post not yet marked as solved · 8 Replies · 959 Views
This error is reported as soon as a PhotogrammetrySession starts processing:

libc++abi: terminating with uncaught exception of type std::runtime_error: Failed to access model resource path

This happens with both a custom application and the provided example command-line demo.
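For reference, this is roughly the minimal setup being run; the input folder and output path are placeholders, and the calls are the standard PhotogrammetrySession API:

```swift
import Foundation
import RealityKit

@main
struct CaptureRepro {
    static func main() async throws {
        // Placeholder paths: a folder of captured photos and a USDZ output.
        let input = URL(fileURLWithPath: "/tmp/CaptureImages", isDirectory: true)
        let output = URL(fileURLWithPath: "/tmp/Model.usdz")

        let session = try PhotogrammetrySession(
            input: input,
            configuration: PhotogrammetrySession.Configuration()
        )
        try session.process(requests: [.modelFile(url: output, detail: .medium)])

        // The std::runtime_error above terminates the process as soon as
        // processing starts, before any progress messages arrive here.
        for try await message in session.outputs {
            print(message)
        }
    }
}
```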
Posted by sam598.
Post not yet marked as solved · 1 Reply · 1.1k Views
For apps built specifically for the new LiDAR sensor, which have little to no use on devices without it, is there an appropriate Required Device Capabilities (UIRequiredDeviceCapabilities) string?
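In the meantime, a runtime check can at least gate LiDAR-dependent features, though unlike a Required Device Capabilities entry it will not prevent installation on unsupported devices. A minimal Swift sketch:

```swift
import ARKit

// Scene reconstruction is only supported on devices with the LiDAR
// scanner, so this check is a common runtime proxy for its presence.
let hasLiDAR = ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh)
```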
Posted by sam598.
Post not yet marked as solved · 1 Reply · 725 Views
The new Object Capture API is really quite remarkable, and I can already think of several possible use cases and pipelines for it. It is great that, even though it can work with just 2D images, the API can use metadata like depth maps, lens data, gravity direction, and GPS location to build a better understanding of the scene and reconstruction.

In a situation where the rough camera pose of each captured image is known (for example, an ARKit-tracked frame, or a static array of cameras), I would love the ability to add an initial camera transform matrix to each PhotogrammetrySample (sketched below). Without assuming too much about how the underlying system works, I assume the camera extrinsics and intrinsics are constantly refined as the model is reconstructed, so I would not expect the input camera poses to be taken as absolute values. But being able to supply those initial transforms would have several benefits:

- Give an even better hint for the camera positions than the gravity vector alone.
- Define object scale based on camera extrinsics, even if no depth data is available.
- Predefine a coordinate space, origin, and orientation for the capture.
- Keep a common, consistent origin and orientation between scans made with an identical (or similar) camera setup.

Without having tried a drone capture yet, perhaps there is a way to do this with GPS data. But that feels like an unnecessary workaround, and one likely prone to conversion errors.
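To illustrate the request, a sketch of what this could look like. The `initialCameraTransform` property is hypothetical and does not exist in the current API; `id`, `image`, and `gravity` are existing PhotogrammetrySample members, as I understand the macOS 12 API:

```swift
import RealityKit
import CoreVideo
import simd

// Hypothetical sketch of the requested API. Only `initialCameraTransform`
// is invented; the rest is the existing PhotogrammetrySample surface.
func makeSample(id: Int, image: CVPixelBuffer, pose: simd_double4x4) -> PhotogrammetrySample {
    var sample = PhotogrammetrySample(id: id, image: image)
    sample.gravity = simd_double3(0, -1, 0)  // hint the API accepts today
    // sample.initialCameraTransform = pose  // proposed: rough extrinsics that
    //                                       // the solver is free to refine
    return sample
}
```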
Posted by sam598.
Post not yet marked as solved · 4 Replies · 2.1k Views
After uploading a build with Xcode 11 Beta 2 and submitting it to TestFlight external testing for review, this error appears:

"Sorry, something went wrong. This build is using a beta version of Xcode and can’t be submitted. Make sure you’re using the latest version of Xcode or the latest seed release found in the TestFlight release notes."

The latest TestFlight release notes state:

"You can now submit apps built with Xcode 11 beta 2 using the SDK for iOS 13 beta 2, tvOS 13 beta 2, and watchOS 6 beta 2 for internal and external testing."

Is there a discrepancy, or is this feature not live yet? Thanks!
Posted by sam598.