Posts

Post not yet marked as solved · 1 Reply · 2.1k Views
I started testing the visionOS SDK on an existing project that has been running fine on iPad (iOS 17) with Xcode 15. On a MacBook with an M1 chip, the project runs on the visionOS simulator without any change to the Xcode project's Build Settings. However, the Apple Vision Pro simulator doesn't appear when I run Xcode 15 on an Intel MacBook Pro unless I add visionOS to the SUPPORTED_PLATFORMS key in the project's Build Settings.

I understand that a MacBook Pro with an M1 / M2 chip is the ideal machine for running the visionOS simulator, but it would be far better if we could run the visionOS simulator on iPadOS: the iPad has the same arm64 architecture and all the hardware needed for camera, GPS, and LiDAR. The Mac is not a good simulator host, even with an M1 / M2 chip:
- It doesn't have dual-facing cameras (front and back)
- It doesn't have LiDAR
- It doesn't have GPS
- It doesn't have a 5G cellular radio
- It isn't portable enough for developers to design use cases around spatial computing

Last but not least, it isn't clear to me how to simulate ARKit with actual camera frames on the Vision Pro simulator, while I expect this could be simulated well on iPadOS. My suggestion is to provide developers with a simulator that runs on iPadOS; that would increase developer adoption and improve the design and prototyping phase for apps targeting the actual Vision Pro device.
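For reference, this is roughly what that Build Settings change looks like when expressed as an xcconfig entry; the visionOS platform identifiers (xros for device, xrsimulator for the simulator) are my assumption of what Xcode 15 expects, so verify them against your own project:

```
// Hypothetical xcconfig excerpt: keep the existing iOS platforms and add the
// visionOS device and simulator platforms so the Vision Pro simulator appears
// as a run destination. Identifier spellings are an assumption.
SUPPORTED_PLATFORMS = iphoneos iphonesimulator xros xrsimulator
```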
Posted by ja_tectus.
Post not yet marked as solved · 1 Reply · 921 Views
I'm trying to test the RoomPlan exporter with ModelProvider. I ran the sample code and tried loading the existing room-plan samples from Merging Multiple Scans. After I open the exported USDZ output, none of the objects are substituted with the models inside RoomPlanCatalog.bundle. I have debugged and confirmed that the catalog is loaded correctly. What is the specification for providing a catalog to RoomPlan's ModelProvider? It isn't clear from the documentation what we're expected to do. Shouldn't there be a generic 3D mesh provided for objects such as Chair, Table, and Shelf? I couldn't see any of them loaded, even though a chair is easily detected during the scan process, as I observed from the session delegates. I tried configuring captureSession.arSession with a scene-reconstruction configuration, but it makes no difference. It looks like the issue is only in the export process.
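For reference, this is roughly the export path I expected to work, as a minimal sketch: it assumes the iOS 17 export overload that takes a modelProvider, together with the .model export option, is what substitutes detected objects with catalog models, and makeCatalogModelProvider(from:) is a hypothetical stand-in for however RoomPlanCatalog.bundle gets turned into a CapturedRoom.ModelProvider:

```swift
import Foundation
import RoomPlan

// Hypothetical helper: how the catalog bundle is mapped onto a
// CapturedRoom.ModelProvider depends on the sample's catalog code, so this is
// only a placeholder for that step.
func makeCatalogModelProvider(from catalogBundleURL: URL) throws -> CapturedRoom.ModelProvider {
    // ... map catalog entries (Chair, Table, Shelf, ...) to model file URLs ...
    fatalError("Catalog-to-ModelProvider mapping omitted in this sketch")
}

func exportWithCatalog(_ room: CapturedRoom,
                       catalogBundleURL: URL,
                       destinationURL: URL) throws {
    let provider = try makeCatalogModelProvider(from: catalogBundleURL)

    // Assumption: passing the provider together with the .model export option
    // (rather than .parametric or .mesh) is what makes the exporter substitute
    // detected objects with catalog models. The exact parameter list of this
    // overload may differ from what's shown here.
    try room.export(to: destinationURL,
                    modelProvider: provider,
                    exportOptions: .model)
}
```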
Posted by ja_tectus.
Post not yet marked as solved · 0 Replies · 548 Views
Hello, I understand that photogrammetry results are great with undistorted perspective images. Has anyone used the photogrammetry APIs with raw fisheye images, such as those from recently manufactured 360 cameras, or with their equirectangular projections? Is there, or will there be, support for fisheye / equirectangular images in the photogrammetry pipeline?
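For context, this is the standard pipeline I'm referring to for undistorted perspective images, as a minimal sketch; the input and output URLs are placeholders, and it assumes a deployment target where PhotogrammetrySession is available:

```swift
import Foundation
import RealityKit

// Minimal sketch of the standard Object Capture photogrammetry pipeline,
// assuming a folder of undistorted perspective photos. It makes no claim
// about fisheye or equirectangular input.
func reconstructModel(imagesFolder: URL, outputUSDZ: URL) async throws {
    let session = try PhotogrammetrySession(
        input: imagesFolder,
        configuration: PhotogrammetrySession.Configuration()
    )

    // Request a single USDZ model at medium detail.
    try session.process(requests: [
        .modelFile(url: outputUSDZ, detail: .medium)
    ])

    // Stream progress and results until processing finishes.
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fractionComplete):
            print("Progress: \(fractionComplete)")
        case .requestComplete(_, let result):
            print("Finished request: \(result)")
        case .requestError(_, let error):
            print("Request failed: \(error)")
        case .processingComplete:
            return
        default:
            break
        }
    }
}
```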
Posted by ja_tectus.
Post not yet marked as solved · 4 Replies · 1.1k Views
Hello, I have been experimenting with saving and loading ARWorldMap, mainly by following this guide: Saving and Loading World Data - https://developer.apple.com/documentation/arkit/world_tracking/saving_and_loading_world_data. However, I'm using RealityKit rather than SceneKit, and I also want to add an AnchorEntity at various locations, both for anchors that are detected automatically and for anchors I create by tapping and raycasting on the camera frame. I can achieve all of the above, but there is little guidance on loading multiple anchors at app launch or when the ARSession delegate fires, in particular these methods of ARSessionDelegate - https://developer.apple.com/documentation/arkit/arsessiondelegate:
- session(_:didAdd:) - https://developer.apple.com/documentation/arkit/arsessiondelegate/2865617-session
- session(_:didUpdate:) - https://developer.apple.com/documentation/arkit/arsessiondelegate/2865624-session
One way is to add them directly to a Scene with addAnchor(_:) - https://developer.apple.com/documentation/realitykit/scene/3366531-addanchor. However, this incurs a performance penalty on drawing the AR camera frame when those anchors are repeatedly updated and new ones keep being added, even though I'm only adding a sphere mesh. It is especially slow on devices such as the iPad Pro 9.7-inch (2016) with the A9X chip. Any tips for performant, asynchronous rendering of anchors?
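For illustration, here is a minimal sketch of the pattern I'm describing, assuming an ARView named arView; the template-cloning and deduplication details are my own illustrative assumptions rather than a documented best practice:

```swift
import ARKit
import RealityKit

// Attaches RealityKit content to anchors delivered by ARSessionDelegate.
// The sphere mesh is generated once and cloned per anchor, so mesh generation
// isn't repeated for every new anchor.
final class AnchorRenderer: NSObject, ARSessionDelegate {
    private let arView: ARView
    private let sphereTemplate: ModelEntity
    private var renderedAnchorIDs = Set<UUID>()

    init(arView: ARView) {
        self.arView = arView
        // Build the sphere a single time; clones share its resources.
        self.sphereTemplate = ModelEntity(
            mesh: .generateSphere(radius: 0.03),
            materials: [SimpleMaterial(color: .white, isMetallic: false)]
        )
        super.init()
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors where !renderedAnchorIDs.contains(anchor.identifier) {
            renderedAnchorIDs.insert(anchor.identifier)
            let anchorEntity = AnchorEntity(anchor: anchor)
            anchorEntity.addChild(sphereTemplate.clone(recursive: true))
            arView.scene.addAnchor(anchorEntity)
        }
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        // AnchorEntity(anchor:) tracks its ARAnchor, so no scene mutation is
        // done here; avoiding repeated work in didUpdate is the point of this
        // sketch.
    }
}
```

You would set an instance of this object as arView.session.delegate (for example, right after running the session with the restored ARWorldMap), so detected and restored anchors are each attached exactly once.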
Posted by ja_tectus.