visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Post

Replies

Boosts

Views

Activity

dismissWindow() doesn't dismantle View
We have discovered that our UIViewRepresentable view isn't being dismantled after its window is dismissed via dismissWindow(). This seems to result in a leak of our custom Coordinator class. Every time the user opens a new window, a new Coordinator is created; if the user then dismisses the window manually, or we dismiss it programmatically, the Coordinator remains in memory with no way to destroy it. Is this expected behavior? How can we be sure to clean up our Coordinator when the view's window is closed? Thanks.
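For anyone hitting the same thing, one place to put teardown logic is the static dismantleUIView(_:coordinator:) requirement of UIViewRepresentable, which SwiftUI calls when it tears the view down. A minimal sketch (MyRepresentable and MyCoordinator are hypothetical; whether this actually gets called after dismissWindow() is exactly what's in question here):

import SwiftUI
import UIKit

final class MyCoordinator {
    // Hypothetical resource that could keep the coordinator alive.
    var displayLink: CADisplayLink?

    func tearDown() {
        displayLink?.invalidate()
        displayLink = nil
    }
}

struct MyRepresentable: UIViewRepresentable {
    func makeCoordinator() -> MyCoordinator { MyCoordinator() }

    func makeUIView(context: Context) -> UIView { UIView() }

    func updateUIView(_ uiView: UIView, context: Context) { }

    // SwiftUI calls this when the representable is removed from the hierarchy;
    // break retain cycles and release resources here so the coordinator can deinit.
    static func dismantleUIView(_ uiView: UIView, coordinator: MyCoordinator) {
        coordinator.tearDown()
    }
}

If dismantleUIView is never reached after the window is dismissed, that would confirm the leak and is probably worth a Feedback report.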
0
0
147
2w
Cannot install release test (ad-hoc profile) on Vision Pro
I was able to set up a release test for an iOS app for distribution using a web server. It works perfectly fine for all the devices I registered in the deployment profile. However, every time I try to distribute a Unity-based Vision Pro application using the same process for building the package and setting it up for distribution, it does not work. Safari only shows a message telling me: "Cannot connect to ." When trying to install the iOS app from the same server, it shows the message "Do you want to install ?" and installation completes correctly. My iOS app is a simple hello-world app generated by Xcode. My Unity app is an AR app targeting com.apple.platform.xros. According to the documentation, there should be no difference in deployment profiles/signing between iOS apps and visionOS apps. What am I doing wrong? Any hint on how to continue is appreciated.
0
0
230
2w
send data or share variables between Immersive view and "attached" 2D view
I have an immersive space with a RealityKit view that is running an ARKitSession to access main camera frames. These frames are processed with custom computer vision algorithms (and deep learning models). There is a 3D entity in the RealityKit view that I'm trying to place in the world, but I want to debug my (2D) algorithms in an "attached" view (displaying images in windows). How do I send/share data or variables between the views (and spaces)?
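The usual pattern is to create one shared observable model at the App level and inject it into both the window scene and the immersive space. A minimal sketch, assuming hypothetical names (AppModel, debugImage, DebugWindowView, ImmersiveView):

import SwiftUI
import RealityKit
import Observation

@Observable
final class AppModel {
    // Hypothetical shared state: written by the vision pipeline in the
    // immersive space, read by the 2D debug window.
    var debugImage: CGImage?
    var detectionCount = 0
}

@main
struct MyApp: App {
    @State private var model = AppModel()

    var body: some Scene {
        WindowGroup {
            DebugWindowView()           // the "attached" 2D window
                .environment(model)
        }
        ImmersiveSpace(id: "Immersive") {
            ImmersiveView()             // RealityView + ARKitSession
                .environment(model)
        }
    }
}

struct DebugWindowView: View {
    @Environment(AppModel.self) private var model

    var body: some View {
        VStack {
            Text("Detections: \(model.detectionCount)")
            if let image = model.debugImage {
                Image(decorative: image, scale: 1.0)
                    .resizable()
                    .scaledToFit()
            }
        }
        .padding()
    }
}

struct ImmersiveView: View {
    @Environment(AppModel.self) private var model

    var body: some View {
        RealityView { _ in
            // Run the ARKitSession / camera pipeline here and publish results, e.g.:
            // model.debugImage = processedFrame
            // model.detectionCount += 1
        }
    }
}

Because the model is a reference type shared through the environment, results written from the immersive space automatically refresh the debug window.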
1
0
194
2w
[ARKit] Is it possible to remember a certain room using Room Tracking?
Hi! I'm making content for Vision Pro using Room Tracking these days, so I searched for information about it. Here are the links I visited, but I could not find the info I wanted: Apple ARKit, Create enhanced spatial computing experiences with ARKit, RoomTrackingProvider. I want to know whether it's possible to remember a room structure that was recognized before and to add content at a certain world anchor in that room space when the user enters the room again. For example, a developer could save the room structure, room info (with a room ID), and a world anchor of the room using the Room Tracking feature. After this, the developer could add entities via Xcode and Reality Composer Pro at certain positions in the room to show content to users when they enter the room, so users see the content whenever they visit the room. Is this possible? If there are example codes or projects about it, please let me know.
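I don't know of a public API that saves a RoomTrackingProvider room for later sessions, but WorldAnchor instances added through a WorldTrackingProvider do persist across launches on device, so one approach is to persist each anchor's UUID together with your own room/content identifier and re-attach entities when that anchor is relocalized. A rough sketch under that assumption (RoomContentStore and the scene names are hypothetical):

import ARKit
import RealityKit
import simd

@MainActor
final class RoomContentStore {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    // Hypothetical persistence: map a saved WorldAnchor UUID to the content
    // that should appear there (e.g. a Reality Composer Pro scene name).
    // In a real app this dictionary would be stored on disk.
    private var savedContentByAnchorID: [UUID: String] = [:]

    func placeAnchor(at transform: simd_float4x4, content: String) async throws {
        let anchor = WorldAnchor(originFromAnchorTransform: transform)
        try await worldTracking.addAnchor(anchor)
        savedContentByAnchorID[anchor.id] = content   // persist this mapping yourself
    }

    func run(root: Entity) async throws {
        try await session.run([worldTracking])

        for await update in worldTracking.anchorUpdates {
            guard let sceneName = savedContentByAnchorID[update.anchor.id],
                  let entity = root.findEntity(named: sceneName) else { continue }

            switch update.event {
            case .added, .updated:
                // The persisted anchor relocalized in this room: (re)attach content.
                entity.isEnabled = true
                entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
            case .removed:
                entity.isEnabled = false
            }
        }
    }
}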
1
0
301
2w
Xcode fails to compile visionOS app that has a Front layer in AppIcon asset
I'm trying to create the app icon for my visionOS app. The Assets catalog already contains an AppIcon for iOS, and I've added another AppIcon for visionOS. If I only add the Back layer of the visionOS icon, compiling succeeds despite there being an error:

The visionOS App Icon "AppIcon" must have at least 2 layers with applicable content. Although it has 3 layers, only 1 has applicable content.

As soon as I add one of the other two layers, say the Front layer, compiling fails, but this time Xcode only shows a generic compiler error:

Command CompileAssetCatalogVariant emitted errors but did not return a nonzero exit code to indicate failure

If I click that message, a long build log opens containing, among other things:

2024-10-31 11:28:15.258 AssetCatalogSimulatorAgent[66919:1456355] -[TDTextureRawRenditionSpec _createImageRefWithURL:andDocument:format:] Texture image asset file:///~/Documents/apps/myApp/xcode/iOS/Assets.xcassets/AppIcon.solidimagestack/Back.solidimagestacklayer/Content.imageset/icon_layer3.heic not in one of supported formats
...
libc++abi: terminating due to uncaught exception of type NSException
Command CompileAssetCatalogVariant failed with a nonzero exit code

What is the problem? I filed FB15642844.
0
0
118
3w
visionOS Simulator Rotate and Scale gestures difficult to register (capture)
We were having an issue where the system rotate and scale gestures (two-handed gestures / RotateGesture3D and MagnifyGesture) were extremely difficult to register (make work) in the visionOS simulator. The solution we found was to:

1. Launch your app in the simulator.
2. Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
3. Press and hold the Option key to display touch points (i.e., the two-handed gesture points).
4. While keeping the Option key pressed, release the pointer and re-enable it again.

I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-enable it again" translates simply to lifting the three fingers and placing them on the trackpad again. If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.

Context, if you are interested: our issue was also occurring in Apple's own sample project relating to gestures, "Transforming RealityKit entities using gestures" (link below). Apple's article "Interacting with your app in the visionOS simulator" (link below) states, for two-handed gestures: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points." This simply did not work anymore for rotation and scaling gestures. These gestures used to be a lot more responsive in Sonoma. Either the article should be updated to what I described above, or there is an issue. Our colleague, who is using macOS Sonoma 14.6.1 with the latest release of Xcode, is not having these issues.

Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:

- macOS Sequoia 16.1 Beta, Xcode 16.1 RC with visionOS 2.1
- macOS Sequoia 16.1 Beta, Xcode 16.1 RC with visionOS 2.0
- macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.1
- macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.0
- macOS Sequoia 16.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1)
- macOS Sequoia 16.1 Beta, Xcode 16.0 with visionOS 2.0
- Completely wiped and reset the entire development machine, re-installed the latest releases of Sequoia (15.1) and Xcode (15.1)

Throughout this troubleshooting I often:

- restarted both Xcode and the simulator
- erased all derived data
- erased all contents and settings from simulators
- performed fresh git clones

None of the above worked; only the workaround described above works at the moment. As you can maybe deduce, it was very time-consuming to find the workaround, and we also wasted some development effort thinking our gesture code was no good. Hopefully this will help other devs.

Article link: https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
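For anyone trying to reproduce this, the gestures were attached roughly like this (a simplified sketch in the spirit of Apple's sample; the scale and rotation handling is deliberately naive), which is how we confirmed the problem was simulator input rather than our gesture code:

import SwiftUI
import RealityKit

struct GestureTestView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            // Entities need input-target and collision components to receive gestures.
            box.components.set(InputTargetComponent())
            box.generateCollisionShapes(recursive: true)
            content.add(box)
        }
        .gesture(
            MagnifyGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Naive: applies the raw magnification as an absolute scale.
                    value.entity.scale = .one * Float(value.magnification)
                }
        )
        .gesture(
            RotateGesture3D()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the Rotation3D into a simd quaternion for the entity.
                    let q = value.rotation.quaternion
                    value.entity.orientation = simd_quatf(ix: Float(q.imag.x),
                                                          iy: Float(q.imag.y),
                                                          iz: Float(q.imag.z),
                                                          r: Float(q.real))
                }
        )
    }
}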
2
0
331
2w
[Apple Vision Pro] Issue with startDeviceMotionUpdates in ImmersiveSpace Mode
Hello everyone, I'm developing an app for Apple Vision Pro, and I'm trying to retrieve motion data updates aligned to magnetic north by using the following method:

startDeviceMotionUpdates(using: .xMagneticNorthZVertical, to: .main) { ... }

The goal is to get motion data oriented to magnetic north while an ImmersiveSpace() with an immersionStyle set to .mixed is active. However, with this setup, I receive no updates at all. If I switch to:

startDeviceMotionUpdates(using: .xArbitraryZVertical, to: .main) { ... }

or

startDeviceMotionUpdates(to: .main) { ... }

then I do receive data, but it's not aligned as required (I specifically need .xMagneticNorthZVertical). Has anyone experienced a similar issue, or does anyone know how to enable updates aligned to magnetic north in this configuration? Thanks in advance for any insights! SDK: visionOS 2.0
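Not a fix, but a diagnostic worth running first (a sketch; .main is shorthand for OperationQueue.main): Core Motion reports which attitude reference frames the device supports, so you can check whether .xMagneticNorthZVertical is even advertised on Vision Pro before starting updates.

import CoreMotion

let motionManager = CMMotionManager()

// Which attitude reference frames does this device claim to support?
let frames = CMMotionManager.availableAttitudeReferenceFrames()
print("xArbitraryZVertical:          \(frames.contains(.xArbitraryZVertical))")
print("xArbitraryCorrectedZVertical: \(frames.contains(.xArbitraryCorrectedZVertical))")
print("xMagneticNorthZVertical:      \(frames.contains(.xMagneticNorthZVertical))")
print("xTrueNorthZVertical:          \(frames.contains(.xTrueNorthZVertical))")

if frames.contains(.xMagneticNorthZVertical) {
    motionManager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical, to: .main) { motion, error in
        if let error { print("Device motion error: \(error)") }
        guard let motion else { return }
        print("Heading relative to magnetic north: \(motion.heading)")
    }
}

If the magnetic-north frame isn't listed at all, that would explain the silence and point to a platform limitation rather than an app bug.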
1
0
178
3w
[Reality Composer Pro] Is it possible to play a timeline via Xcode without adding a Behaviors component to entities?
Hi! I'm making a project with Xcode and Reality Composer Pro. I'm trying to play a timeline in Reality Composer Pro from code, without setting Behaviors on entities. I also tried to send a notification from Xcode to the entities in Reality Composer Pro to play the timeline (I already set "OnNotification" with a Behaviors component), but it's not working, and I couldn't figure out the problem. Are there any solutions?
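For the notification route, the pattern that has worked for others (a sketch; I can't point to official documentation for these userInfo keys, and "PlayTimeline" is a placeholder for whatever identifier you set in the OnNotification trigger) is to post "RealityKit.NotificationTrigger" with the RealityKit scene and the trigger identifier:

import Foundation
import RealityKit

// Posts the notification a Reality Composer Pro "OnNotification" trigger listens for.
// `entity` is any entity already added to the RealityView, and `identifier` must
// match the Identifier field of the trigger in the Behaviors component.
func triggerBehaviorNotification(from entity: Entity, identifier: String) {
    guard let scene = entity.scene else { return }   // the entity must be in a scene
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

// Hypothetical usage, once the RCP scene is loaded and added to the RealityView:
// triggerBehaviorNotification(from: rcpRootEntity, identifier: "PlayTimeline")

If the timeline still doesn't start, double-check that the entity carrying the Behaviors component is actually part of the scene at the moment the notification is posted.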
1
0
306
3w
Why is my .heic spatial image not working in Safari on Vision Pro?
I'm working on a school project to build a webpage for Vision Pro users. I'm using Xcode to build this webpage because it has .reality files. This webpage only works in Safari because I took a spatial image with my Vision Pro, and it's a .heic file. I put the .png version below the .heic file that is supposed to have the spatial effect. I deployed this project on Vercel; please use Safari to check out the link: https://spatial-design-project.vercel.app

There's another issue in this project: I downloaded the Cosmonaut .reality file from the Apple Quick Look Gallery to test on my webpage. However, when I open it on Vision Pro, the file won't load; it says "Failed to load layers". Does it have something to do with the server serving this file type? Should I use an actual web hosting company for this website? Here is my GitHub repo.

<div class="hero">
  <div class="hero-text">
    <h1>La Sal Peak</h1>
    <p>The Do-It-All Enduro Bike</p>
  </div>
</div>
<div class="hero2">
  <div class="hero-text">
    <h1>La Sal Peak</h1>
    <p>The Do-It-All Enduro Bike</p>
  </div>
</div>

.hero {
  position: relative;
  display: flex;
  align-items: flex-end;
  justify-content: flex-start;
  height: 100vh;
  background: url("assets/heroImage.heic") no-repeat center center/cover;
  color: white;
}

.hero2 {
  position: relative;
  display: flex;
  align-items: flex-end;
  justify-content: flex-start;
  height: 100vh;
  background: url("assets/hero.png") no-repeat center center/cover;
  color: white;
}
0
0
188
3w
Attaching VideoMaterial to DockingRegion
I have a VideoMaterial inside a RealityView and want to attach it to a DockingRegion inside an immersive environment. Adding the VideoMaterial entity as a child of the docking region somewhat works, but there are no lighting effects (specular, diffuse) from the playing video. So essentially: how can you add a VideoMaterial to a DockingRegion and achieve the same reflections/behavior as when using AVPlayerViewController? The latter is not an option, as I need custom controls.
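For reference, here is roughly how the video entity itself can be built before parenting it to the docking region (a minimal sketch; the size and URL are placeholders). Whether a bare VideoMaterial plane feeds the environment lighting the way AVPlayerViewController's docked player does is exactly the open question here:

import RealityKit
import AVFoundation

@MainActor
func makeVideoEntity(url: URL) -> ModelEntity {
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)

    // A simple 16:9 plane (sized in meters) carrying the video material.
    let mesh = MeshResource.generatePlane(width: 1.6, height: 0.9)
    let entity = ModelEntity(mesh: mesh, materials: [material])

    player.play()
    return entity
}

// Hypothetical usage once the environment is loaded:
// dockingRegionEntity.addChild(makeVideoEntity(url: videoURL))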
0
0
203
3w
Synchronizing Physical Properties of EntityEquipment in TableTopKit
I am working on adding synchronized physical properties to EntityEquipment in TableTopKit, allowing seamless coordination during GroupActivities sessions between players.

Current approach and limitations: I have tried setting EntityEquipment's state to DieState and treating it as a TossableRepresentation object. This approach achieves basic physical properties synchronized across players. However, it has several limitations:

- No collision detection between dice: multiple dice do not collide with each other.
- Shape limitations: custom shapes, like parallelepipeds, cannot be configured.

Below is my existing code for the base EntityEquipment without physical properties:

struct CubeWithPhysics: EntityEquipment {
    let id: ID
    let entity: Entity
    var initialState: BaseEquipmentState

    init(id: ID, entity: Entity) {
        self.id = id
        self.entity = entity
        initialState = .init(parentID: .tableID,
                             pose: .init(position: .zero, rotation: .zero),
                             entity: self.entity)
    }
}

I'd appreciate any guidance on the recommended approach to adding synchronized physical properties to EntityEquipment.
4
4
363
3w
Determining if an ObjectAnchor is currently observed
I'm writing code using ObjectAnchor for visionOS. If an object is tracked and then becomes not visible (either because the user looked in a different direction, or because the tracked object was occluded by another object), it is still tracked and you still get anchor updates (i.e., object permanence). For my application, it would be very helpful if I could determine whether the object is currently being observed, or whether it is not currently observed and just assumed to be in the same location as seen previously. ObjectAnchor.isTracked just seems to indicate whether it is getting anchor updates. I don't see anything in ObjectAnchor or AnchorUpdate that would allow me to determine if the object is currently observed. Does anyone know of a way to do this, or would this be a feature request?
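In the absence of an explicit flag, one workaround is a heuristic (a sketch only, not an official API; the threshold is a guess): watch the anchor's transform in anchorUpdates and treat the object as "observed" only while its pose has changed recently, since an assumed anchor tends to report a frozen pose.

import ARKit
import simd

// Heuristic: an ObjectAnchor whose pose hasn't changed for a while is likely
// being remembered rather than currently observed. Not an official signal;
// tune the interval (or compare with a tolerance) for your use case.
final class ObservationHeuristic {
    private var lastTransform: simd_float4x4?
    private var lastChange = Date.distantPast
    private let staleInterval: TimeInterval = 0.5

    func update(with anchor: ObjectAnchor) {
        let transform = anchor.originFromAnchorTransform
        if transform != lastTransform {
            lastTransform = transform
            lastChange = Date()
        }
    }

    var probablyObserved: Bool {
        Date().timeIntervalSince(lastChange) < staleInterval
    }
}

// Hypothetical usage inside the anchorUpdates loop:
// for await update in objectTracking.anchorUpdates {
//     heuristic.update(with: update.anchor)
//     print("Observed right now? \(heuristic.probablyObserved)")
// }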
2
0
153
3w
Xcode 16 crashes my Vision Pro Object Tracking App
I created an Object & Hand Tracking app based on the sample code released by Apple here: https://developer.apple.com/documentation/visionos/exploring_object_tracking_with_arkit The app worked great and everything was fine, but I realized I was coding on Xcode 16 beta 3, so I installed the latest Xcode 16 from the App Store and tested my app there, and it completely crashed. No idea why. Here is the console output:

dyld[1457]: Symbol not found: _$ss13withTaskGroup2of9returning9isolation4bodyq_xm_q_mScA_pSgYiq_ScGyxGzYaXEtYas8SendableRzr0_lF
Referenced from: <3AF14FE4-0A5F-381C-9FC5-E2520728FC65> /private/var/containers/Bundle/Application/F74E88F2-874F-4AF4-9D9A-0EFB51C9B1BD/Hand Tracking.app/Hand Tracking.debug.dylib
Expected in: <2F158065-9DC8-33D2-A4BF-CF0C8A32131B> /usr/lib/swift/libswift_Concurrency.dylib

It was working perfectly fine on Xcode 16 beta 3, which makes me think it's an Xcode 16 issue, but I have no idea how to fix it. I also installed Xcode 16.2 beta (the newest beta), but I get the same error. Please help if anyone knows what is wrong!
1
0
251
3w
WebXR Immersive AR Support & DOM Overlays
As mentioned in https://forums.developer.apple.com/forums/thread/756736?answerId=810096022#810096022 is there any update on full support for the WebXR AR Module, which should enable immersive-ar mode? Are features such as DOM overlays and WebGPU bindings on the roadmap? Is it possible to capture stereoscopic video either internally or externally, or via AirPlay, for debugging purposes? Thanks
0
2
254
3w
Enterprise IPA install from web fails with "incompatible platform: com.apple.platform.xros"
I am trying to set up a workflow where Apple Vision Pro users in my organization can install a signed enterprise .ipa file from an internal web page. The relevant link looks something like this:

<a role="button" href="itms-services://?action=download-manifest&url=https://my.example.com/path/manifest.plist">Click here to download</a>

After verifying that all the MIME types were correct on the server and the certificate was valid, I finally attached my AVP headset to my Mac's Console app and saw errors like this:

[com.example.myapp] Skipping due to incompatible platform: com.apple.platform.xros
Could not load download manifest with underlying error: Error Domain=ASDErrorDomain Code=752 "Not compatible with this platform: com.apple.platform.xros" UserInfo={NSDebugDescription=Not compatible with this platform: com.apple.platform.xros}

This manifest.plist was made by the "Distribute App" workflow in Xcode 16.0. Multipart question:

- Is installing visionOS apps via manifest + ipa over a web connection a supported way of installing apps?
- If the issue is with com.apple.platform.xros, what should the platform-identifier be for visionOS apps?
0
0
359
3w