Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Each post below is followed by its reply count, boost count, view count, and last-activity date.
How to present an alert in visionOS immersive space?
My visionOS app uses an immersive view. If the app encounters an error, I want to present an alert. In a demo app I tried to present such an alert, but it is never shown; nearly the same code on iOS presents an alert window. Here is my demo code, based on Apple's Immersive Environment App template:

import SwiftUI
import RealityKit
import RealityKitContent

struct ErrorInfo: LocalizedError, Equatable {
    var errorDescription: String?
    var failureReason: String?
}

struct ImmersiveView: View {
    @State private var presentAlert = false
    let error = ErrorInfo(
        errorDescription: "My error",
        failureReason: "No reason"
    )

    var body: some View {
        RealityView { content, attachments in
            let mesh = MeshResource.generateBox(width: 1.0, height: 0.05, depth: 1.0)
            var material = UnlitMaterial()
            material.color.tint = .red
            let boardEntity = ModelEntity(mesh: mesh, materials: [material])
            boardEntity.transform.translation = [0, 0, -3]
            content.add(boardEntity)
        } update: { content, attachments in
            // …
        } attachments: {
            // …
        }
        .onAppear {
            presentAlert = true
        }
        .alert(
            isPresented: $presentAlert,
            error: error,
            actions: { error in },
            message: { error in Text(error.failureReason!) })
    }
}

Since I cannot see any alert, is something wrong with my code? How should an alert be presented in immersive space?
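A hedged sketch of one possible workaround, not a confirmed answer: since SwiftUI alerts are presented from window content, the error could be handed to a small regular WindowGroup that lives alongside the immersive space and presents the alert there. The ErrorState model, the "ErrorWindow" scene id, and the wiring below are illustrative assumptions, not API requirements.

import Observation
import SwiftUI

// Hypothetical shared model: the immersive view stores the error,
// a small regular window presents it.
@Observable
final class ErrorState {
    var current: ErrorInfo?
}

@main
struct DemoApp: App {
    @State private var errorState = ErrorState()

    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            ImmersiveView()
                .environment(errorState)
        }
        // Regular window whose only job is to host the alert,
        // opened on demand via openWindow(id:).
        WindowGroup(id: "ErrorWindow") {
            ErrorAlertView()
                .environment(errorState)
        }
    }
}

struct ErrorAlertView: View {
    @Environment(ErrorState.self) private var errorState

    var body: some View {
        let error = errorState.current   // registers the Observation dependency
        return Color.clear
            .alert(
                isPresented: Binding(
                    get: { error != nil },
                    set: { if !$0 { errorState.current = nil } }
                ),
                error: error,
                actions: { _ in },
                message: { err in Text(err.failureReason ?? "") }
            )
    }
}

// In ImmersiveView, instead of toggling a local @State flag:
//   @Environment(ErrorState.self) private var errorState
//   @Environment(\.openWindow) private var openWindow
//   ...
//   errorState.current = error
//   openWindow(id: "ErrorWindow")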
1
0
225
4w
[ARKit] Is it possible to remember a certain room using Room Tracking?
Hi! I'm making content for Vision Pro using Room Tracking these days, so I searched for information about it. Here are the links I visited, but I could not find the information I wanted:

- Apple ARKit
- Create enhanced spatial computing experiences with ARKit
- RoomTrackingProvider

I want to know whether it is possible to remember a room structure that was recognized before and to add content at a certain world anchor in that room when the user enters the room again. For example, a developer could save the room structure, room info (with a room ID), and a world anchor of the room using the Room Tracking feature. After that, the developer could add entities via Xcode and Reality Composer Pro at certain positions in the room to show content to users whenever they enter the room, so users would see the content on every visit. Is this possible? If there are example codes or projects about it, please let me know.
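A hedged sketch of one approach, under the assumption that persisted world anchors carry the placement across sessions: store your own mapping from world-anchor UUID to the content that belongs there, and re-attach the content when the anchor is delivered again after relaunch. savedPlacements and makeContentEntity are hypothetical helpers, and persisting the dictionary itself is left out.

import ARKit
import RealityKit

@MainActor
final class RoomContentStore {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    // Included so the app can also observe which room the user is currently
    // in via roomTracking.anchorUpdates (requires world-sensing authorization).
    let roomTracking = RoomTrackingProvider()

    // Persist this yourself (UserDefaults, a file, ...):
    // world-anchor UUID -> identifier of the content to place there.
    var savedPlacements: [UUID: String] = [:]

    // Call when the user places content: anchor the spot and remember it.
    func saveContent(_ contentID: String, at transform: simd_float4x4) async throws {
        let anchor = WorldAnchor(originFromAnchorTransform: transform)
        try await worldTracking.addAnchor(anchor)
        savedPlacements[anchor.id] = contentID
    }

    // Run from a Task at launch: when the system re-localizes a saved anchor,
    // put the remembered content back at its transform.
    func restoreContent(under root: Entity) async throws {
        try await session.run([worldTracking, roomTracking])
        for await update in worldTracking.anchorUpdates where update.event == .added {
            guard let contentID = savedPlacements[update.anchor.id] else { continue }
            let entity = makeContentEntity(for: contentID)   // hypothetical factory
            entity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
            root.addChild(entity)
        }
    }

    private func makeContentEntity(for contentID: String) -> Entity {
        Entity()   // placeholder: load the real model here
    }
}

Whether room identifiers from RoomTrackingProvider are stable across sessions is worth verifying; the part relied on here is that world anchors added through WorldTrackingProvider are persisted and re-delivered by the system, with room tracking used only to tell which room the user is currently in.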
1
0
376
Nov ’24
Detecting collisions between fingertip and world mesh
I'm using hand tracking to detect collisions between fingertips and entities that I have placed in the scene. I'm using the .mixed environment. However, I want to detect when a fingertip touches a real-world object such as a wall. No matter what I try, I can't get the collision to fire. I'm using the SceneReconstructionProvider to give me world meshes, which I use to create ModelEntity objects to which I add a CollisionComponent with the shape of the object. I can render the meshes just fine, but nothing I do seems to allow collisions to work. Surely this is possible, what am I missing?
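A hedged sketch of one pattern that commonly comes up for this setup (not verified against the code above): give each reconstructed mesh a static collision shape plus a static physics body, drive a small kinematic collision sphere from the fingertip joint, and subscribe to CollisionEvents.Began. Helper names are illustrative.

import ARKit
import RealityKit

func makeMeshCollisionEntity(for meshAnchor: MeshAnchor) async throws -> Entity {
    let shape = try await ShapeResource.generateStaticMesh(from: meshAnchor)
    let entity = Entity()
    entity.setTransformMatrix(meshAnchor.originFromAnchorTransform, relativeTo: nil)
    entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
    // A static physics body lets a moving (kinematic) entity collide with it.
    entity.components.set(PhysicsBodyComponent(mode: .static))
    return entity
}

func makeFingertipEntity() -> ModelEntity {
    let fingertip = ModelEntity(
        mesh: .generateSphere(radius: 0.005),
        materials: [UnlitMaterial(color: .cyan)],
        collisionShape: .generateSphere(radius: 0.005),
        mass: 0.0)
    // Kinematic: driven by hand-tracking transforms, still reported in collisions.
    fingertip.components.set(PhysicsBodyComponent(mode: .kinematic))
    return fingertip
}

// Inside RealityView { content in ... } — keep the subscription alive:
//
//   let subscription = content.subscribe(to: CollisionEvents.Began.self) { event in
//       print("collision between \(event.entityA.name) and \(event.entityB.name)")
//   }

One detail that often matters is giving the moving fingertip a kinematic physics body; without one, contacts against purely static collision meshes may not be reported, which would match the symptom described above.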
4
0
237
Oct ’24
visionOS Simulator Rotate and Scale gestures difficult to register (capture)
We were having an issue where the system rotate and scale gestures (two-handed gestures / RotateGesture3D and MagnifyGesture) were extremely difficult to register (make work) in the visionOS simulator. The workaround we found was to:

1. Launch your app in the simulator.
2. Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
3. Press and hold the Option key to display touch points (i.e., the two-handed gesture points).
4. While keeping the Option key pressed, release the pointer and then re-engage it.

I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and then re-engage it" simply means lifting the three fingers and placing them on the trackpad again. If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.

Context, if you are interested: our issue also occurred in Apple's own gesture sample project, "Transforming RealityKit entities using gestures" (link below). Apple's article "Interacting with your app in the visionOS simulator" (link below) states for two-handed gestures: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points." This simply did not work anymore for rotation and scaling gestures; these gestures used to be much more responsive on Sonoma. Either the article should be updated to reflect the steps above, or there is an issue. A colleague who is using macOS Sonoma 14.6.1 with the latest release of Xcode is not having these issues.

Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:

- macOS Sequoia 16.1 Beta, Xcode 16.1 RC with visionOS 2.1
- macOS Sequoia 16.1 Beta, Xcode 16.1 RC with visionOS 2.0
- macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.1
- macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.0
- macOS Sequoia 16.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1)
- macOS Sequoia 16.1 Beta, Xcode 16.0 with visionOS 2.0
- completely wiped and reset the entire development machine, re-installed the latest releases of Sequoia (15.1) and Xcode (15.1)

Throughout this troubleshooting we often:

- restarted both Xcode and the simulator
- erased all derived data
- erased all contents and settings from simulators
- performed fresh git clones

None of the above worked; only the workaround described above works at the moment. As you can maybe deduce, it was very time-consuming to find the workaround, and we also wasted some development effort thinking our gesture code was no good. Hopefully this will help other devs.

Article link: https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
2
0
371
Oct ’24
dismissWindow() doesn't dismantle View
We have discovered that our UIViewRepresentable view isn't being dismantled after its window is dismissed via dismissWindow(). This seems to result in a leak of our custom Coordinator class. Every time the user opens a new window, a new Coordinator is created; if the user then dismisses the window manually, or we dismiss it programmatically, the Coordinator remains in memory with no way to destroy it. Is this expected behavior? How can we be sure to clean up our Coordinator when the view's window is closed? Thanks.
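For reference, a hedged sketch of the teardown hook a UIViewRepresentable normally gets; the report above is that this path is not exercised after dismissWindow(), so the invalidate() call only illustrates where cleanup would go, not a confirmed fix. The timer is a hypothetical stand-in for whatever the real Coordinator retains.

import SwiftUI
import UIKit

struct MyRepresentable: UIViewRepresentable {
    final class Coordinator {
        var timer: Timer?

        func invalidate() {
            timer?.invalidate()
            timer = nil
        }
    }

    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> UIView { UIView() }

    func updateUIView(_ uiView: UIView, context: Context) { }

    // SwiftUI is expected to call this when the representable leaves the
    // hierarchy; the question above reports that it is not called after
    // dismissWindow(), which is what leaks the Coordinator.
    static func dismantleUIView(_ uiView: UIView, coordinator: Coordinator) {
        coordinator.invalidate()
    }
}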
0
1
169
Nov ’24
Object Occlusion in Non LiDAR devices
Hi, I'm currently working on an ARKit project where I need to implement object occlusion on devices that do not have a LiDAR sensor (e.g., iPhone XR, iPhone 11). I used Core ML models such as DepthAnythingV2 to create depth maps and DETRResnet50SemanticSegmentationF16P8 to perform real-time segmentation, but these models are too heavy for those devices. Any advice or pointers to resources would be much appreciated.
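A hedged sketch of the capability check that separates the cases: on LiDAR devices the system supplies scene depth for occlusion, on A12+ non-LiDAR devices only people occlusion is available, and general object occlusion has to come from an estimated depth map such as the Core ML output described above. The fallback call at the end is a hypothetical placeholder.

import ARKit

func configureOcclusion(for session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()

    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        // LiDAR devices: the system provides per-frame depth usable for occlusion.
        configuration.frameSemantics.insert(.sceneDepth)
    } else if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        // Non-LiDAR devices with A12 or later: people occlusion only,
        // no general object depth from the system.
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    } else {
        // Oldest devices: no system-provided depth at all.
    }

    session.run(configuration)

    // For general object occlusion without LiDAR, an estimated depth map from
    // a Core ML model (as described above) would have to be applied manually,
    // e.g. in a custom renderer or post-process:
    // applyEstimatedDepthOcclusion(...)   // hypothetical
}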
2
0
375
Oct ’24
visionOS 2.0 App Crashes Faster with Each Upload from Xcode
We developed an app for visionOS 2.0 Beta in Xcode 16 Beta. The development was done on the beta versions because we needed features that were not available in visionOS 1.0, which was the most recent stable release at the time we developed the app. The app was fully functional on our AVP running visionOS 2.0 Beta 5, and we never had any errors. We did not publish the app to the App Store, since we are a research lab using the app to teleoperate a custom robot.

Last week, we upgraded the AVP from visionOS 2.0 Beta 5 to visionOS 2.0 (stable release). Unfortunately, once we upgraded to 2.0, we began to have an issue with the app. While the app is running, at seemingly random times and without any new functionality being used within the app (no new buttons being pressed, etc.), we encounter the following console error:

assertion failure: 'index < m_size' (operator[]:line 1011) Index out of range. index = 18446744073709551615, size = 0

We could re-upload the app to the AVP and successfully operate it for several minutes until the same error occurred again. We thought of using Apple Configurator to flash visionOS 2.0 Beta 5 onto the AVP, since the error wasn't happening on the previous firmware, but we were unable to flash a beta version of visionOS via Apple Configurator, so we simply performed a factory reset of the device (on visionOS 2.0, by pressing Restore in Apple Configurator with the AVP connected via the developer strap) to see if this might fix the issue.

After the factory reset, we thought the console error had completely gone away. We were able to operate the app for about 3 hours on Sunday with no issues. Then, yesterday (Monday), we operated the app for another 2 hours, and at the very end of the session it crashed with the same error. We re-uploaded the app with Xcode, and the error occurred again after about 20 minutes of use. This cycle repeated, and every time we re-uploaded the app, the time it took for the error to occur decreased, until we uploaded the app and the error occurred in under 20 seconds.

We decided to test our hypothesis by upgrading visionOS to 2.1 and using Xcode 16. Similarly, we were able to run the app on the AVP for 2 hours before the error occurred. The next time we ran the app, the error occurred within 20 minutes, then after reloading within 5 minutes, then 2 minutes, and so on.

We are pretty stumped on why the app would work for hours after a factory reset or a firmware upgrade, then fail faster and faster every time we re-upload it from Xcode. We are not experienced in debugging Swift and Objective-C, so we wanted to ask whether this is an issue you have run into before, to point us in the right direction. We think it could be a problem with cached memory that persists on the device across uploads from Xcode, but that's the extent of our understanding.

P.S. We also experienced this error during some of the app failures, but the one above is the most common:

assertion failure: Index out of range (operator[]:line 858) index = 576460752303423487, max = 1
3
0
259
Oct ’24
[Apple Vision Pro] Issue with startDeviceMotionUpdates in ImmersiveSpace Mode
Hello everyone, I'm developing an app for Apple Vision Pro, and I'm trying to retrieve motion data updates aligned to magnetic north by using the following method:

startDeviceMotionUpdates(using: .xMagneticNorthZVertical, to: .main) { ... }

The goal is to get motion data oriented to magnetic north while an ImmersiveSpace() with an immersionStyle set to .mixed is active. However, with this setup, I receive no updates at all. If I switch to:

startDeviceMotionUpdates(using: .xArbitraryZVertical, to: .main) { ... }

or

startDeviceMotionUpdates(to: .main) { ... }

then I do receive data, but it's not aligned as required (I specifically need .xMagneticNorthZVertical). Has anyone experienced a similar issue, or does anyone know how to enable updates aligned to magnetic north in this configuration? Thanks in advance for any insights!

SDK: visionOS 2.0
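A hedged diagnostic sketch, not a confirmed explanation: CMMotionManager can report which attitude reference frames are available, so checking that before subscribing at least distinguishes "frame not offered in this configuration" from "updates silently dropped". The update interval and fallback behavior below are illustrative.

import CoreMotion

let motionManager = CMMotionManager()

func startHeadingUpdates() {
    // Which attitude reference frames does this platform/configuration offer?
    let frames = CMMotionManager.availableAttitudeReferenceFrames()
    print("xMagneticNorthZVertical available:", frames.contains(.xMagneticNorthZVertical))

    guard motionManager.isDeviceMotionAvailable else { return }

    if frames.contains(.xMagneticNorthZVertical) {
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical, to: .main) { motion, error in
            if let motion {
                print("heading:", motion.heading)   // degrees relative to the reference frame
            } else if let error {
                print("device motion error:", error)
            }
        }
    } else {
        // Fall back to an arbitrary frame; headings will not be north-aligned.
        motionManager.startDeviceMotionUpdates(using: .xArbitraryZVertical, to: .main) { motion, _ in
            if let motion { print("attitude:", motion.attitude) }
        }
    }
}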
1
0
195
Oct ’24
How to Play Timeline Animations via code
Hi everyone, I need to synchronize the playback of RealityKit timelines via SharePlay. To do this I am trying to get references to the timelines using AnimationPlaybackController and AnimationResource. In my RealityKit scene I have configured both an animation (made in Blender) and a timeline; the animation starts correctly when the RealityKit scene starts, but the timeline does not. Below is the code:

struct ContentView: View {
    @State private var subscriptions = [EventSubscription]()
    @Environment(AppModel.self) private var appModel

    let rootEntity = Entity()

    @State var testEntity: Entity?
    @State var testAnimation: AnimationResource?
    @State var testController: AnimationPlaybackController?

    init() {
        CubeComponent.registerComponent()
    }

    var body: some View {
        RealityView { content in
            content.add(rootEntity)
            if let scene = try? await Entity(named: "Room", in: realityKitContentBundle) {
                rootEntity.addChild(scene)
                playAnimations(from: content)
            }
        }
        .gesture(SpatialTapGesture().targetedToAnyEntity()
            .onEnded({ value in
                _ = value.entity.applyTapForBehaviors()
                if let testEntity, let testAnimation {
                    testController = testEntity.playAnimation(testAnimation.repeat())
                }
            })
        )
    }

    func playAnimations(from content: RealityViewContent) {
        subscriptions.append(content.subscribe(to: ComponentEvents.DidAdd.self, componentType: AnimationLibraryComponent.self, { event in
            let entity = event.entity
            entity.components[AnimationLibraryComponent.self]?.animations.forEach({ (key, value) in
                if value.definition is AnimationGroup {
                    if key == "/Room/TestTimeline" {
                        let controller = entity.playAnimation(value.repeat())
                        testEntity = entity
                        testAnimation = value
                        appModel.syncronizedAnimations[key] = .init(name: key, animationController: controller, entityName: entity.name)
                    }
                } else {
                    if entity.name == "SphereInteractable" {
                        let controller = entity.playAnimation(value.repeat())
                        appModel.syncronizedAnimations[key] = .init(name: key, animationController: controller, entityName: entity.name)
                    }
                }
            })
        }))
    }
}

The variables testEntity, testAnimation, and testController are for testing purposes only. If I try to start the animations in the playAnimations function, only the animation created in Blender starts (the one on the "SphereInteractable" object); the timeline starts only if I save a reference and play it with a tap gesture, or with a short delay via DispatchQueue.asyncAfter called in onAppear. Is there a better way to handle this? The goal is to get a reference to the timeline's AnimationPlaybackController, in order to sync the animation via SharePlay. Thanks
3
0
282
Oct ’24
Can we prevent Apple Vision Pro from turning off the display and going to sleep after it is taken off?
In some cases we put the Apple Vision Pro back on shortly after taking it off, for example less than 1 minute later, and we want to keep it active while it is off the head so that it can continue to work seamlessly when we put it on again. Currently, Apple Vision Pro turns off the display and goes to sleep whenever it is taken off. This behavior is explained in the user guide (to save power, for safety, etc.). However, we want a seamless experience! Is it possible to have something like the screen saver on macOS or iOS, where we can set our own delay before the device goes to sleep? Or is there any API that can be called to prevent it from going to sleep?
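For what it's worth, a hedged sketch of the only related control I'm aware of: UIKit's idle timer, which prevents dimming from user inactivity while the device is being worn. As far as I know it does not override the sleep that happens when the device is taken off, which is what is being asked about here.

import UIKit

// Keeps the display from dimming/sleeping due to inactivity while the
// device is worn. Not expected to keep Apple Vision Pro awake once it is
// taken off the head.
@MainActor
func setKeepAwakeWhileWorn(_ keepAwake: Bool) {
    UIApplication.shared.isIdleTimerDisabled = keepAwake
}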
0
0
250
Oct ’24
3D display from 3D camera
I want to pursue a project involving 3D VR visualisation. I would like to know whether there is a 3D stereoscopic camera setup yet that can connect straight to the Apple Vision Pro display, with no compatibility issues with MV-HEVC of course. Any recommendation is appreciated.
0
0
217
Oct ’24
Handling user-initiated re-centering in group immersive space?
Hi, I'm currently tinkering with a little SharePlay app for the Vision Pro that allows people to FaceTime and use SharePlay to play with random 3D models (as well as move them around, which should sync the model positions for everyone in relative space). When the users start their FaceTime call and then open the immersive space to see the 3D models, the models load in properly in the context of the group immersive space's coordinate system, and moving the models reflects the new positions in real time for each participant.

The main issue comes if/when users use the Digital Crown to re-center their view. It appears to re-center the model and view, which is expected. However, it also seems to re-position the model/root entity to match the user's origin. Not sure if this is intentional or not, but this essentially "de-syncs" the model (so me moving the model next to someone does not reflect it 1:1; it still moves properly, but the new "initial" position after re-centering makes it offset). Is there a potential solution or work-around so that re-centering the view doesn't de-sync the model/entity's position?

Rough code for my RealityView component is below:

RealityView { content, attachments in
    content.add(appModel.originEntity)
    appModel.originEntity.addChild(appModel.modelContainerEntity)
    appModel.setInitialModelPosition()
    configureGestures(forModel: appModel.modelContainerEntity)
    configureToolbarAttachment(content: content, attachments: attachments)
} update: { content, _ in
    // I have modified the Apple-provided gesture components to
    // send the app model the new positions/rotations
    // as well as broadcast the position/rotation to SharePlay participants.

    // When the user re-centers the view, it seems to also re-position the model
    // so that its origin is at the local user's origin, rather than
    // the original origin.
    // Can we receive a notification that the user has re-centered the view?
    // Or is there some other work-around?
    appModel.modelContainerEntity.setPosition(appModel.modelState.position, relativeTo: nil)
    appModel.modelContainerEntity.setOrientation(.init(appModel.modelState.rotation3d), relativeTo: nil)
} attachments: {
    Attachment(id: "customViewAttachment") {
        CustomView()
    }
}
.installGestures()

Please let me know if anything wasn't clear or if more information is needed. Thanks!
4
0
286
Oct ’24
Proper way of handling opening an ImmersiveSpace?
If you check the code here, https://developer.apple.com/documentation/compositorservices/interacting-with-virtual-content-blended-with-passthrough

var body: some Scene {
    ImmersiveSpace(id: Self.id) {
        CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
            let pathCollection: PathCollection
            do {
                pathCollection = try PathCollection(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create path collection \(error)")
            }

            let tintRenderer: TintRenderer
            do {
                tintRenderer = try TintRenderer(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create tint renderer \(error)")
            }

            Task(priority: .high) { @RendererActor in
                Task { @MainActor in
                    appModel.pathCollection = pathCollection
                    appModel.tintRenderer = tintRenderer
                }

                let renderer = try await Renderer(layerRenderer,
                                                  appModel,
                                                  pathCollection,
                                                  tintRenderer)
                try await renderer.renderLoop()

                Task { @MainActor in
                    appModel.pathCollection = nil
                    appModel.tintRenderer = nil
                }
            }

            layerRenderer.onSpatialEvent = {
                pathCollection.addEvents(eventCollection: $0)
            }
        }
    }
    .immersionStyle(selection: .constant(appModel.immersionStyle), in: .mixed, .full)
    .upperLimbVisibility(appModel.upperLimbVisibility)
}

the only way it deals with errors is fatalError, and I don't think I can throw anything or return anything else. Is there a way I can handle this gracefully and show a message box in the UI? I was hoping I could somehow trigger a failure and have https://developer.apple.com/documentation/swiftui/openimmersivespaceaction return a failure, but I couldn't find a nice way to do so. Let me know if you have ideas.
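A hedged sketch of one option, not a confirmed pattern: since the CompositorLayer closure cannot throw, record the failure on the shared app model instead of calling fatalError, and let a regular window react to it by showing an alert and dismissing the immersive space. This assumes the sample's appModel is an @Observable model available in the environment; appModel.setupError, MainWindowView, and ContentView are hypothetical names.

import SwiftUI

// Inside the CompositorLayer closure, replacing fatalError:
//
//   let pathCollection: PathCollection
//   do {
//       pathCollection = try PathCollection(layerRenderer: layerRenderer)
//   } catch {
//       Task { @MainActor in appModel.setupError = error }
//       return
//   }

struct MainWindowView: View {
    @Environment(AppModel.self) private var appModel
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    @State private var showError = false

    var body: some View {
        ContentView()
            .onChange(of: appModel.setupError != nil) { _, hasError in
                showError = hasError
            }
            .alert("Rendering failed", isPresented: $showError) {
                Button("OK") {
                    Task { await dismissImmersiveSpace() }
                }
            } message: {
                Text(appModel.setupError?.localizedDescription ?? "")
            }
    }
}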
1
0
284
Oct ’24
iOS 18.1 RC
I updated to the iOS 18.1 public beta when it was released, and I have had this problem from public beta 4 through the RC; it has never been fixed. With the time limit, when I ask for more time it never gives me more time, it just says "waiting for parent approval". I still have this problem on the iOS 18.1 RC. Can you fix it? I'm on an iPhone 11 Pro Max.
1
0
353
Oct ’24
[Reality Composer Pro] Is it possible to play a timeline from Xcode without adding a Behaviors component to entities?
Hi! I'm making a project with Xcode and Reality Composer Pro. I'm trying to play a Reality Composer Pro timeline from code without setting Behaviors on the entities. I also tried to send a notification from Xcode to the entities in Reality Composer Pro to play the timeline (I already set an "OnNotification" trigger with the Behaviors component). But it's not working, and I couldn't figure out what the problem is. Are there solutions for this?
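A hedged sketch of the two approaches described above; the scene/timeline names are illustrative, and the notification name and userInfo keys follow the pattern used in Apple's Reality Composer Pro sample code, so verify them against a current sample.

import RealityKit
import SwiftUI

// 1) Play the timeline directly: Reality Composer Pro timelines show up in an
//    AnimationLibraryComponent on the entity that owns them (see the timeline
//    post earlier in this list), so they can be played like any animation.
func playTimelineDirectly(on entity: Entity, named timelineName: String) {
    guard let library = entity.components[AnimationLibraryComponent.self] else { return }
    for (key, animation) in library.animations where key == timelineName {
        _ = entity.playAnimation(animation)
    }
}

// 2) Trigger an "OnNotification" behavior from code by posting the notification
//    RealityKit listens for, scoped to the scene that contains the behavior.
func triggerBehaviorNotification(_ identifier: String, in scene: RealityKit.Scene) {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

// Usage, once the entity is in a scene:
//   triggerBehaviorNotification("PlayTimeline", in: entity.scene!)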
1
0
376
Oct ’24
Determining if an ObjectAnchor is currently observed
I'm writing code using ObjectAnchor for visionOS. If an object is tracked and then becomes not visible (either because the user looked in a different direction, or because the tracked object was occluded by another object), it is still tracked and you get anchor updates (i.e., object permanence). For my application, it would be very helpful if I could determine whether the object is currently being observed, or whether it is not currently observed and just assumed to be in the same location as seen previously. ObjectAnchor.isTracked just seems to indicate whether it is getting anchor updates. I don't see anything in ObjectAnchor or AnchorUpdate that would allow me to determine if the object is currently observed. Does anyone know of a way to do this, or would this be a feature request?
2
0
170
Oct ’24
Attaching VideoMaterial to DockingRegion
I have a VideoMaterial inside a RealityView and want to attach it to a DockingRegion inside an immersive environment. It appears that adding the VideoMaterial entity as a child of the docking region somewhat works, but there are no lighting effects (specular, diffuse) from the playing video. So essentially: how can you add a VideoMaterial to a DockingRegion and achieve the same reflections/behavior as when using AVPlayerViewController? The latter is not an option, as I need custom controls.
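For context, a minimal sketch of the setup described above (it does not solve the lighting question): a VideoMaterial on a plane, added as a child of the docking-region entity from the Reality Composer Pro scene. The entity name "DockingRegion" and the plane size are assumptions.

import AVFoundation
import RealityKit

func attachVideo(to scene: Entity, url: URL) {
    guard let dockingRegion = scene.findEntity(named: "DockingRegion") else { return }

    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)

    // 16:9 plane; size and orientation would need to match the docking region.
    let screen = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                             materials: [material])
    dockingRegion.addChild(screen)
    player.play()
}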
0
0
227
Oct ’24