Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.


AR anchor shared across multiple immersive scenes
Hello, I am currently working on an app that features multiple environments in which I combine Reality Composer Pro scenes with objects managed at runtime, and I also make heavy use of RealityView attachments that modify the appearance of certain objects. Is it possible to keep track of an AR anchor when transitioning between immersive spaces?

About my app: there are two main contexts/scenes that the user progresses through. The first takes place in AR, is non-interactive, and is driven by a timeline animation. The second is in VR and allows the user to change the materials of select models. Both scenes need to be placed relative to a real-life object that functions as an image anchor. Anchoring is necessary for visual purposes in the AR context, and it would be nice to use it in the VR context as well in order to provide passive haptics to the user. If the user doesn't have access to the physical object, we fall back to plane-based anchoring. Either way, we would like to keep the anchor's position across the scenes.
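For context, a minimal sketch of one way to keep the tracked pose around between immersive spaces, assuming the visionOS ARKit image-tracking API (ARKitSession, ImageTrackingProvider, ReferenceImage.loadReferenceImages) roughly as used in Apple's samples; the AnchorStore type and the resource-group name are hypothetical:

import ARKit
import Observation
import simd

// Hypothetical shared model: both immersive spaces read the last known anchor
// transform from here, so the VR scene can be placed where the AR scene was.
@Observable
final class AnchorStore {
    // World-space transform of the tracked image (or plane) anchor.
    var originFromAnchor: simd_float4x4? = nil
}

func trackImageAnchor(into store: AnchorStore) async {
    let session = ARKitSession()
    // "SceneAnchor" is an assumed resource-group name for this sketch.
    let provider = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "SceneAnchor")
    )
    do {
        try await session.run([provider])
        for await update in provider.anchorUpdates {
            // Keep the latest pose; it survives dismissing and reopening immersive
            // spaces because the store outlives them.
            store.originFromAnchor = update.anchor.originFromAnchorTransform
        }
    } catch {
        print("Image tracking failed: \(error)")
    }
}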
1 reply · 0 boosts · 84 views · 3d
RoomCaptureSession persistence, ARSession pause broken?
Hi all, our app allows a user to scan a room and then save that scan on a separate view, followed by additional scans. We're looking into allowing room combining via CapturedStructure, so we need rooms to be scanned in the same ARWorldMap without necessarily needing to re-localize in the same session. This should fit within the first scenario that Apple described.

The only way I have found that meets our requirements is to save the RoomCaptureView and re-use it whenever we need to start a session again. This creates a number of other issues, and ideally we wouldn't need to keep a view around in something like a singleton. We are using captureSession.stop(pauseARSession: false). Additionally, if we reuse the same RoomCaptureView and an error occurs during the scanning process, we can't get the instructions overlay to appear again (specifically, the instructions in the middle of the view that state "Move device to start"). It's as if the instructions are completely removed and scanning is stuck in an error state once an error occurs. These instructions also seem to be separate from the instructions we can get from RoomCaptureViewDelegate via didProvide instruction: RoomCaptureSession.Instruction, so we can't use that either. There are a couple of subviews that seem relevant to this, RoomCaptureCoachingOverlayView and ARGlyphView, but neither is public, so we can't force them to appear. We also attempted a number of other things to get these subviews to appear, such as layoutIfNeeded().

Saving the ARSession and passing it to let roomCaptureView = RoomCaptureView(frame: viewBounds, arSession: arSession), where we create a new view with the same ARSession, seems much more ideal because it solves the issues above, but we run into another problem: world tracking seems to be completely lost when a new RoomCaptureView (and thus a new RoomCaptureSession) is started, even with the same already-running ARSession, almost as if captureSession.stop(pauseARSession: false) doesn't work as described.

Is there any way around needing to use the same RoomCaptureView or RoomCaptureSession for subsequent scans in the same session, without needing to re-localize via ARWorldMap loading? Is there a way to force the guiding instructions to appear?
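For reference, a rough sketch of the shared-ARSession arrangement described above (the coordinator type and its method names are illustrative; the RoomPlan and ARKit calls are the iOS 17 APIs the post mentions). Per the post, world tracking is still lost with this setup, so treat it as a restatement of the problem rather than a fix:

import ARKit
import RoomPlan
import UIKit

// Keep one ARSession alive for the whole multi-room flow and hand it to each
// new RoomCaptureView so scans share the same world tracking.
@MainActor
final class ScanCoordinator {
    let arSession = ARSession()
    private(set) var roomCaptureView: RoomCaptureView?

    func beginScan(in container: UIView) {
        // New view per scan, backed by the same ARSession (iOS 17+ initializer).
        let view = RoomCaptureView(frame: container.bounds, arSession: arSession)
        container.addSubview(view)
        roomCaptureView = view
        view.captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func endScan() {
        // Stop RoomPlan but keep ARKit world tracking running for the next room.
        roomCaptureView?.captureSession.stop(pauseARSession: false)
        roomCaptureView?.removeFromSuperview()
        roomCaptureView = nil
    }
}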
0 replies · 3 boosts · 79 views · 3d
Post-processing in visionOS
WWDC21 had a cool demo project with fish and a watery, misty look (Dive into RealityKit). It used post-processing in RealityKit, but the ARView class isn't available in visionOS. Can CompositorLayer be used instead for post-processing in full immersion?
0 replies · 0 boosts · 54 views · 3d
Xcode 16 cannot load AR scenes from .rcproject files
Ever since updating to Xcode 16, my AR app doesn't compile because Xcode doesn't recognize the .rcproject files used to load the AR experiences in the iOS app. The .rcproject files were authored in Reality Composer on iPadOS. The expected behavior is described in this official Apple documentation article: https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file How do I submit a ticket to Apple?
0 replies · 0 boosts · 99 views · 4d
Information from CMSampleBuffer
What information will I get from a CMSampleBuffer? Is there an option to block close-up accommodation of the camera? Is there a way for the Object Capture module to take a video instead of a series of pictures? It would be fantastic to get an answer to all of these questions so we can move forward with new implementations.
0 replies · 0 boosts · 114 views · 5d
Access Persona on visionOS
I am planning to build a visionOS app and need access to the Persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's Persona? Other applications like Zoom and Teams for visionOS use the Persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with an integrated Persona: https://www.youtube.com/watch?v=mMzK8C3t14I Any help is very welcome, thanks.
3 replies · 0 boosts · 652 views · Apr ’24
Access to raw LiDAR point cloud
Is it possible to access the raw LiDAR measurements before the sceneDepth calculation combines the LiDAR measurements with visual data? In low-light environments the LiDAR scanner should still work and provide depth info, but I cannot figure out how to access those pure LiDAR depth measurements. I am currently using: guard let frame = arView.session.currentFrame, let depthData = frame.sceneDepth?.depthMap else { print("Depth data is unavailable.") return } but this is the depth data after sensor fusion occurs, and it fails in low-light conditions.
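For context, a small sketch of the fused-depth access path the post describes, extended to read the per-pixel confidence map as well; as far as I know, ARKit only exposes this fused sceneDepth (enabled via the .sceneDepth frame semantics), not pre-fusion LiDAR returns:

import ARKit

// Reads the fused depth map and its confidence map from the current frame.
// Requires ARWorldTrackingConfiguration.frameSemantics to include .sceneDepth.
func readSceneDepth(from session: ARSession) {
    guard let frame = session.currentFrame,
          let sceneDepth = frame.sceneDepth else {
        print("Depth data is unavailable.")
        return
    }
    let depthMap = sceneDepth.depthMap               // Float32 meters, after sensor fusion
    let confidenceMap = sceneDepth.confidenceMap     // ARConfidenceLevel per pixel (optional)
    print(CVPixelBufferGetWidth(depthMap),
          CVPixelBufferGetHeight(depthMap),
          confidenceMap != nil)
}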
4 replies · 0 boosts · 269 views · 2w
ObjectCapture from ARKit
We are currently using Object Capture from ARKit, and we would like to fix the exposure time, white balance, and ISO. How can we do this? Additionally, we'd like to obtain the following information from ARKit: the white balance parameters (in case we cannot fix them) and the color correction matrices.
0 replies · 0 boosts · 138 views · 1w
ARKit AnchorUpdate<ImageAnchor>.event Behavior Changes in visionOS 2.1
In the visionOS beta, when using ARKit for image detection, the initially detected AnchorUpdate status is .add, and subsequent detections of the same image are marked as .update. However, after toggling the immersiveSpace, the same image is detected with the status .add again. After updating to visionOS 2.1, the first detection status remains .add and subsequent detections of the same image remain .update, even after toggling the immersiveSpace. Could this be due to a change in the processing flow?
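For reference, a minimal sketch of observing these events with ImageTrackingProvider; the case names below (.added / .updated) are the ones I would expect on AnchorUpdate.Event, so treat them as an assumption rather than a quote from the docs:

import ARKit

// Logs image-anchor lifecycle events as they stream in from the provider.
func observe(_ imageTracking: ImageTrackingProvider) async {
    for await update in imageTracking.anchorUpdates {
        switch update.event {
        case .added:
            print("Anchor added:", update.anchor.id)
        case .updated:
            print("Anchor updated:", update.anchor.id)
        default:
            break // removal and any future events are ignored in this sketch
        }
    }
}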
2 replies · 0 boosts · 183 views · 2w
Eye Difference in Object Tracking
Hi all, I am having trouble debugging an issue where the wireframe object entity in the object tracking demo ("Explore object tracking for visionOS") appears incorrect in the right eye of the Vision Pro but correct in the left eye. Would anyone happen to know what is going on? I have attempted to offset the object by changing its world coordinates, but this moves the object in both the left and the right eye. Could this be due to the new visionOS beta update (2.0 → 2.2)? I am currently using visionOS 2.2. Thanks!
1 reply · 0 boosts · 171 views · 2w
How to insert modified video frames into the system camera to achieve an AI erase effect?
By applying for the Enterprise API, we can obtain the video frames captured by the Vision Pro, and we then process those frames to remove a certain object. However, we could not find a way to insert the processed video frames back into the data source provided by the system camera. Is there an API that can insert processed video frames into the original data and present them to the user? The desired effect is similar to turning the Digital Crown on the right side of the Vision Pro, which blends the physical world and digital space after rotation.

STEPS TO REPRODUCE
1. Obtain video frames.
2. Process the obtained video frames.
3. Insert the processed video frames into the visionOS system camera feed.

System: visionOS 2.0
APIs used: Enterprise APIs, main camera access permission
1 reply · 0 boosts · 224 views · 2w
[ARKit, Reality Composer Pro] Is it possible to load an immersive scene after recognizing preregistered images?
Hi! I want to know whether it's possible to load an immersive scene after scanning (recognizing) preregistered images or objects. I tried to load the immersive scene after scanning images and objects, but it didn't work well. Please let me know the solution if this is possible. Here is the ImmersiveView.swift code I tried:

// ImmersiveView.swift
import SwiftUI
import RealityKit
import RealityKitContent // Using the RealityKitContent module

struct ImmersiveView: View {
    @ObservedObject var viewModel: TrackingViewModel
    @State private var immersiveScene: Entity?
    @State private var isToggleOn: Bool = false // Variable for toggle state

    var body: some View {
        ZStack { // Overlay RealityView and UI elements
            RealityView { content in
                if let scene = immersiveScene {
                    content.add(scene)
                    print("Immersive scene successfully added.")
                    if let moneyGunsEntity = scene.findEntity(named: "MoneyGuns") {
                        NotificationCenter.default.post(
                            name: Notification.Name("RealityKit.NotificationTrigger"),
                            object: nil,
                            userInfo: [
                                "RealityKit.NotificationTrigger.Scene": scene,
                                "RealityKit.NotificationTrigger.Identifier": "PlayTimeline"
                            ]
                        )
                        print("PlayTimeline notification sent.")
                    } else {
                        print("MoneyGuns entity not found.")
                    }
                }
            }
            .onAppear {
                Task {
                    if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                        immersiveScene = scene
                    } else {
                        print("Failed to load immersive scene.")
                    }
                }
            }
            VStack {
                Spacer()
                Toggle(isOn: $isToggleOn) { // Add toggle button
                    Text("Toggle Option")
                        .foregroundColor(.white)
                }
                .padding()
                .background(Color.black.opacity(0.7))
                .cornerRadius(8)
                .padding()
            }
        }
    }
}
1 reply · 0 boosts · 292 views · 2w
RoomPlan Framework v2 - Stairs missing
Hello Community, I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version 2. In iOS 16, when using RoomPlan version 1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version 2, the stairs are no longer visible. Despite thorough investigation, I couldn't find any option within the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part. Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue. Thank you in advance for any assistance you can provide. Best regards
2 replies · 1 boost · 537 views · Mar ’24
[ARKit] Is it possible to remember a certain room using Room Tracking?
Hi! I'm making content using Room Tracking for Vision Pro these days, so I searched for information about it. Here are the links I visited, but I could not find the info I wanted: Apple ARKit, "Create enhanced spatial computing experiences with ARKit", RoomTrackingProvider.

I want to know whether it's possible to remember a room structure that was recognized before, and to add content at a certain world anchor in that room when the user enters the room again. For example, a developer could save the room structure, room info (with a room ID), and a world anchor of the room using the Room Tracking feature. After this, the developer could add entities via Xcode and Reality Composer Pro at certain positions in the room to show content to users when they enter the room, so users can see the content whenever they visit the room. Is this possible? If there are example codes or projects about it, please let me know.
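Not a confirmed answer, just a rough sketch of the kind of pairing described above, assuming WorldTrackingProvider.addAnchor(_:) and persisted WorldAnchor IDs behave as documented; the room-ID key and the UserDefaults storage are purely illustrative:

import ARKit
import Foundation
import simd

// Places a world anchor for a room and remembers its ID so content can be
// re-attached to the same spot when the room is recognized again later.
func placeAndRememberAnchor(worldTracking: WorldTrackingProvider,
                            at originFromAnchor: simd_float4x4,
                            forRoom roomID: String) async {
    let anchor = WorldAnchor(originFromAnchorTransform: originFromAnchor)
    do {
        try await worldTracking.addAnchor(anchor)
        // Illustrative persistence: map roomID -> anchor.id for the next session.
        UserDefaults.standard.set(anchor.id.uuidString, forKey: "roomAnchor.\(roomID)")
    } catch {
        print("Failed to add world anchor: \(error)")
    }
}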
1 reply · 0 boosts · 320 views · 3w
Geometry recognition and measurement from MeshAnchor
FYI, the source code of the FindSurface demo app for Apple Vision Pro (visionOS) is now available. The Swift package of the FindSurface™ library is required to build the source code into the demo app. https://github.com/CurvSurf/FindSurface-visionOS

After starting the app, the floating panels will appear on your right side, and you will see wireframe meshes that approximately describe your environment. Performing a spatial tap (pinching with your thumb and index finger) while staring at a location on the meshes will invoke FindSurface, with an indicator (blue disk) appearing on the surface you've gazed at.

Voice commands:
“Tap” – Spatial tap (gazing & pinching). Invokes FindSurface.
“Tap plane” – Plane selection.
“Tap sphere” or “Tap ball” – Sphere selection.
“Tap cylinder” – Cylinder selection.
“Tap cone” – Cone selection.
“Tap torus” or “Tap donut” – Torus selection.
“Tap accuracy” or “Tap measurement accuracy” – Accuracy selection.
“Tap mean distance”, “Tap average distance”, or “Tap distance” – Avg. Distance selection.
“Tap touch radius” or “Tap seed radius” – Touch Radius selection.
“Tap inlier” – “Show inlier points” toggle.
“Tap outline” – “Show geometry outline” toggle.
“Tap clear” – “Clear Scene” click.
7 replies · 0 boosts · 901 views · Jun ’24
Object occlusion on non-LiDAR devices
Hi, I'm currently working on an ARKit project where I need to implement object occlusion on devices that do not have a LiDAR sensor (e.g., iPhone XR, iPhone 11). I used Core ML models like DepthAnythingV2 to create depth maps and DETRResnet50SemanticSegmentationF16P8 to perform real-time segmentation, but these models are too heavy for those devices. Any advice or pointers to resources would be much appreciated.
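For illustration, a minimal sketch of running a Core ML depth model on each camera frame via Vision; the model is passed in as a parameter because the specific network (and whether it is light enough for these devices) is exactly the open question in the post:

import ARKit
import Vision

// Runs an image-to-image Core ML model (e.g. a monocular depth network) on the
// current ARKit camera image and hands the resulting pixel buffer to an occlusion pass.
func estimateDepth(for frame: ARFrame, with model: VNCoreMLModel) {
    let request = VNCoreMLRequest(model: model) { request, _ in
        if let observation = request.results?.first as? VNPixelBufferObservation {
            let depthMap = observation.pixelBuffer
            // Feed depthMap into your occlusion pass (e.g. a custom depth texture).
            _ = depthMap
        }
    }
    request.imageCropAndScaleOption = .scaleFill
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .right,
                                        options: [:])
    try? handler.perform([request])
}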
2 replies · 0 boosts · 329 views · 3w
Proper way of handling opening an ImmersiveSpace?
If you check the code here, https://developer.apple.com/documentation/compositorservices/interacting-with-virtual-content-blended-with-passthrough

var body: some Scene {
    ImmersiveSpace(id: Self.id) {
        CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
            let pathCollection: PathCollection
            do {
                pathCollection = try PathCollection(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create path collection \(error)")
            }

            let tintRenderer: TintRenderer
            do {
                tintRenderer = try TintRenderer(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create tint renderer \(error)")
            }

            Task(priority: .high) { @RendererActor in
                Task { @MainActor in
                    appModel.pathCollection = pathCollection
                    appModel.tintRenderer = tintRenderer
                }

                let renderer = try await Renderer(layerRenderer,
                                                  appModel,
                                                  pathCollection,
                                                  tintRenderer)
                try await renderer.renderLoop()

                Task { @MainActor in
                    appModel.pathCollection = nil
                    appModel.tintRenderer = nil
                }
            }

            layerRenderer.onSpatialEvent = {
                pathCollection.addEvents(eventCollection: $0)
            }
        }
    }
    .immersionStyle(selection: .constant(appModel.immersionStyle), in: .mixed, .full)
    .upperLimbVisibility(appModel.upperLimbVisibility)
}

the only way it deals with errors is fatalError, and I don't think I can throw or return anything else. Is there a way I can handle this gracefully and show a message box in the UI? I was hoping I could somehow trigger a failure and have openImmersiveSpace (https://developer.apple.com/documentation/swiftui/openimmersivespaceaction) return a failure, but I couldn't find a nice way to do so. Let me know if you have ideas.
1 reply · 0 boosts · 252 views · Oct ’24