Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under the ARKit tag (200 posts). Each entry lists its replies, boosts, views, and latest activity.

Object Tracking with RealityView
When I wanted to load the Reality Composer Pro scene containing Object Tracking, I tried the following code:

```swift
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
```

Obviously, this is not enough on its own. We need some configuration that enables Object Tracking for the RealityView. What do we need to add?
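For context, loading the scene is typically only half of it: on visionOS 2 an ARKitSession with an ObjectTrackingProvider generally has to be running alongside the RealityView. A minimal sketch, assuming a .referenceobject file produced by Create ML (the file name here is hypothetical):

```swift
import ARKit
import RealityKit

// Sketch: run object tracking alongside the RealityView scene load.
let session = ARKitSession()

func startObjectTracking() async throws {
    // "MyObject.referenceobject" is a hypothetical Create ML output bundled with the app.
    guard let url = Bundle.main.url(forResource: "MyObject", withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)

    let objectTracking = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([objectTracking])

    // Anchor updates report where the tracked object is in world space.
    for await update in objectTracking.anchorUpdates {
        print("Object anchor tracked: \(update.anchor.isTracked)")
    }
}
```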
Replies: 2 · Boosts: 0 · Views: 563 · Jul ’24
How to Detect Gaze & Gesture on Entity
It's a common system interaction to look at an item in SwiftUI and tap to select it. I'm confused about how to do the same with ModelEntities. How do I use gaze to select a ModelEntity for context-based actions? For example, look at the green sphere and tap to pull up a menu, or look in a direction and clap to **** away virtual objects, and so on. If this is not possible, is there a workaround?
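A hedged sketch of the usual mapping: the system's look-and-pinch arrives as a spatial tap, so giving the ModelEntity an InputTargetComponent and collision shapes, then attaching a SpatialTapGesture targeted to entities, selects whatever the user is looking at; a HoverEffectComponent adds gaze feedback. Gaze-only queries and clap detection are not covered by this.

```swift
import SwiftUI
import RealityKit

// Sketch: select the entity the user is looking at when they pinch (system tap).
struct GazeSelectView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial(color: .green, isMetallic: false)])
            sphere.name = "greenSphere"
            sphere.components.set(InputTargetComponent())    // make it a gesture target
            sphere.components.set(HoverEffectComponent())    // highlight while gazed at
            sphere.generateCollisionShapes(recursive: false) // hit-testing needs collision
            content.add(sphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // value.entity is the entity under the user's gaze at pinch time;
                    // present the context menu for it here.
                    print("Selected \(value.entity.name)")
                }
        )
    }
}
```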
Replies: 1 · Boosts: 0 · Views: 502 · Jun ’24
Disabling New Hand Gesture Features in Vision Pro App on visionOS 2
Hi everyone, I'm developing a Vision Pro app on visionOS 2, and I've run into issues with the new hand gestures introduced in this update. My app displays a UI element when a user's palm is detected. However, the new system gestures for Home View, Control Center, and volume adjustment interfere with my app's functionality.

What I'm trying to achieve:
- Detect when a user's palm is open and display a UI element.
- Ensure my app's custom hand gestures are not disturbed by the new default gestures in visionOS 2.

Problem: The new visionOS 2 hand gestures (such as those for Home View, Control Center, and volume adjustment) activate while my app is open, disrupting its functionality. I want to disable these system-level gestures while my app is running.
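For the palm-detection half, a minimal sketch using HandTrackingProvider follows; it does not answer whether the system-level gestures can be suppressed, and the "palm open" test is a placeholder heuristic, not a system API.

```swift
import ARKit

// Sketch: observe hand anchors; decide from the skeleton whether to show the UI.
let session = ARKitSession()
let handTracking = HandTrackingProvider()

func trackPalms() async throws {
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }
        // Placeholder heuristic: a real check would compare fingertip joints
        // from `skeleton` against the palm pose in anchor.originFromAnchorTransform.
        _ = skeleton
    }
}
```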
Replies: 3 · Boosts: 2 · Views: 1.3k · Sep ’24
Object anchor not working with ARKit in iOS
With WWDC 24, I was excited to see that Apple is bringing their APIs from visionOS to iOS. I tried using the Object Anchoring component in Reality Composer Pro. While this works with a Vision Pro, it looks like the entity spawns at the origin if we run the same on iOS, and the object anchoring doesn't seem to work. Is this intended? Below is how I'm doing this. I added an Anchoring component and added the .referenceObject file I trained using Create ML. This is the code I'm using to load this scene in:

```swift
//  GrootView.swift
//  ARTest-New
//
//  Created by Sravan Karuturi on 6/10/24.

import SwiftUI
import RealityKit
import Box

struct GrootView: View {
    @StateObject private var grootVM = GrootViewModel()
    @State private var ent: Entity? = nil
    @State var anchor: Entity? = nil
    @State var wallAnchor: Entity? = nil
    @State var floorAnchor: Entity? = nil

    var body: some View {
        RealityView { content in
            #if os(iOS)
            await content.setupWorldTracking()
            content.camera = .worldTracking
            #endif

            ent = try? await Entity(named: "Box", in: boxBundle)
            print(ent?.children)

            anchor = ent?.findEntity(named: "ObjectAnchor")
            wallAnchor = ent?.findEntity(named: "WallAnchor")
            floorAnchor = ent?.findEntity(named: "FloorAnchor")

            let updateSum = content.subscribe(to: SceneEvents.Update.self) { event in
                if let anc = anchor, anc.isAnchored {
                    print("Found Item")
                }
                if let anc = floorAnchor, anc.isAnchored {
                    print("Found Floor")
                }
                if let anc = wallAnchor, anc.isAnchored {
                    print("Wall Anchor")
                }
            }

            content.add(ent!)
        }
    }
}

#Preview {
    GrootView()
}
```

While something similar seems to work on visionOS, it doesn't seem to work on iOS. When I run this app, we see all the children, and "Found Item" is printed constantly even when the item isn't in the scene. I'm not really sure if this is just not supported yet on iOS (I really hope that's not the case) or if I messed something up somehow.
Replies: 2 · Boosts: 1 · Views: 588 · Jun ’24
Questions about WorldTrackedAnchor Resiliency
Background: The app that I am working on lets the user place things in their surroundings and recovers those placements the next time they enter the immersive scene. From the documentation and discussions I have had, world-tracked anchors are local to the device. My questions are: What happens to these anchors when the user upgrades to a next-generation device? What happens to these anchors if the user gets an AppleCare replacement? Are they backed up and restored via iCloud? If not, I filed feedback about it a few months back :D FB13613066
Replies: 1 · Boosts: 0 · Views: 646 · Jun ’24
Why does video not play properly when ARConfiguration.providesAudioData is true and a ModelEntity in the ARView uses a VideoMaterial whose video contains audio?
```swift
import SwiftUI
import RealityKit
import ARKit
import AVFoundation

struct ContentView: View {
    var body: some View {
        ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.session.delegate = context.coordinator

        let worldConfig = ARWorldTrackingConfiguration()
        worldConfig.planeDetection = .horizontal
        // worldConfig.providesAudioData = true // enabling this line triggers the error below
        arView.session.run(worldConfig)

        addTestEntity(arView: arView)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator() }

    class Coordinator: NSObject, ARSessionDelegate, ARSessionObserver {
        func session(_ session: ARSession, didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer) {
        }
    }
}

func addTestEntity(arView: ARView) {
    let mesh = MeshResource.generatePlane(width: 0.5, depth: 0.35)
    guard let url = Bundle.main.url(forResource: "videoplayback", withExtension: "mp4") else { return }
    let player = AVPlayer(url: url)
    let videoMaterial = VideoMaterial(avPlayer: player)
    let model = ModelEntity(mesh: mesh, materials: [videoMaterial])
    model.transform.translation.y = 0.05
    let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
    anchor.children.append(model)
    player.play()
    arView.scene.anchors.append(anchor)
}
```

Error:

```
failed to update STS state: Error Domain=com.apple.STS-N Code=1396929899 "Error: failed to signal change" UserInfo={NSLocalizedDescription=Error: failed to signal change}
failed to update STS state: Error Domain=com.apple.STS-N Code=1396929899 "Error: failed to signal change" UserInfo={NSLocalizedDescription=Error: failed to signal change}
......
ARSession <0x125d88040>: did fail with error: Error Domain=com.apple.arkit.error Code=102 "Required sensor failed." UserInfo={NSLocalizedFailureReason=A sensor failed to deliver the required input., NSUnderlyingError=0x302922dc0 {Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.}}, NSLocalizedRecoverySuggestion=Make sure that the application has the required privacy settings., NSLocalizedDescription=Required sensor failed.}
```

iOS 17.5.1, Xcode 15.4
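One thing worth ruling out, offered as an assumption rather than a confirmed fix: providesAudioData makes ARKit capture microphone audio while the VideoMaterial's AVPlayer plays audio, and both go through the shared AVAudioSession. Explicitly configuring the session for simultaneous play-and-record before running the ARSession may avoid the sensor conflict:

```swift
import AVFoundation

// Sketch: configure the shared audio session for capture + playback.
func configureAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord,
                                     mode: .default,
                                     options: [.mixWithOthers, .defaultToSpeaker])
        try audioSession.setActive(true)
    } catch {
        print("Failed to configure AVAudioSession: \(error)")
    }
}
```

Also double-check that NSMicrophoneUsageDescription is present in Info.plist, since the error message points at privacy settings.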
Replies: 2 · Boosts: 0 · Views: 559 · Jun ’24
How do I use RoomAnchor?
What I want to do: I want to turn only the walls of a room into RealityKit entities that I can collide with, or turn into occlusion surfaces. This requires adding and maintaining RealityKit entities with mesh information from the RoomAnchor. It also requires creating a "collision shape" from the mesh information.

What I've explored: A RoomAnchor can provide me MeshAnchor.Geometry values that match only the "wall" portions of a room. I can use this mesh information to create RealityKit entities and add them to my immersive view. But those meshes don't come with UUIDs, so I'm not sure how I could know which entities' meshes need to be updated as the RoomAnchor is updated. As such, I just keep adding duplicate wall entities. A RoomAnchor also provides me with the UUIDs of its plane anchors, but no way I've discovered so far to connect those to the provided meshes.

Here is how I add the green walls from the RoomAnchor wall meshes. Note: I don't like that I need to wrap this in a Task to satisfy the async nature of making a shape from a mesh; I could be stuck with it, though. Warning: this code will keep adding walls, even if there are duplicates, and will likely cause performance issues :D.

```swift
func updateRoom(_ anchor: RoomAnchor) async throws {
    print("ROOM ID: \(anchor.id)")
    anchor.geometries(of: .wall).forEach { mesh in
        Task {
            let newEntity = Entity()
            newEntity.components.set(InputTargetComponent())
            realityViewContent?.addEntity(newEntity)
            newEntity.components.set(PlacementUtilities.PlacementSurfaceComponent())

            collisionEntities[anchor.id]?.components.set(OpacityComponent(opacity: 0.2))
            collisionEntities[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)

            // Generate a mesh for the plane.
            do {
                let contents = MeshResource.Contents(planeGeometry: mesh)
                let meshResource = try MeshResource.generate(from: contents)
                // Make this plane occlude virtual objects behind it.
                // entity.components.set(ModelComponent(mesh: meshResource, materials: [OcclusionMaterial()]))
                collisionEntities[anchor.id]?.components.set(ModelComponent(mesh: meshResource, materials: [SimpleMaterial(color: .green, roughness: 1.0, isMetallic: false)]))
            } catch {
                print("Failed to create a mesh resource for a plane anchor: \(error).")
                return
            }

            // Generate a collision shape for the plane (for object placement and physics).
            var shape: ShapeResource? = nil
            do {
                let vertices = anchor.geometry.vertices.asSIMD3(ofType: Float.self)
                shape = try await ShapeResource.generateStaticMesh(positions: vertices, faceIndices: anchor.geometry.faces.asUInt16Array())
            } catch {
                print("Failed to create a static mesh for a plane anchor: \(error).")
                return
            }

            if let shape {
                let collisionGroup = PlaneAnchor.verticalCollisionGroup
                collisionEntities[anchor.id]?.components.set(CollisionComponent(shapes: [shape], isStatic: true, filter: CollisionFilter(group: collisionGroup, mask: .all)))
                // The plane needs to be a static physics body so that objects come to rest on it.
                let physicsMaterial = PhysicsMaterialResource.generate()
                let physics = PhysicsBodyComponent(shapes: [shape], mass: 0.0, material: physicsMaterial, mode: .static)
                collisionEntities[anchor.id]?.components.set(physics)
            }
            collisionEntities[anchor.id]?.components.set(InputTargetComponent())
        }
    }
}
```
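One hedged workaround while the meshes lack stable identifiers: rebuild the room's wall entities wholesale on every RoomAnchor update instead of patching them in place, so duplicates never accumulate. This reuses the MeshResource.Contents(planeGeometry:) helper from the code above; wallContainers is a hypothetical cache keyed by room ID.

```swift
import ARKit
import RealityKit

var wallContainers: [UUID: Entity] = [:] // hypothetical cache, keyed by RoomAnchor.id

func rebuildWalls(for anchor: RoomAnchor, in content: RealityViewContent) {
    // Drop the previous generation of wall entities for this room.
    wallContainers[anchor.id]?.removeFromParent()

    let container = Entity()
    container.transform = Transform(matrix: anchor.originFromAnchorTransform)

    for mesh in anchor.geometries(of: .wall) {
        let contents = MeshResource.Contents(planeGeometry: mesh) // helper from the post
        guard let resource = try? MeshResource.generate(from: contents) else { continue }
        let wall = ModelEntity(mesh: resource,
                               materials: [SimpleMaterial(color: .green, roughness: 1.0, isMetallic: false)])
        container.addChild(wall)
    }

    content.add(container)
    wallContainers[anchor.id] = container
}
```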
Replies: 1 · Boosts: 0 · Views: 638 · Jun ’24
Sample Project for WWDC24 10092 Metal with Passthrough?
It’s great that we’ll be able to use Metal custom renderers in passthrough mode on visionOS. https://developer.apple.com/wwdc24/10092 This is a lot of complicated setup, however. It’s also unclear how occlusion and custom algorithms / raytracing will work in tandem with scene understanding. May we have a project template and/or sample? Preferably with the C API and not just Swift. This would be much appreciated and helpful to everyone who wants this setup. I’d like to see the whole process. Thank you for introducing this feature!
Replies: 3 · Boosts: 1 · Views: 754 · 1w
RealityKit on iOS: New anchor entity takes ages to show up
I'm implementing an AR app with image tracking capabilities. I noticed that it takes a very long time for the entities I want to overlay on a detected image to show up in the video feed. When debugging using debugOptions.insert(.showAnchorOrigins), I realized that the image is actually detected very quickly: the anchor origins show up almost immediately, and I can also see that my code reacts by adding new anchors for my ModelEntities there. However, it takes ages for these ModelEntities to actually show up; only if I move the camera a lot do they appear after a while. What might be the reason for this behaviour? I also noticed that for the first image target, a huge number of anchors are created, starting from the image and going all the way up towards the user. This does not happen with subsequent (other) image targets.
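Not a diagnosis, but a hedged sketch of the alternative route some apps take: let RealityKit own the image anchor so content attaches the moment ARKit reports the image, rather than creating entities from session callbacks (resource group and image names here are hypothetical).

```swift
import RealityKit

// Sketch: anchor the overlay directly to the detected image.
func addImageOverlay(to arView: ARView) {
    let anchor = AnchorEntity(.image(group: "AR Resources", name: "poster")) // hypothetical names
    let overlay = ModelEntity(mesh: .generatePlane(width: 0.2, height: 0.3),
                              materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    anchor.addChild(overlay)
    arView.scene.addAnchor(anchor)
}
```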
Replies: 1 · Boosts: 0 · Views: 400 · Jun ’24
Researcher in Spatial Computing / HCI Looking to Use Enterprise APIs on Vision Pro for HCI Research Only
I am a spatial computing / XR and human-computer interaction researcher at a private university. I am interested in using the Vision Pro's newly exposed camera access to develop and evaluate new algorithms for computational perception. (WWDC session here: https://developer.apple.com/wwdc24/10139) I understand this is targeted at large enterprises, but I would like to know whether, as a researcher affiliated with an educational institution, I could by some means develop private, for-development-only applications for the Vision Pro with the enterprise APIs enabled. The intent is not to publish apps, but rather to contribute to the research community through R&D. However, to my knowledge, I would be ineligible as a normal "business" since I do not employ 100+ people. I am an independent researcher, and on occasion I collaborate with small research groups within my university that focus on this kind of camera-based perception algorithm development. Could someone from Apple comment? Thank you.
Replies: 10 · Boosts: 1 · Views: 1.4k · Jun ’24
Trouble loading a reference object from Asset Catalog
Hi, I'm trying to test object recognition using ARKit. I scanned a couple of objects using the Apple demo app and copied the .arobject files to my laptop. I added them to my new project in Assets as shown in the image. However, as I follow the tutorial to load these objects to use as reference objects, I run into an error:

```swift
let configuration = ARWorldTrackingConfiguration()
guard let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "Test", bundle: Bundle.main) else {
    let ro = ARReferenceObject.referenceObjects(inGroupNamed: "Test", bundle: nil)
    let ro1 = ARReferenceObject.referenceObjects(inGroupNamed: "Gallery", bundle: .main)
    fatalError("Resource not found")
}
```

Here, we fail the guard statement, and ro and ro1 are both nil. I created a new project with just this one statement and that fails too. I'm using SwiftUI instead of UIKit, if that makes a difference, and am calling this in the makeUIView() function. Any pointers to what I might be doing wrong here are appreciated.
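One way to narrow this down, offered as a suggestion: bypass the asset catalog and load the .arobject archives straight from file URLs with ARReferenceObject(archiveURL:); if that works, the problem is in the resource group rather than the scans (file names here are hypothetical).

```swift
import ARKit

// Sketch: load .arobject files bundled with the app directly.
func loadReferenceObjects() throws -> Set<ARReferenceObject> {
    var objects = Set<ARReferenceObject>()
    for name in ["chair", "lamp"] { // hypothetical file names
        guard let url = Bundle.main.url(forResource: name, withExtension: "arobject") else { continue }
        objects.insert(try ARReferenceObject(archiveURL: url))
    }
    return objects
}

// Usage: configuration.detectionObjects = try loadReferenceObjects()
```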
Replies: 1 · Boosts: 1 · Views: 418 · Jun ’24
RealityView and Anchors
Hello, in the documentation for ARView we see a diagram showing that all entities are connected to the Scene via AnchorEntitys: https://developer.apple.com/documentation/realitykit/arview What happens when we are using a RealityView? Here the documentation suggests we directly add entities: https://developer.apple.com/documentation/realitykit/realityview/ Three questions:
1. Do we need to add entities to an AnchorEntity first and add that via content.add(...)?
2. Is an entity ignored by the physics engine if attached via an anchor?
3. If both the AnchorEntity and an attached entity are added via content.add(...), is the anchor's position ignored?
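For what it's worth, a small sketch of the pattern the RealityView documentation implies: entities added directly to content are placed relative to the scene origin, while entities that should follow a tracked target are parented to an AnchorEntity that is itself added via content.add(...).

```swift
import SwiftUI
import RealityKit

struct AnchorDemoView: View {
    var body: some View {
        RealityView { content in
            // Free entity: positioned relative to the scene origin.
            let free = ModelEntity(mesh: .generateSphere(radius: 0.05))
            free.position = [0, 1.2, -0.5]
            content.add(free)

            // Anchored entity: follows a detected horizontal plane.
            let planeAnchor = AnchorEntity(.plane(.horizontal,
                                                  classification: .table,
                                                  minimumBounds: [0.2, 0.2]))
            let box = ModelEntity(mesh: .generateBox(size: 0.1))
            planeAnchor.addChild(box)
            content.add(planeAnchor)
        }
    }
}
```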
Replies: 1 · Boosts: 0 · Views: 843 · May ’24
Coordinate system in SharePlay + spatial persona experience
I cannot figure out how SharePlay + spatial personas place the origin of RealityKit's coordinate system. I have an app on visionOS with an immersiveSpace(.mixed) scene. In the scene I am using ARKit to track my hand, creating a virtual object that follows the movement of my palm. Every frame I query positions from the HandAnchor to update the position of my object, using originFromAnchorTransform to correctly place it in the scene. However, when I try to adopt that in a SharePlay experience with spatial personas, the virtual object's position updates become a mess. With either template (.sideBySide or .conversational), the origin of my space appears with no pattern. I can always see that the virtual object doesn't follow my hand but sits in a random place; it seems there is a difference/transform between the HandAnchor's origin and the immersive space origin under spatial persona + SharePlay mode. Is that right? Or is there something I can try, like convert(displacement.inverse.rotation, from: .immersiveSpace, to: .scene) as mentioned here: https://developer.apple.com/documentation/realitykit/realitycoordinatespaceconverting and https://developer.apple.com/documentation/swiftui/environmentvalues/immersivespacedisplacement, to compute the translation and apply it to my virtual object? I tried, but it's not working yet. Can someone tell me how to do this correctly?
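A hedged sketch of the compensation idea using the immersiveSpaceDisplacement value linked above; whether this fully accounts for the SharePlay recentering is unverified.

```swift
import Spatial
import simd

// Sketch: undo the immersive-space displacement (a Pose3D from
// @Environment(\.immersiveSpaceDisplacement)) before applying an
// ARKit hand transform to the entity.
func compensatedTransform(originFromHandAnchor: simd_float4x4,
                          displacement: Pose3D) -> simd_float4x4 {
    let d = displacement.matrix // simd_double4x4
    // Convert the double-precision pose matrix to single precision.
    let displacementMatrix = simd_float4x4(columns: (
        SIMD4<Float>(Float(d.columns.0.x), Float(d.columns.0.y), Float(d.columns.0.z), Float(d.columns.0.w)),
        SIMD4<Float>(Float(d.columns.1.x), Float(d.columns.1.y), Float(d.columns.1.z), Float(d.columns.1.w)),
        SIMD4<Float>(Float(d.columns.2.x), Float(d.columns.2.y), Float(d.columns.2.z), Float(d.columns.2.w)),
        SIMD4<Float>(Float(d.columns.3.x), Float(d.columns.3.y), Float(d.columns.3.z), Float(d.columns.3.w))
    ))
    // Undo the displacement so the hand-anchored entity lands where the hand is.
    return displacementMatrix.inverse * originFromHandAnchor
}
```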
Replies: 2 · Boosts: 0 · Views: 773 · May ’24
SceneReconstructionProvider stops providing updates
I have found that my Vision Pro device can get into a state where my app no longer receives fresh SceneReconstructionProvider updates. It reports that the SceneReconstructionProvider goes into the DataProviderState.running state, and .anchorUpdates will report a set of stale mesh anchors when first fired up, but does not produce any further updates. Once the device gets into this state, I can force quit the app, and even uninstall and re-install it, and I get the same few mesh updates, but no fresh updates until I restart the device. Sample async function below. I can confirm that print("WE FELL OFF THE END OF sceneReconstruction.anchorUpdates") never gets executed, so it stays inside the sceneReconstruction.anchorUpdates loop.

```swift
let session = ARKitSession()
var handTracking = HandTrackingProvider()
let sceneReconstruction = SceneReconstructionProvider()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
let worldTracking = WorldTrackingProvider()

...

func start() async {
    do {
        await requestAuth()
        if dataProvidersAreSupported && isReadyToRun && !isRunning {
            // print("ARKitSession starting.")
            try await session.run([sceneReconstruction, handTracking, planeDetection, worldTracking])
            startCount += 1
            // TODO: Fail gracefully if we have to attempt start too many (# TBD) times
        } else {
            print("dataProvidersAreSupported: \(dataProvidersAreSupported). isReadyToRun: \(isRunning)")
            print("handTracking.state: \(handTracking.state), sceneReconstruction.state: \(sceneReconstruction.state) worldTracking.state: \(worldTracking.state), planeDetection.state; \(planeDetection.state)")
        }
    } catch {
        print("ARKitSession error:", error)
    }
}

...

func processReconstructionUpdates() async {
    while true {
        for await update in sceneReconstruction.anchorUpdates {
            let meshAnchor = update.anchor
            guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
            switch update.event {
            case .added:
                let entity = try! await generateModelEntity(geometry: meshAnchor.geometry)
                entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
                entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
                entity.components.set(InputTargetComponent())
                entity.name = "mesh"
                entity.physicsBody = PhysicsBodyComponent(mode: .static)
                let sortComponent = ModelSortGroupComponent(group: modelSortGroup, order: 1)
                entity.components.set(sortComponent)
                entity.components.set(OpacityComponent(opacity: 0.5))
                meshEntities[meshAnchor.id] = entity
                meshesParent.addChild(entity, preservingWorldTransform: true)
            case .updated:
                guard let entity = meshEntities[meshAnchor.id],
                      let updatedEntity = try? await generateModelEntity(geometry: meshAnchor.geometry) else { continue }
                entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
                entity.collision?.shapes = [shape]
                if let newMesh = updatedEntity.model?.mesh {
                    entity.model?.mesh = newMesh
                }
            case .removed:
                meshEntities[meshAnchor.id]?.removeFromParent()
                meshEntities.removeValue(forKey: meshAnchor.id)
            }
            print("We now have '\(meshEntities.count)' mesh entities")
        }
        print("WE FELL OFF THE END OF sceneReconstruction.anchorUpdates")
        try? await Task.sleep(nanoseconds: 1_000_000)
    }
}
```
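One hedged way to instrument this: watch ARKitSession.events alongside the anchor loop, so the app logs any provider state change or authorization change at the moment updates stop.

```swift
// Sketch: log session-level events to catch the provider stalling.
func monitorSessionEvents() async {
    for await event in session.events {
        switch event {
        case .dataProviderStateChanged(let providers, let newState, let error):
            print("Providers \(providers) -> \(newState), error: \(String(describing: error))")
        case .authorizationChanged(let type, let status):
            print("Authorization for \(type) changed to \(status)")
        default:
            break
        }
    }
}
```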
Replies: 5 · Boosts: 0 · Views: 572 · May ’24