Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit tag

200 Posts
visionOS Enterprise API: failing to get cameraFrame in cameraFrameUpdates
I am developing an app based on visionOS and need to use the main camera access provided by the Enterprise API. I have applied for an enterprise license and added the main camera access capability and the license file in Xcode. In my code, I use `await arKitSession.queryAuthorization(for: [.cameraAccess])` to request user permission for camera access. After obtaining permission, I use the arKitSession to run the cameraFrameProvider. However, the `for await` loop over cameraFrameUpdates never receives any frames from the camera; even the `print("hello")` inside the loop never executes. The app does not crash or throw any errors. Here is my full code:

```swift
import SwiftUI
import ARKit

struct cameraTestView: View {
    @State var pixelBuffer: CVPixelBuffer?

    var body: some View {
        VStack {
            Button(action: {
                Task {
                    await loadCameraFeed()
                }
            }) {
                Text("test")
            }
            if let pixelBuffer = pixelBuffer {
                let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
                let context = CIContext(options: nil)
                if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
                    Image(uiImage: UIImage(cgImage: cgImage))
                }
            } else {
                Image("exampleCase")
                    .resizable()
                    .scaledToFill()
                    .frame(width: 400, height: 400)
            }
        }
    }

    func loadCameraFeed() async {
        // Main camera feed access example
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
        let cameraFrameProvider = CameraFrameProvider()
        let arKitSession = ARKitSession()

        let cameraAuthorization = await arKitSession.queryAuthorization(for: [.cameraAccess])
        guard cameraAuthorization == [ARKitSession.AuthorizationType.cameraAccess: ARKitSession.AuthorizationStatus.allowed] else {
            return
        }

        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            return
        }

        let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0])
        if cameraFrameUpdates != nil {
            print("identify cameraFrameUpdates")
        } else {
            print("fail to get cameraFrameUpdates")
            return
        }

        for await cameraFrame in cameraFrameUpdates! {
            print("hello")
            guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                continue
            }
            pixelBuffer = mainCameraSample.pixelBuffer
        }
    }
}

#Preview(windowStyle: .automatic) {
    cameraTestView()
}
```

When I click the button, the console prints `identify cameraFrameUpdates`. It seems to be stuck waiting for a cameraFrame from cameraFrameUpdates. Occurring on visionOS 2.0 beta (just updated), Xcode 16 beta 6 (just updated). Does anyone have a workaround for this? I would be grateful if anyone can help.
2 replies · 1 boost · 495 views · Aug ’24
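One thing the post above doesn't show is where the view is hosted, and that matters here: on visionOS, ARKitSession data providers such as CameraFrameProvider only deliver data while the app has an open immersive space. If cameraTestView lives in a plain WindowGroup, the stream stays silent exactly as described. A minimal sketch of the scene setup this would need (the app type and space id are illustrative, not from the post):

```swift
import SwiftUI

@main
struct CameraTestApp: App {
    var body: some Scene {
        // CameraFrameProvider delivers frames only while an immersive space
        // is open, so host the camera view there rather than in a window.
        ImmersiveSpace(id: "cameraSpace") {
            cameraTestView()
        }
    }
}
```

A window-based UI can still exist alongside; it would open the space with the openImmersiveSpace environment action before calling loadCameraFeed().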
Placing 3D object in video
Is there any way to place 3D objects, maybe using ARKit or MetalKit, in a video? I have tried extracting frames from the video, drawing a cube using an SCNNode, rendering it into a UIImage, then gathering all the images and re-creating the video. But this is not a feasible solution, as it creates a huge memory spike and ultimately produces a memory warning. Is there any other way to draw 3D objects onto the video file?
1 reply · 0 boosts · 346 views · Aug ’24
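One way to avoid the UIImage round-trip described above is to render the SceneKit content offscreen with SCNRenderer into a Metal texture and hand pixel buffers straight to AVAssetWriter. A hedged sketch of the render step; the function name and the assumption that `texture` is backed by a CVPixelBuffer via a CVMetalTextureCache are mine, not from the post:

```swift
import SceneKit
import Metal

// Sketch: draw the 3D overlay for one video frame into a Metal texture.
// In a real pipeline the renderer and queue are created once, and `texture`
// is backed by a CVPixelBuffer (via CVMetalTextureCache) so the same memory
// can be appended to an AVAssetWriterInputPixelBufferAdaptor.
func renderOverlay(scene: SCNScene,
                   into texture: MTLTexture,
                   at time: TimeInterval,
                   renderer: SCNRenderer,
                   queue: MTLCommandQueue) {
    renderer.scene = scene

    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = texture
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].storeAction = .store

    guard let commandBuffer = queue.makeCommandBuffer() else { return }
    renderer.render(atTime: time,
                    viewport: CGRect(x: 0, y: 0, width: texture.width, height: texture.height),
                    commandBuffer: commandBuffer,
                    passDescriptor: pass)
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted() // simplest form; a real pipeline overlaps frames
}
```

Because each frame is drawn and written immediately, only one frame's worth of pixels is alive at a time, which is what keeps memory flat.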
Object Capture API crashes frequently when starting to generate the model
I have updated the sample code so that the scan will start generating when 15 photos are captured. I hope I can catch this error so the app won't crash... I really need help on this, thank you in advance!

```
Hardware Model:      iPhone14,2
OS Version:          iPhone OS 17.6.1 (21G93)

Exception Type:  EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000023363518c
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [525]

Triggered by Thread: 0

Thread 0 name:
Thread 0 Crashed:
0   RealityKit_SwiftUI   0x000000023363518c CoveragePointCloudMiniView.interfaceOrientation.getter + 508 (CoveragePointCloudMiniView.swift:0)
1   RealityKit_SwiftUI   0x0000000233634cdc closure #1 in closure #2 in CoveragePointCloudMiniView.body.getter + 124 (CoveragePointCloudMiniView.swift:75)
2   RealityKit_SwiftUI   0x000000023363db9c partial apply for closure #1 in closure #2 in CoveragePointCloudMiniView.body.getter + 20 (:0)
3   SwiftUI              0x0000000195c4bbac closure #1 in withTransaction(_:_:) + 276 (Transaction.swift:243)
4   SwiftUI              0x0000000195c4ba90 partial apply for closure #1 in withTransaction(_:_:) + 24 (:0)
5   libswiftCore.dylib   0x00000001903f8094 withExtendedLifetime<A, B>(_:_:) + 28 (LifetimeManager.swift:27)
6   SwiftUI              0x0000000195b17d78 withTransaction(_:_:) + 72 (Transaction.swift:228)
7   SwiftUI              0x0000000195b17d04 withAnimation(_:_:) + 116 (Transaction.swift:280)
8   RealityKit_SwiftUI   0x0000000233634bfc closure #2 in CoveragePointCloudMiniView.body.getter + 664 (CoveragePointCloudMiniView.swift:73)
9   SwiftUI              0x0000000195bef134 closure #1 in closure #1 in SubscriptionView.Subscriber.updateValue() + 72 (SubscriptionView.swift:66)
10  SwiftUI              0x0000000195b3f57c thunk for @escaping @callee_guaranteed () -> () + 28 (:0)
11  SwiftUI              0x0000000195b3c864 static Update.dispatchActions() + 1140 (Update.swift:151)
12  SwiftUI              0x0000000195b3bedc static Update.end() + 144 (Update.swift:58)
13  SwiftUI              0x0000000195a691fc closure #1 in SubscriptionView.Subscriber.updateValue() + 700 (SubscriptionView.swift:66)
14  SwiftUI              0x0000000195a68eb0 partial apply for thunk for @escaping @callee_guaranteed (@in_guaranteed A.Publisher.Output) -> () + 28 (:0)
15  SwiftUI              0x0000000195a68e78 closure #1 in ActionDispatcherSubscriber.respond(to:) + 76 (SubscriptionView.swift:98)
16  SwiftUI              0x0000000195a68c80 ActionDispatcherSubscriber.respond(to:) + 816 (SubscriptionView.swift:97)
17  SwiftUI              0x0000000195a68938 ActionDispatcherSubscriber.receive(_:) + 16 (SubscriptionView.swift:110)
18  SwiftUI              0x0000000195a6786c SubscriptionLifetime.Connection.receive(_:) + 100 (SubscriptionLifetime.swift:195)
19  Combine              0x000000019aed29d4 Publishers.Autoconnect.Inner.receive(_:) + 52 (Autoconnect.swift:142)
20  Combine              0x000000019aed2928 Publishers.Multicast.Inner.receive(_:) + 244 (Multicast.swift:211)
21  Combine              0x000000019aed2828 protocol witness for Subscriber.receive(_:) in conformance Publishers.Multicast<A, B>.Inner + 24 (:0)
.... (FBSScene.m:812)
46  FrontBoardServices   0x00000001aa892844 __94-[FBSWorkspaceScenesClient _queue_updateScene:withSettings:diff:transitionContext:completion:]_block_invoke_2 + 152 (FBSWorkspaceScenesClient.m:692)
47  FrontBoardServices   0x00000001aa8926cc -[FBSWorkspace _calloutQueue_executeCalloutFromSource:withBlock:] + 168 (FBSWorkspace.m:411)
48  FrontBoardServices   0x00000001aa8977fc __94-[FBSWorkspaceScenesClient _queue_updateScene:withSettings:diff:transitionContext:completion:]_block_invoke + 344 (FBSWorkspaceScenesClient.m:691)
49  libdispatch.dylib    0x00000001999aedd4 _dispatch_client_callout + 20 (object.m:576)
50  libdispatch.dylib    0x00000001999b286c _dispatch_block_invoke_direct + 288 (queue.c:511)
51  FrontBoardServices   0x00000001aa893d58 FBSSERIALQUEUE_IS_CALLING_OUT_TO_A_BLOCK + 52 (FBSSerialQueue.m:285)
52  FrontBoardServices   0x00000001aa893cd8 -[FBSMainRunLoopSerialQueue _targetQueue_performNextIfPossible] + 240 (FBSSerialQueue.m:309)
53  FrontBoardServices   0x00000001aa893bb0 -[FBSMainRunLoopSerialQueue performNextFromRunLoopSource] + 28 (FBSSerialQueue.m:322)
54  CoreFoundation       0x0000000191adb834 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 28 (CFRunLoop.c:1957)
55  CoreFoundation       0x0000000191adb7c8 __CFRunLoopDoSource0 + 176 (CFRunLoop.c:2001)
56  CoreFoundation       0x0000000191ad92f8 __CFRunLoopDoSources0 + 340 (CFRunLoop.c:2046)
57  CoreFoundation       0x0000000191ad8484 __CFRunLoopRun + 828 (CFRunLoop.c:2955)
58  CoreFoundation       0x0000000191ad7cd8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
59  GraphicsServices     0x00000001d65251a8 GSEventRunModal + 164 (GSEvent.c:2196)
60  UIKitCore            0x0000000194111ae8 -[UIApplication _run] + 888 (UIApplication.m:3713)
61  UIKitCore            0x00000001941c5d98 UIApplicationMain + 340 (UIApplication.m:5303)
62  SwiftUI              0x0000000195ccc294 closure #1 in KitRendererCommon(_:) + 168 (UIKitApp.swift:51)
63  SwiftUI              0x0000000195c78860 runApp(_:) + 152 (UIKitApp.swift:14)
64  SwiftUI              0x0000000195c8461c static App.main() + 132 (App.swift:114)
65  SoleFit              0x0000000103046cd4 static SoleFitApp.$main() + 24 (SoleFitApp.swift:0)
66  SoleFit              0x0000000103046cd4 main + 36
67  dyld                 0x00000001b52af154 start + 2356 (dyldMain.cpp:1298)
```
1 reply · 0 boosts · 322 views · Aug ’24
RealityView world tracking without camera feed?
Is it possible with iOS 18 to use RealityView with world tracking but without the camera feed as background? With `content.camera = .worldTracking` the background is always the camera feed, and with `content.camera = .virtual` the device's position and orientation don't affect the viewpoint. Is there a way to get a mixture of both? My use case is that my app "Encyclopedia GalacticAR" shows astronomical objects and a skybox (a huge sphere), like a VR view of planets. Now that iOS 18 offers RealityView for iOS and iPadOS, I would like to make use of it, but I haven't found a way to display my skybox as the environment instead of the camera feed. I filed the suggestion FB14734105 but hope that somebody knows a workaround...
4 replies · 1 boost · 640 views · Aug ’24
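There may be no RealityView answer yet, but one workaround worth hedging (it falls back to ARView, which RealityKit still provides on iOS) is to keep a world-tracking session while swapping the camera feed for a solid background, which ARView supports directly:

```swift
import ARKit
import RealityKit

// Sketch: world tracking without showing the camera image. The camera still
// drives tracking; only the rendered background is replaced, so a skybox
// sphere added to the scene becomes the visible environment.
func makeTrackedView() -> ARView {
    let arView = ARView(frame: .zero,
                        cameraMode: .ar,
                        automaticallyConfigureSession: false)
    arView.environment.background = .color(.black)
    arView.session.run(ARWorldTrackingConfiguration())
    return arView
}
```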
Collision Detection after Object Tracking
So I am tracking 2 objects in my scene and spawning a tiny arrow on each of them (this part is working as intended). Inside my scene I have added Collision Components and Physics Body Components to each of the arrows. I want to detect when a collision occurs between the 2 arrow entities. I have made the collision boxes big enough that they should definitely be overlapping, yet I am not able to detect when a collision occurs. This is the code that I use for the scene:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct DualObjectTrackingTest: View {
    @State private var subscription: EventSubscription?

    var body: some View {
        RealityView { content in
            if let immersiveContentEntity = try? await Entity(named: "SceneFind.usda", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
                print("Collision check started")
            }
        } update: { content in
            if let arrow = content.entities.first?.findEntity(named: "WhiteArrow") as? ModelEntity {
                let subscription = content.subscribe(to: CollisionEvents.Began.self, on: arrow) { collisionEvent in
                    print("Collision has occured")
                }
            }
        }
    }
}
```

All I see in my console logs is "Collision check started", and whenever I move the 2 objects close enough that their collision boxes overlap, I don't see any updates in the logs. Can anyone give me some further guidance/resources on this? Thanks again!
5 replies · 0 boosts · 532 views · Aug ’24
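A plausible cause in the post above (a guess from the snippet, not a confirmed diagnosis): the EventSubscription created in the update: closure is stored in a local constant and discarded as soon as the closure returns, so the handler can never fire; update: also runs repeatedly. A hedged rework that subscribes once in the make closure and keeps the subscription alive in the existing @State property:

```swift
RealityView { content in
    if let immersiveContentEntity = try? await Entity(named: "SceneFind.usda", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)
        if let arrow = immersiveContentEntity.findEntity(named: "WhiteArrow") as? ModelEntity {
            // Store the subscription in @State so it outlives this closure.
            subscription = content.subscribe(to: CollisionEvents.Began.self, on: arrow) { event in
                print("Collision between \(event.entityA.name) and \(event.entityB.name)")
            }
        }
    }
}
```

Both arrows also need their CollisionComponents active at runtime; if the entities are moved by object tracking rather than by physics, a .kinematic physics body mode is the usual pairing so the collision system still registers the movement.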
GameKit Access Point causes camera background image in ARKit to be black in iOS 18 beta only
I have an AR game using ARKit with SceneKit that works just fine in iOS 17. In the iOS 18 betas, the AR background image shows black instead of showing the real world. As a result there's no tracking, and obviously the whole game is useless. I narrowed down the issue to showing the Game Center access point.

My app has ViewController 1 (VC1) showing the main menu, and that's where I want to show the GC access point. From there you open VC2, which shows a list of levels. Selecting any level opens VC3, which has the ARScene. Following is the code I use to start Game Center in VC1:

```swift
GKLocalPlayer.local.authenticateHandler = { gcAuthVC, error in
    let isGameCenterReady = (gcAuthVC == nil) && (error == nil)
    if let viewController = gcAuthVC {
        self.present(viewController, animated: true, completion: nil)
    }
    if error != nil {
        print(error?.localizedDescription ?? "")
    }
    if isGameCenterReady {
        GKAccessPoint.shared.location = .topLeading
        GKAccessPoint.shared.showHighlights = true
        GKAccessPoint.shared.isActive = true
    }
}
```

When switching to VC2 I run `GKAccessPoint.shared.isActive = false` so that the access point will no longer show in any of the following VCs. I tried running it in VC1, VC2, and again in VC3; it doesn't change anything. Once I reach VC3, the background is black.

If in VC1 I don't run `GKAccessPoint.shared.isActive = true`, so I never activate the access point, the behavior is as follows:

- If I wait until the Game Center login animation completes and closes on its own, and then proceed to VC2 and VC3, the camera image shows correctly.
- If I quickly move to VC2 before the Game Center login animation has completed, so my code closes it by setting active = false, and I then continue to VC3, I see the black-background problem.

So it does look like activating the access point and then deactivating it causes the issue. BTW, if I activate the access point and leave it on in all VCs, the same black-background issue persists.

Other than that, when I'm in VC3 with the black background and I switch to another app (so my game moves to the background), when it returns to the foreground the camera suddenly shows the real world correctly! I tried to manually reset the AR session by pausing and restarting it, but that didn't change anything. Also, when I check with the debugger, it looks like when the app comes back to the foreground it doesn't run the session start code either. But something does seem to reset itself, so I wonder what that is. Maybe I could trigger the same manually in my code?

I repeat that everything works just fine in iOS 17 and below. This problem only started with the iOS 18 beta (currently on beta 5, but it started in some of the previous betas as well). So could this be a bug in iOS 18?

As a workaround I could check the iOS version and, if it's iOS 18, not activate the access point, hoping that the user will not jump to VC2 too quickly, and show my own button which opens Game Center. But I'd rather give users the full experience with their own avatar and the highlights showing up. Plus, certainly some users will move quickly to VC2, and that will be an awful experience. Any help would be greatly appreciated. Thanks!
2 replies · 0 boosts · 490 views · Aug ’24
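No confirmed fix is known for the iOS 18 beta behavior above, but since a background/foreground cycle restores the image, one hedged experiment (the sceneView outlet and the delay are illustrative) is to defer the AR session start until the view is fully on screen and the access point has been switched off, rather than starting it earlier in the view lifecycle:

```swift
import ARKit
import GameKit
import UIKit

class ARGameViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView! // illustrative outlet name

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        GKAccessPoint.shared.isActive = false
        // Give the access point teardown a beat before the session starts.
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { [weak self] in
            let configuration = ARWorldTrackingConfiguration()
            self?.sceneView.session.run(configuration,
                                        options: [.resetTracking, .removeExistingAnchors])
        }
    }
}
```

If that changes nothing either, a Feedback report with a sysdiagnose is probably the right move, given that iOS 17 behaves correctly.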
Entity Coordinates in Object Tracking
A second post on the same topic, as I feel I may have overcomplicated the earlier one. I am essentially performing object tracking inside Reality Composer Pro and adding a digital entity to the tracked object. I now want to get the coordinates of this digital entity inside Xcode. Secondly, can I track more than one object inside the same scene? For example, if I want to find a spanner and a screwdriver amongst a bunch of tools laid out on the table, spawn an arrow on top of the spanner and the screwdriver, and then get the coordinates of the arrows that I spawn, how can I go about this?
3 replies · 0 boosts · 408 views · Aug ’24
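Both halves of the question above have fairly direct answers in code: an entity's world-space coordinates come from position(relativeTo: nil), and a single ObjectTrackingProvider accepts several reference objects at once on visionOS 2. A hedged sketch; the reference-object and entity names are placeholders:

```swift
import ARKit
import RealityKit

// Track two reference objects with one provider (visionOS 2).
func startTracking(session: ARKitSession,
                   spanner: ReferenceObject,
                   screwdriver: ReferenceObject) async throws {
    let provider = ObjectTrackingProvider(referenceObjects: [spanner, screwdriver])
    try await session.run([provider])
}

// World-space coordinates of a spawned arrow entity.
func logPosition(of arrowEntity: Entity) {
    let p = arrowEntity.position(relativeTo: nil) // nil = world space
    print("arrow at x: \(p.x), y: \(p.y), z: \(p.z)")
}
```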
Tracked object coordinates in program
Hey, as a follow-up to my earlier posts about object tracking on visionOS 2: I'm doing some experimentation, and my use case requires me to track the coordinates of a digital entity that I attach (relative to my reference object) to my reference object. Can something like this be done? Right now, all I'm doing is putting my reference object in my scene and then positioning the 3D content that I want to show at the corresponding locations on the reference object. I then load the scene in a RealityView block via my SwiftUI code. I now want to know if I can also extract and use the coordinates of the digital entity that I have placed (post object-tracking) and then make some manipulations via code, for example: if the physical coordinates of the digital entity are in a certain x, y, z range, trigger a function or bring up an alert message in a tile. Is something like this possible, and if so, can you help me understand the different aspects of this problem with some sample/reference code? So far I've done most of the object-tracking tasks via Reality Composer Pro, but this task will require me to do quite a bit of programming as well, and I'm kind of lost as to how to start and go about this. Thanks for any help that y'all can give me!
1 reply · 0 boosts · 295 views · Aug ’24
RoomPlan AR Frame Position wrong after combining Rooms
Hello everyone, I am struggling to find a solution for the following problem, and I would be glad and thankful if anyone can help me.

My use case: I am using RoomPlan to scan a room. While scanning, there is a function to take pictures. The positions from which the pictures are taken are saved (in my app, they are called "points of interest" = POIs). This works fine for a single room, but when adding a new room and combining the two of them using:

```swift
structureBuilder.capturedStructure(from: capturedRooms)
```

the first room is transformed and thus moved around to fit in the world space. The points are not transformed with the rest of the room, since they are not part of the room's structure specifically, which is fine, but how can I transform the POIs too, so that they end up at the correct positions where they were taken? I used:

```swift
func captureSession(_ session: RoomCaptureSession, didEndWith data: CapturedRoomData, error: (Error)?)
```

to get the transform matrix from "arFrameReferenceOriginTransform" and applied it to the POIs, but it still seems that this is not enough. I would be happy for any tips and help! Thanks in advance!

My update function:

```swift
func updatePOIPositions(with originTransform: simd_float4x4) {
    for i in 0..<poisOldRooms.count {
        let poi = poisOldRooms[i]
        let originalPosition = SIMD4<Float>(
            poi.data.cameraOriginX,
            poi.data.cameraOriginY,
            poi.data.cameraOriginZ,
            1.0
        )
        let updatedPosition = originTransform * originalPosition
        poisOldRooms[i].data.cameraX = updatedPosition.x
        poisOldRooms[i].data.cameraY = updatedPosition.y
        poisOldRooms[i].data.cameraZ = updatedPosition.z
    }
}
```
3 replies · 2 boosts · 520 views · Sep ’24
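One hedged reading of the RoomPlan problem above (the exact semantics of arFrameReferenceOriginTransform aren't spelled out in the post, so treat this as an assumption): the POIs were recorded in the old capture session's coordinate space, so the combined structure's origin transform can't simply be applied on top; the old origin has to be undone first, then the new one applied:

```swift
import simd

// Sketch: move a POI from the old room's capture space into the combined
// structure's space. `oldOrigin` and `newOrigin` stand for the
// arFrameReferenceOriginTransform of the original capture and of the
// combined structure, respectively; both are assumptions about the data model.
func remapPOI(_ position: SIMD4<Float>,
              oldOrigin: simd_float4x4,
              newOrigin: simd_float4x4) -> SIMD4<Float> {
    let delta = newOrigin.inverse * oldOrigin
    return delta * position
}
```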
Can we use the ARKit CameraFrameProvider API for prototyping?
It's my understanding that to use the CameraFrameProvider, which provides access to the Apple Vision Pro front-facing camera feed, the enterprise main camera access entitlement "com.apple.developer.arkit.main-camera-access.allow" is required. Is there a way to prototype apps that use the CameraFrameProvider on an Apple Vision Pro that has developer mode enabled, without having the "com.apple.developer.arkit.main-camera-access.allow" entitlement?
1 reply · 0 boosts · 360 views · Jul ’24
Can I use the PhotoKit sample on visionOS?
I am a student developer. We are trying to implement an application that allows you to take photos in visionOS MR mode and access the photos you took. Can the contents of the link below be used on visionOS? https://developer.apple.com/tutorials/sample-apps/capturingphotos-captureandsave/ I would really appreciate your reply. For reference, we plan to package the methods in Swift and import the framework into Unity to use them.
0 replies · 0 boosts · 335 views · Jul ’24
Getting main camera frame using CameraFrameProvider
Hello, I am trying to use the new Enterprise API to capture main camera frames using the CameraFrameProvider. So far, I have not been able to make it work. I followed the sample code provided in this thread (literally copy-pasted it): https://forums.developer.apple.com/forums/thread/758364. When I run the application on the Vision Pro, no frame is captured. I get a message in Xcode's console that no entitlement is found. However, the entitlement is created, and the license file is also in the project. Besides, all authorization keys are added in the plist file. What am I missing? How can I tell whether the license file is wrong? Thank you.
2 replies · 0 boosts · 435 views · Jul ’24
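For both camera-access posts above, the "no entitlement found" failure usually comes down to the signed entitlements not containing the key quoted earlier in this list. A sketch of what the app's .entitlements file would need, assuming the standard plist layout (the enterprise license file must also be bundled and valid for the same app ID):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Enterprise API: main camera access on Vision Pro -->
    <key>com.apple.developer.arkit.main-camera-access.allow</key>
    <true/>
</dict>
</plist>
```

Running `codesign -d --entitlements - MyApp.app` on the built product shows whether the key actually made it into the signature.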
Convert CGPoint into SCNVector3
I want to convert a CGPoint into an SCNVector3. I am using ARFaceTrackingConfiguration for face tracking. Below is my code to convert an SCNVector3 to a CGPoint:

```swift
let point = faceAnchor.verticeAndProjection(to: sceneView, facePoint: faceAnchor.geometry.vertices[0])
print(point, faceAnchor.geometry.vertices[0])
```

which prints the values:

```
CGPoint = (350.564453125, 643.4456787109375)
SIMD3<Float>(0.014480735, 0.01397189, 0.04508282)
```

```swift
extension ARFaceAnchor {
    // Struct to store the 3D vertex and the 2D projection point.
    struct VerticesAndProjection {
        var vertex: SIMD3<Float>
        var projected: CGPoint
    }

    // Return the 2D projection of a face vertex.
    func verticeAndProjection(to view: ARSCNView, facePoint: Int) -> CGPoint {
        let point = SCNVector3(geometry.vertices[facePoint])
        let col = SIMD4<Float>(SCNVector4())
        let pos = SIMD4<Float>(SCNVector4(point.x, point.y, point.z, 1))
        let pworld = transform * simd_float4x4(col, col, col, pos)
        let vect = view.projectPoint(SCNVector3(pworld.position.x, pworld.position.y, pworld.position.z))
        return CGPoint(x: CGFloat(vect.x), y: CGFloat(vect.y))
    }
}

extension matrix_float4x4 {
    /// The position component of the transform matrix.
    public var position: SCNVector3 {
        SCNVector3(self[3][0], self[3][1], self[3][2])
    }
}
```

Now I want to convert the same CGPoint back to an SCNVector3. I tried the code below, but it does not give the expected value, which is SIMD3<Float>(0.014480735, 0.01397189, 0.04508282):

```swift
let projectedOrigin = sceneView.projectPoint(SCNVector3Zero)
let unproject = sceneView.unprojectPoint(SCNVector3(point.x, point.y, CGFloat(projectedOrigin.z)))
let vector = SCNVector3(unproject.x, unproject.y, unproject.z)
```

Is there any way to convert a CGPoint to an SCNVector3? I cannot use hitTest, because this CGPoint is not on a node; it is somewhere on the face area.
3 replies · 0 boosts · 476 views · Jul ’24
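On the unprojection attempt above, the likely problem is the depth value: projectedOrigin.z is the screen-space depth of the world origin, not of the face vertex, so the unprojected point lands on the wrong plane. When the CGPoint was produced by projecting a known 3D point, the round trip works by reusing that point's own projected depth (a sketch under that assumption):

```swift
import ARKit
import SceneKit

// Unproject a screen point back to 3D using the depth of the world point it
// was projected from, instead of the world origin's depth.
func unproject(_ screenPoint: CGPoint,
               matching worldPoint: SCNVector3,
               in view: ARSCNView) -> SCNVector3 {
    let projected = view.projectPoint(worldPoint) // .z is this point's depth
    return view.unprojectPoint(SCNVector3(Float(screenPoint.x),
                                          Float(screenPoint.y),
                                          projected.z))
}
// The result is in world space; to compare with faceAnchor.geometry.vertices
// (anchor-local values), transform it by the inverse of faceAnchor.transform.
```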
RealityKit scene with the Entity Component System
I'm following the WWDC session on interactive 3D content in Reality Composer Pro and Apple's documentation:
https://developer.apple.com/wwdc24/10102
https://developer.apple.com/documentation/realitykit/implementing-systems-for-entities-in-a-scene#Retrieve-entities-with-an-entity-query

However, this simple code to declare a dummy Component and System has a compile error:

```
/Users/Workspaces/repository/Packages/RealityKitContent/Sources/RealityKitContent/RobotComponent.swift:18:24
Static property 'query' is not concurrency-safe because non-'Sendable' type 'EntityQuery' may have shared mutable state
```

```swift
    // Define a query to return all entities with a MyComponent.
    private static let query = EntityQuery(where: .has(MyComponent.self))

    // Initializer is required. Use an empty implementation if there's no setup needed.
    required init(scene: Scene) { }

    // Iterate through all entities containing a MyComponent.
    func update(context: SceneUpdateContext) {
        for entity in context.entities(
            matching: Self.query,
            updatingSystemWhen: .rendering
        ) {
            // Make per-update changes to each entity here.
        }
    }
}
```

I'm using Xcode beta 3, and my project targets visionOS 2.
1 reply · 0 boosts · 466 views · Jul ’24
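The diagnostic above is Swift strict-concurrency checking objecting to a stored static of a non-Sendable type. One hedged way to keep the cached query and satisfy the checker is nonisolated(unsafe), reasonable here because the query is initialized once and only read afterwards (the enclosing MySystem/MyComponent names follow the post's dummy setup):

```swift
import RealityKit

struct MySystem: System {
    // nonisolated(unsafe) opts this static out of strict-concurrency checking;
    // acceptable here because the query is written once and only ever read.
    nonisolated(unsafe) static let query = EntityQuery(where: .has(MyComponent.self))

    init(scene: Scene) { }

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query,
                                       updatingSystemWhen: .rendering) {
            // Per-update changes to each entity here.
        }
    }
}
```

Making the query an instance property initialized in init(scene:) avoids the annotation entirely, at the cost of one query per system instance.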
ARKit to capture data
What we want to do: use ARKit to capture data (pictures) around an object. Is there a way to:

- Increase the number of pictures captured by default (120) to a higher number without increasing the time required to capture the data? We managed to increase the number of pictures to 1000, but the data capture now lasts 20 minutes, which is too long.
- Capture a video instead of pictures?
- Capture IMU data: how can we use ARKit to capture IMU data around an object?
4 replies · 0 boosts · 386 views · Jul ’24
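On the IMU part of the question above, ARKit's capture APIs don't hand out raw IMU samples, but CoreMotion can run alongside the capture session and record them (a hedged sketch; the 100 Hz rate and main-queue delivery are illustrative choices):

```swift
import CoreMotion

// Sketch: collect device-motion (sensor-fused IMU) samples while capturing.
let motionManager = CMMotionManager()

func startIMUCapture() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 100.0 // 100 Hz, illustrative
    motionManager.startDeviceMotionUpdates(to: .main) { motion, error in
        guard let motion else { return }
        // Rotation rate (rad/s) and user acceleration (g) per sample.
        print(motion.timestamp, motion.rotationRate, motion.userAcceleration)
    }
}

func stopIMUCapture() {
    motionManager.stopDeviceMotionUpdates()
}
```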