Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit tag

344 Posts
Post not yet marked as solved
0 Replies
217 Views
Hi, I want to place an object in 3D world space without using hit testing or plane detection, in iOS Swift code. Please suggest the best method. Currently I take the camera transform and use simd_mul to place the object; it works, but the object is always placed at the centre of the screen. I want to select an (x, y) position in 2D screen coordinates and place the object there. I tried the unprojectPoint function to get the AR world coordinate of the point I touch on the screen. I get x, y, z values, but they are very close to the values from the camera transform, and when I substitute the unprojectPoint values into the camera transform I see no difference in where the object is placed. The code below always places the object at the centre of the screen at the specified depth, but I need to place it at a user-specified (x, y) position on the screen at a given depth, i.e. convert a point from the 2D pixel coordinate system of the renderer to the 3D world coordinate system of the scene.

/* Create a transform with a translation of 0.2 meters in front of the camera. */
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
let transform = simd_mul(view.session.currentFrame.camera.transform, translation)

Reference: https://developer.apple.com/documentation/arkit/arskview/providing_2d_virtual_content_with_spritekit

The code I used to replace the camera transform values with the unprojectPoint result is:

let vpWithZ = SCNVector3(x: 100.0, y: 100.0, z: -1.0)
let worldPoint = sceneView.unprojectPoint(vpWithZ)
var translation = matrix_identity_float4x4
translation.columns.3.z = Float(Depth)
var translation2 = sceneView.session.currentFrame!.camera.transform
translation2.columns.3.x = worldPoint.x
translation2.columns.3.y = worldPoint.y
translation2.columns.3.z = worldPoint.z
let new_transform = simd_mul(translation2, translation)

/* Add the object you want in your project. */
let sphere = SCNSphere(radius: 0.03)
let objectNode = SCNNode(geometry: sphere)
objectNode.position = SCNVector3(x: transform.columns.3.x, y: transform.columns.3.y, z: transform.columns.3.z)

The image below shows an outline of my idea.
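A possible direction, offered only as a hedged sketch rather than anything from the post: unproject the tapped pixel at the near and far clip planes to build a ray in world space, then place the node a fixed distance along that ray. The function name and the assumption that sceneView is an ARSCNView are mine.

import ARKit
import SceneKit

// Hedged sketch: place a node at a tapped screen point (x, y) a chosen distance
// from the camera, without hit testing or plane detection.
func placeSphere(atScreenPoint point: CGPoint, distance: Float, in sceneView: ARSCNView) {
    // Unproject the screen point at the near (z = 0) and far (z = 1) clip planes
    // to build a ray through that pixel in world space.
    let near = sceneView.unprojectPoint(SCNVector3(x: Float(point.x), y: Float(point.y), z: 0))
    let far  = sceneView.unprojectPoint(SCNVector3(x: Float(point.x), y: Float(point.y), z: 1))

    // Normalised ray direction.
    var dir = SCNVector3(x: far.x - near.x, y: far.y - near.y, z: far.z - near.z)
    let len = (dir.x * dir.x + dir.y * dir.y + dir.z * dir.z).squareRoot()
    dir = SCNVector3(x: dir.x / len, y: dir.y / len, z: dir.z / len)

    // World position a fixed distance along the ray (the near plane sits
    // effectively at the camera, so this is roughly "distance from the camera").
    let position = SCNVector3(x: near.x + dir.x * distance,
                              y: near.y + dir.y * distance,
                              z: near.z + dir.z * distance)

    let node = SCNNode(geometry: SCNSphere(radius: 0.03))
    node.position = position
    sceneView.scene.rootNode.addChildNode(node)
}

Calling placeSphere(atScreenPoint: touchLocation, distance: 0.2, in: sceneView) from a tap handler would then drop the sphere 0.2 m along the ray through the touched pixel instead of at the screen centre.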
Posted Last updated
.
Post not yet marked as solved
1 Replies
351 Views
I have the following issue with running two AR services. I am trying to develop an app for my master's thesis. Case 1: I first scan the room using the RoomPlan API. Then I stop the RoomPlan session and start the RealityKit session. When the RealityKit session starts, the camera shows nothing but a black screen. Case 2: After hitting the issue in case 1, I tried a separate test app with two separate screens, one for the RoomPlan API and one for RealityKit, with no relation between them. But as soon as I introduced the RoomPlan API, RealityKit stopped working, showing the same black screen as above. There may be some state changed by the RoomPlan API that prevents RealityKit from accessing the camera. Let me know if you have any idea about this, or any sample. I am using the following stack: Xcode (latest); SwiftUI; latest OS on the Mac mini and iPhone.
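For reference, a minimal sketch of the hand-off being described, assuming a RoomCaptureView drives the scan and a RealityKit ARView is shown afterwards; the function name is a placeholder and this is not a confirmed fix for the black screen.

import ARKit
import RealityKit
import RoomPlan

// Hedged sketch: stop the RoomPlan capture before starting a RealityKit AR session.
func switchToRealityKit(from roomCaptureView: RoomCaptureView, to arView: ARView) {
    // Stop the RoomPlan capture so it releases the camera.
    roomCaptureView.captureSession.stop()

    // Run a fresh world-tracking configuration on the ARView's session.
    let config = ARWorldTrackingConfiguration()
    config.planeDetection = [.horizontal, .vertical]
    arView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
}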
Posted
by shohandot.
Last updated
.
Post not yet marked as solved
1 Replies
192 Views
I'm developing a motion tracking app that requires a real-time view from an iPhone camera to capture a person's body. The motion is mapped to a virtual body, which currently appears overlaid on the person the iPhone sees. However, I want to transmit this real-time 3D virtual body to a different Apple device as an AR app, so that the other user can place it in their environment. Any suggestions on how I can make this 3D model viewable by another user (and keep it updating live from the motion tracking)?
Posted Last updated
.
Post not yet marked as solved
1 Replies
209 Views
When running a modified version of the RoomPlan demo I get frequent "Session Interrupted" conditions. Looking at the traces, I find a status of SensorDidPause on the interruption side of the error, but I am mystified as to how to determine which sensor paused and how to diagnose it. There appears to be a bitmap of available and active sensor devices in the sensor info passed with the session data on the error, and from the error status I can see that one or two of the motion sensors have had a problem. How do I do further diagnostic checks on the cause of the error? I am also curious why the error occurred as soon as the AR session for my test started via the "session.run" call. The documentation in this area seems difficult to find. Attached are traces from running the test and stack dumps for the calls. Please send me guidance on how to proceed. The device in question is an iPad, "iPhone(3)", that is attached to the Mac mini named "Hawkeye"; there is no known direct involvement of the Hawkeye system.
Posted
by mfstanton.
Last updated
.
Post not yet marked as solved
0 Replies
224 Views
I have a RealityKit based app in TestFlight and I see the following crash happening twice. It appears to come from the RealityKit framework itself, in cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect. Has anyone seen this before, and have you discovered what is causing it?

Thread 32 Crashed:
0 libsystem_kernel.dylib 0x00000001cfd81fbc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001f271f680 pthread_kill + 268 (pthread.c:1681)
2 libsystem_c.dylib 0x000000019069ab90 abort + 180 (abort.c:118)
3 Recon3D 0x0000000211b8cd7c cv3d::acv::surfacedetection::DepthMapPlaneDetector::detect(cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u>, float const*>, cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u... + 6136 (DepthMapPlaneDetector.cpp:346)
4 Recon3D 0x0000000211bb0fe4 cv3d::acv::surfacedetection::SurfaceDetector::detectAndTrack(cv3d::acv::surfacedetection::SurfaceDetector::DetectAndTrackWithDepthParams const&) + 844 (SurfaceDetector.cpp:635)
5 Recon3D 0x000000021142fd24 cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect(cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle const&) + 2672 (SurfaceDetection.cpp:645)
6 Recon3D 0x00000002114678ec cv3d::kit::concurrency::detail::ProcessorInputMessageHandlingStrategy<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd::Surf... + 92 (ProcessorInputMessageHandlingStrategy.h:136)
7 Recon3D 0x00000002114675b4 std::__1::__function::__func<void cv3d::kit::concurrency::detail::Processor<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd... + 184 (function.h:356)
8 Recon3D 0x0000000211794330 void std::__1::__invoke_void_return_wrapper<void, true>::__call<std::__1::future<void> cv3d::esn::thread::IWorkQueue::DispatchAsync<void>(std::__1::function<void ()>&&)::'lambda'()&>(std::__1::futu... + 68 (invoke.h:487)
9 Recon3D 0x0000000212387830 dispatch_async_C_CallBack + 76 (GrandCentralDispatchUtil.cpp:94)
10 libdispatch.dylib 0x00000001905e2300 _dispatch_client_callout + 20 (object.m:561)
11 libdispatch.dylib 0x00000001905e9964 _dispatch_lane_serial_drain + 956 (queue.c:3885)
12 libdispatch.dylib 0x00000001905ea3f8 _dispatch_lane_invoke + 432 (queue.c:3976)
13 libdispatch.dylib 0x00000001905eb6a8 _dispatch_workloop_invoke + 1756 (queue.c:4485)
14 libdispatch.dylib 0x00000001905f5004 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:6913)
15 libdispatch.dylib 0x00000001905f4878 _dispatch_workloop_worker_thread + 404 (queue.c:6507)
16 libsystem_pthread.dylib 0x00000001f271b964 _pthread_wqthread + 288 (pthread.c:2629)
17 libsystem_pthread.dylib 0x00000001f271ba04 start_wqthread + 8 (:-1)
Posted Last updated
.
Post not yet marked as solved
1 Replies
173 Views
Hello there, do you know what happens if I call one of the following but the joint is not tracked?

var anchorFromJointTransform: simd_float4x4
The position and orientation of this joint relative to the base joint of the skeleton.

var parentFromJointTransform: simd_float4x4
The transform from the joint to its parent joint's coordinate system.
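Not an answer from the thread, just a hedged sketch of a defensive pattern, assuming visionOS hand tracking (HandAnchor/HandSkeleton): check the joint's isTracked flag before using either transform, and skip the update otherwise. The helper name is made up.

import ARKit
import simd

// Hedged sketch: only use the joint transforms when both the anchor and the joint
// report that they are tracked.
func worldTransform(of jointName: HandSkeleton.JointName, in handAnchor: HandAnchor) -> simd_float4x4? {
    guard handAnchor.isTracked, let skeleton = handAnchor.handSkeleton else { return nil }
    let joint = skeleton.joint(jointName)
    // If the joint itself is not tracked, skip it rather than trusting a stale transform.
    guard joint.isTracked else { return nil }
    // Compose: origin <- anchor <- joint.
    return handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
}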
Posted
by kentvchr.
Last updated
.
Post not yet marked as solved
1 Replies
190 Views
Hello Community, I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version-2. In iOS 16, when using RoomPlan version-1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version-2, the stairs are no longer visible. Despite thorough investigation, I couldn't find any option within the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part. Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue. Thank you in advance for any assistance you can provide. Best regards
Posted
by Ramneet.
Last updated
.
Post not yet marked as solved
3 Replies
440 Views
When I call queryDeviceAnchor in my Billboard system I get transform updates, but I'm unsure how to process them (similar to the Diorama sample app). Is it a bug that I receive these updates? The documentation says that ARKit data is only provided in a full space, so I would expect this not to work at all. But if that is the case, why am I getting deviceAnchor values in this situation?
Posted
by tvg_123.
Last updated
.
Post not yet marked as solved
1 Replies
281 Views
I've recently been working on a visionOS app that uses Core ML to identify specific body parts and display a window with information about the identified body part. Since access to Vision Pro's cameras is blocked, I'm using an iPhone to perform the image classification and then sending the label to the headset using Multipeer Connectivity. I'd like to display a volume once the user selects a body part. Could my iPhone return enough spatial information for me to fully take advantage of Vision Pro's mixed reality capabilities?
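As a point of reference, a minimal sketch of the iPhone-to-headset label hand-off described above, assuming an already connected MCSession; the function name is a placeholder.

import MultipeerConnectivity

// Hedged sketch: send the Core ML classification label to every connected peer.
func sendClassificationLabel(_ label: String, over session: MCSession) {
    guard !session.connectedPeers.isEmpty,
          let data = label.data(using: .utf8) else { return }
    do {
        // Reliable delivery so the headset never misses a selection.
        try session.send(data, toPeers: session.connectedPeers, with: .reliable)
    } catch {
        print("Failed to send label: \(error)")
    }
}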
Posted Last updated
.
Post not yet marked as solved
1 Replies
194 Views
I was heavily reliant on ARGeoAnchor in my iOS application, and when I started porting the app to visionOS I found there is no equivalent there, which is a huge bummer and a showstopper for launching on Apple Vision Pro. Is there a technical limitation that kept this great piece of functionality from being ported? Can we expect it to be added in a future visionOS release?
Posted
by XSight.
Last updated
.
Post not yet marked as solved
3 Replies
1.3k Views
Hi, I am not sure what is going on... I have been working on this model for a while in Reality Composer and had no problem testing it that way; it always worked perfectly. So I imported the file into a brand new Xcode project: I created a new AR app and used SwiftUI. I actually did it twice, and also tested the version Apple ships with the box. In Apple's version, the app appears, but the part where it tries to detect planes never shows up. So I am confused. I found a question that mentions the error messages I am getting, but I am not sure how to get around it: https://developer.apple.com/forums/thread/691882

//
//  ContentView.swift
//  AppToTest-02-14-23
//
//  Created by M on 2/14/23.
//

import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        return ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        // Load the "Box" scene from the "Experience" Reality File
        //let boxAnchor = try! Experience.loadBox()
        let anchor = try! MyAppToTest.loadFirstScene()

        // Add the box anchor to the scene
        arView.scene.anchors.append(anchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

#if DEBUG
struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
#endif

This is what I get at the bottom:

2023-02-14 17:14:53.630477-0500 AppToTest-02-14-23[21446:1307215] Metal GPU Frame Capture Enabled
2023-02-14 17:14:53.631192-0500 AppToTest-02-14-23[21446:1307215] Metal API Validation Enabled
2023-02-14 17:14:54.531766-0500 AppToTest-02-14-23[21446:1307215] [AssetTypes] Registering library (/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
2023-02-14 17:14:54.716866-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/suFeatheringCreateMergedOcclusionMask.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.743580-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arKitPassthrough.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.744961-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/drPostAndComposition.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.745988-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arSegmentationComposite.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.747245-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute0.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.748750-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute1.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.749140-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute2.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761189-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute3.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761611-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute4.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761983-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute5.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.762604-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute6.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.763575-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute7.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.764859-0500 AppToTest-02-14-23[21446:1307215] [Foundation.Serialization] Json Parse Error line 18: Json Deserialization; unknown member 'EnableARProbes' - skipping.
2023-02-14 17:14:54.764902-0500 AppToTest-02-14-23[21446:1307215] [Foundation.Serialization] Json Parse Error line 20: Json Deserialization; unknown member 'EnableGuidedFilterOcclusion' - skipping.
2023-02-14 17:14:55.531748-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534559-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534633-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534680-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534733-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534777-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534825-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534871-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534955-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:56.207438-0500 AppToTest-02-14-23[21446:1307383] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [2]
2023-02-14 17:17:15.741931-0500 AppToTest-02-14-23[21446:1307414] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [1]
2023-02-14 17:22:07.075990-0500 AppToTest-02-14-23[21446:1308137] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [1]
Posted
by popbee.
Last updated
.
Post marked as solved
2 Replies
378 Views
I am working with MeshAnchors and am having trouble getting to the classification of the triangles/faces. This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there? I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
Posted
by Todd2.
Last updated
.
Post not yet marked as solved
0 Replies
264 Views
I would like to save the depth map from ARDepthData as a .tiff, but I notice my output TIFF distances are incorrect. Objects that are close are reported to be slightly farther away, and walls that are around 4 meters away from me have a recorded value of 2 meters. I am using this code to write the TIFF:

import UIKit

// Save method
extension CVPixelBuffer {
    func saveDepthMapToTIFF(to path: URL) {
        let ciImage = CIImage(cvPixelBuffer: self)
        let context = CIContext()
        do {
            try context.writeTIFFRepresentation(
                of: ciImage,
                to: path,
                format: .Lf,
                colorSpace: CGColorSpaceCreateDeviceGray()
            )
        } catch {
            print("Failed to write TIFF: \(error)")
        }
    }
}

// Calling the save
arFrame.sceneDepth?.depthMap.saveDepthMapToTIFF(to: depthMapPath)

I am reading the file like this in Python:

import tifffile
depth_map = tifffile.imread("test.tiff")
plt.imshow(depth_map)
plt.colorbar()

which creates this image: The farthest parts of the room should be around 4 meters, not 2. The dark blue spot on the lower right is closer than half a meter away. Notably, the depth map contains distances from the camera plane to each region, not the distance from the camera sensor to the region; even correcting for this, though, the depth map remains about the same. Is there an issue with how I am saving the depth image? Is there a scale factor or format error?
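One way to narrow this down, offered only as a hedged sketch: sample the raw sceneDepth buffer directly (its values are 32-bit floats in metres) and compare a known pixel against what tifffile reads back, which separates a capture problem from a TIFF writing or reading problem. The function name is made up.

import CoreVideo

// Hedged sketch: read the centre pixel of the depth buffer straight from memory,
// bypassing CIImage/TIFF entirely.
func printCenterDepth(of depthMap: CVPixelBuffer) {
    // sceneDepth depth maps are one-channel 32-bit float, in metres.
    assert(CVPixelBufferGetPixelFormatType(depthMap) == kCVPixelFormatType_DepthFloat32)

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return }

    // Pointer to the middle row, then index the middle column.
    let rowPointer = base.advanced(by: (height / 2) * bytesPerRow)
        .assumingMemoryBound(to: Float32.self)
    print("centre depth (m):", rowPointer[width / 2])
}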
Posted Last updated
.
Post not yet marked as solved
0 Replies
178 Views
Hello. I'm developing an app using ARKit and RealityKit. The purpose of the app is to scan an apartment and put furniture next to the walls. It works well, but if the AR session runs for more than about 3 minutes, at some point the app crashes. According to the crash report it's not related to my code. I'm attaching the crash report (company data is hidden). Any help is appreciated. Thanks in advance.
Posted
by volov3ly.
Last updated
.
Post not yet marked as solved
1 Replies
239 Views
I see example code converting the results of a SpatialTap to a SIMD3 location, for example from the WWDC session Meet ARKit for spatial computing:

let location3D = value.convert(value.location3D, from: .global, to: .scene)

What I really want is a simd_float4x4 that includes the orientation of the surface that the tap gesture/cast collided with. My goal is to place an object with its Y axis along the normal of the surface that was tapped. For example, in the referenced WWDC session they create a CollisionComponent from the MeshAnchor data. If that mesh data is covering a curved couch cushion, I would like the normal from that curved cushion (i.e., from the closest triangle approximating it). Is this possible? My planned fallback is to use only planes as collision surfaces for tap gestures, extract the tap gesture value's entity (which I am hoping is the plane), and grab its transform for the orientation information. I am hoping Apple has a simple function call that is more general than my fallback approach.
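For what it's worth, a hedged sketch of the fallback described above, not a general solution: target the tap gesture at entities and read the tapped entity's world transform, which carries the plane's orientation. The view name and the assumption that the entities carry CollisionComponent and InputTargetComponent are mine.

import SwiftUI
import RealityKit

// Hedged sketch: grab the tapped entity's full 4x4 world transform.
struct TapToOrientView: View {
    var body: some View {
        RealityView { content in
            // Add entities with CollisionComponent + InputTargetComponent here
            // so the gesture below can hit them.
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // The entity the tap hit (hopefully the plane entity).
                    let tapped = value.entity
                    // World transform, including the surface orientation.
                    let worldTransform = tapped.transformMatrix(relativeTo: nil)
                    print("tapped entity world transform:", worldTransform)
                }
        )
    }
}

The rotation part of that matrix can then be used to align a placed object's Y axis with the plane's normal.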
Posted
by Todd2.
Last updated
.
Post not yet marked as solved
2 Replies
494 Views
Hi, what are the limitations and capabilities of visionOS? I cannot find answers to the questions I have.

Let's say you have some USDZ files stored in a cloud service; there are so many of them that the app would be huge if you put them in assets, so you want to fetch the one you are interested in and show it while the app is running. Is it possible to load USDZ files at runtime from the network?

Is there a limit to how many objects can be visible at once? Let's say I am in an open space with no walls and I want to place 100 3D objects somewhere in space. Is it possible? What if I placed 500, or 1000?

Is there a way to save the anchor point of an object? I want to open the app again and have an object in the same place I left it. I would like to arrange my space and have objects always in the same spots.

How does the OS behave if objects are in different rooms? Is it possible to walk around, visit different rooms, and have objects anchored there? Would they behave like real objects?

Is it possible to color a plane? Let's say there is a wall and it's black; I want this wall to be orange. Is that possible?
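A hedged sketch addressing only the first question (runtime USDZ loading): download the file, then load it with RealityKit from the local URL. The function name and the file handling are placeholders, not a statement about any visionOS-specific limits.

import Foundation
import RealityKit

// Hedged sketch: fetch a USDZ over the network and load it as an Entity.
func loadRemoteUSDZ(from remoteURL: URL) async throws -> Entity {
    // Download to a temporary file first; RealityKit loads models from file URLs.
    let (tempURL, _) = try await URLSession.shared.download(from: remoteURL)

    // Give the file a .usdz extension so the loader recognises the format.
    let localURL = tempURL.deletingPathExtension().appendingPathExtension("usdz")
    try? FileManager.default.removeItem(at: localURL)
    try FileManager.default.moveItem(at: tempURL, to: localURL)

    // Synchronous load from the downloaded file (call off the main thread).
    return try Entity.load(contentsOf: localURL)
}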
Posted
by Coderian.
Last updated
.
Post not yet marked as solved
0 Replies
176 Views
I am working on an AR app on iOS. I found this issue and can't find a quick solution at the moment, nor do I have any insight into what is happening.

Context: The AR app contains a 3D model of a sphere that is cut in half. The models are created with 3D modeling software.

Additional context: The sphere model is placed inside the environment. When the user enters the sphere, the sphere materials are set to video materials containing the jungle-like content visible in the video. The vertical center of the sphere is the floor on which the user moves and looks around. Each ARPlaneAnchor is connected to an AnchorEntity with its ModelEntity (plane) for visualization and sphere placement.

The bug (video): https://www.youtube.com/shorts/58860U1IkhM
As the user moves inside the sphere, parts of the video material start to show square pieces of the plane camera feed (background).

What has been tried:
- Changing the material type on the floor plane (a physically based material seems a little less bad).
- Changing the culling of the materials (no effect); the issue is not related to z-fighting.
- When a solid material is used for the floor plane, the flickering is not visible. When a solid material with alpha is used, both are visible (the alpha material and the flickering background).
- Editing the .usdz file (does not work at all), changing shadows and other properties.
- Checking the .usdz file with the usdz tools (usdzconvert: all tests passed, and fixing opacity did not help).
- Changing the video type (.mp4 to .mov).
- Google & ChatGPT.

Similar issues:
- How do I eliminate flickering ...
- Plane entity's grounding shadow flickering in RealityKit
- AR flickering

Observations:
- The flickering background is always tied to the floor plane (can provide screenshots). This seems to highlight either a point of contact with the 3D floor plane or something else.
- The flickering happens at certain angles and positions; it does not happen all the time in all positions, which is weird.
- This problem didn't happen with the old sphere 3D file. The difference is a new 3D-generated floor plane WHICH IS DISABLED. It seems that even though it is disabled, it is still being used somehow.
- I had a similar issue where the floor planes would change color (the color tone would go lighter and darker). That issue was solved by disabling automatic shadow rendering; shadow rendering inside an object does not seem to work properly. The main difference is that the previous issue changed color brightness rather than transparency, and the whole plane changed color, not just part of it.
- Logging all the available planes shows only the expected planes (1-3, of which 1-2 are floor planes and 1 is an image plane).

Any ideas, solutions, or feedback are welcome.
SO post: https://stackoverflow.com/questions/78139934/realitykit-plane-flickering-bug-in-model-with-videomaterial
Thank you for your time.
Posted
by MJ_111.
Last updated
.
Post not yet marked as solved
0 Replies
187 Views
I'm working on the following problem: for a measurement application I want to take a picture of something lying on the ground. Given that I will have the floor plane detected, I plan to raycast from the four corners of the screen; given that the raycasts land on this plane, I want to use those coordinates to do a perspective transform (warp) of the camera image onto the new coordinates. This way I should be able to perform pixel-per-cm measurements. The problem I have is that the screen coordinates don't seem to correspond to the camera-frame coordinates, and I'm not sure how to go from one to the other.
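A hedged sketch of the corner raycast part of this, assuming a RealityKit ARView with horizontal plane detection running; the function name is made up and error handling is omitted.

import ARKit
import RealityKit

// Hedged sketch: raycast from the four screen corners against detected horizontal
// plane geometry and return the world-space hit points (corners that miss are dropped).
func floorPointsAtScreenCorners(in arView: ARView) -> [SIMD3<Float>] {
    let size = arView.bounds.size
    let corners = [CGPoint(x: 0, y: 0),
                   CGPoint(x: size.width, y: 0),
                   CGPoint(x: size.width, y: size.height),
                   CGPoint(x: 0, y: size.height)]

    return corners.compactMap { corner in
        arView.raycast(from: corner, allowing: .existingPlaneGeometry, alignment: .horizontal)
            .first
            .map { result in
                SIMD3<Float>(result.worldTransform.columns.3.x,
                             result.worldTransform.columns.3.y,
                             result.worldTransform.columns.3.z)
            }
    }
}

For the screen-to-camera-image mapping itself, ARFrame.displayTransform(for:viewportSize:) returns the affine transform between normalized captured-image coordinates and normalized view coordinates, which is usually the missing link when view points need to be related to pixels in the camera frame.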
Posted
by i008.
Last updated
.