Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Post

Replies

Boosts

Views

Activity

RealityKit framework crash: cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect
I have a RealityKit-based app in TestFlight and I have seen the following crash happen twice. It appears to come from the RealityKit framework itself, in cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect. Has anyone seen this before, and have you discovered what is causing it?
Thread 32 Crashed:
0 libsystem_kernel.dylib 0x00000001cfd81fbc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001f271f680 pthread_kill + 268 (pthread.c:1681)
2 libsystem_c.dylib 0x000000019069ab90 abort + 180 (abort.c:118)
3 Recon3D 0x0000000211b8cd7c cv3d::acv::surfacedetection::DepthMapPlaneDetector::detect(cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u>, float const*>, cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u... + 6136 (DepthMapPlaneDetector.cpp:346)
4 Recon3D 0x0000000211bb0fe4 cv3d::acv::surfacedetection::SurfaceDetector::detectAndTrack(cv3d::acv::surfacedetection::SurfaceDetector::DetectAndTrackWithDepthParams const&) + 844 (SurfaceDetector.cpp:635)
5 Recon3D 0x000000021142fd24 cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect(cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle const&) + 2672 (SurfaceDetection.cpp:645)
6 Recon3D 0x00000002114678ec cv3d::kit::concurrency::detail::ProcessorInputMessageHandlingStrategy<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd::Surf... + 92 (ProcessorInputMessageHandlingStrategy.h:136)
7 Recon3D 0x00000002114675b4 std::__1::__function::__func<void cv3d::kit::concurrency::detail::Processor<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd... + 184 (function.h:356)
8 Recon3D 0x0000000211794330 void std::__1::__invoke_void_return_wrapper<void, true>::__call<std::__1::future<void> cv3d::esn::thread::IWorkQueue::DispatchAsync<void>(std::__1::function<void ()>&&)::'lambda'()&>(std::__1::futu... + 68 (invoke.h:487)
9 Recon3D 0x0000000212387830 dispatch_async_C_CallBack + 76 (GrandCentralDispatchUtil.cpp:94)
10 libdispatch.dylib 0x00000001905e2300 _dispatch_client_callout + 20 (object.m:561)
11 libdispatch.dylib 0x00000001905e9964 _dispatch_lane_serial_drain + 956 (queue.c:3885)
12 libdispatch.dylib 0x00000001905ea3f8 _dispatch_lane_invoke + 432 (queue.c:3976)
13 libdispatch.dylib 0x00000001905eb6a8 _dispatch_workloop_invoke + 1756 (queue.c:4485)
14 libdispatch.dylib 0x00000001905f5004 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:6913)
15 libdispatch.dylib 0x00000001905f4878 _dispatch_workloop_worker_thread + 404 (queue.c:6507)
16 libsystem_pthread.dylib 0x00000001f271b964 _pthread_wqthread + 288 (pthread.c:2629)
17 libsystem_pthread.dylib 0x00000001f271ba04 start_wqthread + 8 (:-1)
0
0
537
Mar ’24
Multi-User AR
I'm developing a motion tracking app that requires a real-time view from an iPhone camera to capture a person's body. The motion is mapped to a virtual body. Currently this appears overlaid on the person that the iPhone sees. However, I want to transmit this real-time 3D virtual body to a different Apple device, as an AR app, so that the other user can place it in their environment. Any suggestions on how I can make this 3D model viewable by another user (and keep it updating live based on the motion tracking)?
1
0
472
Mar ’24
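For the multi-user AR question above, one common approach is to keep the body tracking on the iPhone and stream only the skeleton data: serialize the ARBodyAnchor's joint transforms each frame, send them over MultipeerConnectivity, and let the receiving device drive its own copy of the rigged model anchored wherever its user placed it. A minimal, hypothetical sketch of the sending side (BodyPose and BodyStreamer are illustrative names, and an already-connected MCSession is assumed):

import ARKit
import MultipeerConnectivity

// Illustrative payload: the body anchor's root transform plus one 4x4 matrix per joint.
struct BodyPose: Codable {
    var rootTransform: [Float]      // 16 floats
    var jointTransforms: [[Float]]  // 16 floats per joint, in ARSkeleton definition order
}

final class BodyStreamer: NSObject, ARSessionDelegate {
    private let mcSession: MCSession

    init(mcSession: MCSession) {
        self.mcSession = mcSession
        super.init()
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let body = anchors.compactMap({ $0 as? ARBodyAnchor }).first,
              !mcSession.connectedPeers.isEmpty else { return }

        let pose = BodyPose(
            rootTransform: body.transform.flattened,
            jointTransforms: body.skeleton.jointModelTransforms.map { $0.flattened }
        )
        if let data = try? JSONEncoder().encode(pose) {
            // .unreliable keeps latency low: drop stale poses instead of queuing them.
            try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .unreliable)
        }
    }
}

private extension simd_float4x4 {
    var flattened: [Float] {
        [columns.0, columns.1, columns.2, columns.3].flatMap { [$0.x, $0.y, $0.z, $0.w] }
    }
}

On the receiving device, decode the pose and apply the joint transforms to the same rig (for example a RealityKit entity hierarchy), parented to whatever anchor the second user placed in their space.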
RoomPlan Framework v2 - Stairs missing
Hello Community, I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version 2. In iOS 16, when using RoomPlan version 1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version 2, the stairs are no longer visible. Despite thorough investigation, I couldn't find any option within the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part. Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app's functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue. Thank you in advance for any assistance you can provide. Best regards
2
1
547
Mar ’24
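A small diagnostic may help narrow down the RoomPlan question above, assuming stairs are still reported as CapturedRoom.Object values with category .stairs (as they were in RoomPlan on iOS 16): log every category in the finished CapturedRoom, so you can tell whether iOS 17 stopped detecting stairs or whether they are detected but dropped somewhere in your rendering path.

import RoomPlan

// Call this with the CapturedRoom delivered to your RoomCaptureSessionDelegate /
// RoomCaptureViewDelegate once processing finishes.
func logDetectedCategories(in room: CapturedRoom) {
    for object in room.objects {
        print("Detected object with category: \(object.category), dimensions: \(object.dimensions)")
    }
    let stairCount = room.objects.filter {
        if case .stairs = $0.category { return true }
        return false
    }.count
    print("Stairs detected: \(stairCount)")
}

If the count is zero on iOS 17 for a scan that used to include stairs, that points at a detection change in the framework rather than anything in your code.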
Using CoreML in VisionOS with Multipeer Connectivity
I've recently been working on a visionOS app that uses CoreML to identify specific body parts and display a window with information about the identified body part. Since the use of Vision Pro's cameras is blocked, I'm using an iPhone to perform the image classification and then sending the label to the headset using Multipeer Connectivity. I'd like to display a volume once the user selects a body part. Could my iPhone return enough spatial information for me to fully take advantage of Vision Pro's mixed reality capabilities?
1
0
798
Mar ’24
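On the spatial-information half of the question above: the classification label coming from the iPhone is just a string, so any placement in the user's space still has to come from the Vision Pro side (a tap, an anchor, or simply a volumetric window). A hedged sketch of the visionOS side that opens a volumetric window for a received label (identifiers such as "bodyPartVolume" and the example value are made up):

import SwiftUI

@main
struct BodyPartsApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        // Volumetric window keyed by the label received over Multipeer Connectivity.
        WindowGroup(id: "bodyPartVolume", for: String.self) { $label in
            BodyPartVolumeView(label: label ?? "unknown")
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.4, height: 0.4, depth: 0.4, in: .meters)
    }
}

struct ContentView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Show last classified part") {
            // In the real app this value would come from your MCSessionDelegate callback.
            openWindow(id: "bodyPartVolume", value: "femur")
        }
    }
}

struct BodyPartVolumeView: View {
    let label: String
    var body: some View {
        Text(label).font(.largeTitle)
    }
}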
ARGeoAnchor is missing in visionOS's ARKit
I was heavily reliant on ARGeoAnchor in my iOS application, and when I started porting the app to visionOS I found there is no equivalent there. This is a huge bummer and a showstopper for launching on Apple Vision Pro. Is there a technical limitation that prevented this great piece of functionality from being ported? Can we expect it to be added in a future visionOS release?
1
0
400
Mar ’24
Getting to MeshAnchor.MeshClassification from MeshAnchor?
I am working with MeshAnchors, and I am having trouble getting to the classification of the triangles/faces. This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there? I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any struct with this as a property.
2
0
897
Mar ’24
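For the MeshAnchor question above: the classifications GeometrySource is the classification data. It wraps a Metal buffer holding one raw MeshAnchor.MeshClassification value per face, so you read it by indexing into the buffer. A sketch, assuming the per-face values are stored as single bytes (mirroring how ARMeshGeometry's per-face classification is read on iOS):

import ARKit

func classification(ofFace faceIndex: Int, in geometry: MeshAnchor.Geometry) -> MeshAnchor.MeshClassification {
    guard let classifications = geometry.classifications,
          faceIndex < classifications.count else {
        return MeshAnchor.MeshClassification.none
    }
    // Step to this face's entry in the underlying Metal buffer.
    let pointer = classifications.buffer.contents()
        .advanced(by: classifications.offset + classifications.stride * faceIndex)
    let rawValue = pointer.assumingMemoryBound(to: UInt8.self).pointee
    return MeshAnchor.MeshClassification(rawValue: Int(rawValue)) ?? MeshAnchor.MeshClassification.none
}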
Dispatch queue: SurfaceDetectionNode_Scheduler crash
Hello. I'm developing an app using ARKit and RealityKit. The purpose of the app is to scan an apartment and put furniture next to the walls. It works well, but if the AR session runs for more than about 3 minutes, at some point the app crashes. According to the crash report, it's not related to my code. I'm attaching the crash report (company data is hidden). Any help is appreciated. Thanks in advance.
1
0
388
Mar ’24
SpatialTapGesture and collision surface's normal?
I see example code converting the results of a SpatialTap to a SIMD3 location. For example, from the WWDC session Meet ARKit for spatial computing:
let location3D = value.convert(value.location3D, from: .global, to: .scene)
What I really want is a simd_float4x4 that includes the orientation of the surface that the tap gesture/cast collided with. My goal is to place an object with its Y-axis along the normal of the surface that was tapped. For example, in the referenced WWDC session, they create a CollisionComponent from the MeshAnchor data. If that mesh data is covering a curved couch cushion, I would like the normal from that curved cushion (i.e., the closest triangle approximating it). Is this possible? My planned fallback is to use only planes as collision surfaces for tap gestures, extract the tap gesture value's entity (which I am hoping is the plane), and grab its transform for the orientation information. I am hoping Apple has a simple function call that is more general than my fallback approach.
1
0
583
Mar ’24
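For the fallback half of the question above, the orientation math itself is small once a world-space normal is available from any source (for example, RealityKit's collision cast hits expose both position and normal). A minimal sketch that rotates an entity's +Y axis onto that normal:

import RealityKit
import simd

func place(_ entity: Entity, at position: SIMD3<Float>, alignedTo normal: SIMD3<Float>, in parent: Entity) {
    let up = SIMD3<Float>(0, 1, 0)
    // Shortest-arc rotation taking +Y onto the surface normal.
    // (Degenerate when the normal is exactly -Y; handle that case separately if it can occur.)
    let orientation = simd_quatf(from: up, to: normalize(normal))
    entity.transform = Transform(scale: .one, rotation: orientation, translation: position)
    parent.addChild(entity)
}

Getting a per-triangle normal straight out of a SpatialTapGesture value is the part I have not seen a single API call for; a scene raycast against the mesh-based CollisionComponent is one way to obtain the normal that feeds this function.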
ARKit mapping from screen coordinates to Camera frame coordinates. Measurments application
I'm working on the following problem: for a measurement application, I want to take a picture of something lying on the ground and, given that I will have the floor plane detected, I plan to raycast 4 points from the corners of the screen. Given that the raycasts land on this plane, I want to use those coordinates to do a perspective transform (warp) of the camera image onto the new coordinates. This way I should be able to perform pixel-per-cm measurements. The problem I have is that the screen coordinates don't seem to reflect the camera-frame coordinates, and I'm not sure how to go from one to the other.
0
0
383
Mar ’24
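For the mapping question above, ARKit's displayTransform(for:viewportSize:) is the missing piece: it is the affine transform between normalized image coordinates and normalized view coordinates, so inverting it takes a screen point back into the camera image. A sketch, assuming the camera feed is rendered full screen in the usual way:

import ARKit
import UIKit

func cameraImagePoint(for screenPoint: CGPoint,
                      frame: ARFrame,
                      viewSize: CGSize,
                      orientation: UIInterfaceOrientation) -> CGPoint {
    // 1. Normalize the view point to [0, 1].
    let normalizedView = CGPoint(x: screenPoint.x / viewSize.width,
                                 y: screenPoint.y / viewSize.height)
    // 2. Undo the display transform to get normalized image coordinates.
    let viewToImage = frame.displayTransform(for: orientation, viewportSize: viewSize).inverted()
    let normalizedImage = normalizedView.applying(viewToImage)
    // 3. Scale by the captured image resolution to get pixel coordinates.
    let resolution = frame.camera.imageResolution
    return CGPoint(x: normalizedImage.x * resolution.width,
                   y: normalizedImage.y * resolution.height)
}

With the four corner pixels in image space and the four raycast hits on the floor plane in world space, the perspective warp for pixel-per-cm measurement is then a standard four-point homography fit.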
Grab frames in Vision Pro using ARFrame
I'd like to grab the current camera frame in visionOS. I have a Swift file (am new to Swift) that looks like this:

import ARKit
import SwiftUI

class ARSessionManager: NSObject, ObservableObject, ARSessionDelegate {
    var arSession: ARSession

    override init() {
        arSession = ARSession()
        super.init()
        arSession.delegate = self
    }

    func startSession() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        arSession.run(configuration)
    }

    // ARSessionDelegate method to capture frames
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Process the frame, e.g., capture image data
    }
}

and I get errors including "Cannot find type 'ARSessionDelegate' in scope". Help? Is ARFrame called something different for Vision Pro?
1
0
753
Mar ’24
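On the question above: visionOS ships its own ARKit API surface, so ARSession, ARSessionDelegate, and ARFrame from iOS simply do not exist there, which is why the type cannot be found. You run an ARKitSession with data providers and consume async update streams instead, and (as far as I know, as of this post) none of those providers hands back camera pixel buffers the way ARFrame.capturedImage does on iOS. A minimal sketch of the visionOS session pattern, with WorldTrackingProvider as an example provider:

import ARKit
import SwiftUI

@MainActor
final class SessionManager: ObservableObject {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async {
        do {
            try await session.run([worldTracking])
        } catch {
            print("Failed to start ARKitSession: \(error)")
        }
    }
}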
Multiple active AR Sessions in RoomPlan application, who creates them?
I am running a modified RoomPlan app in my test environment and I get two ARSessions active, sometimes more. It appears that the first one is created by SceneKit, because it is related to ARSCNView. Who controls that, and what gets processed through it? I noticed that I get a lot of Session Interruptions from Sensor Failure when I am doing world tracking, and the first one happens almost immediately. When I get the room capture delegates fired up, I start receiving images via a second session that is collecting images. How do I tell which session is the SceneKit session and which one is the RoomCapture session on the fly when data comes through a delegate? Is there a difference in the object descriptor that I can use as a differentiator? Relying on the address of the ARSession buffer being different is okay if you get your timing right. It wasn't clear from any of the documentation that there would be TWO or more ARSessions delivering data through the delegates. The books on the use of ARKit are not much help in determining the partition of responsibilities between the origins, and the highly fragmented documentation in the developer library gives no clear delineation of which data is delivered through which delegate. Can someone give me some guidance here? Are there sources for CLEAR documentation of what is delivered via which delegate for the various interfaces?
2
0
734
Mar ’24
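One way to answer the "which session is this?" part of the post above is identity comparison against the sessions you can actually reach, rather than eyeballing buffer addresses. A sketch, assuming the iOS 17 RoomCaptureSession.arSession property is available in your setup:

import ARKit
import RoomPlan

final class SessionInspector: NSObject, ARSessionDelegate {
    weak var arSCNView: ARSCNView?
    weak var roomCaptureView: RoomCaptureView?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if let sceneKitSession = arSCNView?.session, session === sceneKitSession {
            print("Frame from the ARSCNView-owned session")
        } else if let roomSession = roomCaptureView?.captureSession.arSession, session === roomSession {
            print("Frame from the RoomCapture session")
        } else {
            print("Frame from an unidentified session: \(ObjectIdentifier(session))")
        }
    }
}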
ARKit confidence level and precision 3D model (BIM)
Hi, I want to develop an AR app for construction sites in which I need to verify the calibration quality of the 3D model against the plane. For that I have already retrieved information like TrackingState, the point cloud, and the confidence map. I would like to know whether ConfidenceLevel, which appears to be an enumeration, is available, or whether I need to analyze the point cloud to build my own confidence level. I would also like to know how I can determine the precision of the 3D map relative to real life.
0
0
285
Feb ’24
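On the first half of the question above: ARConfidenceLevel is available directly. With the .sceneDepth frame semantic enabled, each ARFrame carries an ARDepthData whose confidenceMap stores one ARConfidenceLevel raw value per depth pixel (a single byte: 0 = low, 1 = medium, 2 = high), so there is no need to derive confidence from the point cloud yourself. A sketch that summarizes how much of the current depth map is high confidence (the absolute real-world precision question has no direct API that I am aware of):

import ARKit

func highConfidenceRatio(of frame: ARFrame) -> Float? {
    guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return nil }

    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return nil }
    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)

    var highCount = 0
    for y in 0..<height {
        let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width where row[x] == UInt8(ARConfidenceLevel.high.rawValue) {
            highCount += 1
        }
    }
    return Float(highCount) / Float(width * height)
}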
LiDAR camera low FPS problem
When I use LiDAR, AVCaptureDeviceTypeBuiltInLiDARDepthCamera is used. AVCaptureDeviceTypeBuiltInLiDARDepthCamera is a device that consists of two cameras, one LiDAR and one YUV. I found that the LiDAR data is 30 fps, which also forces the YUV data to 30 fps. But I really need 240 fps YUV data. Is there a way to use the 30 fps LiDAR together with a 240 fps YUV camera? Any reply would be appreciated.
1
0
634
Feb ’24
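A quick way to see what the hardware will actually offer for the question above is to enumerate the LiDAR device's formats and print the video frame-rate ranges next to the supported depth formats; if no depth-capable format lists anything near 240 fps, a separate high-frame-rate session without synchronized depth is likely the only route to 240 fps YUV. A small sketch:

import AVFoundation

func dumpLiDARFormats() {
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                               for: .video,
                                               position: .back) else {
        print("No LiDAR depth camera on this device")
        return
    }
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let fpsRanges = format.videoSupportedFrameRateRanges
            .map { "\($0.minFrameRate)-\($0.maxFrameRate)" }
            .joined(separator: ", ")
        print("\(dims.width)x\(dims.height) fps: [\(fpsRanges)] " +
              "depth formats: \(format.supportedDepthDataFormats.count)")
    }
}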
ARKit image detection specs & expectations on visionOS
Hi, I have code that uses ImageTrackingProvider. I am experimenting with glyphs of various complexity and structure to understand which ones are superior for recognition. Due to the absence of a color printer, I am mostly experimenting with monochrome glyphs as well as some colored paper squares. I am getting mixed results and would like to validate whether what I got are the expected results for the current capabilities of ARKit and Vision Pro, or if there is still an opportunity for improvement by selecting different glyphs. So far, I have used a colored square of size 5x5 cm, as well as the two glyphs provided below (ARKit Glyph and Abstract Glyph). The ARKit Glyph is not recognizable by ARKit or Vision Pro at all, no matter the lighting conditions or the angles from which I view it. The Abstract Glyph is recognizable consistently at a 90-degree angle, and sometimes at other angles too. The maximum distance at which I was able to detect it was around 15 cm, maybe less. I am really curious whether there is any specification I can check against to understand whether my glyphs are good or not, and at what maximum distance such glyphs can be recognized if they are 5x5 cm in size. I am also curious whether ARKit is capable of recognizing images of 5x5 cm size at a distance between 2 and 3 meters, and if so, how I should prepare the glyph for such requirements. Thanks in advance, Nikita
PS: I am skipping the question about the yaw angle of the image, as well as the angle between the image's normal and the camera view, but I guess they also have an impact on the ability to recognize the original image.
2
0
517
Feb ’24
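There is no published distance-versus-size table that I know of for the question above, but a back-of-envelope angular-size check helps set expectations: a flat target of side s viewed from distance d subtends roughly 2 * atan(s / 2d). A 5x5 cm glyph at 15 cm fills about 19 degrees of the camera's view, while at 2.5 m it drops to about 1.1 degrees, i.e. only a handful of pixels per feature, which by itself makes reliable detection of a 5x5 cm image at 2 to 3 meters a stretch regardless of the glyph design. A tiny helper for the arithmetic:

import Foundation

// Angular size (in degrees) of a flat square target of side `side`, seen from `distance` (both in meters).
func angularSizeDegrees(side: Double, distance: Double) -> Double {
    2 * atan(side / (2 * distance)) * 180 / .pi
}

// angularSizeDegrees(side: 0.05, distance: 0.15) ≈ 18.9
// angularSizeDegrees(side: 0.05, distance: 2.5)  ≈ 1.15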
Entity.load vs Entity.loadModel
let apple = try Entity.load(named: "apple", in: realityKitContentBundle)
works, but
let apple = try Entity.loadModel(named: "apple", in: realityKitContentBundle)
does not work, i.e. error.localizedDescription = Failed to find resource with name "apple" in bundle. I am unsure what is causing the problem. apple.usda was created in Reality Composer Pro from primitives and has a single apple object (no root). When I load with Entity.load and print apple, I get:
▿ 'apple' : Entity, children: 1
  ⟐ Transform
  ⟐ SynchronizationComponent
  ▿ 'apple' : ModelEntity
    ⟐ ModelComponent
    ⟐ Transform
    ⟐ CollisionComponent
    ⟐ PhysicsBodyComponent
    ⟐ SynchronizationComponent
This nested hierarchy seems redundant to me; is such a structure preferred in RealityKit? Why am I unable to load the usda directly as a ModelEntity?
1
0
583
Feb ’24
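For the question above, one hedged explanation is that Entity.loadModel expects the file's root to be a single model, while a Reality Composer Pro scene carries a wrapper root entity, which would also explain the nested 'apple' inside 'apple' in the printed hierarchy. A workaround sketch that keeps Entity.load and digs the ModelEntity out yourself (realityKitContentBundle is assumed to come from your project's RealityKitContent package):

import RealityKit
import RealityKitContent

func loadAppleModel() throws -> ModelEntity? {
    let root = try Entity.load(named: "apple", in: realityKitContentBundle)
    return firstModelEntity(in: root)
}

// Depth-first search for the first ModelEntity in a loaded hierarchy.
func firstModelEntity(in entity: Entity) -> ModelEntity? {
    if let model = entity as? ModelEntity { return model }
    for child in entity.children {
        if let model = firstModelEntity(in: child) { return model }
    }
    return nil
}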
Reference image recommendations for best image tracking performance
I am working on a sports training app for visionOS that requires recognition of fast-moving objects. Currently, I am using ImageTrackingProvider to tag the objects I need. I have noticed that while recognition works well for stationary objects, it does not perform well for tracking moving objects. I assume a mix of factors is at play:
1. I am not sure if ARKit is actually built for tracking moving objects, so there could be a refresh rate limit enforced to save battery.
2. My reference image could be suboptimal/too complex to recognize quickly.
While I can't do anything about #1, I am curious about recommendations for #2. Are there recommendations for the best size of a reference image, its color (would black and white work better?), and its complexity? Also, since the AR Resource Group seems to support JPEG and PNG, is there any specific preference for one over the other? Should I prepare the images in any special way to achieve the best possible performance? Thanks.
1
0
707
Feb ’24