Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Post · Replies · Boosts · Views · Activity

SceneReconstruction alongside WorldTracking silently fails?
Hello, I've noticed that when I have my ARSession run the sceneReconstruction provider and the world tracking provider at the same time, I receive no scene reconstruction mesh updates. My catch closure doesn't receive any errors; the provider just never sends anything to the async update sequence. If I run just the scene reconstruction provider by itself, then I do get mesh updates. Is this a bug? Is it expected that it's not possible to do this? Thank you
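A minimal sketch of the setup being described, assuming the visionOS ARKit API (ARKitSession, SceneReconstructionProvider, WorldTrackingProvider): both providers are passed to a single run call and mesh updates are read from the scene reconstruction provider's anchorUpdates sequence.

    import ARKit

    // Sketch: run scene reconstruction and world tracking from one ARKitSession
    // and consume the mesh updates.
    func runProviders() async {
        let session = ARKitSession()
        let sceneReconstruction = SceneReconstructionProvider()
        let worldTracking = WorldTrackingProvider()

        do {
            // Both providers go into the same run call; running them in separate
            // sessions is one possible source of silently missing updates.
            try await session.run([sceneReconstruction, worldTracking])
        } catch {
            print("ARKitSession error: \(error)")
            return
        }

        for await update in sceneReconstruction.anchorUpdates {
            // MeshAnchor updates should arrive here while the provider runs.
            print("Mesh anchor \(update.anchor.id): \(update.event)")
        }
    }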
Replies: 1 · Boosts: 0 · Views: 500 · Feb ’24
Importing USDZ into Reality Composer Pro doesn't include textures
I'm trying to import the USDZ file of a model with multiple textures attached to each part of the model. When I preview the file by double-clicking on the USDZ, it views fine. However, when I import it into Reality Composer Pro, it only shows the pink striped model. I also get the message - "Multiple root level objects exist for HU_EVO_SPY-8.usdc". There are so many components of the model that binding each texture to each component will be very difficult to do manually. How can I fix the file such that when I import to Reality Composer Pro, textures are attached to the model?
Replies: 1 · Boosts: 1 · Views: 1.3k · Feb ’24
How to use SceneReconstruction with persisted WorldAnchors and AnchorEntities
Hi, I'm prototyping a visionOS app for which I'm trying to create the following behavior in a mixed immersive space:
- Users pinch and drag to position a model entity in the real world, starting from the ray cast of the pinch, meaning that the initial position should be on a MeshAnchor from scene reconstruction (I got that working, even though it's less precise than I expected).
- Once the model entity is positioned, I want to anchor it to the world so that it always stays there no matter what; from what I understand I need to create a WorldAnchor and add it to a WorldTrackingProvider for that.
- After positioning the model entity, users should be able to pinch and drag the entity to change its position and have that persisted from then onwards.
It's not clear to me what the relationship between AnchorEntity(world:) and WorldAnchor is (it looks like AnchorEntity(anchor:) isn't available in visionOS). What is the recommended way to keep these together? Afterwards, what is the recommended way to convert coordinate spaces between the repositioned scene coordinate space and the anchor entity hierarchy's coordinate space? I tried a DragGesture on the model entity and converted the translation to the scene; that works only as long as the scene origin hasn't changed. After it has changed, the translation uses the wrong coordinate space. Thanks for the help! Geert
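A hedged sketch of one way to pair the two, assuming the visionOS APIs WorldAnchor(originFromAnchorTransform:) and WorldTrackingProvider.addAnchor(_:); the entity's world transform is captured once, an ARKit WorldAnchor is added for persistence, and a RealityKit AnchorEntity(world:) is created at the same pose. Names here are illustrative, not from the question.

    import ARKit
    import RealityKit

    // Anchor an already-positioned entity to the world.
    func anchorToWorld(_ model: ModelEntity,
                       in content: RealityViewContent,
                       using worldTracking: WorldTrackingProvider) async throws {
        // World-space transform of the entity after the user finished dragging it.
        let transform = model.transformMatrix(relativeTo: nil)

        // ARKit side: a persisted world anchor. Its id can be stored and matched
        // against worldTracking.anchorUpdates after a relaunch.
        let worldAnchor = WorldAnchor(originFromAnchorTransform: transform)
        try await worldTracking.addAnchor(worldAnchor)

        // RealityKit side: parent the model under an AnchorEntity at the same pose
        // and zero out its local transform so both stay in lockstep.
        let anchorEntity = AnchorEntity(world: transform)
        anchorEntity.addChild(model)
        model.transform = .identity
        content.add(anchorEntity)
    }

For converting between spaces after the scene origin changes, Entity.convert(position:from:) / convert(position:to:) against either nil (scene space) or a specific reference entity is usually the tool to reach for.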
Replies: 2 · Boosts: 0 · Views: 521 · Feb ’24
ARKit for BIM
Hi, please forgive me if I am asking a basic question. After my R&D I couldn't see how to build a solution where a user can scan a QR code hanging on a specific wall at a specific fixed position, so that when workers scan the QR code from their iOS device they can see all the wiring, pipelines, etc. It would be really helpful if someone could let me know whether this is possible with ARKit and how.
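On iOS this kind of workflow is typically possible by combining Vision barcode detection with an ARKit raycast; the sketch below is only an illustration (coordinate and orientation handling is simplified, and the anchor name is a placeholder), not a confirmed recipe.

    import ARKit
    import Vision

    // Detect a QR code in the current ARFrame with Vision, raycast through its
    // centre onto the wall, and drop an ARAnchor there so the BIM overlay
    // (wiring, pipes) can be attached to that anchor.
    func placeAnchorOnQRCode(session: ARSession, frame: ARFrame) {
        let request = VNDetectBarcodesRequest { request, _ in
            guard let qr = (request.results as? [VNBarcodeObservation])?
                .first(where: { $0.symbology == .qr }) else { return }

            // Vision bounding boxes use a bottom-left origin; the frame raycast
            // expects normalized image coordinates with a top-left origin, hence
            // the y flip (image orientation handling is simplified here).
            let center = CGPoint(x: qr.boundingBox.midX, y: 1 - qr.boundingBox.midY)

            let query = frame.raycastQuery(from: center,
                                           allowing: .estimatedPlane,
                                           alignment: .vertical)
            if let result = session.raycast(query).first {
                // The anchor marks the QR code's fixed position on the wall.
                session.add(anchor: ARAnchor(name: "bim-qr", transform: result.worldTransform))
            }
        }
        request.symbologies = [.qr]
        try? VNImageRequestHandler(cvPixelBuffer: frame.capturedImage).perform([request])
    }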
Replies: 1 · Boosts: 0 · Views: 502 · Feb ’24
Limitations of visionOS
Hi, what are the limitations and capabilities of visionOS? I cannot find answers to the questions I have. Let's say you have some USDZ files stored in a cloud service; there are so many of them that the app would be huge if you put them in assets. You want to fetch the one you are interested in and show it while the app is running.
- Is it possible to load USDZ files at runtime from the network?
- Is there a limit to how many objects can be visible at once? Let's say I am in an open space with no walls and want to place 100 3D objects somewhere in space. Is it possible? What if I placed 500, or 1000?
- Is there a way to save the anchor point of an object? I want to open the app again and have an object in the same place I left it. I would like to arrange my space and have objects always in the same spots.
- How does the OS behave if objects are in different rooms? Is it possible to walk around, visit different rooms, and have objects anchored there? Would they behave like real objects?
- Is it possible to color a plane? Let's say there is a wall and it's black. I want this wall to be orange. Is it possible?
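For the first question, runtime loading from the network is generally done by downloading the USDZ to disk first. A small sketch, assuming RealityKit's async Entity(contentsOf:) initializer; the remote URL and local file name are placeholders.

    import Foundation
    import RealityKit

    func loadRemoteUSDZ(from remoteURL: URL) async throws -> Entity {
        // Download to a temporary file first; RealityKit loads models from file URLs.
        let (tempURL, _) = try await URLSession.shared.download(from: remoteURL)
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent("model.usdz")
        try? FileManager.default.removeItem(at: localURL)
        try FileManager.default.moveItem(at: tempURL, to: localURL)

        // Reads a .usdz (or .reality) file from disk into an entity hierarchy.
        return try await Entity(contentsOf: localURL)
    }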
Replies: 2 · Boosts: 0 · Views: 831 · Feb ’24
Is it possible to implement a Billboard System in a volumetric window?
When I call queryDeviceAnchor in my Billboard system I get transform updates, but I'm unsure how to process them (similar to the Diorama sample app). Is it a bug that I receive these updates? The documentation says that ARKit data is only provided in a full space, so I would expect this not to work at all. But if that is the case, why am I getting deviceAnchor values in this situation?
Replies: 3 · Boosts: 0 · Views: 814 · Feb ’24
Detecting the Anchor/Position of the Scene Glass Window in an Immersive View in VisionOS
I have a main app window that presents an immersive style in Mixed Reality. I am trying to determine the anchor/position of this glass window in 3D space and place a Sphere entity right next to it. The goal is to ensure that if the user moves the window, the Sphere entity remains attached to it. Does anyone have insights on how to achieve this? The code snippet below provides the position of the device, and I have positioned the sphere 0.5 meters away along the z-axis. However, my objective is to obtain the position of the glass window and anchor the sphere to it. Any guidance on achieving this would be appreciated.

    import SwiftUI
    import RealityKit
    import RealityKitContent
    import ARKit
    import Observation
    import QuartzCore

    struct ImmersiveView: View {
        let visionProPose = VisionProPose()

        var body: some View {
            RealityView { content in
                Task { await visionProPose.runArSession() }
                // Add the initial RealityKit content
                if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                    content.add(scene)
                }
            } update: { content in
                if let scene = content.entities.first,
                   let sphere = scene.findEntity(named: "Sphere") as? ModelEntity {
                    Task {
                        if let transform = await visionProPose.getTransform() {
                            sphere.position = [transform.columns.3.x,
                                               transform.columns.3.y,
                                               transform.columns.3.z - 1]
                        }
                    }
                }
            }
        }
    }

    @Observable class VisionProPose {
        let session = ARKitSession()
        let worldTracking = WorldTrackingProvider()

        func runArSession() async {
            Task {
                try? await session.run([worldTracking])
            }
        }

        func getTransform() async -> simd_float4x4? {
            // Query at the current time rather than a fixed timestamp of 1.
            guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
                return nil
            }
            return deviceAnchor.originFromAnchorTransform
        }
    }
Replies: 0 · Boosts: 0 · Views: 671 · Feb ’24
RealityKit: "annotating" an object
Hello, I want to be able to tap on a previously placed ModelEntity box and add a dot or a text at that location on the box (kind of like adding an annotation to the box). I have something like this, but I'm not sure how to do it correctly:

    class MyARView: ARView {
        // ...
        private func didTap(_ gestureRecognizer: UITapGestureRecognizer) {
            let pos = gestureRecognizer.location(in: self)
            if !didPlaceCube {
                placeCube(pos)
                return
            }
            let hitTestResult = self.hitTest(pos)
            guard let firstResult = hitTestResult.first else { return }
            let entity = firstResult.entity
            let textEntity = ModelEntity(mesh: .generateText("Hello there",
                                                             extrusionDepth: 0.4,
                                                             font: .boldSystemFont(ofSize: 0.05),
                                                             containerFrame: .zero,
                                                             alignment: .center,
                                                             lineBreakMode: .byWordWrapping))
            textEntity.setPosition(entity.position + firstResult.position, relativeTo: entity)
            entity.addChild(textEntity)
        }
        // ...
    }
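One detail worth noting: a hit-test result's position is reported in world space, so converting it into the tapped entity's local space avoids mixing coordinate spaces. A small sketch of that step (the helper name is illustrative):

    import RealityKit

    func addAnnotation(_ text: String, to entity: Entity, atWorldPosition worldPosition: SIMD3<Float>) {
        let textEntity = ModelEntity(mesh: .generateText(text,
                                                         extrusionDepth: 0.004,
                                                         font: .boldSystemFont(ofSize: 0.05)))
        // A nil reference entity means world/scene space in RealityKit's convert APIs.
        textEntity.position = entity.convert(position: worldPosition, from: nil)
        entity.addChild(textEntity)
    }

In the tap handler above it would be called as addAnnotation("Hello there", to: firstResult.entity, atWorldPosition: firstResult.position).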
Replies: 0 · Boosts: 0 · Views: 541 · Feb ’24
The JSON encoded from the merged CapturedStructure is missing the newly merged room data, but the USDZ and plist generated by the export method are complete
I am using the RoomPlan API to implement merging of multiple spaces, but I found that after performing the merge, the generated JSON is missing some of the newly added areas, while the USD file and plist file are complete. Does anyone have this problem? Looking forward to official support. This is my code:

    public func mergeScan(_ data: String, _ scanName: String, _ directoryName: String) {
        var capturedRoomArray: [CapturedRoom] = []

        // Parse the main structure
        let jsonURL = getRootURL().appending(path: "/\(directoryName)/\(scanName)/scan.json")
        guard let mainStructureRoom = try? loadCapturedRoom(from: jsonURL) else { return }
        capturedRoomArray.append(mainStructureRoom)

        // Add the substructure
        if let subStructureRoom = try? loadCapturedRoom(from: data) {
            os_log("loadCapturedRoom string data success: %@", type: .error, String(describing: data))
            capturedRoomArray.append(subStructureRoom)
        }
        os_log("merge scan capturedRoomArray: %@", type: .error, String(describing: capturedRoomArray.count))

        // Merge
        Task {
            do {
                finalStructureResults = try await structureBuilder.capturedStructure(from: capturedRoomArray)
            } catch {
                print("Merging Error: \(error.localizedDescription)")
                return
            }
            do {
                // Save
                // Export JSON
                guard let finalStructureResults else { return }
                try exportJson(from: finalStructureResults, to: jsonURL)
                // Export USDZ
                let meshDestinationURL = jsonURL.deletingPathExtension().appendingPathExtension("usdz")
                // Export plist
                let metadataDestinationURL = jsonURL.deletingPathExtension().appendingPathExtension("plist")
                try finalStructureResults.export(to: meshDestinationURL,
                                                 metadataURL: metadataDestinationURL,
                                                 exportOptions: [.mesh])
            } catch {
                print("Merge Error: \(error.localizedDescription)")
                return
            }
        }
    }

    func exportJson(from capturedStructure: CapturedStructure, to url: URL) throws {
        let encoder = JSONEncoder()
        encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
        let data = try encoder.encode(capturedStructure)
        try data.write(to: url)
    }

Note: only the JSON is missing the content of this or the next scan; the usdz and plist are complete.
Replies: 0 · Boosts: 0 · Views: 428 · Feb ’24
Progressive immersive space and Digital Crown (and ARKit)
I am new to visionOS development, just slowly figuring out the difference in immersion styles to figure out how I want my app to behave. It seems that when you use a progressive immersive space the minimum immersion level (set via the digital crown) is not 0? Meaning, there is no way to go from mixed to full by using the Digital Crown. Even when I try to set it to 0 (such as in the Destination Video sample), it pops back up to around 30-40%, and I always see the background. Is this expected behavior, or are there some settings that allow me to change this minimum immersion level? Further, in the video 'Meet ARKit for spatial computing', it is stated that to get access to ARKit tracking data you must use a 'Full Space', not the 'Shared Space'. This wording is confusing to me. Is an ImmersiveSpace set to the .mixed (or .progressive) immersion style still a 'Full Space' (because it isn't in the shared space, with other apps)? OR, is ARKit only available in an ImmersiveSpace with the .full immersion style? Just feels like maybe 'full' is being used in two different ways here... Thanks in advance, -pj
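On the terminology question, as far as the session material describes it, a "Full Space" is any open ImmersiveSpace, whichever immersion style it uses, while only regular windows and volumes run in the Shared Space; so ARKit data should also be available with .mixed or .progressive styles. A small sketch of the scene declaration (type names here are illustrative, not from any sample):

    import SwiftUI

    @main
    struct ExampleApp: App {
        // With .progressive, the Digital Crown controls how much of the space is immersive.
        @State private var immersionStyle: ImmersionStyle = .progressive

        var body: some Scene {
            WindowGroup {
                ContentView()       // runs in the Shared Space
            }

            ImmersiveSpace(id: "Immersive") {
                ImmersiveView()     // a Full Space, even with .mixed or .progressive
            }
            .immersionStyle(selection: $immersionStyle, in: .mixed, .progressive, .full)
        }
    }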
Replies: 2 · Boosts: 0 · Views: 1.1k · Feb ’24
How to use the CharacterControllerComponent with RealityView on VisionOS
I am trying to implement a game where the character walks on the scene mesh. I am controlling the character with a game controller. I noticed there is a character controller component in Reality Composer Pro, and I am aware that when this component is added, the player cannot also have a collision or a physics component. I need an example that covers adding an entity with the character controller component to the scene and then moving the character using the moveCharacter function. I was also looking at the documentation: https://developer.apple.com/documentation/realitykit/entity/movecharacter(by:deltatime:relativeto:collisionhandler:) It also expects a deltaTime. Where do we get the deltaTime from? Does it come from a system's update function, and does that mean the character controller needs to be moved in the update method? Thanks, Sarang
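On the deltaTime question, one workable pattern (a sketch, not an official sample) is a custom RealityKit System: update(context:) supplies context.deltaTime, and the entity carrying the CharacterControllerComponent is moved there; the velocity would come from your game-controller code. The system still needs to be registered once, e.g. CharacterMovementSystem.registerSystem().

    import RealityKit

    struct CharacterMovementSystem: System {
        private static let query = EntityQuery(where: .has(CharacterControllerComponent.self))

        init(scene: RealityKit.Scene) {}

        func update(context: SceneUpdateContext) {
            // Placeholder: replace with input read from GCController.
            let velocity = SIMD3<Float>(0, 0, -0.5)

            for character in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
                character.moveCharacter(by: velocity * Float(context.deltaTime),
                                        deltaTime: Float(context.deltaTime),
                                        relativeTo: nil) { _ in
                    // Inspect collisions with the scene mesh here if needed.
                }
            }
        }
    }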
Replies: 0 · Boosts: 1 · Views: 450 · Feb ’24
Reference image recommendations for best image tracking performance
I am working on a sports training app for visionOS that requires recognition of fast-moving objects. Currently, I am using ImageTrackingProvider to tag the objects I need. I have noticed that while recognition works well for stationary objects, it does not perform well in tracking moving objects. I assume a mix of factors is at play:
1. I am not sure if ARKit is actually built for tracking moving objects, so there could be a refresh rate limit enforced to save battery.
2. My reference image could be suboptimal/too complex to recognize quickly.
While I can't do anything about #1, I am curious about recommendations for #2. Are there recommendations for the best size of a reference image, its color (would black and white work better?), and its complexity? Also, since the ARKit Resource Group seems to support JPEG and PNG, is there any specific preference for one over the other? Should I prepare the images in any special way to achieve the best possible performance? Thanks.
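For reference, a small sketch of the provider setup being described (visionOS ARKit; the resource-group name is a placeholder). It also surfaces isTracked, which tends to drop quickly for fast-moving targets.

    import ARKit

    func trackReferenceImages() async {
        let session = ARKitSession()
        // Reference images with their physical size come from an AR Resource Group
        // in the asset catalog; "SportsMarkers" is a placeholder name.
        let referenceImages = ReferenceImage.loadReferenceImages(inGroupNamed: "SportsMarkers")
        let imageTracking = ImageTrackingProvider(referenceImages: referenceImages)

        do {
            try await session.run([imageTracking])
        } catch {
            print("ARKitSession error: \(error)")
            return
        }

        for await update in imageTracking.anchorUpdates {
            print("\(update.anchor.referenceImage.name ?? "image"): tracked=\(update.anchor.isTracked)")
        }
    }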
Replies: 2 · Boosts: 0 · Views: 352 · Feb ’24
Reference image recommendations for best image tracking performance
I am working on a sports training app for visionOS that requires recognition of fast-moving objects. Currently, I am using ImageTrackingProvider to tag the objects I need. I have noticed that while recognition works well for stationary objects, it does not perform well in tracking moving objects. I assume a mix of factors is at play:
1. I am not sure if ARKit is actually built for tracking moving objects, so there could be a refresh rate limit enforced to save battery.
2. My reference image could be suboptimal/too complex to recognize quickly.
While I can't do anything about #1, I am curious about recommendations for #2. Are there recommendations for the best size of a reference image, its color (would black and white work better?), and its complexity? Also, since the ARKit Resource Group seems to support JPEG and PNG, is there any specific preference for one over the other? Should I prepare the images in any special way to achieve the best possible performance? Thanks.
Replies: 1 · Boosts: 0 · Views: 668 · Feb ’24
Entity.load vs Entity.loadModel
let apple = try Entity.load(named: "apple", in: realityKitContentBundle) works, but let apple = try Entity.loadModel(named: "apple", in: realityKitContentBundle) does not work, i.e. error.localizedDescription = Failed to find resource with name "apple" in bundle. I am unsure what is causing the problem; apple.usda was created in Reality Composer Pro from primitives and has a single apple object (no root). When I load with Entity.load and print apple, I get:

    ▿ 'apple' : Entity, children: 1
      ⟐ Transform
      ⟐ SynchronizationComponent
      ▿ 'apple' : ModelEntity
        ⟐ ModelComponent
        ⟐ Transform
        ⟐ CollisionComponent
        ⟐ PhysicsBodyComponent
        ⟐ SynchronizationComponent

This nested hierarchy seems redundant to me; is it preferred in ARKit to have such a structure? Why am I unable to load the usda directly as a ModelEntity?
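A hedged workaround sketch: load the full hierarchy and pull the nested ModelEntity out of its children, rather than relying on loadModel flattening the scene.

    import RealityKit
    import RealityKitContent

    func loadAppleModel() async throws -> ModelEntity? {
        let apple = try await Entity(named: "apple", in: realityKitContentBundle)
        // Reality Composer Pro wraps the mesh in a parent Entity, so the
        // ModelEntity sits one level down in the hierarchy.
        return apple.children.compactMap { $0 as? ModelEntity }.first
    }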
Replies: 1 · Boosts: 0 · Views: 556 · Feb ’24
ARKit image detection specs & expectations on visionOS
Hi, I have code that uses ImageTrackingProvider. I am experimenting with glyphs of various complexity and structure to understand which ones would be superior for recognition. Due to the absence of a color printer, I am mostly experimenting with monochrome glyphs as well as some colored paper squares. I am getting mixed results and would like to validate whether what I got are the expected results for the current capabilities of ARKit and Vision Pro, or if there is still an opportunity for improvement by selecting different glyphs. So far, I have used a colored square of size 5x5 cm, as well as the two glyphs provided below. [Images attached: "ARKit Glyph", "Abstract Glyph"] The ARKit Glyph is not recognizable by ARKit or Vision Pro at all, no matter the lighting conditions or the angles from which I view it. The Abstract Glyph is recognized consistently at a 90-degree angle, and sometimes at other angles too. The maximum distance at which I was able to detect it was around 15 cm, maybe less. I am really curious whether there is any specification I can check against to understand whether my glyphs are good or not, and at what maximum distance such glyphs can be recognized if they are 5x5 cm in size. I am also curious whether ARKit is capable of recognizing images of 5x5 cm size at a distance between 2 and 3 meters, and if so, how I should prepare the glyph for such requirements. Thanks in advance, Nikita P.S. I am skipping the question about the yaw angle of the image, as well as the angle between the image normal and the camera view, but I guess they also have an impact on the ability to recognize the original image.
Replies: 2 · Boosts: 0 · Views: 495 · Feb ’24
LiDAR camera low FPS problem
When I use LiDAR, AVCaptureDeviceTypeBuiltInLiDARDepthCamera is used. Since AVCaptureDeviceTypeBuiltInLiDARDepthCamera is a device that consists of two cameras, one LiDAR and one YUV, I found that the LiDAR data is 30 fps, which also limits the YUV data to 30 fps. But I really need 240 fps YUV data. Is there a way to use the 30 fps LiDAR together with a 240 fps YUV camera? Any reply would be appreciated.
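A hedged way to see what the hardware actually offers is to enumerate the device formats; in practice the LiDAR depth formats top out around 30 fps, so a 240 fps stream would likely have to come from a separate high-speed video device (for example .builtInWideAngleCamera) configured with its own input.

    import AVFoundation

    func printSupportedFrameRates() {
        let types: [AVCaptureDevice.DeviceType] = [.builtInLiDARDepthCamera, .builtInWideAngleCamera]
        for type in types {
            guard let device = AVCaptureDevice.default(type, for: .video, position: .back) else { continue }
            for format in device.formats {
                let maxVideoFPS = format.videoSupportedFrameRateRanges.map(\.maxFrameRate).max() ?? 0
                let maxDepthFPS = format.supportedDepthDataFormats
                    .flatMap(\.videoSupportedFrameRateRanges)
                    .map(\.maxFrameRate)
                    .max()
                let depthDescription = maxDepthFPS.map { "\($0)" } ?? "n/a"
                print("\(type.rawValue): video up to \(maxVideoFPS) fps, depth up to \(depthDescription) fps")
            }
        }
    }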
Replies: 1 · Boosts: 0 · Views: 612 · Feb ’24
ARKit confidence level and precision 3D model (BIM)
Hi, I want to develop an AR app for construction sites in which I need to verify the calibration quality of the 3D model against the plane. For that I have already retrieved information like the TrackingState, the point cloud, the confidence map... I would like to know whether the ConfidenceLevel, which appears to be an enumeration, is available, or whether I need to analyse the point cloud to build my own confidence level. I would also like to know how I can determine the precision of the 3D map in real life.
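ARConfidenceLevel is exposed per pixel through the scene-depth confidence map rather than through the feature-point cloud. A small sketch (assumes .sceneDepth frame semantics on a LiDAR device) that computes the share of high-confidence depth pixels:

    import ARKit

    func highConfidenceRatio(in frame: ARFrame) -> Float? {
        guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return nil }

        CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

        let width = CVPixelBufferGetWidth(confidenceMap)
        let height = CVPixelBufferGetHeight(confidenceMap)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)
        guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return nil }

        // Each pixel is a UInt8 whose value maps onto ARConfidenceLevel (.low/.medium/.high).
        var high = 0
        for y in 0..<height {
            let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
            for x in 0..<width where row[x] == UInt8(ARConfidenceLevel.high.rawValue) {
                high += 1
            }
        }
        return Float(high) / Float(width * height)
    }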
Replies: 0 · Boosts: 0 · Views: 271 · Feb ’24