Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit tag

200 Posts
Geometry recognition and measurement from MeshAnchor
FYI: the source code of the FindSurface demo app for Apple Vision Pro (visionOS) is now available. The FindSurface™ Swift package is required to build the demo app. https://github.com/CurvSurf/FindSurface-visionOS
After starting the app, floating panels appear on your right side, and you will see wireframe meshes that approximately describe your surroundings. Performing a spatial tap (pinching your thumb and index finger) while gazing at a location on the meshes invokes FindSurface, and an indicator (a blue disk) appears on the surface you are looking at.
Voice commands:
“Tap” – Spatial tap (gazing & pinching). Invokes FindSurface.
“Tap plane” – Plane selection.
“Tap sphere” or “Tap ball” – Sphere selection.
“Tap cylinder” – Cylinder selection.
“Tap cone” – Cone selection.
“Tap torus” or “Tap donut” – Torus selection.
“Tap accuracy” or “Tap measurement accuracy” – Accuracy selection.
“Tap mean distance”, “Tap average distance”, or “Tap distance” – Avg. Distance selection.
“Tap touch radius” or “Tap seed radius” – Touch Radius selection.
“Tap inlier” – “Show inlier points” toggle.
“Tap outline” – “Show geometry outline” toggle.
“Tap clear” – “Clear Scene” click.
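For readers wondering how a spatial tap like the one described above is typically wired up on visionOS, here is a minimal, hypothetical SwiftUI/RealityKit sketch; it is not part of the FindSurface project, and MeshTapView is a made-up name:

import SwiftUI
import RealityKit

struct MeshTapView: View {
    var body: some View {
        RealityView { content in
            // Add entities here; they need CollisionComponent and
            // InputTargetComponent to receive spatial taps.
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // The entity that was gazed at and pinched.
                    print("Spatial tap on \(value.entity.name)")
                    // Convert the tap location into scene space if world coordinates are needed.
                    let worldPosition = value.convert(value.location3D, from: .local, to: .scene)
                    print("Tap position in scene space: \(worldPosition)")
                }
        )
    }
}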
Replies: 8 · Boosts: 0 · Views: 917 · Activity: 12h
Draw Indoor Navigation path using ARKit iOS
I have an indoor map and I want to draw an ARKit-based arrow path between two route locations (for example, from A to B) in an iOS application. Currently I am using ARAnchor to achieve this, but the challenge is: if A to B is 10 meters and I add a node every meter, then instead of 10 separate arrow nodes I get a single arrow node showing all 10 in it. I am using the code below.

// Code from where I call addARPathtoLocation
if let lat1 = mCurrentPosition?.latitude, let long1 = mCurrentPosition?.longitude {
    let latEnd = steplocation.latitude
    let longEnd = steplocation.longitude
    let userLocation = CLLocation(latitude: lat1, longitude: long1)
    let endLocation = CLLocation(coordinate: CLLocationCoordinate2DMake(CLLocationDegrees(latEnd), CLLocationDegrees(longEnd)),
                                 altitude: CLLocationDistance(steplocation.altitude),
                                 horizontalAccuracy: CLLocationAccuracy(5),
                                 verticalAccuracy: CLLocationAccuracy(0),
                                 course: CLLocationDirection(-1),
                                 speed: CLLocationSpeed(5),
                                 timestamp: Date())
    let heading = getHeadingForDirectionFromCoordinate(from: userLocation, to: endLocation)
    let lon1 = degreesToRadians(long1)
    let lon2 = degreesToRadians(longEnd)
    let lat2 = degreesToRadians(latEnd)
    let dLon = lon2 - lon1
    let y = sin(dLon) * cos(lat2)
    yVal = yVal + y
    let distvalue = Int(distance) + Int(pathlength)
    distance += CGFloat(distvalue)
    for i in stride(from: 0, to: distance, by: 1) {
        print("current loop iteration is:", i)
        let headingValue = heading - self.heading
        zValue = zValue + headingValue
        distanceVal = CGFloat(i) + distanceVal
        zGlobal = zValue
        // Calling addARPathtoLocation
        addARPathtoLocation(stepLocation: endLocation, zvalue: zValue, yvalue: yVal, distance: Float(i), direction: manuoverType)
    }
}

// MARK: - ARSCNViewDelegate
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard !(anchor is ARPlaneAnchor) else { return }
    let arrowNode = generateArrowNodes(anchor: anchor)
    DispatchQueue.main.async {
        node.addChildNode(arrowNode)
    }
}

// Create the arrow node that is attached to each added ARAnchor
func generateArrowNodes(anchor: ARAnchor) -> SCNNode {
    let imageMaterial = SCNMaterial()
    imageMaterial.isDoubleSided = true
    imageMaterial.diffuse.contents = UIImage(named: "blueArrow")
    let plane = SCNPlane(width: 0.5, height: 0.5)
    plane.materials = [imageMaterial]
    plane.firstMaterial?.isDoubleSided = true
    let blueNode = SCNNode(geometry: plane)
    blueNode.name = "blueNode"
    blueNode.position = SCNVector3(x: Float(zGlobal), y: 0, z: Float(distanceVal))
    blueNode.eulerAngles.x = -.pi / 2
    blueNode.eulerAngles.y -= Float(CGFloat.pi / 4 * 6)
    return blueNode
}

func addARPathtoLocation(stepLocation: CLLocation, zvalue: CGFloat, yvalue: CGFloat, distance: Float, direction: VMEManeuverType) {
    // The raycast gives the depth of anything ARKit has detected
    guard let query = sceneView.raycastQuery(from: sceneView.center, allowing: .estimatedPlane, alignment: .any) else { return }
    let results = sceneView.session.raycast(query)
    guard let hitResult = results.first else {
        print("No surface found")
        return
    }
    // Add an ARAnchor to the scene
    let anchor = ARAnchor(transform: hitResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}

func radiansToDegrees(_ radians: Double) -> Double {
    return radians * (180.0 / Double.pi)
}

func degreesToRadians(_ degrees: Double) -> Double {
    return degrees * (Double.pi / 180.0)
}

func getHeadingForDirectionFromCoordinate(from: CLLocation, to: CLLocation) -> Double {
    let fLat = degreesToRadians(from.coordinate.latitude)
    let fLng = degreesToRadians(from.coordinate.longitude)
    let tLat = degreesToRadians(to.coordinate.latitude)
    let tLng = degreesToRadians(to.coordinate.longitude)
    let deltaL = tLng - fLng
    let x = sin(deltaL) * cos(tLat)   // standard bearing formula: sin(Δλ) · cos(φ2)
    let y = cos(fLat) * sin(tLat) - sin(fLat) * cos(tLat) * cos(deltaL)
    let bearing = atan2(x, y)
    let bearingInDegrees = radiansToDegrees(bearing)
    print("Bearing Degrees:", bearingInDegrees)
    if bearingInDegrees >= 0 {
        return bearingInDegrees
    } else {
        return bearingInDegrees + 360
    }
}
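One pattern that may avoid the single-arrow problem described above is to add one ARAnchor per meter, each with its own world transform, and let the renderer callback attach one arrow node per anchor. Below is a minimal, hypothetical sketch of that idea, not the poster's code; placeArrowAnchors is a made-up helper, and it assumes an ARSCNView session and a bearing in radians relative to the camera's forward direction:

import ARKit
import simd

// Hypothetical helper: drops one anchor per meter along `bearing` (radians,
// relative to the camera's forward direction), starting at the current camera pose.
func placeArrowAnchors(in sceneView: ARSCNView, bearing: Float, meters: Int) {
    guard meters > 0,
          let cameraTransform = sceneView.session.currentFrame?.camera.transform else { return }

    for step in 1...meters {
        // Offset `step` meters ahead, rotated by the bearing around the Y axis.
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -Float(step)      // "forward" is -Z in camera space
        let rotation = simd_float4x4(simd_quatf(angle: bearing, axis: [0, 1, 0]))

        // Each anchor gets its own world transform, so renderer(_:didAdd:for:)
        // is called once per anchor and can attach one arrow node each.
        let transform = cameraTransform * rotation * translation
        let anchor = ARAnchor(name: "arrowAnchor", transform: transform)
        sceneView.session.add(anchor: anchor)
    }
}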
Replies: 0 · Boosts: 0 · Views: 457 · Activity: Jun ’24
Question about ARKit Object Tracking Capabilities
Hi everyone, I'm curious about the capabilities of ARKit's object tracking feature. Specifically, I'd like to know:
Is there a size limit for the objects that can be tracked?
Can ARKit differentiate between two objects with the same shape but different models (e.g., different colors)?
Are objects with single colors and generic shapes (like squares or circles) effectively trackable?
Any insights or examples from your experiences would be greatly appreciated! Thanks in advance.
Replies: 1 · Boosts: 0 · Views: 613 · Activity: Jul ’24
Help with Virtual Hands
At the end of the WWDC 2023 session https://developer.apple.com/wwdc23/10111, the session talks about implementing virtual hands. However, the code shown in the video is no longer correct with the latest version of visionOS, and the "SpaceGloves" entity referenced in the video is not documented; there is no example project to reference either. For the last two weeks I have been trying to implement the same basic example shown in this video, including skinning and rigging my own hand meshes, but I have had significant issues doing so and found no working code/USDZ examples to reference across different internet resources. Would it be possible to provide a working project/code example, including USDZ files for the hand meshes, in order to test a working example as a starting point for building my own virtual hands? Thanks for your help.
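Not a substitute for the skinned-mesh approach from the session, but a common starting point is reading per-joint transforms from ARKit's HandTrackingProvider and driving simple entities with them. A minimal sketch, assuming a visionOS immersive space and a RealityKit root entity; trackRightIndexTip and rootEntity are made-up names:

import ARKit
import RealityKit

// Hypothetical: attaches a small sphere to the right index fingertip.
func trackRightIndexTip(rootEntity: Entity) async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    let marker = ModelEntity(mesh: .generateSphere(radius: 0.01))
    rootEntity.addChild(marker)

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.chirality == .right,
              let skeleton = anchor.handSkeleton,
              skeleton.joint(.indexFingerTip).isTracked else { continue }

        // World transform = anchor origin transform * joint transform in anchor space.
        let jointTransform = skeleton.joint(.indexFingerTip).anchorFromJointTransform
        marker.setTransformMatrix(anchor.originFromAnchorTransform * jointTransform,
                                  relativeTo: nil)
    }
}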
Replies: 0 · Boosts: 4 · Views: 452 · Activity: Jun ’24
Capture Video from my own app using enterprise APIs in visionOS
Hello, I want to capture video from Vision Pro in a visionOS app. I am referring to this Apple video (https://developer.apple.com/videos/play/wwdc2024/10139/) and its code. My steps are as follows:

import ARKit
Add com.apple.developer.arkit.main-camera-access.allow = true to the entitlements/Info.plist.
Run the code below:

func loadCameraFeed() async {
    // Main Camera Feed Access Example
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    let cameraFrameProvider = CameraFrameProvider()
    var arKitSession = ARKitSession()
    var pixelBuffer: CVPixelBuffer?
    await arKitSession.queryAuthorization(for: [.cameraAccess])
    do {
        try await arKitSession.run([cameraFrameProvider])
    } catch {
        return
    }
    guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
        return
    }
    print(cameraFrameUpdates)
    for await cameraFrame in cameraFrameUpdates {
        print(cameraFrame)
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        pixelBuffer = mainCameraSample.pixelBuffer
    }
}

I want to convert pixelBuffer into a video stream and show it in a frame like on iOS. Please guide me on how to achieve my next step; I am stuck after this code.
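Not an answer for streaming, but one common way to at least display each CVPixelBuffer in SwiftUI is to convert it to a CGImage with Core Image and publish it to a view. A minimal sketch under that assumption; FrameModel and CameraFeedView are made-up names, and update(with:) would be called from the frame loop above:

import Combine
import CoreImage
import CoreVideo
import SwiftUI

// Hypothetical observable model that turns CVPixelBuffers into CGImages for display.
final class FrameModel: ObservableObject {
    @Published var currentFrame: CGImage?
    private let ciContext = CIContext()

    func update(with pixelBuffer: CVPixelBuffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
        DispatchQueue.main.async { self.currentFrame = cgImage }
    }
}

struct CameraFeedView: View {
    @ObservedObject var model: FrameModel

    var body: some View {
        if let frame = model.currentFrame {
            Image(decorative: frame, scale: 1.0)   // draw the latest camera frame
                .resizable()
                .scaledToFit()
        } else {
            Text("Waiting for camera frames…")
        }
    }
}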
Replies: 1 · Boosts: 0 · Views: 876 · Activity: Jun ’24
Trouble with Core ML Object Tracking for Spherical Objects Using WWDC Sample Code and Object Capture
Hi everyone, I'm working with Core ML for object tracking and have successfully trained a couple of models. However, when I try to use the reference object file in the object tracking sample code from the WWDC video, it doesn't work. I'm training the model on a single-color plastic spherical object. Could this be the reason for the issue? I also attempted to use USDZ 3D assets that resemble the real object. Do these need to be trained with the Object Capture app to work properly? Speaking of the Object Capture app, my experience hasn't been great: when I scanned my spherical object, the result was far from a sphere; it looked more like mushy dough. Any insights or suggestions would be greatly appreciated!
Replies: 2 · Boosts: 0 · Views: 792 · Activity: Jul ’24
Significant drop in IMU data results in session interruption
Since moving up to iOS 18 last week, I am getting an indication that there was a significant drop in IMU data being sent. Using the search capability, I can find very little information in the developer documentation that tells me what the cause is and how to remedy it. Is there some documentation repository, like Tech Notes, that will tell me what I need to know to get going again? What additional sources of documentation are available for developers? The search engine used for developer documentation just does not cut it: it delivers a lot of useless entries that have no obvious relevance to my search terms.
"2024-06-20 19:27:00.669334-0500 RoomPlanExampleApp[902:299709] [Technique] ARWorldTrackingTechnique <0x104bf6d80>: SLAM error callback: Error Domain=Slam Error Code=7 "Non fatal error occurred due to significant drop in a IMU data" UserInfo={NSDescription=Non fatal error occurred due to significant drop in a IMU data, NSLocalizedFailureReason=SlamEngineNodeGroup Failure: IMU issue: gyro data stream verification failed [Significant data drop]. Failed on timestamp: 53902.785827, Last known timestamp: 53901.416828, Delta: 1.368999, System timestamp: 53902.786251, Delta between system and frame: 0.000423. }"
Replies: 1 · Boosts: 0 · Views: 528 · Activity: Jul ’24
VisionOS 2 - Passthrough in screen capture
Hello, I am trying to develop an app that broadcasts what the user sees via Apple Vision Pro. I have applied for and obtained the Enterprise API, and I can in fact stream via the "Main camera access" API, as shown in https://developer.apple.com/videos/play/wwdc2024/10139/. My problem is that I have not found any reference for how to integrate the "Passthrough in screen capture" API into my project. Have any of you been able to do this? Thank you
Replies: 8 · Boosts: 0 · Views: 1.3k · Activity: Jun ’24
Tracking of moving images is super slow on visionOS
I noticed that tracking moving images is very slow on visionOS. Although the anchor update closure is called multiple times per second, the anchor's transform seems to be updated only once in a while. Another possibility is that SwiftUI isn't updating often enough. On iOS, image tracking is pretty smooth. Is there a way to speed this up on visionOS, too?
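For comparison, here is a minimal sketch that consumes ImageTrackingProvider updates and writes the transform straight onto a RealityKit entity instead of routing it through SwiftUI state (which can add latency). The resource group name "ARImages", followTrackedImage, and markerEntity are assumptions:

import ARKit
import RealityKit

// Hypothetical: moves `markerEntity` to follow a tracked reference image.
func followTrackedImage(markerEntity: Entity) async throws {
    // "ARImages" is an assumed AR resource group in the asset catalog.
    let referenceImages = ReferenceImage.loadReferenceImages(inGroupNamed: "ARImages")
    let imageTracking = ImageTrackingProvider(referenceImages: referenceImages)

    let session = ARKitSession()
    try await session.run([imageTracking])

    for await update in imageTracking.anchorUpdates {
        guard update.anchor.isTracked else { continue }
        // Apply the anchor transform directly; no SwiftUI state in the loop.
        markerEntity.setTransformMatrix(update.anchor.originFromAnchorTransform,
                                        relativeTo: nil)
    }
}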
Replies: 1 · Boosts: 0 · Views: 465 · Activity: Jun ’24
RealityKit's ARView raycast returns nothing
Hello, I have rendered a USDZ file using SceneKit's .write() method on the displayed scene. Once I load it into another RealityKit ARView using the camera's .nonAR mode, I try to use the view's raycast(from:allowing:alignment:) method to get coordinates on the model. I applied collision components when loading the model, using generateCollisionShapes(), to be able to interact with the model entity. However, the raycast result returns nothing. Is there something I am missing for it to work? Thanks!
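One thing worth checking: raycast(from:allowing:alignment:) queries ARKit's understanding of the real world (planes, meshes), so in .nonAR mode there may be nothing for it to hit; casting against your entities' collision shapes with ARView.hitTest(_:) is a different path. A minimal sketch of that alternative; entityHit is a made-up helper:

import RealityKit
import UIKit

// Hypothetical: returns the first entity hit at a screen point in a .nonAR ARView.
func entityHit(in arView: ARView, at screenPoint: CGPoint) -> Entity? {
    // hitTest casts a ray against entities' CollisionComponents,
    // unlike raycast(from:allowing:alignment:), which needs ARKit world data.
    let hits = arView.hitTest(screenPoint)
    if let firstHit = hits.first {
        print("Hit \(firstHit.entity.name) at \(firstHit.position)")  // position is in world space
        return firstHit.entity
    }
    return nil
}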
Replies: 1 · Boosts: 0 · Views: 546 · Activity: Jun ’24
HandTrackingProvider duplicate updates in 2.0
Has anybody tried the hand tracking provider in 2.0? I'm getting updates at 11 ms intervals, as advertised, but they are duplicates. Here's a print of the timestamps. This is problematic for me because I am tracking the last 5 positions for a calculation and expect them to be unique. I can't find docs on this anywhere. I understand it's not truly 90 updates a second but predicted poses; however, I expected the updates to include the predicted poses.
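As a workaround rather than an explanation of the provider's behavior, one can drop updates whose pose has not changed before feeding them into a rolling buffer. A small sketch under that assumption; record(_:) and recentTransforms are made-up names:

import ARKit
import simd

// Hypothetical: keeps the last 5 *distinct* hand-anchor transforms.
var recentTransforms: [simd_float4x4] = []

func record(_ anchor: HandAnchor) {
    let transform = anchor.originFromAnchorTransform

    // Skip updates whose position is (nearly) identical to the previous one.
    if let last = recentTransforms.last,
       simd_distance(last.columns.3, transform.columns.3) < 1e-6 {
        return  // pose effectively unchanged; treat as a duplicate
    }

    recentTransforms.append(transform)
    if recentTransforms.count > 5 {
        recentTransforms.removeFirst()
    }
}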
Replies: 0 · Boosts: 0 · Views: 372 · Activity: Jun ’24
Object Tracking with RealityView
To load a Reality Composer Pro scene containing Object Tracking, I tried the following code:

RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}

Obviously, this is wrong. We need to add some configuration that enables Object Tracking in the RealityView. What do we need to add?
Note: I have seen https://developer.apple.com/videos/play/wwdc2024/10101/, but I don't know much about it.
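If the scene is driven by ARKit object tracking directly (rather than purely by a Reality Composer Pro anchoring setup), one possible route sketched from the WWDC24 object-tracking session is to run an ObjectTrackingProvider with a .referenceobject file and position an entity from its anchor updates. The file name "MyObject.referenceobject", runObjectTracking, and visualization are assumptions:

import ARKit
import RealityKit

// Hypothetical: runs object tracking for a bundled .referenceobject file
// and attaches `visualization` to the tracked object's pose.
func runObjectTracking(visualization: Entity) async throws {
    guard let url = Bundle.main.url(forResource: "MyObject",            // assumed file name
                                    withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)

    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    let session = ARKitSession()
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        guard update.anchor.isTracked else { continue }
        visualization.setTransformMatrix(update.anchor.originFromAnchorTransform,
                                         relativeTo: nil)
    }
}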
Replies: 3 · Boosts: 0 · Views: 777 · Activity: Sep ’24
Capturing Perspective Camera View in RealityKit and Integrating SceneKit with RealityKit
Hello, I am currently developing an application using RealityKit and I've encountered a couple of challenges that I need assistance with:
1. Capturing a perspective camera's view: I am trying to render or capture the view from a PerspectiveCamera in RealityKit/RealityView. My goal is to save this view of a 3D model as an image or video using a virtual camera. However, I'm unsure how to access or redirect the rendered output from a PerspectiveCamera within RealityKit. Is there an existing API or a recommended approach to achieve this?
2. Integrating SceneKit with RealityKit: I've also experimented with using SCNNode and SCNCamera to capture the camera's view, but I'm wondering if SceneKit is directly compatible within a RealityKit scene, specifically within a RealityView. I would like to leverage the advanced features of RealityKit for managing 3D models. Is saving the virtual view of a camera supported, and if so, what are the best practices?
Any guidance, sample code, or references to documentation would be greatly appreciated. Thank you in advance for your help!
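For the first question, one approach on iOS (outside of RealityView) is an off-screen ARView in .nonAR camera mode with a PerspectiveCamera, captured via ARView.snapshot. A minimal sketch under those assumptions; captureModelImage and the sizes are made up:

import RealityKit
import UIKit

// Hypothetical: renders `model` through a virtual PerspectiveCamera and captures a snapshot.
func captureModelImage(model: Entity, completion: @escaping (UIImage?) -> Void) {
    let arView = ARView(frame: CGRect(x: 0, y: 0, width: 1024, height: 768),
                        cameraMode: .nonAR,
                        automaticallyConfigureSession: false)

    let worldAnchor = AnchorEntity(world: .zero)
    worldAnchor.addChild(model)

    // Virtual camera looking at the model from about a meter away.
    let camera = PerspectiveCamera()
    camera.look(at: .zero, from: [0, 0.2, 1.0], relativeTo: nil)
    worldAnchor.addChild(camera)

    arView.scene.addAnchor(worldAnchor)

    // snapshot renders the current frame and returns a UIImage.
    arView.snapshot(saveToHDR: false) { image in
        completion(image)
    }
}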
Replies: 4 · Boosts: 1 · Views: 635 · Activity: Jun ’24
Apple: please unlock "enterprise features" for all visionOS devs!
We are developing apps for visionOS and need the following capabilities for a consumer app:
access to the main camera, to let users shoot photos and videos
reading QR codes, to trigger the download of additional content
So I was really happy when I noticed that visionOS 2.0 has these features. However, I was shocked when I also realized that these capabilities are restricted to enterprise customers only: https://developer.apple.com/videos/play/wwdc2024/10139/
I think Apple is shooting itself in the foot with these restrictions. I can understand that privacy is important, but these limitations drastically restrict the potential use cases for this platform, even in consumer space. IMHO Apple should decide whether it wants to target consumers in the first place, or whether it wants to go the HoloLens / Magic Leap route and mainly satisfy enterprise customers and their respective devs. With the current setup, Apple risks pushing devs away to other platforms where they have more freedom to create great apps.
Replies: 1 · Boosts: 3 · Views: 544 · Activity: Jun ’24
How to create screen-space meshes selectively in RealityKit AR Mode Using New OrthographicCameraComponent?
I'd like to create meshes in RealityKit (AR mode on iPad) in screen space, i.e. for UI. I noticed a lot of useful new functionality in RealityKit for the next OS versions, including the OrthographicCameraComponent here: https://developer.apple.com/documentation/realitykit/orthographiccameracomponent?changes=_3
I think this would help, but I need AR world tracking as well as a regular perspective camera to work with the 3D elements.
Firstly, can I have a camera attached selectively to a few entities, just for those entities? This could be the orthographic camera.
Secondly, can I make it so those entities are always rendered in front, in screen space? (They'd need to follow the camera.)
If I can't have multiple cameras, what can be done in that case? Is it actually better to use a completely different view / API for layering on top of RealityKit? I would much rather keep everything in RealityKit for simplicity.
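Not multiple cameras, but for the "follow the camera" part, a common workaround in ARView's AR mode is to parent HUD-like entities to a camera anchor so they always stay in front of the user; occlusion by world content is a separate problem. A small sketch with made-up sizes; addCameraLockedQuad is a hypothetical helper:

import RealityKit

// Hypothetical: pins a quad 0.5 m in front of the AR camera, like a simple HUD element.
func addCameraLockedQuad(to arView: ARView) {
    // AnchorEntity(.camera) follows the device camera every frame.
    let cameraAnchor = AnchorEntity(.camera)

    let quad = ModelEntity(mesh: .generatePlane(width: 0.1, height: 0.05),
                           materials: [UnlitMaterial(color: .white)])
    quad.position = [0, 0, -0.5]   // half a meter ahead of the camera

    cameraAnchor.addChild(quad)
    arView.scene.addAnchor(cameraAnchor)
}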
Replies: 0 · Boosts: 0 · Views: 560 · Activity: Jun ’24