Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation


visionOS simulator on the iPadOS device
I started testing the visionOS SDK on an existing project that has been running fine on iPad (iOS 17) with Xcode 15. On a MacBook with an M1 chip it can be configured to run in the visionOS simulator without any change to the project's Build Settings. However, the Apple Vision Pro simulator doesn't appear when I run Xcode 15 on an Intel MacBook Pro unless I change the SUPPORTED_PLATFORMS key in the project's Build Settings to visionOS. Although I understand that a MacBook Pro with an M1 / M2 chip is the ideal platform for running the visionOS simulator, it would be much better if we could run the visionOS simulator on iPadOS: the iPad has the same arm64 architecture and all the hardware needed for camera, GPS, and LiDAR. The Mac is not a good simulator host, even with an M1 / M2 chip: it has no dual-facing cameras (front and back), no LiDAR, no GPS, and no 5G cellular radio, and it isn't portable enough for developers to design spatial-computing use cases around. Last but not least, it is not clear to me how to simulate ARKit with actual camera frames in the Vision Pro simulator, whereas I would expect this could be simulated well on iPadOS. My suggestion is to provide developers with a simulator that runs on iPadOS; that would increase developer adoption and improve the design and prototyping phase of apps targeting the actual Vision Pro device.
Replies: 4 · Boosts: 2 · Views: 2.2k · Activity: Jun ’23
Reality View: 3D objects behind physical objects
I have been playing with RealityKit and ARKit. One thing I can't figure out is whether it's possible to place an object, say on the floor behind a couch, and then not see it when viewing that area from the other side of the couch. If that's confusing, I apologize. Basically I want to "hide" objects in a closet or behind other physical objects. Are we just not there yet with this stuff, or is there a particular way to do it that I'm missing? It just seems odd that when I place an object, I then see it "on top of" the couch from the other side. Thanks! Brandon
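From what I've read, something like the following might be the intended approach (a rough sketch on my part, assuming a LiDAR-equipped device and RealityKit's scene-understanding occlusion option), but I'm not sure it's correct:

import ARKit
import RealityKit

// Rough sketch: on LiDAR devices, mesh scene reconstruction lets RealityKit
// occlude virtual content that sits behind real-world geometry (the couch).
func enableOcclusion(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh   // reconstruct the room's geometry
    }
    // Let the reconstructed geometry hide virtual objects placed behind it.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(configuration)
}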
Replies: 2 · Boosts: 0 · Views: 683 · Activity: Jun ’23
Scene understanding missing from visionOS simulator?
SceneReconstructionProvider.isSupported and PlaneDetectionProvider.isSupported both return false when running in the simulator (Xcode 15 beta 2). There is no mention of this in the release notes. It seems this makes any AR app that depends on scene understanding impossible to run in the simulator. For example, the code described in this article cannot run in the simulator: https://developer.apple.com/documentation/visionos/incorporating-surroundings-in-an-immersive-experience Am I missing something, or is this really the current state of the simulator? Does this mean that if we want to build mixed-immersion apps we need to wait for access to Vision Pro hardware?
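For reference, this is a minimal sketch of the check I'm describing (visionOS ARKit API names as I understand them); in the simulator the guard always fails for me:

import ARKit

// Rough sketch: start scene understanding only where the providers are supported.
// (In a real app the session and providers would be retained somewhere longer-lived.)
func startSceneUnderstanding() async throws {
    guard SceneReconstructionProvider.isSupported,
          PlaneDetectionProvider.isSupported else {
        // Simulator (or unsupported hardware): skip scene-understanding features.
        return
    }
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
    try await session.run([sceneReconstruction, planeDetection])
}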
Replies: 11 · Boosts: 11 · Views: 2.8k · Activity: Jun ’23
How to reduce the size of a scanned ARReferenceObject file?
While building an ARKit object-detection application, I found that a scanned object (ARReferenceObject) needs to be about 5–20 MB to be detected smoothly. Is there a way to reduce this size? Why do I need this? I have more than 200 objects to detect, and if each object takes 5 MB, then almost 1 GB will be occupied just by my application, which seems excessive.
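One idea I'm considering (a rough sketch of my own, not something I've verified actually helps) is to keep the .arobject archives on a server and load only the ones a session needs at runtime, instead of bundling all 200 in the app:

import ARKit

// Rough sketch: load downloaded .arobject archives at runtime rather than
// shipping every reference object inside the app bundle.
func loadReferenceObjects(from archiveURLs: [URL]) throws -> Set<ARReferenceObject> {
    var objects = Set<ARReferenceObject>()
    for url in archiveURLs {
        let object = try ARReferenceObject(archiveURL: url)   // a previously downloaded .arobject
        objects.insert(object)
    }
    return objects
}

// Usage sketch: configuration.detectionObjects = try loadReferenceObjects(from: downloadedURLs)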
Replies: 2 · Boosts: 0 · Views: 416 · Activity: Jul ’23
AR Scanner
I'm trying to scan a real-world object with the Apple ARKit Scanner. Sometimes the scan is not perfect, so I'm wondering whether I can obtain an .arobject in other ways, for example with other scanning apps, and then merge all the scans into one more accurate scan. I know merging is possible: during an ARKit scanning session the app asks whether I want to merge multiple scans, and in that case I can select a previous scan from the Files app. In this context I would like to add scans from other sources. Is that possible? And if so, are there other ways to obtain an .arobject, and is merging a practical way to improve the quality of object detection? Thanks
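To frame the question, here is a rough sketch (my own guess at the API usage, based on ARReferenceObject(archiveURL:) and merging(_:)) of what I'd like to do programmatically:

import ARKit

// Rough sketch: load two saved .arobject archives and combine them.
func mergeScans(primaryURL: URL, additionalURL: URL, outputURL: URL) throws {
    let primary = try ARReferenceObject(archiveURL: primaryURL)
    let additional = try ARReferenceObject(archiveURL: additionalURL)
    let merged = try primary.merging(additional)         // can throw if the scans don't overlap enough
    try merged.export(to: outputURL, previewImage: nil)  // save the combined .arobject
}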
Replies: 0 · Boosts: 0 · Views: 504 · Activity: Jul ’23
Moving a rigged character with armature bones
Is there a way to move a rigged character via its armature bones in ARKit/RealityKit? When I try to move the usdz robot provided in https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/capturing_body_motion_in_3d by setting joint transforms, it doesn't behave the way I expect. I see the documentation on rigging a model for motion capture, but is moving a model through its armature bones only possible through third-party software, or can it be done in RealityKit/ARKit/RealityView? https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/rigging_a_model_for_motion_capture
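For context, this is roughly what I'm attempting (my own sketch using ModelEntity's jointNames / jointTransforms; the joint name passed in is just a placeholder):

import RealityKit
import simd

// Rough sketch: rotate a single armature bone of a skinned ModelEntity directly.
// jointNames holds full joint paths, so match the bone by suffix.
func rotateJoint(named name: String, of model: ModelEntity, by angle: Float) {
    guard let index = model.jointNames.firstIndex(where: { $0.hasSuffix(name) }) else { return }
    var transforms = model.jointTransforms
    transforms[index].rotation = simd_quatf(angle: angle, axis: [0, 0, 1]) * transforms[index].rotation
    model.jointTransforms = transforms   // write the modified pose back to the entity
}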
Replies: 0 · Boosts: 0 · Views: 723 · Activity: Jul ’23
How can I change a material, like diffuse, on a 3D model (.usdz or .obj)?
Hi all. I am new to Swift and AR. I'm working on an AR project and ran into a problem: I can't change the material on my models. With geometry such as a sphere or a cube, everything is simple. Can you tell me what I am doing wrong? My simple code:

import UIKit
import ARKit
import SceneKit

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!
    var modelNode: SCNNode!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true

        // Load the model and find the node to re-texture.
        let scene = SCNScene(named: "art.scnassets/jacket.usdz")!
        modelNode = scene.rootNode.childNode(withName: "jacket", recursively: true)

        // Replace the material on the first child geometry with a new diffuse texture.
        let material = SCNMaterial()
        material.diffuse.contents = UIImage(named: "art.scnassets/58.png")
        modelNode.childNodes[0].geometry?.materials = [material]

        sceneView.scene = scene
    }
}
Replies: 0 · Boosts: 0 · Views: 615 · Activity: Jul ’23
Animating faces
I’m embarking on a new project that will involve animating 3D faces and mouths. I’m looking at using ARFaceAnchor and its blendShapes to capture data that will drive the models’ facial expressions. I have a few basic questions: (1) As far as I can tell, Apple has never supported exporting Memoji to rigged 3D models. Is this still the case? (2) I did find one website claiming Apple’s AvatarKit is now public, but everywhere else I’ve checked it is still a private framework (and Xcode complains). Is AvatarKit still private? (3) It looks like all 52 blendShapes of an ARFaceAnchor are updated every frame, 60 times a second; that is 3,120 data points per second. Are there any best-practice guides for reducing the data, for example “these 10 blendShapes capture the most important features for animating a face”? (A rough sketch of what I mean follows below.) (4) It appears that visionOS does not support ARFaceAnchor. If I want to present a remote user as a Memoji (or another rigged model) in a shared experience, is there any way to do that at the current time?
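To illustrate question (3), here is a rough sketch of the kind of data reduction I mean (the chosen subset and threshold are my own guesses, not an Apple recommendation):

import ARKit

// Rough sketch: forward only a handful of blend shapes, and only when a value
// changes by more than a threshold, to cut the per-frame data volume.
final class BlendShapeFilter {
    let tracked: [ARFaceAnchor.BlendShapeLocation] = [
        .jawOpen, .mouthSmileLeft, .mouthSmileRight,
        .browInnerUp, .eyeBlinkLeft, .eyeBlinkRight
    ]
    private var lastSent: [ARFaceAnchor.BlendShapeLocation: Float] = [:]

    // Returns only the tracked shapes whose value moved by at least `threshold` since last time.
    func changes(in anchor: ARFaceAnchor, threshold: Float = 0.05) -> [ARFaceAnchor.BlendShapeLocation: Float] {
        var result: [ARFaceAnchor.BlendShapeLocation: Float] = [:]
        for location in tracked {
            let value = anchor.blendShapes[location]?.floatValue ?? 0
            if abs(value - (lastSent[location] ?? 0)) >= threshold {
                result[location] = value
                lastSent[location] = value
            }
        }
        return result
    }
}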
Replies: 0 · Boosts: 0 · Views: 476 · Activity: Jul ’23