RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

200 Posts

Maintain rotation sense while dragging
Hi, I have a problem with a visionOS app and I couldn't find a solution. I have a 3D carousel with cards, and when I use the drag gesture and drag to the left I want the carousel to rotate clockwise, and when I drag to the right I want it to rotate counterclockwise. My problem is that when I rotate my body more than 90 degrees to the left or to the right, the drag gesture's values change and the carousel rotates in the opposite direction. Do you know how I can maintain the correct rotation direction, taking into account that the user can rotate their body? I've tried reading the user's orientation with device tracking and checking whether the rotation on the Y axis is greater than 90 degrees in either direction, but there is still a small range, between roughly 70 and 110 degrees, where the carousel rotates the wrong way. I think that's because the device tracker doesn't update at the same rate as the drag gesture, or doesn't have the same accuracy.
1 reply · 0 boosts · 116 views · 13h
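
One possible approach (a sketch under assumptions, not a verified fix): instead of using the raw 2D drag translation, convert the gesture's 3D location into scene space and drive the rotation from the angle of that point around the carousel's axis, which does not depend on the user's body orientation. This assumes a view holding carousel: Entity, @State var startAngle: Float?, and @State var startOrientation: simd_quatf?.

import SwiftUI
import RealityKit

// Sketch: derive the rotation from the gesture's angle around the carousel's
// Y axis in scene space, so body rotation cannot flip the drag's sign.
var carouselDrag: some Gesture {
    DragGesture()
        .targetedToEntity(carousel)
        .onChanged { value in
            // Gesture location expressed in the scene's coordinate space.
            let p = value.convert(value.location3D, from: .local, to: .scene)
            let center = carousel.position(relativeTo: nil)
            let angle = atan2(p.x - center.x, p.z - center.z)
            if startAngle == nil {
                startAngle = angle
                startOrientation = carousel.orientation(relativeTo: nil)
            }
            guard let start = startAngle, let base = startOrientation else { return }
            carousel.setOrientation(
                simd_quatf(angle: angle - start, axis: [0, 1, 0]) * base,
                relativeTo: nil)
        }
        .onEnded { _ in startAngle = nil }
}
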
Hover Effect & Tap Problem
Hi, since I updated my device to visionOS 2.0 or higher I have some problems with my app. Sometimes when I look at buttons or at the TabView, which is a SwiftUI component that many apps use, the hover effect no longer triggers; I can tap an icon in the TabView, but it doesn't expand on hover to show the button's description. It's odd because it doesn't happen all the time, yet on visionOS 1.0 it never happened. My second issue is that I have a navbar as an attachment, and this attachment has a draggable modifier that we created. Its buttons are not tappable until I drag the navbar a little; this also never happened on visionOS 1.0 but always happens on visionOS 2.0+, as I've tested in the simulator with different versions. Could these problems be related to the hand-tracking service we use from ARKit? Sometimes when we stop the trackers, the app works as intended.
0 replies · 0 boosts · 124 views · 4d
Xcode 16 cannot load AR scenes from .rcproject files
Ever since updating to Xcode 16, my AR app doesn't compile because Xcode no longer recognizes the .rcproject files used to load the AR experiences in the iOS app. The .rcproject files were authored in Reality Composer on iPadOS. The expected behavior is described in this official Apple documentation article: https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file How do I submit a ticket to Apple?
0 replies · 0 boosts · 98 views · 4d
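
On the ticket question: Apple takes bug reports through Feedback Assistant (https://feedbackassistant.apple.com). As a stopgap while .rcproject support is broken, scenes re-exported from Reality Composer as .reality or .usdz files still load with the standard entity initializer; a minimal sketch, assuming a bundled file named "Experience":

import RealityKit

// Sketch: load a scene exported from Reality Composer as .usdz or .reality.
// "Experience" is a hypothetical resource name in the app's main bundle.
func loadExperience() async throws -> Entity {
    try await Entity(named: "Experience")
}
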
How can I access the player’s camera vector in visionOS, specifically using RealityKit?
How can I access the player’s camera vector in visionOS, specifically using RealityKit? In Unity and other game engines, there’s often an API like Camera.main.transform.forward for this purpose. I’ve found the head anchor but haven’t identified a way to obtain the forward vector in RealityKit. Is there a related API for this? Any guidance would be greatly appreciated. Thanks!
1 reply · 0 boosts · 226 views · 1w
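
For reference, one way to get this (a sketch, assuming an ARKitSession already running a WorldTrackingProvider): query the device anchor and take the negated third column of its transform, since RealityKit is right-handed with -Z as forward.

import ARKit
import QuartzCore

// Sketch: a Camera.main.transform.forward equivalent for visionOS.
func deviceForward(_ worldTracking: WorldTrackingProvider) -> SIMD3<Float>? {
    guard let device = worldTracking.queryDeviceAnchor(
        atTimestamp: CACurrentMediaTime()) else { return nil }
    let m = device.originFromAnchorTransform
    // Column 2 is the device's local +Z (backward) axis; negate it.
    return -SIMD3<Float>(m.columns.2.x, m.columns.2.y, m.columns.2.z)
}
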
Need to get the visionOS player’s camera forward vector at runtime
How can I access the player’s camera vector in visionOS, specifically using RealityKit? In Unity and other game engines, there’s often an API like Camera.main.transform.forward for this purpose. I’ve found the head anchor but haven’t identified a way to obtain the forward vector in RealityKit. Is there a related API for this? Any guidance would be greatly appreciated. Thanks!
1 reply · 0 boosts · 210 views · 1w
How to use `unproject(_:from:to:ontoPlane:)` of Content of RealityView
When using ARView in RealityKit, I can write

let results = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any)

to get the 3D position of where I tap on a plane. In iOS 18 we can use RealityView, and I found that unproject(_:from:to:ontoPlane:) may implement the same function, but I don't know how to set the ontoPlane parameter. Can someone help me with some code snippets?
1 reply · 0 boosts · 177 views · 1w
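
A sketch of one reading of the API, untested: ontoPlane appears to take a float4x4 whose X-Z plane defines the target plane, so the identity matrix would mean the horizontal plane through the world origin (y == 0), and for another plane you would presumably pass that plane entity's transformMatrix(relativeTo: nil).

import SwiftUI
import RealityKit

// Sketch, untested assumption: identity transform == the y == 0 floor plane.
func pointOnFloor(at screenPoint: CGPoint,
                  content: RealityViewCameraContent) -> SIMD3<Float>? {
    content.unproject(screenPoint,
                      from: .local,
                      to: .scene,
                      ontoPlane: matrix_identity_float4x4)
}
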
RealityKit Rotation Origin
I am trying to rotate topEntity around the origin point of shapeEntity, but have not found a way to do so. topEntity is an entity group that also contains shapeEntity, so I cannot set topEntity as a child of shapeEntity. In Blender I set the correct origin for topEntity, but when I import the USD model into Reality Composer Pro it does not preserve the origin point, and there is no way to set the origin in Reality Composer Pro.

DragGesture()
    .targetedToEntity(where: .has(CustomComponent.self))
    .onChanged({ value in
        let rotation = -Float(value.translation.height)
        let clampedRotation = min(max(rotation, 0), 45)
        if value.entity.name == "grab" {
            if let topEntity = selectedEntity.findEntity(named: "top"),
               let shapeEntity = selectedEntity.findEntity(named: "Shape_1") {
                topEntity.transform.rotation = simd_quatf(
                    angle: clampedRotation * .pi / 180,
                    axis: SIMD3(x: 0, y: 0, z: 1)
                )
            }
        }
    })
1 reply · 0 boosts · 95 views · 1w
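
When reparenting is off the table, the rotation can be done in world space instead; a minimal sketch (the helper is hypothetical, the Entity methods are real):

import RealityKit

// Sketch: rotate an entity around an arbitrary world-space pivot point,
// independent of the entity hierarchy.
func rotate(_ entity: Entity, around pivot: SIMD3<Float>, by rotation: simd_quatf) {
    let offset = entity.position(relativeTo: nil) - pivot
    entity.setPosition(pivot + rotation.act(offset), relativeTo: nil)
    entity.setOrientation(rotation * entity.orientation(relativeTo: nil),
                          relativeTo: nil)
}

Here the pivot would be shapeEntity.position(relativeTo: nil), so topEntity orbits shapeEntity's origin without being its child.
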
RealityKit Crash with Orthographic Camera
In a macOS project with RealityKit and SwiftUI, adding an OrthographicCameraComponent causes crashes both in Xcode Preview and at runtime.

import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        RealityView { content in
            var camera = Entity()
            var component = OrthographicCameraComponent()
            component.scale = 5
            camera.position = [0, 0, 5]
            camera.components.set(component)
            content.add(camera)
            content.add(ModelEntity(mesh: .generateSphere(radius: 1)))
        }
    }
}

#Preview {
    ContentView()
}

Has anyone faced this issue or know of a fix?
3 replies · 1 boost · 210 views · 1w
Parent is changed during gesture interaction producing incorrect relative translation values.
I'm having trouble re-setting the position of a child entity during app reload, even though it appears that I am correctly obtaining and persisting the correct translation values after a drag gesture. The problem occurs when I drag a child element to a new location (persisting those new values), then reload the app to force re-positioning from the persisted translation values. I notice that the parent relationship changes during interaction (tap or drag), which can be seen in the debug statements. I'm wondering if this is related to the problem, or if the parent change is normal during re-rendering and unrelated. My thought process: since we care about relative translation values when persisting, if the parent relationship changes just before persistence, are we persisting and setting the wrong values? Project Link: Private

STEPS TO REPRODUCE
1. Run the app.
2. Drag the pre-loaded stage down the Y axis so that the floor of the stage is more visible to your eye (in order to better visualize the problem).
3. Tap the button in the timeline to create a new project.
4. Drag the only visible element from the left panel onto the timeline (the element is labeled f_works_entity_1). There should now be a green 3D model added to the stage.
5. Drag this green element to a new location (be careful to hover over the green element so that you don't inadvertently drag the stage).
6. Re-run the app to see that the green element is offset to a new location, not the last dragged location.
To reset and try again, delete the project canvas next to the project name (trash button), then restart the app.

Areas of concern: RealityKitView is the only file you may need. Line 119 is where we create new child entities. Lines 185-219 are where we persist and apply persisted values. You can also search FIXME in the file to see areas of concern. Tip: I have a tap gesture on each entity that produces a debug statement with info about the entity and its parent, including IDs.
1 reply · 0 boosts · 247 views · 1w
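
One way to sidestep a changing parent (a sketch, assuming a long-lived root entity such as the stage): persist and restore positions relative to that stable reference instead of the current parent, so a parent swap during gestures cannot corrupt the saved values.

import RealityKit

// Sketch: read and write positions relative to a stable ancestor.
func savedPosition(of entity: Entity, stageRoot: Entity) -> SIMD3<Float> {
    entity.position(relativeTo: stageRoot)
}

func restore(_ entity: Entity, to position: SIMD3<Float>, stageRoot: Entity) {
    entity.setPosition(position, relativeTo: stageRoot)
}
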
Rendering bug when layering transparent textures front and back
If I put an alpha image texture on a model created in Blender and run it in RCP or on visionOS, the rendering between front and back surfaces due to alpha produces unintended results. Details are below. I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro. When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent
・The transparent areas become transparent together with the image behind them
・The order of the images becomes incorrect
Best regards.
1 reply · 0 boosts · 209 views · 1w
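
One lever worth trying for the ordering symptom (a sketch; the entity names are hypothetical): visionOS's ModelSortGroupComponent imposes an explicit draw order on meshes within a group, which can stabilize how stacked transparent surfaces composite.

import RealityKit

// Sketch: draw the inner surface before the outer one.
func applySortOrder(inner: Entity, outer: Entity) {
    let group = ModelSortGroup()
    inner.components.set(ModelSortGroupComponent(group: group, order: 1))
    outer.components.set(ModelSortGroupComponent(group: group, order: 2))
}
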
Attachment always user facing
Hello, is there a way to always have the attachments of a RealityView face the user? For example, in a visionOS app, in an immersive space, we have an attachment. When the user either walks around the attachment or rotates the parent entity, we would like the attachment to automatically rotate to face the user. How do we do this? I expected this to be a trivial feature to implement, since I thought I remembered seeing it as a built-in/opt-in option for attachments, but I cannot find that feature. Any and all recommendations are appreciated, thanks.
2 replies · 0 boosts · 178 views · 2w
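
The built-in option exists on visionOS 2: BillboardComponent keeps an entity oriented toward the user. A minimal sketch ("label" is a hypothetical attachment identifier):

import SwiftUI
import RealityKit

struct FacingAttachmentView: View {
    var body: some View {
        RealityView { content, attachments in
            if let label = attachments.entity(for: "label") {
                label.position = [0, 1.5, -1]
                // Keeps the attachment facing the user automatically.
                label.components.set(BillboardComponent())
                content.add(label)
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Always facing you")
            }
        }
    }
}

On visionOS 1, the same effect needs a custom System that re-orients the entity toward the device anchor each frame.
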
How to create a dual-screen AR view using RealityView and SwiftUI on iOS 18
I have this code to create an AR/VR stereo view to be used in a VR box or Google Cardboard. It uses the new iOS 18 RealityView, but it does not act as AR; instead it behaves as static VR on a camera background, so as I move the iPhone the cube moves with it, which shouldn't happen if it's anchored to a plane or to world coordinates.

import SwiftUI
import RealityKit

struct ContentView: View {
    let anchor1 = AnchorEntity(.camera)
    let anchor2 = AnchorEntity(.camera)

    var body: some View {
        HStack(spacing: 0) {
            MainView(anchor: anchor1)
            MainView(anchor: anchor2)
        }
        .background(.black)
    }
}

struct MainView: View {
    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
            let item = ModelEntity(mesh: .generateBox(size: 0.25), materials: [SimpleMaterial()])
            anchor.addChild(item)
            content.add(anchor)
            anchor.position.z = -1.0
            anchor.orientation = .init(angle: .pi/4, axis: [0, 1, 1])
        }
    }
}

The thing is, if I remove .camera like this:

let anchor1 = AnchorEntity()
let anchor2 = AnchorEntity()

it works as AR anchored to world coordinates, but then it only works in the left view, not in both. Meanwhile, this was easy before RealityView and SwiftUI, by sharing the scene and session between views, as in this ARSCNView example:

import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {

    // Create any two ARSCNViews in the storyboard
    // and link each outlet (don't mind dimensions)
    @IBOutlet var sceneView: ARSCNView!
    @IBOutlet var sceneView2: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.session.delegate = self

        // Create a SceneKit box and add it to the scene
        let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0.01)
        let item = SCNNode(geometry: box)
        item.geometry?.materials.first?.diffuse.contents = UIColor.green
        item.position = SCNVector3(0.0, 0.0, -1.0)
        item.orientation = SCNVector4(0, 1, 1, .pi/4.0)
        sceneView.scene.rootNode.addChildNode(item)
    }

    override func viewDidLayoutSubviews() { // TODO: add the 4 buttons
        // Stop the screen from dimming or locking while the app is running
        UIApplication.shared.isIdleTimerDisabled = true

        let screen: CGRect = UIScreen.main.bounds
        let topPadding: CGFloat = self.view.safeAreaInsets.top
        let bottomPadding: CGFloat = self.view.safeAreaInsets.bottom
        let leftPadding: CGFloat = self.view.safeAreaInsets.left
        let rightPadding: CGFloat = self.view.safeAreaInsets.right
        let safeArea = CGRect(x: leftPadding,
                              y: topPadding,
                              width: screen.size.width - leftPadding - rightPadding,
                              height: screen.size.height - topPadding - bottomPadding)

        DispatchQueue.main.async {
            if self.sceneView != nil {
                self.sceneView.frame = CGRect(x: safeArea.size.width * 0 + safeArea.origin.x,
                                              y: safeArea.size.height * 0 + safeArea.origin.y,
                                              width: safeArea.size.width * 0.5,
                                              height: safeArea.size.height * 1)
            }
            if self.sceneView2 != nil {
                self.sceneView2.frame = CGRect(x: safeArea.size.width * 0.5 + safeArea.origin.x,
                                               y: safeArea.size.height * 0 + safeArea.origin.y,
                                               width: safeArea.size.width * 0.5,
                                               height: safeArea.size.height * 1)
            }
        }
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
        sceneView2.scene = sceneView.scene
        sceneView2.session = sceneView.session
    }
}

And here is the video for it.
1 reply · 0 boosts · 165 views · 2w
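
A likely explanation for the one-view behavior, with a sketch: a RealityKit Entity can belong to only one scene, and each RealityView owns its own scene, so adding the same anchor to both views silently reparents it into the second one. Cloning the hierarchy gives each view its own copy (with the caveat that runtime changes must then be applied to both):

import SwiftUI
import RealityKit

// Sketch: MainView is the view from the post; the right eye gets a clone.
struct StereoContentView: View {
    let source = AnchorEntity(.camera)

    var body: some View {
        HStack(spacing: 0) {
            MainView(anchor: source)
            MainView(anchor: source.clone(recursive: true))
        }
        .background(.black)
    }
}
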
ARVR RealityView shows the entity closer to the camera in the left view than in the right view
I have this code to create an AR/VR stereo view to be used in a VR box or Google Cardboard. It uses the new iOS 18 RealityView, but for some reason the left side shows the entity (box) closer to the camera than the right side, which makes the two views non-identical. Is this a bug that needs to be fixed, or am I doing something wrong? Thanks. Here is the code:

import SwiftUI
import RealityKit

struct ContentView: View {
    let anchor1 = AnchorEntity(.camera)
    let anchor2 = AnchorEntity(.camera)

    var body: some View {
        HStack(spacing: 0) {
            MainView(anchor: anchor1)
            MainView(anchor: anchor2)
        }
        .background(.black)
    }
}

struct MainView: View {
    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
            let item = ModelEntity(mesh: .generateBox(size: 0.25), materials: [SimpleMaterial()])
            anchor.addChild(item)
            content.add(anchor)
            anchor.position.z = -1.0
            anchor.orientation = .init(angle: .pi/4, axis: [0, 1, 1])
        }
    }
}

And here is the view.
0 replies · 0 boosts · 162 views · 2w
RealityView to show two screens of AR in iOS 18/macOS 15 using SwiftUI
I have an issue using RealityView to show two screens of AR. I did succeed in making it work as non-AR, but now my code is not working. It does work using Storyboard and Swift with SceneKit, so why doesn't it work in RealityView?

import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        HStack(spacing: 0) {
            MainView()
            MainView()
        }
        .background(.black)
    }
}

struct MainView: View {
    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            let item = ModelEntity(mesh: .generateBox(size: 0.2), materials: [SimpleMaterial()])
            content.camera = .spatialTracking
            anchor.addChild(item)
            anchor.position = [0.0, 0.0, -1.0]
            anchor.orientation = .init(angle: .pi/4, axis: [0, 1, 1])
            // Add the anchor to the scene
            content.add(anchor)
        }
    }
}
2 replies · 0 boosts · 153 views · 2w
RealityView Attachments on iOS 18 & Visually Appealing AR Labeling Alternatives
I want to use SwiftUI views as RealityKit entities to display AR labels within a RealityKit scene. The labels can be more complicated than just text in a window, as they might include images, dynamic text, animations, WebViews, etc. visionOS enables this through RealityView attachments, and RealityView is supported on iOS 18, so I tried running the visionOS RealityView attachments code samples on iOS 18. However, the code below gives errors on iOS 18:

import SwiftUI
import RealityKit

struct PassportRealityView: View {
    let qrCodeCenter: SIMD3<Float>
    let assetID: String

    var body: some View {
        RealityView { content, attachments in
            // Set up your AR content, such as markers or 3D models
            if let qrAnchor = try? await Entity(named: "QRAnchor") {
                qrAnchor.position = qrCodeCenter
                content.add(qrAnchor)
            }
        } attachments: {
            Attachment(id: "passportTextAttachment") {
                Text(assetID)
                    .font(.title3)
                    .foregroundColor(.white)
                    .background(Color.black.opacity(0.7))
                    .padding(5)
                    .cornerRadius(5)
            }
        }
        .frame(width: 300, height: 400)
    }
}

When I remove the "attachments" keyword and its block, the errors mostly go away, but that doesn't help me, as I want to attach SwiftUI views to anchor entities in RealityKit. As I understand it, RealityView attachments are not supported on iOS 18. I wonder if there is any way of showing SwiftUI views as entities on iOS 18 at this point, or am I forced to use text meshes and 3D planes to build the UI? I checked out the RealityUI plugin, but it's too simple for my use case of building complex AR labels. Any advice would be appreciated. Thanks!
0 replies · 0 boosts · 193 views · 2w
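
For iOS today, the text-mesh route the post mentions is workable for simple labels; a minimal sketch using real RealityKit APIs (the helper itself is hypothetical):

import RealityKit
import UIKit

// Sketch: an unlit 3D text label entity. Note the font size maps to scene
// units (metres), not points.
func makeLabel(_ string: String) -> ModelEntity {
    let mesh = MeshResource.generateText(
        string,
        extrusionDepth: 0.001,
        font: .systemFont(ofSize: 0.05),
        containerFrame: .zero,
        alignment: .center,
        lineBreakMode: .byWordWrapping)
    return ModelEntity(mesh: mesh, materials: [UnlitMaterial(color: .white)])
}

Richer labels (images, animations, web content) would still need rendering a SwiftUI/UIKit view to a texture and applying it to a plane, since attachments remain visionOS-only.
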
[ARKit, Reality Composer Pro] Is it possible to load an Immersive Scene after recognizing preregistered images?
Hi! I want to know whether it's possible to load an immersive scene after scanning (recognizing) preregistered images or objects. I tried to load the immersive scene after scanning images and objects, but it didn't work well. Please let me know the solution if it's possible. Here is the ImmersiveView.swift code I tried:

// ImmersiveView.swift
import SwiftUI
import RealityKit
import RealityKitContent // Using the RealityKitContent module

struct ImmersiveView: View {
    @ObservedObject var viewModel: TrackingViewModel
    @State private var immersiveScene: Entity?
    @State private var isToggleOn: Bool = false // Variable for toggle state

    var body: some View {
        ZStack { // Overlay RealityView and UI elements
            RealityView { content in
                if let scene = immersiveScene {
                    content.add(scene)
                    print("Immersive scene successfully added.")
                    if let moneyGunsEntity = scene.findEntity(named: "MoneyGuns") {
                        NotificationCenter.default.post(
                            name: Notification.Name("RealityKit.NotificationTrigger"),
                            object: nil,
                            userInfo: [
                                "RealityKit.NotificationTrigger.Scene": scene,
                                "RealityKit.NotificationTrigger.Identifier": "PlayTimeline"
                            ]
                        )
                        print("PlayTimeline notification sent.")
                    } else {
                        print("MoneyGuns entity not found.")
                    }
                }
            }
            .onAppear {
                Task {
                    if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                        immersiveScene = scene
                    } else {
                        print("Failed to load immersive scene.")
                    }
                }
            }
            VStack {
                Spacer()
                Toggle(isOn: $isToggleOn) { // Add toggle button
                    Text("Toggle Option")
                        .foregroundColor(.white)
                }
                .padding()
                .background(Color.black.opacity(0.7))
                .cornerRadius(8)
                .padding()
            }
        }
    }
}
1 reply · 0 boosts · 292 views · 2w
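
The recognition side can be driven by ARKit's ImageTrackingProvider on visionOS; a sketch, assuming an AR Resource Group named "ReferenceImages" in the asset catalog:

import ARKit

// Sketch: wait for a preregistered image, then load/place the scene.
func watchForImage() async throws {
    let images = ReferenceImage.loadReferenceImages(inGroupNamed: "ReferenceImages")
    let provider = ImageTrackingProvider(referenceImages: images)
    let session = ARKitSession()
    try await session.run([provider])

    for await update in provider.anchorUpdates where update.event == .added {
        // Image recognized: load the immersive scene here.
        print("Detected image at \(update.anchor.originFromAnchorTransform)")
    }
}

For objects rather than images, ObjectTrackingProvider on visionOS 2 follows the same session pattern. Note these providers only deliver data inside a running immersive space.
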
PhotogrammetrySession on non-Pro iPhones
Hello, I'm creating an app that uses the PhotogrammetrySession class to build 3D objects from photographs (https://developer.apple.com/documentation/realitykit/creating-3d-objects-from-photographs). I'm wondering why this class works only on Pro iPhones (12 Pro, 13 Pro, 14 Pro, 15 Pro and 16 Pro) and on no non-Pro iPhone. My app does not use LiDAR, so that's not the problem. I thought it could be power-related, but the A18 SoC in the iPhone 16 is more powerful than the A14 Bionic in the iPhone 12 Pro (I could also mention the iPhone 13 Pro and iPhone 14, which both have the A15 Bionic, whereas only the first one is compatible). Did I miss something that could explain these restrictions? Is there any plan to make this class usable by every iPhone powerful enough to run it? Thanks in advance for answering.
0 replies · 0 boosts · 120 views · 2w
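
Whatever the reason, the supported-device set is best probed at runtime rather than hard-coded; the class exposes a check for exactly this. (The Pro-only pattern lines up with the LiDAR-equipped device list, so the gating may relate to depth capture rather than raw compute; that is an inference, not something the documentation states.)

import RealityKit

// Sketch: gate the photogrammetry feature on the runtime check.
func photogrammetryAvailable() -> Bool {
    PhotogrammetrySession.isSupported
}
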
Xcode 16.2 Beta fails to build visionOS app RealityContent assets
I've just updated to macOS 15.2 Beta and Xcode 16.2 Beta and noticed I can't build my visionOS app as before. I have a separate /Volumes/Development APFS volume for Xcode projects, plus a Relative setting for DerivedData so that it is located in each project's directory – in this case, a subfolder on the volume mentioned. When attempting to build the app in either Xcode 16.1 or 16.2 Beta with the visionOS 2.1 SDK, I get the following error:

Finished processSwiftFiles() with a failure 'You don’t have permission to save the file “CustomComponentUSDInitializers.usda” in the folder “RealityAssetsGenerated”.'
Sandbox: realitytool(6216) deny(1) file-write-create /Volumes/Development/travel-visionos/DerivedData/Tripomatic/Build/Intermediates.noindex/RealityKitContent.build/Debug-xrsimulator/RealityKitContent_RealityKitContent.build/DerivedSources/RealityAssetsGenerated/CustomComponentUSDInitializers.usda.sb-02874eb3-PS4qfZ
Failure creating schema - 'You don’t have permission to save the file “CustomComponentUSDInitializers.usda” in the folder “RealityAssetsGenerated”.'

This definitely worked with 15.0/16.0 and 15.1/16.1, as I've even published the app to the App Store. It seems to be related to sandboxing & permissions for the Xcode tools, and I can hardly work around it. I have Xcode added to the Developer Tools and Full Disk Access sections in Privacy & Security, yet that doesn't change anything in relation to this issue. When I move the project to a subfolder in my home directory, the build succeeds. The same applies to switching the DerivedData setting back to its default, so that derived data is generated in the ~/Library/Developer folder for all projects opened in Xcode. Thus, either macOS or Xcode now has an issue with letting realitytool generate files on my dedicated volume. Should I consider this an Xcode or macOS Beta issue? Report it via Feedback Assistant, maybe?
0 replies · 0 boosts · 165 views · 2w