RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

200 Posts

Adapt distance/depth of view relative to user
Hi, I'm currently working on messages that should appear in front of the user depending on the system state of my visionOS app. How can I change the distance of the message relative to the user if the message is displayed as a View? Or is this only possible if I create an entity for the message and then apply .setPosition(_:relativeTo:), e.g. with a head anchor? Currently I can change the x and y coordinates of the view, since it works within a 2D space, but as I intend to display that view in my immersive space, it would be great if I could place the message a little further away, as it is currently a bit too close in the user's view. If there is a solution without the use of entities, I would prefer that one. Thank you for your help! Below is an example:

Feedback.swift

import SwiftUI

struct Feedback: View {
    let message: String

    var body: some View {
        VStack {
            Text(message)
        }
        .position(x: 0, y: -850) // how to adapt distance/depth relative to user in UI?
    }
}

ImmersiveView.swift

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State private var feedbackMessage = "Hello World"

    public var body: some View {
        VStack {}
            .overlay(
                Feedback(message: feedbackMessage)
            )
        RealityView { content in
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            let spatialTrackingSession = SpatialTrackingSession()
            _ = await spatialTrackingSession.run(configuration)

            // Head
            let headEntity = AnchorEntity(.head)
            content.add(headEntity)
        }
    }
}
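Not part of the original post, but one possible direction: in an immersive space a SwiftUI view can be supplied as a RealityView attachment and then positioned in 3D like any entity, for example parented to a head anchor and pushed back along -Z. A minimal sketch; the attachment id "feedback" and the 1.5 m offset are illustrative assumptions:

import SwiftUI
import RealityKit

struct FeedbackImmersiveView: View {
    var body: some View {
        RealityView { content, attachments in
            // Anchor that follows the user's head.
            let headAnchor = AnchorEntity(.head)
            content.add(headAnchor)

            // Place the SwiftUI attachment 1.5 m in front of the user (-Z is away from the viewer).
            if let feedbackEntity = attachments.entity(for: "feedback") {
                feedbackEntity.position = [0, 0, -1.5]
                headAnchor.addChild(feedbackEntity)
            }
        } attachments: {
            Attachment(id: "feedback") {
                Text("Hello World")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}

A head-anchored attachment like this stays locked to the user's view; for a world-placed message, the attachment entity could instead be added directly to the scene at a fixed position.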
2 replies · 0 boosts · 328 views · Sep ’24
How to remove the impact of AR real environment light sources on materials
The 3D furniture model I built uses some smooth, specularly reflective materials. I want them to reflect only the HDR image of the ImageBasedLight component I set myself, not the light of the real AR environment. How can I achieve this in the following scenarios? First, how do I avoid being affected by the real environment's lighting when using PBR materials? Second, when using Shader Graph, how can EnvironmentRadiance be kept unaffected by the real environment's lighting?
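One direction worth trying (an assumption on my side, not something confirmed in the thread): in newer RealityKit (visionOS, and iOS/macOS with RealityKit 4) an entity can be given its own image-based light and marked as a receiver of that light, so its reflections come from your HDR rather than the environment probe. Whether this fully suppresses the real-environment contribution for PBR materials needs to be verified on device; the resource name passed in is a placeholder:

import RealityKit

// Load a custom HDR environment and use it as the IBL for this entity.
func applyCustomIBL(to model: ModelEntity, lightResourceName: String) async throws {
    let environment = try await EnvironmentResource(named: lightResourceName)
    model.components.set(ImageBasedLightComponent(source: .single(environment)))
    // Point the model at its own IBL entity so it samples this light for reflections.
    model.components.set(ImageBasedLightReceiverComponent(imageBasedLight: model))
}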
2 replies · 0 boosts · 359 views · Sep ’24
How to display spatial images or videos on swiftUI view
Hi! I am building a visionOS app. I want to embed spatial videos or pictures into my UI, but I have run into problems and have no way to implement the idea. I have tried the following: Using AVPlayerViewController to play a spatial video, but it only displays the video spatially when modalPresentationStyle = .fullScreen; once embedded in a SwiftUI view, it shows as a normal 2D video. I also tried the method from https://developer.apple.com/forums/thread/733813, using a shader graph to display spatial images, but the material can only be attached to an entity, and I don't know how to make it show up in a view. I also tried CAMetalLayer with a custom shader to display spatial images, but I couldn't find anything like Unity's unity_StereoEyeIndex to switch rendering per eye. Does anyone have a good solution to my problem? Thank you!
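Not a confirmed answer, just an avenue that might be worth exploring: RealityKit's VideoPlayerComponent can present spatial (MV-HEVC) video on an entity, and that entity can live inside a RealityView that is laid out like any other SwiftUI view. The file name below is a placeholder, and whether the stereo effect survives inside a flat window (rather than a volume or immersive space) would need testing:

import SwiftUI
import RealityKit
import AVFoundation

struct SpatialVideoInlineView: View {
    var body: some View {
        RealityView { content in
            // Hypothetical local spatial (MV-HEVC) video file bundled with the app.
            guard let url = Bundle.main.url(forResource: "SpatialClip", withExtension: "mov") else { return }
            let player = AVPlayer(url: url)

            // VideoPlayerComponent handles decoding and presentation of the video on the entity.
            let videoEntity = Entity()
            videoEntity.components.set(VideoPlayerComponent(avPlayer: player))
            content.add(videoEntity)
            player.play()
        }
        .frame(width: 400, height: 300)
    }
}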
0 replies · 0 boosts · 355 views · Sep ’24
Not getting camera frame using enterprise API in Vision Pro
I don't get a cameraFrame from cameraFrameUpdates in my Vision Pro app. Why is nothing arriving, and where am I going wrong in the code? Please guide me.

for await cameraFrame in cameraFrameUpdates {
    print("cameraFrame:: \(cameraFrame)")
}

var body: some View {
    VStack {
        image
            .resizable()
            .scaledToFit()
        if (self.finalImage != nil) {
            self.finalImage!
                .resizable()
                .scaledToFit()
        } else {
            image
                .resizable()
                .scaledToFit()
        }
    }
    .task {
        if #available(visionOS 2.0, *) {
            guard CameraFrameProvider.isSupported else {
                print("CameraFrameProvider not supported.")
                return
            }
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [CameraFrameProvider.CameraPosition.left])
            let cameraFrameProvider = CameraFrameProvider()
            do {
                try await arkitSession.run([cameraFrameProvider])
            } catch {
                guard let sessionError = error as? ARKitSession.Error else {
                    preconditionFailure("ARKitSession.run() returned a non-session error: \(error)")
                }
                print("ARKitSession.run() returned a session error: \(sessionError)")
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                preconditionFailure("Failed to get an async sequence for the first format.")
            }
            print("cameraFrameUpdates:: \(cameraFrameUpdates)")
            for await cameraFrame in cameraFrameUpdates {
                print("cameraFrame:: \(cameraFrame)")
                print("Camera Frame ::: LEFT :: \(cameraFrame.sample(for: .left))")
                guard let leftSample = cameraFrame.sample(for: .left) else {
                    print("CameraFrameProviderSample - Nil camera frame left sample")
                    continue
                }
                self.pixelBuffer = leftSample.pixelBuffer
                print(" ======== PIXEL BUFFER ::: \(self.pixelBuffer) ========")
                self.finalImage = self.setImage()
            }
        } else {
            // Fallback on earlier versions
        }
    }
}
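Not an answer from the thread, but one common cause of an async sequence that never yields frames is missing or denied main-camera authorization (the Enterprise API entitlement plus a runtime grant). A minimal check before running the provider might look like this (visionOS 2 API; the function name is illustrative):

import ARKit

// Main-camera access requires the Enterprise entitlement and user approval before frames arrive.
func startCameraFeed(session: ARKitSession, provider: CameraFrameProvider) async {
    var status = await session.queryAuthorization(for: [.cameraAccess])[.cameraAccess]
    if status != .allowed {
        status = await session.requestAuthorization(for: [.cameraAccess])[.cameraAccess]
    }
    guard status == .allowed else {
        print("Camera access not granted; cameraFrameUpdates will never yield frames.")
        return
    }
    do {
        try await session.run([provider])
    } catch {
        print("ARKitSession.run failed: \(error)")
    }
}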
3 replies · 0 boosts · 432 views · Sep ’24
How can I pause ARView?
In my app, I have an ARView that has cameraMode set to nonAR. I occasionally hide the ARView when it is not needed and reveal it again later. While the ARView is hidden, I'd like to pause the animation to save iPhone battery life. I'd also like to do this when I know that animation in my scene has paused and the contents of the view, although still visible, are static. This was possible using SceneKit, but I can't seem to find an equivalent way to do it using RealityKit. At least as of iOS 18, a hidden ARView with an empty scene appears to use approximately 30% of the CPU. How can I pause ARView so that it won't use the battery unnecessarily? Thank you for considering this question.
2 replies · 0 boosts · 498 views · Sep ’24
RealityKit/ARKit Environment Texturing broken on iOS 18
Devices running iOS 18 using RealityKit do not seem to receive lighting supplied via ARKit Environment Texturing (https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration/2977509-environmenttexturing). Instead, just a default IBL is used by RealityKit. This happens with RealityView as well as ARView. It also happens when I explicitly opt in to environment texturing:

let worldTrackingConfig = ARWorldTrackingConfiguration()
worldTrackingConfig.environmentTexturing = .automatic
arView.session.run(worldTrackingConfig)

Even the Xcode AR template has this issue. I'm attaching screenshots of the sample app running on iOS 18, where it's broken, and on iOS 17, where it works as expected. I hope this can get resolved quickly since I see it as a major regression.

Feedback ID: FB15091335

UPDATE: It works on my older iPhone XS (iOS 18 22A5282m). Broken on iPad Pro (11-inch) (3rd generation) (iPadOS 18.0 (22A5350a)). Maybe it's related to LiDAR?

Thank you!

iOS 17 (works):
iOS 18 (broken):
2 replies · 1 boost · 505 views · Sep ’24
Xcode 16 – Symbol not found: ShapeResource generateConvex
Hi, I have a RealityKit app that I am building with Xcode 16. The app has a minimum deployment target of iOS 17. If I run it on an iOS 17 device the app crashes:

dyld[15716]: Symbol not found: _$s10RealityKit13ShapeResourceC14generateConvex4fromAcA04MeshD0C_tYaKFZ
Referenced from: …
Expected in: …/System/Library/Frameworks/RealityFoundation.framework/RealityFoundation

My code looks something like this:

@available(iOS, introduced: 13.0, obsoleted: 18.0)
@MainActor @preconcurrency
func generateNonAsyncConvexShapeResource(from meshResource: MeshResource) throws -> ShapeResource {
    ShapeResource.generateConvex(from: meshResource)
}

@available(iOS 18.0, *)
func generateConvexShapeAsync(from meshResource: MeshResource) async throws -> ShapeResource {
    // This will only be available for iOS 18 and above
    return try await ShapeResource.generateConvex(from: meshResource)
}

if let meshResource = try? modelEntity.model?.mesh.applying(transform: transform.matrix) {
    if #available(visionOS 1.0, iOS 18.0, *) {
        try? await generateConvexShapeAsync(from: meshResource) // await shapeResources.append(.generateConvex(from: meshResource))
    } else {
        try? generateNonAsyncConvexShapeResource(from: meshResource)
    }
}

So I actually do check for the system and only call the async variant on iOS 18. Any hints how to fix that? Thanks!
2 replies · 0 boosts · 661 views · Sep ’24
ShaderGraphMaterial with Occlusion Surface Output fails to load on iOS and macOS
A ShaderGraphMaterial with an Occlusion Surface Output generated with RealityComposer 2 fails to load on iOS 18 and macOS 15 with the following error:

RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)

This happens with both https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit) and https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)

RealityView { content in
    do {
        let bgEntity = ModelEntity(mesh: .generateCone(height: 0.5, radius: 0.1), materials: [SimpleMaterial(color: .red, isMetallic: true)])
        bgEntity.position.z = -0.2
        content.add(bgEntity)

        let occlusionMaterial = try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial", from: "OcclusionMaterial")
        let testEntity = ModelEntity(mesh: .generateSphere(radius: 0.4), materials: [occlusionMaterial])
        content.add(testEntity)
        content.cameraTarget = testEntity
    } catch {
        print("Shader Graph Load Error:")
        dump(error)
    }
}
.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)

Feedback ID: FB15081296
0 replies · 0 boosts · 306 views · Sep ’24
Blurred Background (RealityKit) Shader Graph Node not working on iOS/macOS
The ShaderGraph node Blurred Background (RealityKit) – https://developer.apple.com/documentation/shadergraph/realitykit/blurred-background-(realitykit) – works fine within the Reality Composer Pro 2 editor but isn't working on iOS 18 or macOS 15. Instead of the blurred content, it just renders as opaque in a single color (Screenshot 2). Interestingly, it also fails to render within Reality Composer Pro when no other entities are within the scene, e.g. when only a background skybox is set.

Expected behavior: It would be great if this node worked the same way as it does on visionOS, since this would allow for really interesting and nice effects for scenes.

Feedback ID: FB15081190
0 replies · 1 boost · 316 views · Sep ’24
OcclusionMaterial renders plain black when custom skybox environment is set
Using OcclusionMaterial on macOS and iOS works fine in non-AR mode when I set the background to just a simple color (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/color), but when I set a custom skybox (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/background-swift.struct/skybox(_:)) the OcclusionMaterial renders as fully black. I would expect it to properly occlude the content and show through the skybox behind it. This happens with both ARView and RealityView, on the current iOS/macOS betas as well as on older systems, e.g. iOS 17 and macOS Sonoma.

Feedback ID: FB15081053
0 replies · 0 boosts · 231 views · Sep ’24
realitytool requires Metal for this operation and it is not available in this build environment
Hello, I'm getting started with Xcode Cloud for my project. Since I upgraded to the macOS Sequoia beta, Xcode 16 now refuses to archive builds for TestFlight. Somewhere very late in the build process I get the following error:

realitytool requires Metal for this operation and it is not available in this build environment

The log says this happens at:

Compile Skybox urban.skybox

My project uses RealityKit. How can I fix this issue? Thanks!
2 replies · 1 boost · 392 views · Sep ’24
Open the vision pro camera using Enterprise API and view it in application window
I want to see the Vision Pro camera view in my application window. I wrote some code based on Apple's samples, but I'm stuck at the CVPixelBuffer: how do I convert the pixel buffer into a video frame I can display?

Button("Camera Feed") {
    Task {
        if #available(visionOS 2.0, *) {
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
            let cameraFrameProvider = CameraFrameProvider()
            var arKitSession = ARKitSession()
            var pixelBuffer: CVPixelBuffer?

            await arKitSession.queryAuthorization(for: [.cameraAccess])
            do {
                try await arKitSession.run([cameraFrameProvider])
            } catch {
                return
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                return
            }
            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                    continue
                }
                //====
                print("=========================")
                print(mainCameraSample.pixelBuffer)
                print("=========================")
                // self.pixelBuffer = mainCameraSample.pixelBuffer
            }
        } else {
            // Fallback on earlier versions
        }
    }
}

I want to convert mainCameraSample.pixelBuffer into video. Could you please guide me?
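Not from the thread, but a common way to preview each CVPixelBuffer in SwiftUI is to convert it to a CGImage via Core Image; a minimal sketch (the helper name is illustrative):

import CoreImage
import CoreVideo
import SwiftUI

let ciContext = CIContext()

// Convert one camera pixel buffer into a SwiftUI Image for display.
func makeImage(from pixelBuffer: CVPixelBuffer) -> Image? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return Image(decorative: cgImage, scale: 1.0)
}

Publishing the result to a @State or ObservableObject property on each frame then gives a live camera preview inside an ordinary SwiftUI view.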
2 replies · 0 boosts · 386 views · Sep ’24
Casting shadows on the ground
In the visionOS 2 beta, I have a character loaded from a Reality Composer Pro scene standing on the floor, but he isn't casting a shadow on the floor. I added a GroundingShadowComponent in RealityView, and he does cast shadows on himself (e.g., his hands cast shadows on his shoes), but I don't see any shadow on the floor. Do I need to enable something to have my character cast a shadow on the real-world floor?
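One thing that often matters here (an assumption, not a confirmed answer from the thread): GroundingShadowComponent is applied per model entity, so it may need to be set on every descendant that carries a ModelComponent rather than only on the character's root. A small helper sketch:

import RealityKit

// Attach a grounding shadow to every mesh-bearing entity in the hierarchy.
func enableGroundingShadows(for entity: Entity) {
    if entity.components.has(ModelComponent.self) {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    }
    for child in entity.children {
        enableGroundingShadows(for: child)
    }
}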
1 reply · 0 boosts · 347 views · Sep ’24
rendering MV HEVC encoded stereoscopic video frames using RealityKit VideoMaterial and AVSampleBufferVideoRenderer on VisionOS
Hi, I'm trying to use this example (https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc_and_spatial_video) to encode a stereoscopic (left eye, right eye) video frame using MV-HEVC. The sample project creates tagged buffers for the left and right eyes and uses a writer to write the MV-HEVC encoded video buffers. But after I get the right and left tagged buffers, I want to use VideoMaterial and its AVSampleBufferVideoRenderer to enqueue these video frames. If I render the MV-HEVC encoded left-eye sample buffer and the right-eye sample buffer sequentially, will the AVSampleBufferVideoRenderer render it as a stereoscopic view? How does this work with VideoMaterial and AVSampleBufferVideoRenderer? Thanks!
1 reply · 0 boosts · 404 views · Sep ’24
VisionOS crashes Loading Entities from Disk Bundles with EXC_BREAKPOINT
Summary: I’m working on a visionOS project where I need to dynamically load a .bundle file containing RealityKit content from the app’s Application Support directory. The .bundle is saved to disk after being downloaded or retrieved as an On-Demand Resource (ODR).

Sample project with the issue: GitHub repo. Run the test-odr target to use the local bundle and reproduce the crash.

Overall problem:
Setup: Add a .bundle named RealityKitContent_RealityKitContent.bundle to the app’s resources. This bundle contains a Reality file with two USDAs: “Immersive” and “Scene”.
Save to disk: save the bundle to the Application Support directory, ensuring that the file is correctly copied and saved.
Load the bundle: load the bundle from the saved URL using Bundle(url: bundleURL) to initialize the Bundle object.
Load entity from bundle: load a specific entity (“Scene”) from the bundle. When trying to load the entity using let storedEntity = try await Entity(named: "Scene", in: bundle), the app crashes with an EXC_BREAKPOINT error.

contentsOf method issue: If I use the Entity.load(contentsOf: realityFileURL, withName: entityName) method, it always loads the first root entity found (in this case, “Immersive”) rather than “Scene”, even when specifying the entity name. This is why I want to use the Bundle to load entities by name more precisely.

Issue: The crash consistently occurs on the Entity(named: "Scene", in: bundle) line. I have verified that the bundle exists and is accessible at the specified path and that it contains the expected .reality file with multiple entities (“Immersive” and “Scene”). The error code I get is EXC_BREAKPOINT (code=1, subcode=0x1d135d4d0).

What I’ve tried:
• Ensured the bundle is properly saved and accessible.
• Checked that the bundle is initialized correctly from the URL.
• Tested loading the entity using the contentsOf method, which works fine but always loads the “Immersive” entity, ignoring the specified name. Hence, I want to use the Bundle-based approach to load multiple USDA entities selectively.

Question: Has anyone faced a similar issue or knows why loading entities using Entity(named:in:) from a disk-based bundle causes this crash? Any advice on how to debug or resolve this, especially for managing multiple root entities in a .reality file, would be greatly appreciated.
1 reply · 0 boosts · 425 views · Sep ’24
Post Notification to RCP but Timeline won't fire
I am trying to use onNotification in a BehaviorComponent to fire up my composed timeline actions, which consist of one TransformTo action and one Hide action, followed by a Notification action indicating that the other two actions are finished. With this post, I successfully send a notification to RCP to fire up my timeline with an identifier:

NotificationCenter.default.post(
    name: NSNotification.Name("RealityKit.NotificationTrigger"),
    object: nil,
    userInfo: [
        "RealityKit.NotificationTrigger.Scene": scene,
        "RealityKit.NotificationTrigger.Identifier": "onSomethingStart"
    ]
)

On the other hand, to subscribe to that Notification action, I append an onReceive below my RealityView, and I successfully receive my notification:

private let notificationTrigger = NotificationCenter.default.publisher(
    for: Notification.Name("RealityKit.NotificationTrigger"))

.onReceive(notificationTrigger) { out in
    guard let entity = out.userInfo?["RealityKit.NotificationTrigger.SourceEntity"] as? Entity,
          let notificationName = out.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String else { return }
    debugPrint("Received notification: \(notificationName), entity name: \(entity.name)")
}

This means my timeline is fired up, because I can receive the notification sent from my timeline. But the other two actions just don't appear to be working. When I play the timeline in RCP it works fine. Anything I missed to make it tick?

Xcode 16.1 beta, visionOS beta 9
2 replies · 0 boosts · 440 views · Sep ’24
Collision detection between two entities not working
Hi, I've tried to implement collision detection between my left index finger (represented by a sphere) and a simple 3D rectangular box. The sphere on my left index finger goes through the object, but no collision seems to take place. What am I missing? Thank you very much for your consideration! Below is my code:

App.swift

import SwiftUI

@main
private struct TrackingApp: App {
    public init() {
        ...
    }

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }
        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView()
        }
    }
}

ImmersiveView.swift

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State private var subscriptions: [EventSubscription] = []

    public var body: some View {
        RealityView { content in
            /* LEFT HAND */
            let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
            let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01), materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
            leftHandIndexFingerEntity.generateCollisionShapes(recursive: true)
            leftHandIndexFingerEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateSphere(radius: 0.01)])
            leftHandIndexFingerEntity.name = "LeftHandIndexFinger"
            content.add(leftHandIndexFingerEntity)

            /* 3D RECTANGLE */
            let width: Float = 0.7
            let height: Float = 0.35
            let depth: Float = 0.005
            let rectangleEntity = ModelEntity(mesh: .generateBox(size: [width, height, depth]), materials: [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)])
            rectangleEntity.transform.rotation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
            let rectangleAnchor = AnchorEntity(world: [0.1, 0.85, -0.5])
            rectangleEntity.generateCollisionShapes(recursive: true)
            rectangleEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateBox(size: [width, height, depth])])
            rectangleEntity.name = "Rectangle"
            rectangleAnchor.addChild(rectangleEntity)
            content.add(rectangleAnchor)

            /* Collision Handling */
            let subscription = content.subscribe(to: CollisionEvents.Began.self, on: rectangleEntity) { collisionEvent in
                print("Collision detected between \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
            }
            subscriptions.append(subscription)
        }
    }
}
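Not a confirmed fix, but one thing worth checking: on visionOS, hand AnchorEntity transforms (and with them their collisions) generally require an active SpatialTrackingSession configured for hand tracking, similar to the session shown in the first post above. A minimal sketch of starting one before building the anchors (the function name is illustrative):

import RealityKit

// Start hand tracking so hand-anchored entities get real transforms and can participate in collisions.
func startHandTracking() async -> SpatialTrackingSession {
    let session = SpatialTrackingSession()
    let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
    _ = await session.run(configuration)
    return session
}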
1 reply · 0 boosts · 369 views · Sep ’24
Forward and reverse animations with RealityKit on Vision Pro
Hello! I'm trying to play an animation with a toggle button. When the button is toggled, the animation either plays forward from the first frame (.speed = 1) OR plays backward from the last frame (.speed = -1), so if the button is toggled when the animation is only halfway through, it 'jumps' to the first or last frame. The animation is 120 frames, and I want the position in playback to be preserved when the button is toggled, so the animation reverses or continues forward from whatever frame it was currently on. Any tips on implementation? Thanks!

import SwiftUI
import RealityKit
import RealityKitContent

struct ModelView: View {
    var isPlaying: Bool
    @State private var scene: Entity? = nil
    @State private var unboxAnimationResource: AnimationResource? = nil

    var body: some View {
        RealityView { content in
            // Specify the name of the Entity you want
            scene = try? await Entity(named: "TestAsset", in: realityKitContentBundle)
            scene!.generateCollisionShapes(recursive: true)
            scene!.components.set(InputTargetComponent())
            content.add(scene!)
        }
        .installGestures()
        .onChange(of: isPlaying) {
            if (isPlaying) {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = 1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            } else {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = -1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            }
        }
    }
}

Thanks!
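Not from the thread, but one approach worth trying: keep the AnimationPlaybackController returned by playAnimation and flip its speed property when the toggle changes, so playback reverses from wherever it currently is instead of restarting from a regenerated definition. A sketch under that assumption (class and method names are illustrative):

import RealityKit

// Holds the live controller so the toggle can reverse playback in place.
final class AnimationToggler {
    private var controller: AnimationPlaybackController?

    func start(on entity: Entity) {
        guard let animation = entity.availableAnimations.first else { return }
        controller = entity.playAnimation(animation, startsPaused: true)
    }

    func setPlayingForward(_ forward: Bool) {
        guard let controller else { return }
        // Flipping speed on the existing controller keeps the current playback time.
        controller.speed = forward ? 1 : -1
        controller.resume()
    }
}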
2 replies · 0 boosts · 468 views · Sep ’24