RealityKit

Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

200 Posts
Post Notification to RCP but Timeline won't fire
I am trying to use an OnNotification trigger in a BehaviorComponent to fire my composed timeline, which consists of a TransformTo action, a Hide action, and a final Notification action indicating that the other two actions have finished. With this post call I successfully send a notification to RCP to fire the timeline by identifier:

NotificationCenter.default.post(
    name: NSNotification.Name("RealityKit.NotificationTrigger"),
    object: nil,
    userInfo: [
        "RealityKit.NotificationTrigger.Scene": scene,
        "RealityKit.NotificationTrigger.Identifier": "onSomethingStart"
    ]
)

To subscribe to the timeline's Notification action, I append an onReceive modifier below my RealityView, and I successfully receive the notification:

private let notificationTrigger = NotificationCenter.default.publisher(
    for: Notification.Name("RealityKit.NotificationTrigger"))

guard let entity = out.userInfo?["RealityKit.NotificationTrigger.SourceEntity"] as? Entity,
      let notificationName = out.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String
else { return }
debugPrint("Received notification: \(notificationName), entity name: \(entity.name)")

So the timeline is being fired, because I receive the notification posted at its end. But the other two actions don't appear to do anything. When I play the timeline in RCP it works fine. What am I missing to make it tick? Xcode 16.1 beta, visionOS beta 9.
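For reference, the two snippets above are typically wired together like this. This is a minimal sketch assembled from the post's own identifiers and the standard NotificationCenter publisher API; the scene name "Scene" is a placeholder, and it is not a confirmed fix for the timeline actions themselves:

import SwiftUI
import Combine
import RealityKit
import RealityKitContent

struct TimelineNotificationView: View {
    // Publisher for the Notification action at the end of the RCP timeline.
    private let notificationTrigger = NotificationCenter.default.publisher(
        for: Notification.Name("RealityKit.NotificationTrigger"))

    var body: some View {
        RealityView { content in
            // Load and add the Reality Composer Pro scene that owns the timeline.
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        .onReceive(notificationTrigger) { output in
            guard let entity = output.userInfo?["RealityKit.NotificationTrigger.SourceEntity"] as? Entity,
                  let name = output.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String
            else { return }
            debugPrint("Received notification: \(name), entity name: \(entity.name)")
        }
    }
}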
Replies: 2 · Boosts: 0 · Views: 450 · Activity: Sep ’24
Collision detection between two entities not working
Hi, I've tried to implement collision detection between my left index finger (represented by a sphere) and a simple 3D rectangular box. The sphere on my left index finger goes through the object, but no collision seems to take place. What am I missing? Thank you very much for your consideration! Below is my code.

App.swift:

import SwiftUI

@main
private struct TrackingApp: App {
    public init() {
        ...
    }

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }
        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView()
        }
    }
}

ImmersiveView.swift:

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State private var subscriptions: [EventSubscription] = []

    public var body: some View {
        RealityView { content in
            /* LEFT HAND */
            let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
            let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01), materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
            leftHandIndexFingerEntity.generateCollisionShapes(recursive: true)
            leftHandIndexFingerEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateSphere(radius: 0.01)])
            leftHandIndexFingerEntity.name = "LeftHandIndexFinger"
            content.add(leftHandIndexFingerEntity)

            /* 3D RECTANGLE */
            let width: Float = 0.7
            let height: Float = 0.35
            let depth: Float = 0.005
            let rectangleEntity = ModelEntity(mesh: .generateBox(size: [width, height, depth]), materials: [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)])
            rectangleEntity.transform.rotation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
            let rectangleAnchor = AnchorEntity(world: [0.1, 0.85, -0.5])
            rectangleEntity.generateCollisionShapes(recursive: true)
            rectangleEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateBox(size: [width, height, depth])])
            rectangleEntity.name = "Rectangle"
            rectangleAnchor.addChild(rectangleEntity)
            content.add(rectangleAnchor)

            /* Collision Handling */
            let subscription = content.subscribe(to: CollisionEvents.Began.self, on: rectangleEntity) { collisionEvent in
                print("Collision detected between \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
            }
            subscriptions.append(subscription)
        }
    }
}
Replies: 1 · Boosts: 0 · Views: 376 · Activity: Sep ’24
Forward and reverse animations with RealityKit on Vision Pro
Hello! I'm trying to play an animation with a toggle button. When the button is toggled, the animation either plays forward from the first frame (.speed = 1) or plays backward from the last frame (.speed = -1), so if the button is toggled when the animation is only halfway through, it 'jumps' to the first or last frame. The animation is 120 frames, and I want the playback position to be preserved when the button is toggled, so the animation reverses or continues forward from whatever frame it is currently on. Any tips on implementation? Thanks!

import SwiftUI
import RealityKit
import RealityKitContent

struct ModelView: View {
    var isPlaying: Bool
    @State private var scene: Entity? = nil
    @State private var unboxAnimationResource: AnimationResource? = nil

    var body: some View {
        RealityView { content in
            // Specify the name of the Entity you want
            scene = try? await Entity(named: "TestAsset", in: realityKitContentBundle)
            scene!.generateCollisionShapes(recursive: true)
            scene!.components.set(InputTargetComponent())
            content.add(scene!)
        }
        .installGestures()
        .onChange(of: isPlaying) {
            if isPlaying {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = 1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            } else {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = -1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            }
        }
    }
}
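One possible direction, sketched under the assumption that AnimationPlaybackController's speed and isComplete properties behave as documented (not verified end to end): keep the controller returned by playAnimation and, while the clip is still playing, just flip its speed so playback reverses from the current frame instead of being regenerated from either end.

@State private var animationController: AnimationPlaybackController? = nil

// A sketch of a replacement for the onChange(of: isPlaying) handler above.
.onChange(of: isPlaying) {
    guard let scene else { return }
    if let controller = animationController, !controller.isComplete {
        // Mid-playback: reverse direction in place, preserving the current frame.
        controller.speed = isPlaying ? 1 : -1
    } else {
        // Not currently playing: start from the appropriate end.
        var definition = scene.availableAnimations[0].definition
        definition.speed = isPlaying ? 1 : -1
        definition.repeatMode = .none
        if let resource = try? AnimationResource.generate(with: definition) {
            animationController = scene.playAnimation(resource)
        }
    }
}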
Replies: 2 · Boosts: 0 · Views: 477 · Activity: Sep ’24
Unable to update IBL at runtime
I seem to be running into an issue in an app I am working on where I am unable to update the IBL for an entity more than once in a RealityKit scene. The app is being developed for visionOS. I have a scene with a model the user interacts with and 360° panoramas as a skybox. These skyboxes can change based on user interaction. I have created an IBL for each of the skyboxes and was intending to swap out the ImageBasedLightComponent and ImageBasedLightReceiverComponent components when updating the skybox in the RealityView's update closure. The first update works as expected, but updating the components after that has no effect. Not sure if this is intended or if I'm just holding it wrong. Would really appreciate any guidance. Thanks.

Simplified example:

// Task spun up from the update closure in RealityView
Task {
    if let information = currentSkybox.iblInformation,
       let resource = try? await EnvironmentResource(named: information.name) {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
        if let iblEntity = content.entities.first(where: { $0.name == "ibl" }) {
            content.remove(iblEntity)
        }
        let newIBLEntity = Entity()
        var iblComponent = ImageBasedLightComponent(source: .single(resource))
        iblComponent.inheritsRotation = true
        iblComponent.intensityExponent = information.intensity
        newIBLEntity.transform.rotation = .init(angle: currentPanorama.rotation, axis: [0, 1, 0])
        newIBLEntity.components.set(iblComponent)
        newIBLEntity.name = "ibl"
        content.add(newIBLEntity)
        parentEntity.components.set([
            ImageBasedLightReceiverComponent(imageBasedLight: newIBLEntity),
            EnvironmentLightingConfigurationComponent(environmentLightingWeight: 0),
        ])
    } else {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
    }
}
Replies: 1 · Boosts: 0 · Views: 271 · Activity: Sep ’24
Can an SDF be rendered using RealityKit?
I'm trying to ray-march an SDF inside a RealityKit surface shader. For the SDF primitive to correctly render with other primitives, the depth of the fragment needs to be set according to the ray-surface intersection point. Is there a way to do that within a RealityKit surface shader? It seems the only values I can set are within surface::surface_properties. If not, can an SDF still be rendered in RealityKit using ray-marching?
Replies: 1 · Boosts: 1 · Views: 475 · Activity: Sep ’24
RealityKit 3D texture
In my Metal-based app, I ray-march a 3D texture. I'd like to use RealityKit instead of my own code. I see there is a LowLevelTexture (beta) where I could specify a 3D texture. However, on the Metal side there doesn't seem to be any way to access a 3D texture (realitykit::texture::textures::custom returns a texture2d). Any workarounds? Could I even do something icky like casting the texture2d to a texture3d in MSL? (Is that even possible?) Could I encode the 3D texture into an argument buffer and get it in that way?
Replies: 1 · Boosts: 0 · Views: 449 · Activity: Aug ’24
Pink Screen with VideoMaterial in ARKit
Hi everyone, I'm developing an ARKit app using RealityKit and encountering an issue where a video displayed on a 3D plane shows up as a pink screen instead of the actual video content. Here's a simplified version of my setup:

func createVideoScreen(video: AVPlayerItem, canvasWidth: Float, canvasHeight: Float, aspectRatio: Float, fitsWidth: Bool = true) -> ModelEntity {
    let width = (fitsWidth) ? canvasWidth : canvasHeight * aspectRatio
    let height = (fitsWidth) ? canvasWidth * (1 / aspectRatio) : canvasHeight
    let screenPlane = MeshResource.generatePlane(width: width, depth: height)
    let videoMaterial: Material = createVideoMaterial(videoItem: video)
    let videoScreenModel = ModelEntity(mesh: screenPlane, materials: [videoMaterial])
    return videoScreenModel
}

func createVideoMaterial(videoItem: AVPlayerItem) -> VideoMaterial {
    let player = AVPlayer(playerItem: videoItem)
    let videoMaterial = VideoMaterial(avPlayer: player)
    player.play()
    return videoMaterial
}

Despite following the standard process, the video plane renders pink. Has anyone encountered this before, or does anyone know what might be causing it? Thanks in advance!
Replies: 5 · Boosts: 0 · Views: 475 · Activity: Sep ’24
How to Drag Objects Separately with Both Hands
I would like to drag two different objects simultaneously, one with each hand. In the following session (6:44), it was mentioned that such an implementation could be achieved using SpatialEventGesture(): https://developer.apple.com/jp/videos/play/wwdc2024/10094/

However, since the location3D obtained from a SpatialEventGesture event is of type Point3D, I'm having trouble converting it for moving objects. It seems like the convert method in the protocol linked below could be used for this conversion, but I'm not quite sure how to implement it: https://developer.apple.com/documentation/realitykit/realitycoordinatespaceconverting/

How should I go about converting the coordinates? Additionally, is it even possible to drag different objects with each hand?

.gesture(
    SpatialEventGesture()
        .onChanged { events in
            for event in events {
                if event.phase == .active {
                    switch event.kind {
                    case .indirectPinch:
                        if event.targetedEntity == cube1 {
                            let pos = RealityViewContent.convert(event.location3D, from: .local, to: .scene) // This doesn't work
                            dragCube(pos, for: cube1)
                        }
                    case .touch, .directPinch, .pointer:
                        break
                    @unknown default:
                        print("unknown default")
                    }
                }
            }
        }
)
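On the conversion question, one approach is to call convert(_:from:to:) on a RealityViewContent instance rather than on the type, since the RealityCoordinateSpaceConverting methods are instance methods. A minimal sketch under that assumption; TwoHandDragView, realityContent, cube1, and cube2 are placeholder names, and stashing the content in state is one of several ways to make it reachable from the gesture:

import SwiftUI
import RealityKit

struct TwoHandDragView: View {
    @State private var realityContent: RealityViewContent? = nil
    private let cube1 = ModelEntity(mesh: .generateBox(size: 0.1), materials: [SimpleMaterial()])
    private let cube2 = ModelEntity(mesh: .generateBox(size: 0.1), materials: [SimpleMaterial()])

    var body: some View {
        RealityView { content in
            // Keep the content around so the gesture can convert coordinates later.
            realityContent = content
            for cube in [cube1, cube2] {
                cube.components.set(InputTargetComponent())
                cube.generateCollisionShapes(recursive: true)
                content.add(cube)
            }
            cube1.position = [-0.2, 1.2, -1]
            cube2.position = [0.2, 1.2, -1]
        }
        .gesture(
            SpatialEventGesture()
                .onChanged { events in
                    guard let content = realityContent else { return }
                    // Each hand produces its own event with its own targeted
                    // entity, so two entities can be dragged at the same time.
                    for event in events where event.phase == .active && event.kind == .indirectPinch {
                        guard let target = event.targetedEntity else { continue }
                        // Instance method from RealityCoordinateSpaceConverting,
                        // not a static call on RealityViewContent.
                        let scenePosition = content.convert(event.location3D, from: .local, to: .scene)
                        target.setPosition(scenePosition, relativeTo: nil)
                    }
                }
        )
    }
}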
Replies: 2 · Boosts: 0 · Views: 299 · Activity: Aug ’24
Adding coordinates
This effect was mentioned in https://developer.apple.com/wwdc24/10153 (demonstrated at 28:00): you can add coordinates by looking somewhere on the ground and clicking. But I don't understand the explanation very well. I hope you can give me a basic solution. I would be very grateful!
Replies: 1 · Boosts: 0 · Views: 427 · Activity: Aug ’24
Coordinate conversion
Coordinate conversion was mentioned in https://developer.apple.com/wwdc24/10153 (demonstrated at 22:00), where an entity jumps out of a volume into the surrounding space. But I don't understand the explanation very well. I hope you can give me a basic solution. I would be very grateful!
Replies: 1 · Boosts: 0 · Views: 412 · Activity: Aug ’24
Draw a physical mesh
This sample, https://developer.apple.com/documentation/visionos/incorporating-real-world-surroundings-in-an-immersive-experience, implements physical collision reactions between virtual objects and the real world by creating a mesh of the surroundings with physics components. However, I don't understand the information in the document very well. Can anyone walk me through a basic solution? Thank you!
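As a starting point, here is a rough outline of what that sample does: run ARKit scene reconstruction, turn each MeshAnchor into a static collision shape, and attach it to an invisible entity so virtual objects can physically react to the real surroundings. This is a sketch of the approach, not the sample's exact code; runSceneReconstruction, root, and meshEntities are placeholder names.

import ARKit
import RealityKit

let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()
var meshEntities: [UUID: ModelEntity] = [:]

// root is the entity added to the RealityView's content.
func runSceneReconstruction(root: Entity) async throws {
    try await session.run([sceneReconstruction])
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        // Build a static collision shape from the reconstructed mesh.
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()   // no visible mesh, collision only
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            meshEntities[meshAnchor.id] = entity
            root.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { continue }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities[meshAnchor.id] = nil
        }
    }
}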
Replies: 1 · Boosts: 0 · Views: 356 · Activity: Aug ’24
Is it possible to load Reality Composer Pro scenes from URL? (VisionOS)
My visionOS app has over 150 3D models that work flawlessly via Firebase URLs. I also have some RealityKitContent scenes (stored locally) that are getting pretty large, so I'm looking to move those into Firebase as well and load them as needed. I can't find any documentation that has worked for this and can't find anyone who's talked about it. Does anyone know if this is possible? I tried exporting the Reality Composer Pro scenes as USDZs and then importing them, but kept getting material errors. Maybe there's a way to load each 3D model separately and have RealityKit assemble them into the scene? I'm not really sure. Any help would be much appreciated!
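For the remote-asset part of the question, RealityKit's loaders want a file URL, so one pattern is to download the exported .usdz to a local file first and then load it with Entity(contentsOf:). A minimal sketch assuming the asset is a self-contained .usdz reachable over HTTPS (loadRemoteModel is a placeholder name); it doesn't address the material errors seen when exporting RCP scenes:

import Foundation
import RealityKit

// Download a remote .usdz to a temporary file, then load it as an Entity.
func loadRemoteModel(from modelURL: URL) async throws -> Entity {
    let (tempURL, _) = try await URLSession.shared.download(from: modelURL)
    // Entity(contentsOf:) relies on the file extension to pick a loader.
    let localURL = tempURL.deletingPathExtension().appendingPathExtension("usdz")
    try? FileManager.default.removeItem(at: localURL)
    try FileManager.default.moveItem(at: tempURL, to: localURL)
    return try await Entity(contentsOf: localURL)
}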
Replies: 1 · Boosts: 0 · Views: 428 · Activity: Aug ’24
Having trouble loading audio file resources from the RCP bundle
RealityKitContent bundle resource issue

Recently I keep encountering weird loading bugs from the RealityKitContent bundle. I was trying to load an audio resource as an AudioFileResource or AudioFileGroupResource from a *.usda in the RealityKitContent bundle. My code is nothing complicated, just this:

let primPath: String = "/SampleAudios/SE_bounce_audio"
guard let resource = try? AudioFileGroupResource.load(named: primPath, from: "MyScene.usda", in: realityKitContentBundle) else {
    return
}

At runtime the program "sometimes" (whenever I change something in RCP it sometimes works again, but the behavior is unpredictable) reports that it "Cannot find MyScene.usda:/SampleAudios/SE_bounce_audio in RealityKitContent.bundle". I put MyScene.usda under the root folder of the RealityKitContent package because I found that RealityKit just cannot find any *.usda scene if you don't put it at the root level (could be a bug in the way it indexes its files). I even double-checked my .usda file with usdview; the primPath is absolutely correct. I think there are some unknown issues in how RealityKitContent copies resources and builds the package. I tried to play with the package's Package.swift file a bit to see if I could manually copy my resources (everything) and let the package carry them, but it just didn't work. So right now I just keep this file untouched (only upgrading swift-tools-version to 6.0, as only that supports .visionOS(.v2)):

// swift-tools-version:6.0
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v2)
    ],
    products: [
        // Products define the executables and libraries a package produces, and make them visible to other packages.
        .library(
            name: "RealityKitContent",
            targets: ["RealityKitContent"]),
    ],
    dependencies: [
        // Dependencies declare other packages that this package depends on.
        // .package(url: /* package url */, from: "1.0.0"),
    ],
    targets: [
        // Targets are the basic building blocks of a package. A target can define a module or a test suite.
        // Targets can depend on other targets in this package, and on products in packages this package depends on.
        .target(
            name: "RealityKitContent"
        ),
    ]
)

That is issue one, the RealityKitContent package build issue.

Audio file format issue

The other issue is about which audio file formats RCP supports. I remember a place (WWDC?) saying .wav and .mp4 are supported as audio sources. But when I try to set up Spatial Audio, I find that sometimes *.wav or *.mp3 files can also be imported as an AudioSourceFile, but the behavior is unpredictable. With two *.wav files, SE_ball_hit_01.wav and SE_ball_hit_02.wav, only SE_ball_hit_01.wav is supported; 02 is reported as having an unsupported format. Check out my screenshots to see the details of the two files; they have almost the same format (same sample rate and channels). I understand there might be different requirements for a source file to be used as Spatial or Ambient audio, but I haven't figured that out, and there is nothing helpful I can find in the Apple documentation. So what are the rules?

Thanks for reading, and any thought is welcomed.
Replies: 1 · Boosts: 0 · Views: 433 · Activity: Aug ’24
Taking snapshots from WKWebView on visionOS
Hi, I'm trying to capture some images from a WKWebView on visionOS. I know there's a takeSnapshot() function that can get an image of the web page. But since drawHierarchy() may not work properly on WKWebView because of GPU-rendered content, are there any other methods I can call to capture images correctly? Furthermore, since I put my web view into an immersive space, is there any way to get the texture of this UIView attachment? Thank you.
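A sketch of the snapshot route for the second question, assuming takeSnapshot(with:completionHandler:) returns usable pixels for the GPU-composited web content (which is exactly the part in doubt here). snapshotTexture is a placeholder name, and TextureResource.generate(from:options:) is the older API, which may be flagged as deprecated in newer SDKs:

import UIKit
import WebKit
import RealityKit

// Capture a WKWebView and turn the snapshot into a RealityKit texture.
@MainActor
func snapshotTexture(from webView: WKWebView) async throws -> TextureResource {
    let image: UIImage = try await withCheckedThrowingContinuation { continuation in
        webView.takeSnapshot(with: nil) { image, error in
            if let image {
                continuation.resume(returning: image)
            } else {
                continuation.resume(throwing: error ?? NSError(domain: "Snapshot", code: -1))
            }
        }
    }
    guard let cgImage = image.cgImage else {
        throw NSError(domain: "Snapshot", code: -2)
    }
    // Convert the snapshot into a texture that a material can sample.
    return try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
}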
Replies: 0 · Boosts: 0 · Views: 292 · Activity: Aug ’24
Update, Content, and Attachments on Vision Pro
I have a scene setup that places images on planes to mimic an RPG-style character interaction. There's a large scene background image and a smaller character image in the foreground. Both are added as content to a RealityView. There's one attachment that is a dialogue window for interaction with the character, and it is attached to the character image. When the scene changes, I need the images and the dialogue window to refresh. My current approach has been to remove everything from the scene and add the new content in the update closure.

@EnvironmentObject var narrativeModel: NarrativeModel
@EnvironmentObject var dialogueModel: DialogueViewModel
@State private var sceneChange = false
private let dialogueViewID = "dialogue"

var body: some View {
    RealityView { content, attachments in
        // at start, generate background image only and no characters
        if narrativeModel.currentSceneIndex == -1 {
            content.add(generateBackground(image: narrativeModel.backgroundImage!))
        }
    } update: { content, attachments in
        print("update called")
        if narrativeModel.currentSceneIndex != -1 {
            print("sceneChange: \(sceneChange)")
            if sceneChange {
                // remove old entities
                if narrativeModel.currentSceneIndex != 0 {
                    content.remove(attachments.entity(for: dialogueViewID)!)
                }
                content.entities.removeAll()
                // generate the background image for the scene
                content.add(generateBackground(image: narrativeModel.scenes[narrativeModel.currentSceneIndex].backgroundImage))
                // generate the characters for the scene
                let character = generateCharacter(image: narrativeModel.scenes[narrativeModel.currentSceneIndex].characterImage)
                content.add(character)
                print(content)
                if let character_attachment = attachments.entity(for: "dialogue") {
                    print("attachment clause executes")
                    character_attachment.position = [0.45, 0, 0]
                    character.addChild(character_attachment)
                }
            }
        }
    } attachments: {
        Attachment(id: dialogueViewID) {
            DialogueView()
                .environmentObject(dialogueModel)
                .frame(width: 400, height: 600)
                .glassBackgroundEffect()
        }
    }
    // load scene images
    .onChange(of: narrativeModel.currentSceneIndex) {
        print("SceneView onChange called")
        DispatchQueue.main.async {
            self.sceneChange = true
        }
        print("SceneView onChange toggle - sceneChange = \(sceneChange)")
    }
}

If I don't use the dialogue window, this all works just fine. If I do, when I click the next button (in another view), which increments the current scene index, I enter some kind of loop where the sceneChange value gets toggled to true but never gets toggled back to false (even though it's changed in the update closure). The reason I have the sceneChange value is that I need to update the content and attachments whenever the scene index changes, and I need a state variable to trigger the update function to do this. My questions are:

Why might I be entering this loop?
Why would it only happen if I send a message in the dialogue view attachment, which is a whole separate view?
Is there a better way to be doing this?
Replies: 7 · Boosts: 0 · Views: 579 · Activity: Sep ’24
Object Tracking (moving objects)
From my early testing, it seems like object tracking works best for static objects. For example, if I am holding something in my hand, the object tracker is slow to update. Is there anything that can be modified to decrease the tracking latency? I noticed that the Enterprise APIs have some override features; is this something that can only be done using the Enterprise APIs?
Replies: 1 · Boosts: 0 · Views: 465 · Activity: Aug ’24
Composing Interactive 3D Content example build failure
Hello, I downloaded the most recent Xcode 16.0 beta 6 along with the example project located here. Currently I am experiencing the following build failures:

RealityAssetsCompile
...
error: [xrsimulator] Component Compatibility: BlendShapeWeights not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: AudioLibrary not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1

I saw that there is a similar issue reported. As a test I downloaded that project and it compiled as expected.
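The errors themselves point at the likely fix: the sample's RealityKitContent package still declares visionOS 1.0, while BlendShapeWeights, EnvironmentLightingConfiguration, and AudioLibrary require visionOS 2.0. A sketch of the adjusted Package.swift, assuming the sample otherwise matches the default RealityKitContent template:

// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        // Raised from .v1 so the visionOS 2 components named in the errors can compile.
        .visionOS(.v2)
    ],
    products: [
        .library(
            name: "RealityKitContent",
            targets: ["RealityKitContent"]),
    ],
    targets: [
        .target(name: "RealityKitContent"),
    ]
)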
Replies: 2 · Boosts: 0 · Views: 414 · Activity: Aug ’24