RealityKit

RSS for tag

Simulate and render 3D content for use in your augmented reality apps using RealityKit.

RealityKit Documentation

Posts under RealityKit tag

431 Posts
Sort by:
Post not yet marked as solved
0 Replies
162 Views
Hello. I am trying to load my own image-based lighting file in a visionOS RealityView. I started from the code you get when creating a new project with the immersive space set to Full. With the sample file Apple provides it works, but when I put my own image (PNG, HEIC, or EXR) in the same location as the example file, it doesn't load and the error states: Failed to find resource with name "SkyboxUpscaled2" in bundle. In this image you can see the file "ImageBasedLight", which is the one that comes with the project, and the file "SkyboxUpscaled2", which is my own in .exr format.

if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
    do {
        let resource = try await EnvironmentResource(named: "SkyboxUpscaled2")
        let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
        immersiveContentEntity.components.set(iblComponent)
        immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))
    } catch {
        print(error.localizedDescription)
    }
}

Does anyone have an idea why the file is not found? Thanks in advance!
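A quick check worth trying before digging deeper (a minimal sketch; "SkyboxUpscaled2" and the .exr extension come from the post above): as far as I know, EnvironmentResource(named:) resolves against the app's main bundle by default, so if the EXR was added to the RealityKitContent package, or is missing Target Membership for the app target, the lookup fails even though the file sits next to the sample resource in the navigator.

if let url = Bundle.main.url(forResource: "SkyboxUpscaled2", withExtension: "exr") {
    print("EXR is in the app bundle at \(url)")
} else {
    // If this branch prints, check the file's Target Membership in the File inspector.
    print("SkyboxUpscaled2.exr is not in the main bundle")
}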
Posted
by Flex05.
Last updated
.
Post not yet marked as solved
1 Replies
175 Views
I am planning to build a visionOS app and need to get access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona? Other applications like Zoom and Teams for visionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with an integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I Any help is very welcome, thanks.
Posted
by vill33.
Last updated
.
Post marked as solved
2 Replies
212 Views
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors. I can look at a location on the floor or ceiling, tap, and place an object at that location (I am currently placing a small cube with X-Y-Z axes sticking out at the location). The tap locations are consistently about 0.35 m off along the horizontal plane (never off vertically) from where I was looking. Has anyone else run into the issue of a spatial tap gesture resulting in a location offset from where they are looking? If I move to different locations, the offset is the same in real space, so the offset doesn't appear to be associated with the orientation of the Apple Vision Pro (e.g. it isn't off a little to the left of where the headset was looking). Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in RealityView, extracted the location, and placed a purple cube at that location. I stood in 4 different locations (where the orange squares are), looked at the corner of the rug (yellow circle) and tapped. All 4 purple cubes are placed at about the same location, ~0.35 m away from the look location. Here is how I captured the tap gesture and extracted the 3D location:

var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            let location3D = event.convert(event.location3D, from: .global, to: .scene)
            let entity = event.entity
            model.handleTap(location: location3D, entity: entity)
        }
}

Here is how I set the position of the purple cube:

func handleTap(location: SIMD3<Float>, entity: Entity) {
    let positionEntity = Entity()
    positionEntity.setPosition(location, relativeTo: nil)
    ...
}
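One thing that may be worth ruling out (a hedged sketch, not a confirmed fix): SpatialTapGesture reports location3D in the gesture's own coordinate space, and the commonly used pattern converts it from .local rather than .global before treating it as a scene position. If the offset tracks the window or scene origin rather than the head pose, that conversion is a likely suspect.

var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            // Convert from the gesture's local space into RealityKit scene space.
            let location3D = event.convert(event.location3D, from: .local, to: .scene)
            model.handleTap(location: location3D, entity: event.entity)
        }
}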
Posted
by Todd2.
Last updated
.
Post marked as solved
2 Replies
205 Views
Hello, I built something with scene reconstruction, and now I want to add occlusion on the reconstructed surfaces. How can I do that? I tried to create an entity and then apply an OcclusionMaterial, but scene reconstruction gives me a ShapeResource, whereas I need a MeshResource to create a mesh for the entity before applying a material. Any suggestions?
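A sketch of one way this is commonly done (assumptions: the anchors come from a SceneReconstructionProvider as MeshAnchor values, vertex positions are tightly packed Floats, and the face indices are UInt32 — verify both against MeshAnchor.Geometry before relying on this): build a MeshDescriptor from the anchor's geometry buffers, generate a MeshResource from it, and give that model an OcclusionMaterial. The ShapeResource from generateStaticMesh(from:) is only needed if you also want collision.

import ARKit
import RealityKit

@MainActor
func occlusionEntity(for meshAnchor: MeshAnchor) throws -> ModelEntity {
    let geometry = meshAnchor.geometry

    // Copy vertex positions out of the Metal buffer (assumes three packed Floats per vertex).
    var positions: [SIMD3<Float>] = []
    positions.reserveCapacity(geometry.vertices.count)
    let vertexBase = geometry.vertices.buffer.contents() + geometry.vertices.offset
    for i in 0..<geometry.vertices.count {
        let p = (vertexBase + i * geometry.vertices.stride).assumingMemoryBound(to: Float.self)
        positions.append(SIMD3<Float>(p[0], p[1], p[2]))
    }

    // Copy triangle indices (assumes UInt32 indices, three per face).
    var indices: [UInt32] = []
    indices.reserveCapacity(geometry.faces.count * 3)
    let indexBase = geometry.faces.buffer.contents()
    for i in 0..<(geometry.faces.count * 3) {
        indices.append(indexBase.advanced(by: i * geometry.faces.bytesPerIndex)
            .assumingMemoryBound(to: UInt32.self).pointee)
    }

    var descriptor = MeshDescriptor()
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.primitives = .triangles(indices)

    // OcclusionMaterial hides virtual content behind the reconstructed surface.
    let mesh = try MeshResource.generate(from: [descriptor])
    let entity = ModelEntity(mesh: mesh, materials: [OcclusionMaterial()])
    entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
    return entity
}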
Posted Last updated
.
Post not yet marked as solved
1 Replies
229 Views
I'm in Europe, where Vision Pro isn't available yet. I'm a developer/designer, and I want to find out whether it's worthwhile to try to sell the idea of investing in a bunch of Vision Pro devices, as well as in app development for them, to the people overseeing the budget for a project I'm part of. The project is broadly in an "industry" where several constraints apply, most of them around security and safety. So far, all the Vision Pro discussion I've seen is about consumer-level media consumption and tippy-tappy-app-stuff for a broad user base. Now, the hardware, the OS features and the SDK definitely look like professional niche use cases are possible. But some features, such as SharePlay, will for example require an Apple ID and an internet connection (I guess?). That, for example, is a strict nope in my case, for security reasons. I'd like to start a discussion of what works and what doesn't, outside the realm of watching Disney+ in your condo. Potentially, this device ticks several boxes with regard to incredibly useful features in general:

very good indoor tracking
pass-through with good fidelity
hands-free operation

The first point especially is kind of a really big deal, and for me, the biggest open question. I have multiple make-or-break questions with regard to this. (These features are not available in the simulator.) For the sake of argument, let's say the app I'm building is Cave Mapper. It's meant to be used by archeologists inside a cave system where we have no internet, no reliable compass, and no GPS. We have a local network that we can carry around, though. We can also bring lights. One feature of the app is to build out a catalog of cave paintings and store them in a database. The archeologist wants to walk around, look at a cave painting, and tap on it to capture its position relative to the cave entrance. The next day, another archeologist may work inside the same cave, and they would want synchronised access to the same spatial data from the day before. For that:

How good, precise, reliable, and stable is the indoor tracking really? Hyped reviewers said it's rock solid, others have said it can drift.
How well do persistent WorldAnchor objects work? How well do they work when you're in a concrete bunker or a cave without GPS?
Can I somehow share a world anchor with another user? Is it possible to sync the ARKit map that one device has built with another device?
Other showstoppers?
In case you cannot share your mapped world or world anchors: how solid is the tracking of an ImageAnchor (which we could physically nail to the cave entrance to use as a shared positional/rotational reference)?

Other, practical stuff:

Can you wear Vision Pro with a safety helmet?
Does it work with gloves?
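On the WorldAnchor question specifically, here is a minimal sketch of how persistent anchors are created and re-observed with ARKit on visionOS (tagPainting and the surrounding structure are illustrative; how well the anchors survive drift in a GPS-denied cave, and whether they can be shared across devices, is exactly the open question and not something this sketch answers):

import ARKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    try await session.run([worldTracking])
}

// Persist a point of interest (e.g. a cave painting) as a WorldAnchor.
// WorldAnchors are persisted on-device by the system and re-delivered in later sessions.
func tagPainting(at transform: simd_float4x4) async throws -> UUID {
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    return anchor.id   // store this UUID in your own database to re-associate metadata later
}

// Observe anchors coming back, including ones persisted in earlier sessions.
func observeAnchors() async {
    for await update in worldTracking.anchorUpdates {
        print("anchor \(update.anchor.id): \(update.event)")   // .added / .updated / .removed
    }
}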
Posted
by jpenca.
Last updated
.
Post not yet marked as solved
3 Replies
873 Views
In my project, I want to use the new ShaderGraphMaterial to do stereoscopic rendering, and I noticed that there is a node called Camera Index Switch that can do this. But when I tried it, I found: 1. It can only output an Integer value; when I change it to a Float value, it changes back again. I don't know if that is a bug. 2. When I test this node with an IF node, the output is weird. Where zero should be output it is black, but when I route it through the IF node it is grey, i.e. neither 0 nor 1 (my IF node returns 1 for TRUE and 0 for FALSE). I want to ask whether this is a bug, and whether this is the correct way to do stereoscopic rendering.
Posted
by bYsdTd.
Last updated
.
Post not yet marked as solved
0 Replies
188 Views
For me, any View that is an Attachment rebuilds at full frame rate even with nothing changing — indeed, even with no variables in the view. In addition to causing unneeded CPU usage, if there are @State variables in the View they do not always update. I am updating the var on DispatchQueue.main.async, and most of the time it works. On some occasions it is updated instantly; on others it might take 30 seconds or more before the changes are visible. If I set a breakpoint where the @State variables are changed, I can see the change... but the new value is not visible in the View (on Vision Pro). I have also used print("title \(title)") and I can see the correct value in the console, but what you see in the View on the AVP is not correct (though it will, eventually, update). Important to note: 70% of the time the values are updated immediately. I've tried @StateObject with a class conforming to ObservableObject, and while that made it better, it doesn't fix the issue. The app is in full immersion at the time; I have no way of knowing whether that is related or not. Below is the latest iteration of the variable:

@StateObject var alertState = AlertState()

class AlertState: ObservableObject {
    @Published var description: String = ""
    @Published var title: String = ""
}
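Not a confirmed fix, but one pattern worth trying (a sketch; AlertState comes from the post, the rest is illustrative): constrain the observable object to the main actor and publish changes through it, so SwiftUI sees every mutation on the main actor rather than via DispatchQueue.main.async from an arbitrary context.

@MainActor
final class AlertState: ObservableObject {
    @Published var title: String = ""
    @Published var description: String = ""
}

struct AlertAttachmentView: View {          // hypothetical attachment view
    @ObservedObject var alertState: AlertState

    var body: some View {
        VStack {
            Text(alertState.title)
            Text(alertState.description)
        }
    }
}

// From non-UI code, hop to the main actor before mutating published state.
func report(title: String, description: String, to state: AlertState) {
    Task { @MainActor in
        state.title = title
        state.description = description
    }
}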
Posted Last updated
.
Post not yet marked as solved
5 Replies
533 Views
In the WWDC talk "Enhance your spatial computing app with RealityKit", we see how to create a portal effect with RealityKit. In the "Encounter Dinosaurs" experience on Vision Pro there is a similar portal, except that portal allows entities to stick out of it. Using the provided example code, I have been unable to replicate this effect: anything that sticks out of the portal gets clipped. How do I get entities to stick out of the portal in a way similar to the "Encounter Dinosaurs" experience? I am familiar with the old way of using OcclusionMaterial to create portals, but if the camera gets between the OcclusionMaterial and the entity (such as walking behind the portal), that can break the effect, and I was unable to break the effect in the "Encounter Dinosaurs" experience. If it helps at all: I have noticed that if you look from the edge of the portal very closely, the rocks will not stick out the way that the dinosaurs do; the rocks get clipped. Therefore, the dinosaurs are somehow being rendered differently.
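For reference, the baseline portal setup from that session looks roughly like the sketch below (standard WorldComponent/PortalComponent/PortalMaterial usage; with this setup alone, content is clipped at the portal plane, which matches what the post describes — the crossing behaviour in "Encounter Dinosaurs" is not something this sketch reproduces):

import RealityKit

@MainActor
func makePortal() -> (world: Entity, portal: Entity) {
    // The "other world" that is only visible through the portal.
    let world = Entity()
    world.components.set(WorldComponent())
    // world.addChild(yourFarSideScene)   // hypothetical: content for the far side goes here

    // A plane that renders the world entity through a PortalMaterial.
    let portal = Entity()
    portal.components.set(ModelComponent(
        mesh: .generatePlane(width: 1.0, height: 1.0, cornerRadius: 0.5),
        materials: [PortalMaterial()]
    ))
    portal.components.set(PortalComponent(target: world))
    return (world, portal)
}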
Posted
by CodeName.
Last updated
.
Post not yet marked as solved
1 Replies
331 Views
I have the following issue regarding running two AR services. I am trying to develop an app for my master's thesis. Case 1: I first scan the room using the RoomPlan API. Then I stop the RoomPlan session and start the RealityKit session. When the RealityKit session starts, the camera shows nothing but a black screen. Case 2: After hitting the issue in case 1, I tried a separate test app with two separate screens, one for the RoomPlan API and one for RealityKit, with no relation between them. But as soon as I introduced the RoomPlan API, RealityKit stopped working, showing the same black screen as above. There might be some state changed by the RoomPlan API that prevents RealityKit from accessing the camera. Let me know if you have any idea about it or any sample. I am using the following stack: Xcode (latest), SwiftUI, latest OS on Mac mini and iPhone.
Posted
by shohandot.
Last updated
.
Post not yet marked as solved
0 Replies
129 Views
Hello! I'm working on an AR project using SwiftUI and RealityKit, and I've encountered a challenge. I need to pass a custom data type from a SwiftUI view to a RealityKit view (full immersion). The data type in question is an Album, defined as follows:

struct Album: Identifiable, Hashable {
    var id = UUID()
    var image: String
    var title: String
    var subTitle: String
}

Please help.
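One common way to do this (a sketch under the assumption that a window view selects the Album and an immersive space displays it; AlbumStore and the view names are illustrative) is to hold the Album in an observable model injected into the environment of both the WindowGroup and the ImmersiveSpace:

import SwiftUI
import RealityKit

@Observable
final class AlbumStore {
    var selectedAlbum: Album?    // Album as defined in the post above
}

@main
struct AlbumApp: App {           // hypothetical app entry point
    @State private var store = AlbumStore()

    var body: some Scene {
        WindowGroup {
            AlbumPickerView()    // hypothetical: sets store.selectedAlbum
                .environment(store)
        }
        ImmersiveSpace(id: "AlbumSpace") {
            AlbumImmersiveView()
                .environment(store)
        }
    }
}

struct AlbumImmersiveView: View {
    @Environment(AlbumStore.self) private var store

    var body: some View {
        RealityView { content in
            // Use store.selectedAlbum?.image / .title to decide what to load here.
        } update: { content in
            // React to changes in store.selectedAlbum.
        }
    }
}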
Posted
by Code2aum.
Last updated
.
Post not yet marked as solved
0 Replies
132 Views
I have no idea how to place the RealityView content at the bottom-trailing edge of the volume. Could someone help me?

var body: some View {
    RealityView { content in
        if let scene = try? await Entity(named: "Volume", in: realityKitContentBundle) {
            content.add(scene)
            bookEntity = scene.findEntity(named: "Book")
            crossEntity = scene.findEntity(named: "Cross")
        }
    }
    .toolbar {
        if (isShowToolbar) {
            ToolbarItemGroup(placement: .bottomOrnament) {
                Text("The toolbar is shown")
            }
        }
    }
    .gesture(tapGesture())
}

I have tried several ways, but none work, including adding a ZStack to align with the bottom. For now, the bounds of my volume are as follows:
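A sketch of one approach (assuming the GeometryReader3D + convert pattern; verify the exact convert overload against the current SDK): wrap the RealityView in a GeometryReader3D, convert the proxy's frame into RealityKit scene space to get the volume's bounds in metres, then position the entity at the bottom-trailing corner.

var body: some View {
    GeometryReader3D { proxy in
        RealityView { content in
            if let scene = try? await Entity(named: "Volume", in: realityKitContentBundle) {
                content.add(scene)

                // Convert the volume's SwiftUI frame (points) into scene space (metres).
                let bounds = content.convert(proxy.frame(in: .local), from: .local, to: content)

                // Roughly the bottom-trailing-front corner of the volume.
                scene.position = [bounds.max.x, bounds.min.y, bounds.max.z]
            }
        }
    }
}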
Posted
by cjlalala.
Last updated
.
Post not yet marked as solved
1 Replies
265 Views
Hi! I am making an app where I have a scene with a main entity in an immersive space. Since my entity is fixed in space, I added a reposition entity that allows users to reposition the whole entity in space. I use a drag gesture and its translation to move the whole entity along the x, y and z axes. I wanted to implement the same behaviour we get when dragging the window bar of visionOS windows. I tried using basic trigonometry to calculate the entity's rotation and translation relative to the movement, but I am not getting the same movement and rotation as visionOS windows. Does anybody have a working solution for this? I would appreciate it a lot. :)
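A sketch of the usual ingredients (illustrative only; the wrapper view, queryDeviceAnchor usage, and the translation conversion are my own assumptions, and matching the exact feel of the system window bar is left open): translate the root entity with the drag, then re-orient it so it keeps facing the wearer, using the device pose from a WorldTrackingProvider and rotating around the vertical axis only.

import ARKit
import QuartzCore
import RealityKit
import SwiftUI

struct RepositionableView: View {                  // hypothetical wrapper view
    let dragRoot: Entity                           // the entity that holds your whole scene
    let worldTracking: WorldTrackingProvider       // assumed to be running in an ARKitSession
    @State private var dragStart: SIMD3<Float>? = nil

    var body: some View {
        RealityView { content in
            content.add(dragRoot)
        }
        .gesture(
            DragGesture()
                .targetedToEntity(dragRoot)
                .onChanged { value in
                    let start = dragStart ?? dragRoot.position
                    if dragStart == nil { dragStart = start }

                    // Move the root with the drag, expressed in its parent's space.
                    let translation = value.convert(value.translation3D, from: .local, to: dragRoot.parent ?? dragRoot)
                    dragRoot.position = start + translation

                    // Keep the content facing the wearer, ignoring pitch like a window bar does.
                    if let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
                        var head = Transform(matrix: device.originFromAnchorTransform).translation
                        head.y = dragRoot.position(relativeTo: nil).y
                        // If the content ends up facing away, mirror the look target through the entity.
                        dragRoot.look(at: head, from: dragRoot.position(relativeTo: nil), relativeTo: nil)
                    }
                }
                .onEnded { _ in dragStart = nil }
        )
    }
}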
Posted
by darescore.
Last updated
.
Post not yet marked as solved
0 Replies
187 Views
Hey, I'm wondering what would be the proper way to add RealityView content asynchronously while doing the heavy lifting on a background thread. My use case is that I am generating procedural geometry, which takes a few seconds to complete. Meanwhile I would like the UI to show other geometry and UI elements, and the main thread to stay responsive. Basically what I would like to do, in pseudocode, is:

runInBackgroundThread {
    let geometry = generateGeometry()       // CPU intensive, takes 1-2 s
    let entity = createEntity(geometry)     // CPU intensive, takes ~1 s
    let material = try! await ShaderGraphMaterial(..)
    entity.model!.materials = [material]
    runInMainThread {
        addToRealityViewContent(entity)
    }
}

With this I am running into many issues, especially with the material, which apparently cannot be constructed on a non-main thread and cannot be passed across thread boundaries.
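A sketch of one structure that keeps the heavy CPU work off the main actor while doing all RealityKit object creation on it (generateGeometry, the material path, and the file name are placeholders taken from the post or assumed by me): only plain value data crosses the thread boundary; the ShaderGraphMaterial and the entity never leave the main actor.

import RealityKit
import RealityKitContent

func addProceduralEntity(to content: RealityViewContent) {
    Task { @MainActor in
        // 1. Heavy, pure-CPU work runs off the main actor and returns plain data.
        let descriptor: MeshDescriptor = await Task.detached(priority: .userInitiated) {
            generateGeometry()   // placeholder: your CPU-intensive geometry generation
        }.value

        // 2. RealityKit objects (mesh, material, entity) are created on the main actor.
        do {
            let mesh = try MeshResource.generate(from: [descriptor])
            let material = try await ShaderGraphMaterial(
                named: "/Root/MyMaterial",          // placeholder material path
                from: "Materials/MyMaterial",       // placeholder .usda in RealityKitContent
                in: realityKitContentBundle
            )
            let entity = ModelEntity(mesh: mesh, materials: [material])
            content.add(entity)
        } catch {
            print("Failed to build entity: \(error)")
        }
    }
}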
Posted
by matti777.
Last updated
.
Post not yet marked as solved
0 Replies
212 Views
I have a RealityKit based app in TestFlight and I see the following crash happening twice. It appears to be coming from the RealityKit framework itself in cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect. Has anyone seen this before, and have you discovered what is causing it?

Thread 32 Crashed:
0 libsystem_kernel.dylib 0x00000001cfd81fbc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001f271f680 pthread_kill + 268 (pthread.c:1681)
2 libsystem_c.dylib 0x000000019069ab90 abort + 180 (abort.c:118)
3 Recon3D 0x0000000211b8cd7c cv3d::acv::surfacedetection::DepthMapPlaneDetector::detect(cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u>, float const*>, cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u... + 6136 (DepthMapPlaneDetector.cpp:346)
4 Recon3D 0x0000000211bb0fe4 cv3d::acv::surfacedetection::SurfaceDetector::detectAndTrack(cv3d::acv::surfacedetection::SurfaceDetector::DetectAndTrackWithDepthParams const&) + 844 (SurfaceDetector.cpp:635)
5 Recon3D 0x000000021142fd24 cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect(cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle const&) + 2672 (SurfaceDetection.cpp:645)
6 Recon3D 0x00000002114678ec cv3d::kit::concurrency::detail::ProcessorInputMessageHandlingStrategy<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd::Surf... + 92 (ProcessorInputMessageHandlingStrategy.h:136)
7 Recon3D 0x00000002114675b4 std::__1::__function::__func<void cv3d::kit::concurrency::detail::Processor<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd... + 184 (function.h:356)
8 Recon3D 0x0000000211794330 void std::__1::__invoke_void_return_wrapper<void, true>::__call<std::__1::future<void> cv3d::esn::thread::IWorkQueue::DispatchAsync<void>(std::__1::function<void ()>&&)::'lambda'()&>(std::__1::futu... + 68 (invoke.h:487)
9 Recon3D 0x0000000212387830 dispatch_async_C_CallBack + 76 (GrandCentralDispatchUtil.cpp:94)
10 libdispatch.dylib 0x00000001905e2300 _dispatch_client_callout + 20 (object.m:561)
11 libdispatch.dylib 0x00000001905e9964 _dispatch_lane_serial_drain + 956 (queue.c:3885)
12 libdispatch.dylib 0x00000001905ea3f8 _dispatch_lane_invoke + 432 (queue.c:3976)
13 libdispatch.dylib 0x00000001905eb6a8 _dispatch_workloop_invoke + 1756 (queue.c:4485)
14 libdispatch.dylib 0x00000001905f5004 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:6913)
15 libdispatch.dylib 0x00000001905f4878 _dispatch_workloop_worker_thread + 404 (queue.c:6507)
16 libsystem_pthread.dylib 0x00000001f271b964 _pthread_wqthread + 288 (pthread.c:2629)
17 libsystem_pthread.dylib 0x00000001f271ba04 start_wqthread + 8 (:-1)
Posted Last updated
.
Post marked as solved
2 Replies
277 Views
I am trying to make a shader for a disco-ball lighting effect for my app. I want the light to reflect on the scene mesh. I was curious whether anyone has pointers on how to do this in Shader Graph in Reality Composer Pro, or by writing a surface shader. The effect rotates the dots as the ball spins. This is the effect in Apple Clips that applies the dots to the scene mesh.
Posted
by doomdave.
Last updated
.
Post not yet marked as solved
1 Replies
233 Views
Hello, I am currently working on a project where I am creating a bookstore visualization with racks and shelves (full immersive view). I have an array of names, each representing a USDZ object present in my working directory. Here's the enum I am trying to iterate over:

enum AssetName: String, Codable, Hashable, CaseIterable {
    case book1 = "B1"
    case book2 = "B2"
    case book3 = "B3"
    case book4 = "B4"
}

and the code I wrote for adding objects:

import SwiftUI
import RealityKit

struct LocalAssetRealityView: View {
    let assetName: AssetName

    var body: some View {
        RealityView { content in
            if let asset = try? await ModelEntity(named: assetName.rawValue) {
                content.add(asset)
            }
        }
    }
}

Now, when I try to add multiple objects on a button tap, I get the error: Unable to present another Immersive Space when one is already requested or connected. Please suggest any solutions. Also, suggest whether anything can be done to set positions for the objects programmatically as well.
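A sketch of one way around this (assuming the error comes from trying to open a separate immersive space per asset): keep a single immersive space and load all the assets into one RealityView, positioning each entity programmatically. The layout values below are purely illustrative.

import SwiftUI
import RealityKit

struct BookstoreRealityView: View {      // hypothetical single immersive view
    var body: some View {
        RealityView { content in
            for (index, name) in AssetName.allCases.enumerated() {
                if let asset = try? await ModelEntity(named: name.rawValue) {
                    // Example layout: 0.3 m apart along X, 1 m up, 1 m in front of the origin.
                    asset.position = SIMD3<Float>(Float(index) * 0.3, 1.0, -1.0)
                    content.add(asset)
                }
            }
        }
    }
}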
Posted
by Code2aum.
Last updated
.
Post not yet marked as solved
0 Replies
213 Views
Adding an AVPlayer as an attachment on the side using RealityKit. The video in it, though, is not aligned. Any thoughts on what could be going wrong?

RealityView { content, attachments in
    let url = self.video.resolvedURL
    let asset = AVURLAsset(url: url)
    let playerItem = AVPlayerItem(asset: asset)

    var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
    videoPlayerComponent.isPassthroughTintingEnabled = true
    // entity.components[VideoPlayerComponent.self] = videoPlayerComponent
    entity.position = [0, 0, 0]
    entity.scale *= 0.50

    player.replaceCurrentItem(with: playerItem)
    player.play()

    content.add(entity)
} update: { content, attachments in
    // if content.entities.count < 2 {
    if showAnotherPlayer {
        if let attachment = attachments.entity(for: "Attachment") {
            playerModel.loadVideo(library.selectedVideo!, presentation: .fullWindow)
            //4. Position the Attachment and add it to the RealityViewContent
            attachment.position = [1.0, 0, 0]
            attachment.scale *= 1.0
            //let radians = -45.0 * Float.pi / 180.0
            //attachment.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(0,1,0))
            let entity = content.entities.first
            attachment.setParent(entity)
            content.add(attachment)
        }
    }
    if showLibrary {
        if let attachment = attachments.entity(for: "Featured") {
            //4. Position the Attachment and add it to the RealityViewContent
            attachment.position = [0.0, -0.3, 0]
            attachment.scale *= 0.7
            //let radians = -45.0 * Float.pi / 180.0
            //attachment.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(0,1,0))
            let entity = content.entities.first
            attachment.setParent(entity)
            viewModel.attachment = attachment
            content.add(attachment)
        }
    } else {
        if let scene = content.entities.first?.scene {
            let _ = print("found scene")
        }
        if let featuredEntity = content.entities.first?.scene?.findEntity(named: "Featured") {
            let _ = print("featured entity found")
        }
        if let attachment = viewModel.attachment {
            let _ = print("-- removing attachment")
            if let anchor = attachment.anchor {
                let _ = print("-- removing anchor")
                anchor.removeFromParent()
            }
            attachment.removeFromParent()
            content.remove(attachment)
        } else {
            let _ = print("the attachment is missing")
        }
    }
    // }
} attachments: {
    Attachment(id: "Attachment") {
        PlayerView()
            .frame(width: 2048, height: 1024)
            .environment(library)
            .environment(playerModel)
            .onAppear {
                DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
                    playerModel.play()
                }
            }
            .onDisappear {
            }
    }
    if showLibrary {
        Attachment(id: "Featured") {
            VideoListView(title: "Featured", videos: library.videos, cardStyle: .full, cardSpacing: 20) { video in
                library.selectedVideo = video
                showAnotherPlayer = true
            }
            .frame(width: 2048, height: 1024)
        }
    }
}

PlayerView
Posted Last updated
.
Post not yet marked as solved
0 Replies
190 Views
Hello everyone, I have just started learning visionOS app development. I have a scene called Scene, and inside it is an object called Sphere. I want to add a drag interaction to this Sphere alone. I followed the code below to achieve it, but my Sphere cannot actually be dragged in the Apple simulator. What is the reason?

struct ContentView: View {
    @State var enlarge = false
    @State var offset: Point3D = .zero
    @State var sphereEntity: Entity?

    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
                sphereEntity = content.entities.first?.findEntity(named: "Sphere")
                sphereEntity?.components.set(InputTargetComponent(allowedInputTypes: .all))
            }
        }
        .gesture(DragGesture().targetedToEntity(sphereEntity ?? Entity()).onChanged({ value in
            print(value.location3D)
            sphereEntity?.position = value.convert(value.location3D, from: .local, to: sphereEntity?.parent! ?? Entity())
        }))
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded({ _ in
            print("Ssssssss")
        }))
        .onAppear() {
        }
    }
}
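A hedged observation that may explain this (a minimal sketch; entity names follow the post): for a gesture to hit an entity, the entity needs a CollisionComponent in addition to InputTargetComponent, and targeting sphereEntity before it has loaded (it is nil on the first body evaluation) can leave the gesture bound to a throwaway Entity(). Generating collision shapes and resolving the target at gesture time avoids both issues.

RealityView { content in
    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle),
       let sphere = scene.findEntity(named: "Sphere") {
        content.add(scene)
        // Both an input target and collision shapes are needed for the sphere to receive gestures.
        sphere.components.set(InputTargetComponent())
        sphere.generateCollisionShapes(recursive: true)
        sphereEntity = sphere
    }
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()                 // resolve the target at gesture time, then filter
        .onChanged { value in
            guard value.entity.name == "Sphere" else { return }
            value.entity.position = value.convert(value.location3D,
                                                  from: .local,
                                                  to: value.entity.parent!)
        }
)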
Posted
by cjlalala.
Last updated
.
Post marked as solved
1 Replies
226 Views
Please see also the video demo of the problem I'm encountering: https://youtu.be/V0ZkF-tVgKE I've noticed that the custom Systems I've been creating for my RealityKit/visionOS app do not get updated every frame as the documentation (and common sense) would suggest. Instead, they appear to tick for a time after each UI interaction and then "stall". The systems will be ticked again after some interaction with the UI, or sometimes with a large enough movement of the user. My understanding was that these Systems should not be tied to UI by default, so I'm a bit lost as to why this is happening. I've reproduced this by starting from a template project and adding a very simple couple of systems. Here is the main System, which simply rotates the pair of spheres:

import RealityKit
import RealityKitContent
import SwiftUI

public struct RotationSystem: System {
    static let query = EntityQuery(where: .has(RealityKitContent.WobblyThingComponent.self))

    public init(scene: RealityKit.Scene) { }

    public func update(context: SceneUpdateContext) {
        print("system update, deltaTime: \(context.deltaTime)")
        let entities = context.scene.performQuery(Self.query).map({ $0 })
        for entity in entities {
            let newRotation = simd_quatf(angle: Float(context.deltaTime * 0.5), axis: [0, 1, 0]) * entity.transform.rotation
            entity.transform.rotation = newRotation
        }
    }
}

The component (WobblyThingComponent) is attached to a parent of the two spheres in Reality Composer Pro, and both system and component are registered on app start in the usual way. This system runs smoothly in the simulator, but not in the preview in Xcode and not on the Vision Pro itself, which is kinda the whole point. Here is a video of the actual behaviour on the Vision Pro: https://youtu.be/V0ZkF-tVgKE The log during this test confirms that the system is not being ticked often. You can see the very large deltaTime values, representing those long stalled moments:

system update, deltaTime: 0.2055550068616867
system update, deltaTime: 0.4999987483024597

I have not seen this problem when running the Diorama sample project, yet when comparing side-by-side with my test projects I cannot for the life of me identify a difference which could account for this. If anyone could tell me where I'm going wrong it would be greatly appreciated, as I've been banging my head against this one for days. Xcode: Version 15.3 (15E204a); visionOS: 1.1 and 1.1.1
Posted Last updated
.
Post not yet marked as solved
0 Replies
153 Views
Hi, I am implementing a player using RealityKit's VideoPlayerComponent and AVPlayer. When the app enters the immersive space, playback begins, but I only get audio playback; I can't see the video. Do I need to specify the entity's position and size?

struct MyApp: App {
    @State private var playerImmersionStyle: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .defaultSize(width: 800, height: 200)

        ImmersiveSpace(id: "playerImmersionStyle") {
            ImmersiveSpaceView()
        }
        .immersionStyle(selection: $playerImmersionStyle, in: playerImmersionStyle)
    }

    func application(_ application: UIApplication, configurationForConnecting connectingSceneSession: UISceneSession, options: UIScene.ConnectionOptions) -> UISceneConfiguration {
        return UISceneConfiguration(name: "My Scene Configuration", sessionRole: connectingSceneSession.role)
    }
}

struct PlayerViewEx: View {
    let entity = Entity()

    var body: some View {
        RealityView { content in
            let entity = makeVideoEntity()
            content.add(entity)
        }
    }

    public func makeVideoEntity() -> Entity {
        let url = Bundle.main.url(forResource: "football", withExtension: "mov")!
        let asset = AVURLAsset(url: url)
        let playerItem = AVPlayerItem(asset: asset)
        let player = AVPlayer()

        var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
        videoPlayerComponent.isPassthroughTintingEnabled = true
        entity.components[VideoPlayerComponent.self] = videoPlayerComponent
        entity.scale *= 0.4

        player.replaceCurrentItem(with: playerItem)
        player.play()

        return entity
    }
}

#Preview {
    PlayerViewEx()
}
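One hedged possibility (a sketch, not a confirmed diagnosis): in a full immersive space the scene origin sits at the user's feet, so a video entity left at the default position, and scaled down, can easily sit outside the field of view while its audio still plays. Explicitly placing the entity in front of the viewer is a cheap way to rule this out; the position values below are assumptions.

public func makeVideoEntity() -> Entity {
    let url = Bundle.main.url(forResource: "football", withExtension: "mov")!
    let player = AVPlayer(playerItem: AVPlayerItem(asset: AVURLAsset(url: url)))

    var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
    videoPlayerComponent.isPassthroughTintingEnabled = true
    entity.components[VideoPlayerComponent.self] = videoPlayerComponent

    // Assumed placement: roughly eye height, 2 m in front of the immersive-space origin.
    entity.position = [0, 1.5, -2]
    entity.scale = .one

    player.play()
    return entity
}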
Posted
by Chenyi.
Last updated
.