RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.


Posts under RealityKit tag

443 Posts
Post not yet marked as solved
0 Replies
40 Views
I am developing an immersive application featuring hand interaction with my virtual objects. When my hand passes through an object, the rendered color of my hand looks like a blend of the hand color and the object's color, with both appearing semi-transparent. I wonder if it is possible to make my hand always render as "opaque", i.e. the alpha value of the rendered hand (since it's video see-through) is always 1, while the object's alpha value could vary depending on whether it is interacting with the hand. (I was thinking this kind of feature might be supported by a specific component, just like HoverEffectComponent, but I didn't find one.)
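I'm not aware of a public component that changes how the passthrough hand itself is composited, but here is a hedged workaround sketch: fade the virtual object instead while a hand overlaps it, using OpacityComponent. The overlap test and the names below are assumptions, not from the post.

// Sketch: make the object semi-transparent while a hand overlaps it, so the
// passthrough hand reads as solid. "isHandInside" is an illustrative flag you
// would drive from hand tracking or a collision/trigger check.
func updateObjectOpacity(_ object: Entity, isHandInside: Bool) {
    object.components.set(OpacityComponent(opacity: isHandInside ? 0.4 : 1.0))
}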
Posted
by
Post not yet marked as solved
0 Replies
39 Views
I have been trying to replicate the entity transform functionality present in the magnificent app The Museum That Never Was (https://apps.apple.com/us/app/the-museum-that-never-was/id6477230794) -- it allows you to simultaneously rotate, magnify, and translate an entity using gestures with both hands (as opposed to the normal DragGesture(), which is a one-handed gesture). I am able to rotate and magnify simultaneously, but translating via drag does not activate while doing two-handed gestures. Any ideas? My setup is something like so:

Gestures:

var drag: some Gesture {
    DragGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureTranslation = value.convert(value.translation3D, from: .local, to: .scene)
        }
        .onEnded { value in
            itemTranslation += gestureTranslation
            gestureTranslation = .init()
        }
}

var rotate: some Gesture {
    RotateGesture3D()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureRotation = simd_quatf(value.rotation.quaternion).inverse
        }
        .onEnded { value in
            itemRotation = gestureRotation * itemRotation
            gestureRotation = .identity
        }
}

var magnify: some Gesture {
    MagnifyGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureScale = Float(value.magnification)
        }
        .onEnded { value in
            itemScale *= gestureScale
            gestureScale = 1.0
        }
}

RealityView modifiers:

.simultaneousGesture(drag)
.simultaneousGesture(rotate)
.simultaneousGesture(magnify)

RealityView update block:

entity.position = itemTranslation + gestureRotation * .zero + gestureTranslation + exhibitDefaultPosition
entity.orientation = gestureRotation * itemRotation
entity.scaleAll(itemScale * gestureScale)
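One pattern sometimes suggested for combined manipulation is composing the gestures into a single recognizer with .simultaneously(with:) rather than attaching three separate .simultaneousGesture modifiers. A minimal sketch reusing the post's drag/rotate/magnify properties; whether this unblocks two-handed drag on visionOS is an assumption, not a confirmed fix.

// Sketch: compose the three gestures so SwiftUI resolves them as one gesture.
var combinedGesture: some Gesture {
    drag
        .simultaneously(with: rotate)
        .simultaneously(with: magnify)
}

// Attached to the RealityView instead of the three separate modifiers:
// .gesture(combinedGesture)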
Posted
by
Post not yet marked as solved
0 Replies
83 Views
I started a project for iPad/iPhone, chose SwiftUI with RealityKit, and I can't get the build to compile. I do nothing but create the project and hit Run. So I am wondering whether it's even possible to run RealityKit on just an iPad anymore. I then tried to use Reality Composer to import a basic cylinder shape into my project, and that wouldn't run either. So I am wondering how to get a 3D model into my iPad app so that the user can interact with it. Thanks for any help
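For the "how do I get a 3D model into my iPad app" part, a minimal sketch of hosting an ARView in SwiftUI on iOS; the view name and the cylinder are illustrative, not from the poster's project.

import SwiftUI
import UIKit
import RealityKit

struct SimpleARView: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        // ARView starts an AR session automatically on device; the app needs a
        // camera usage description in Info.plist.
        let arView = ARView(frame: .zero)

        // A basic cylinder with a plain material, anchored to a horizontal plane.
        let mesh = MeshResource.generateCylinder(height: 0.2, radius: 0.05)
        let material = SimpleMaterial(color: .systemBlue, isMetallic: false)
        let model = ModelEntity(mesh: mesh, materials: [material])

        let anchor = AnchorEntity(plane: .horizontal)
        anchor.addChild(model)
        arView.scene.addAnchor(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}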
Posted
by
Post not yet marked as solved
0 Replies
68 Views
[visionOS Question] I’m using the hierarchy of an entity loaded from a Reality Composer Pro project to drive the content of a NavigationSplitView. I’d like to render any of the child entities in a RealityView in the detail pane when a user selects the child entity’s name from the list in the NavigationSplitView. I haven’t been able to render the entity in the detail view yet. I have tried updating the position/scaling to no avail. I also tried adding an AnchorEntity and setting it as the child entity’s parent. I’m starting to suspect that the way to do it is to create a scene for each individual child entity in the Reality Composer Pro project. I’d prefer to avoid this, as I want a data-driven approach. Is there a way to implement my idea in RealityKit in code?
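A hedged sketch of a data-driven approach: look up the selected child by name in the already-loaded scene, clone it, and add the clone to the detail pane's RealityView. The names "loadedScene" and "selectedName" are illustrative, not from the post.

// Sketch for the detail pane; assumes the Reality Composer Pro scene entity
// has already been loaded once and the selection is a child entity's name.
RealityView { content in
    if let child = loadedScene.findEntity(named: selectedName) {
        // Clone so the detail view gets its own copy, detached from the
        // original hierarchy, and recenter it for presentation.
        let detailEntity = child.clone(recursive: true)
        detailEntity.position = .zero
        content.add(detailEntity)
    }
}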
Posted
by
Post not yet marked as solved
1 Reply
68 Views
Currently, in an app I am working on, we are adding collision shapes/components to objects by using the ShapeResource.generateConvex method to generate the shape from the mesh of our ModelEntity. Unfortunately, this does not result in a totally accurate collision shape. The following example shows how the collision component currently looks. Is there any way to generate a collision shape that fits the exact bounds of the ModelEntity?
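If a truly exact per-triangle shape is needed, it may be worth checking whether a static-mesh ShapeResource generator is available on the target OS version; I'm not certain of its availability, so this hedged sketch sticks to a simpler alternative: a collision box that exactly matches the model's visual bounds. "model" is an illustrative name for the loaded ModelEntity.

// Sketch: build a tight collision box from the entity's visual bounds.
// Still an approximation (a box), but it matches the model's extents exactly.
let bounds = model.visualBounds(relativeTo: model)
let boxShape = ShapeResource
    .generateBox(size: bounds.extents)
    .offsetBy(translation: bounds.center)
model.components.set(CollisionComponent(shapes: [boxShape]))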
Posted
by
Post not yet marked as solved
1 Reply
76 Views
I'm trying to use a RealityView with attachments and this error is being thrown. Am I using the RealityView wrong? I've seen other people use a RealityView with attachments in visionOS... Please let this be a bug...

RealityView { content, attachments in
    contentEntity = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(contentEntity!)
} attachments: {
    Text("Hello!")
}
.task {
    await loadImage()
    await runSession()
    await processImageTrackingUpdates()
}
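A hedged sketch of the attachment pattern the current visionOS API expects: wrap each attachment view in Attachment(id:) and look it up from the attachments argument by the same identifier. Whether this resolves the specific error in the post is an assumption; "hello" is an arbitrary identifier.

// Sketch: declare the attachment with an id, then place it as an entity.
RealityView { content, attachments in
    let plane = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(plane)

    // Look up the SwiftUI attachment as an entity and position it in the scene.
    if let label = attachments.entity(for: "hello") {
        label.position = [0, 0.3, 0]
        content.add(label)
    }
} attachments: {
    Attachment(id: "hello") {
        Text("Hello!")
    }
}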
Posted
by
Post not yet marked as solved
2 Replies
92 Views
I am trying to verify my understanding of adding a HoverEffectComponent to entities inside a scene in a RealityView. Inside Reality Composer Pro, I have added the required Input Target and Collision components to one entity inside a node with multiple siblings, and left all options at their defaults. They appear to create appropriately sized bounding boxes etc. for these objects. In my RealityView I programmatically add the HoverEffectComponents to the entities, as I don't see them in RCP. On device, this appears to "work" in the sense that when I gaze at the entity, it lights up - but so does every other entity in the scene, even those without Input Target and Collision components attached. Because the documentation on the components is sparse, I am unsure whether this is behavior as designed (e.g. all entities in that node are activated), a bug, or something in between. Has anyone encountered this, and is there an appropriate way of setting these relationships up? Thanks
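In case it helps isolate the issue, a minimal sketch of attaching all three components to a single entity in code, assuming the scene root is "sceneEntity" and "Target" is an illustrative child name. One thing worth ruling out: if an ancestor entity carries a CollisionComponent/InputTargetComponent whose shape encloses the siblings, the hover highlight can appear to apply to the whole subtree.

// Sketch: scope hover/gaze highlighting to one entity only.
if let target = sceneEntity.findEntity(named: "Target") {
    // Gaze targeting requires an input target and a collision shape on the
    // same entity; siblings without these should not highlight.
    target.components.set(InputTargetComponent())
    target.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
    target.components.set(HoverEffectComponent())
}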
Posted
by
Post not yet marked as solved
1 Reply
118 Views
Hi all, I need some help debugging some code I wrote. Just as a preface, I'm an extremely new VR/AR developer and also very new to using ARKit + RealityKit. So please bear with me :) I'm just trying to make a simple program that will track an image and place an entity on it. The image is tracked correctly, but the moment the program recognizes the image and tries to place an entity on it, the program crashes. Here’s my code:

VIEWMODEL CODE:

@Observable
class ImageTrackingModel {
    var session = ARKitSession() // ARKit session used to manage AR content
    var imageAnchors = [UUID: Bool]() // Tracks whether specific anchors have been processed
    var entityMap = [UUID: ModelEntity]() // Maps anchors to their corresponding ModelEntity
    var rootEntity = Entity() // Root entity to which all other entities are added

    let imageInfo = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "referancePaper")
    )

    init() {
        setupImageTracking()
    }

    func setupImageTracking() {
        if ImageTrackingProvider.isSupported {
            Task {
                try await session.run([imageInfo])
                for await update in imageInfo.anchorUpdates {
                    updateImage(update.anchor)
                }
            }
        }
    }

    func updateImage(_ anchor: ImageAnchor) {
        let entity = ModelEntity(mesh: .generateSphere(radius: 0.05)) // THIS IS WHERE THE CODE CRASHES

        if imageAnchors[anchor.id] == nil {
            rootEntity.addChild(entity)
            imageAnchors[anchor.id] = true
            print("Added new entity for anchor \(anchor.id)")
        }

        if anchor.isTracked {
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            print("Updated transform for anchor \(anchor.id)")
        }
    }
}

APP:

@main
struct MyApp: App {
    @State var session = ARKitSession()
    @State var immersionState: ImmersionStyle = .mixed
    private var viewModel = ImageTrackingModel()

    var body: some Scene {
        WindowGroup {
            ModeSelectView()
        }
        ImmersiveSpace(id: "appSpace") {
            ModeSelectView()
        }
        .immersionStyle(selection: $immersionState, in: .mixed)
    }
}

Content View:

RealityView { content in
    Task {
        viewModel.setupImageTracking()
    }
}
// I'm seriously so clueless on how to use this view
Posted
by
Post marked as solved
1 Reply
102 Views
Context: https://developer.apple.com/forums/thread/751036

I found some sample code that does the process I described in my other post for ModelEntity here: https://www.youtube.com/watch?v=TqZ72kVle8A&ab_channel=ZackZack

At runtime I'm loading:
1. An immersive scene in a RealityView from Reality Composer Pro, with the robot model baked into the file (not remote - asset in project)
2. A Model3D view that pulls in the robot model from the web URL
3. A RemoteObjectView (RealityView) which downloads the model to temp storage, creates a ModelEntity, and adds it to the content of the RealityView

Method 1 above is fine, but methods 2 and 3 load the model with a pure black texture for some reason. The ideal state is that methods 2 and 3 look like the method 1 result (see screenshot). Am I doing something wrong? E.g., should I not be using multiple RealityViews at once?

Screenshot

Code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                // https://developer.apple.com/
            }
        }
        Model3D(url: URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!)
        SkyboxView()
        // RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")
        RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")
    }
}
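A hedged guess at the black-texture symptom: the remotely loaded ModelEntity is not receiving the same image-based light as the Reality Composer Pro scene. A minimal sketch of attaching the IBL components to the downloaded entity as well, reusing the same EnvironmentResource as the post's code; the fix itself is an assumption, and "remoteEntity" is an illustrative name for the ModelEntity built from the downloaded USDZ.

// Sketch: apply the same image-based light to a remotely loaded entity.
if let resource = try? await EnvironmentResource(named: "ImageBasedLight") {
    let ibl = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
    remoteEntity.components.set(ibl)
    remoteEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: remoteEntity))
}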
Posted
by
Post not yet marked as solved
0 Replies
181 Views
When the dinosaur protrudes from the portal in the Encounter Dinosaurs app, it appears to be lit by the real room lighting, just like any other RealityKit content is by default. When the dinosaur is inside the portal, it appears to be lit by the virtual environment, and the two light sources seem to be smoothly blended at the plane of the portal. How is this done? ImageBasedLightReceiverComponent allows the IBL to be changed on a per-entity basis, but the actual lighting-calculation shader code seems to be a black box, and I have not seen a way to specify which IBL texture is used on a per-fragment basis.
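I don't know how the per-fragment blend in the app is actually implemented (that shading path does appear to be closed), but for completeness, a coarse per-entity approximation sketch using the component named in the post. The plane test and parameter names are assumptions, and the result switches abruptly per entity rather than blending smoothly.

// Coarse approximation: pick an IBL source based on which side of the portal
// plane the entity's origin is on. Not the smooth per-fragment blend seen in
// Encounter Dinosaurs.
func updateLighting(for entity: Entity, portalZ: Float,
                    insideIBL: ImageBasedLightComponent,
                    outsideIBL: ImageBasedLightComponent) {
    let z = entity.position(relativeTo: nil).z
    entity.components.set(z < portalZ ? insideIBL : outsideIBL)
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}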
Posted
by
Post not yet marked as solved
1 Reply
123 Views
I can't find a way to download a USDZ at runtime and load it into a RealityView with RealityKit. As an example, imagine downloading one of the 3D models from this Apple Developer page: https://developer.apple.com/augmented-reality/quick-look/

I think the process should be:
1. Download the file from the web and store it in temporary storage with the FileManager API.
2. Load the entity from the temp file location using Entity.init (I believe Entity.load is being deprecated in Swift 6 - it throws up a compiler warning) - https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
3. Add the entity to content in the RealityView.

I'm doing this at runtime on visionOS in the simulator. I can get this to work with textures using slightly different APIs, so I think the logic is sound, but in that case I'm creating the entity with a mesh and material. Not sure if file size has an effect. Is there any official guidance or a code sample for this use case?
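A minimal sketch of those three steps, assuming a direct USDZ URL and a visionOS RealityView; error handling is reduced to try/throws for brevity, and the function name is illustrative.

import Foundation
import RealityKit

func loadRemoteUSDZ(from url: URL) async throws -> Entity {
    // 1. Download to a temporary location.
    let (tempURL, _) = try await URLSession.shared.download(from: url)

    // 2. Give the file a .usdz extension so RealityKit recognizes the format.
    let destination = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("usdz")
    try FileManager.default.moveItem(at: tempURL, to: destination)

    // 3. Load the entity from the local file.
    return try await Entity(contentsOf: destination)
}

// Usage inside a RealityView make closure (illustrative):
// let entity = try await loadRemoteUSDZ(from: someUSDZURL)
// content.add(entity)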
Posted
by
Post not yet marked as solved
1 Reply
97 Views
Hello, I am loading a model from the bundle and it loads successfully. Now I am scaling the model using the GestureExtension from Apple's demo code (https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures?changes=_8).

@State private var selectedEntityName: String = ""
@State private var modelEntity: ModelEntity?

var body: some View {
    contentView
        .task {
            do {
                modelEntity = try await ModelEntity.loadArcadeMachine()
            } catch {
                fatalError(error.localizedDescription)
            }
        }
}

@ViewBuilder
private var contentView: some View {
    if let modelEntity {
        RealityView { content, attachments in
            modelEntity.position = SIMD3<Float>(x: 0, y: -0.3, z: -5)
            print(modelEntity.transform.scale)
            modelEntity.transform.scale = [0.006, 0.006, 0.006]
            content.add(modelEntity)
            if let percentTextAttachment = attachments.entity(for: "percentage") {
                percentTextAttachment.position = [0, 50, 0]
                modelEntity.addChild(percentTextAttachment)
            }
        } update: { content, attachments in
            // I want to get the updated scaling value here and show it in the RealityView attachment text.
        } attachments: {
            Attachment(id: "percentage") {
                Text("\(modelEntity.name) \(modelEntity.scale * 100) %")
                    .font(.system(size: 5000))
                    .background(.red)
            }
        }
        // This is the method I am using for gesture support
        .installGestures()
    } else {
        ProgressView()
    }
}

Below is the code from GestureExtension:

let state = EntityGestureState.shared
guard canScale, !state.isDragging else { return }

let entity = value.entity
if !state.isScaling {
    state.isScaling = true
    state.startScale = entity.scale
}

let magnification = Float(value.magnification)
entity.scale = state.startScale * magnification
state.magnifyValue = magnification
magnifyScale = Double(magnification)
print("Entity Name ::::::: \(entity.name)")
print("Scale ::::::: \(entity.scale)")
print("Magnification ::::::: \(magnification)")
print("StartScale ::::::: \(state.startScale)")

I need to use this "magnification" value in the RealityView. How can I do it? Could you please guide me?
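A hedged sketch of one way to surface the magnification in SwiftUI: mirror the gesture's value into an observable model the attachment view can read. "ScaleModel" is an illustrative name, not part of Apple's GestureExtension, and making the gesture state observable is an assumption.

import Observation

// Sketch: an observable model updated from the gesture handler.
@Observable
final class ScaleModel {
    static let shared = ScaleModel()
    var magnification: Float = 1.0
}

// In the gesture handler (alongside state.magnifyValue = magnification):
// ScaleModel.shared.magnification = magnification

// In the attachment, read the observable value so the text updates live:
// Attachment(id: "percentage") {
//     Text("\(Int(ScaleModel.shared.magnification * 100)) %")
// }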
Posted
by
Post not yet marked as solved
2 Replies
146 Views
I'm trying to implement playback of HLS content with FairPlay, and I want to insert it into a RealityView using a VideoMaterial on a sphere. When I use unencrypted HLS content everything works correctly, but when I use FairPlay it doesn't. To initialize FairPlay I am using the following in the view:

let contentKeyDelegate = ContentKeySessionDelegate(licenseURL: licenseURL, certificateURL: certificateURL)

// Create the Content Key Session using the FairPlay Streaming key system.
let contentKeySession = AVContentKeySession(keySystem: .fairPlayStreaming)
contentKeySession.setDelegate(contentKeyDelegate, queue: DispatchQueue.main)
contentKeySession.addContentKeyRecipient(asset)

Has anyone else encountered this problem?

Note: I'm testing on Vision Pro directly because the simulator doesn't support FairPlay.
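For reference, a minimal sketch of the RealityKit side of the setup, assuming "asset" is the same AVURLAsset already registered with the content key session as in the post (the key session and its delegate must be kept alive for the whole playback). This sketches the unencrypted-working path; it doesn't by itself explain the FairPlay failure.

import AVFoundation
import RealityKit

// Sketch: play the HLS asset onto the inside of a sphere.
let playerItem = AVPlayerItem(asset: asset)
let player = AVPlayer(playerItem: playerItem)

// VideoMaterial renders the player's output onto the mesh.
let videoMaterial = VideoMaterial(avPlayer: player)
let sphere = ModelEntity(
    mesh: .generateSphere(radius: 10),
    materials: [videoMaterial]
)
// Flip the sphere so the video is visible from the inside.
sphere.scale = SIMD3<Float>(x: -1, y: 1, z: 1)

player.play()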
Posted
by
Post not yet marked as solved
2 Replies
153 Views
extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }
}

Problem: the failure case fires:

case .failure(let error): assertionFailure("\(error)")

Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
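The MTKTextureLoader error usually means the image could not be decoded at all, so it is worth checking that "image_20240425_201630" is actually in the app bundle or asset catalog and in a format RealityKit can decode (very large panoramas can also fail). A hedged sketch of the same load with a recoverable error path instead of an assertion; "entity" stands in for the panorama entity.

// Sketch: load the texture with do/catch so a decode failure degrades
// gracefully. TextureResource.load(named:) expects the image in the main
// bundle / asset catalog.
do {
    let texture = try TextureResource.load(named: "image_20240425_201630")
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))
    entity.components.set(ModelComponent(
        mesh: .generateSphere(radius: 1E3),
        materials: [material]
    ))
} catch {
    print("Texture load failed: \(error)") // e.g. missing asset or unsupported format
}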
Posted
by
Post marked as solved
3 Replies
208 Views
I am developing a mixed-immersive native app on Vision Pro. In my RealityView, I add my scene with content.add(mainGameScene). Normally the anchored position (the origin coordinate) should be the device position but on the ground (with y == 0 at the floor). At least this is how I understand RealityViewContent to work. So if I place something at position (0, 0, -1.0), the object should be in front of you but on the floor (the z axis points backwards). However, I recently loaded a different scene and added it with the same code, content.add(mainGameScene), and something has changed: my scene is randomly anchored on the floor or the ceiling, depending on where I stand or sit. When I turn on the visualizations of my anchoring point, I can see that the anchor point being used is on the ceiling, while the correct one (around my feet) is left over there. How can I switch to the correct anchored position? Or is there any setting that changes the behavior of the default RealityViewContent?
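A hedged sketch of one way to take the guesswork out of the default origin: parent the scene to an explicit floor-plane anchor instead of adding it directly to the content. "mainGameScene" mirrors the post; the anchor setup itself is an assumption, not a confirmed fix for the ceiling behavior.

// Sketch, inside the RealityView make closure: pin the scene to the floor.
let floorAnchor = AnchorEntity(
    .plane(.horizontal, classification: .floor, minimumBounds: [0.5, 0.5])
)
floorAnchor.addChild(mainGameScene)
content.add(floorAnchor)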
Posted
by
Post not yet marked as solved
0 Replies
116 Views
I'm currently developing an application where the models present inside a volumetric window may exceed the clipping boundaries of the window (which I currently understand to be a maximum of 2 m). Because of this, as models move through the clipping boundaries, the interior of the models becomes visible. If possible, I'd like to cap these interiors with a solid fill so as to make them more visually appealing. However, as far as I can tell, I'm quite limited in how I might achieve this when using RealityKit on visionOS.

Some approaches I've seen for similar effects use multiple passes over the model geometry rendered into stencil buffers, and use that to decide whether a cap should be drawn. However, as far as I can tell, if I have opted into using a RealityView and RealityKit, I don't have the level of control over my render pipeline to render ModelEntities, run multiple rendering passes over the set of contained entities into a stencil buffer, and then provide that buffer to a separate set of "capping planes" (which is how I currently imagine I might accomplish this effect).

Alternatively (due to the nature of the models I'm using), I considered using a height map to construct an approximation of a surface cap, but how I might use a shader to construct a height map of rendered entities seems similarly difficult with the visionOS RealityView pipeline. It is not obvious to me how I could use a ShaderGraphMaterial to render to an arbitrary image buffer that I might then pass to other functions as an input; ShaderGraphMaterial seems biased toward all image inputs and outputs being either literal files or the actual rendered buffer.

Would anyone out there who has already created an effect like this have some advice? Or could someone correct any misunderstandings I have about accessing the Metal pipeline for RealityView or using ShaderGraphMaterial to construct a height map?
Posted
by
Post not yet marked as solved
3 Replies
225 Views
Hello, I would like to change the appearance (scale, texture, color) of a 3D element (ModelEntity) when I hover over it with my eyes. What should I do if I want to create a request for this feature? And how would I know whether it will ever be considered, or when it will appear?
Posted
by