RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.


How to overlay an image on part of a 3D model?
How can I overlay an image on a 3D model in RealityKit, in code, so that it does not stretch across the entire object but keeps its own width and height that I can change? The only workaround I have found is to cut out part of the object and map the image onto that entire cutout area, but then I cannot change the image's size or place it anywhere on the model. In short: how do I overlay a 2D image on a 3D model without stretching the photo over the whole object? If this is possible, please give a code example. I could not find anything online, although other engines such as Blender or Unity can do this; if I am not mistaken, it is done there with decals.
Replies: 0 · Boosts: 1 · Views: 358 · Jul ’24
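RealityKit, at least as of these releases, does not ship a decal system like Blender's or Unity's, so a common workaround is to parent a small textured plane to the model; the overlay's width, height, and placement then stay independent of the model's UV layout. A minimal sketch, assuming an image named "overlay.png" in the bundle and illustrative sizes and offsets:

```swift
import RealityKit

// Sketch of a decal-style overlay: a small textured plane parented to the model.
// "overlay.png", the plane size, and the offset are illustrative values.
func addOverlay(to model: ModelEntity) async {
    guard let texture = try? await TextureResource(named: "overlay.png") else { return }

    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))

    // Width and height can be changed independently of the model's geometry.
    let overlay = ModelEntity(
        mesh: .generatePlane(width: 0.1, height: 0.05),
        materials: [material]
    )
    // Offset the plane slightly from the surface so it doesn't z-fight with the model.
    overlay.position = [0, 0.15, 0.101]
    model.addChild(overlay)
}
```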
USDZ models look broken on iOS 18 / visionOS 2 beta
I noticed that with the 4th betas of iOS 18 and visionOS 2, some USDZ models' texture mapping looks completely broken. The issue occurs only on a device, not in the Simulator. It's a regression: the models look fine with iOS 17.5.1 and visionOS 1.2. The issue occurs if I load a model as an Entity in a RealityView on iOS or visionOS, or in a SwiftUI Model3D view on visionOS. Has anyone seen this too? Is there a workaround? I filed a bug report with a minimal example project: FB14473756. Screenshots attached: Vision Pro device and Vision Pro Simulator.
Replies: 1 · Boosts: 2 · Views: 668 · Jul ’24
EnvironmentLightingConfigurationComponent not working
Has anyone gotten EnvironmentLightingConfigurationComponent to work? I tried the code from https://developer.apple.com/documentation/realitykit/environmentlightingconfigurationcomponent to prevent a planet from being lit by the environment. My goal is that the side that isn't lit by the star appears pitch black. However, the code seems to have no effect on visionOS 2 and iPadOS 18 (I tried betas 1 through 4, on device, built with Xcode 16 beta 4). It makes no difference whether there is a PointLight or no light at all in the scene, whether I use SimpleMaterial or PhysicallyBasedMaterial, or whether I use a texture or a color on the sphere. I filed a bug report, it's FB14470954. Or am I doing something wrong? Here's my code:

```swift
var material = PhysicallyBasedMaterial()
if let tex = try? await TextureResource(named: "planet.jpg") {
    material.baseColor = .init(texture: .init(tex))
    material.emissiveIntensity = 0

    let sphereMesh = MeshResource.generateSphere(radius: 0.5)
    let entity = ModelEntity()
    entity.components.set(ModelComponent(mesh: sphereMesh, materials: [material]))
    entity.position = [-1, 1.0, -1.0]

    let envLightingConfig = EnvironmentLightingConfigurationComponent(environmentLightingWeight: 0)
    entity.components.set(envLightingConfig)
    content.add(entity)
}
```
Replies: 1 · Boosts: 1 · Views: 531 · Jul ’24
Transparency Not Rendered Properly for Some View Directions
Transparency in RealityKit is not rendered properly from specific ordinal axes. It seems like a depth-sorting issue where some transparent surfaces are rejected when they should not be. Some view directions relative to specific ordinal axes are fine; I have not narrowed down which specific axis is the problem. This happens with both particle systems and meshes, and it is very easy to reproduce with multiple transparent meshes or particle systems. In the first GIF you can see the problem in multiple instances: the fire and snow particles are sorted behind the terrain, which has transparency since it is a procedural blend of grass, rock, and ice, but they are correctly sorted in front of opaque materials such as the rocks and wood. In the second GIF there are two back-to-back grid meshes (since double-sided rendering is not supported) with a custom surface shader that animates the mesh in a wave and also applies transparency. In the distance the transparency is rendered and overlapped correctly, but as the overlap approaches the screen (and crosses an ordinal axis) the transparent portion of the surface renders black where the green of the mesh behind it should show through. This is a blocking problem for the development of this demo.
Replies: 5 · Boosts: 4 · Views: 952 · Apr ’24
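This does not address the underlying depth-sorting behaviour, but one knob that can help with transparent draw order between entities on visionOS is ModelSortGroupComponent, which forces an explicit ordering within a group. A minimal sketch, with illustrative entity names and ordering:

```swift
import RealityKit

// Sketch: force an explicit draw order for transparent entities on visionOS.
// The entities and the chosen ordering are illustrative.
func applySortOrder(terrain: Entity, fireParticles: Entity, snowParticles: Entity) {
    let group = ModelSortGroup(depthPass: nil)
    // Lower order values are drawn earlier: terrain first, particles on top.
    terrain.components.set(ModelSortGroupComponent(group: group, order: 0))
    fireParticles.components.set(ModelSortGroupComponent(group: group, order: 1))
    snowParticles.components.set(ModelSortGroupComponent(group: group, order: 2))
}
```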
Implementing a bouncing surface
I am trying to simulate a pinball game and I want to use PhysicsBody and PhysicsMotion to achieve that. I have tuned the parameters in PhysicsBodyComponent, but the result is not quite ideal yet. Imagine a fully inflated basketball bouncing high off the ground. I assign a PhysicsBodyComponent and a CollisionComponent to both the basketball and the ground.

For the basketball, I set:
- dynamic mode
- mass 1, inertia .one
- material restitution 1
- angular damping and linear damping 0
- addForce to make the basketball move and hit the ground

For the ground, I set:
- static mode
- mass 1, inertia .zero
- material restitution 1
- angular damping and linear damping 0

However, when the basketball hits the ground it isn't very bouncy; it behaves as if it were hitting cotton, and the linear speed dies off quickly. How can I achieve a bouncing effect like a real basketball hitting the ground?
Replies: 4 · Boosts: 0 · Views: 934 · Jul ’24
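A minimal sketch of the setup described in the post above, as a baseline to compare against; the meshes, sizes, masses, and positions are illustrative:

```swift
import RealityKit

// Sketch of the ball/ground setup described above; all values are illustrative.
func makeBouncingPair() -> (ball: ModelEntity, ground: ModelEntity) {
    // Fully elastic, low-friction material shared by both bodies.
    let bouncy = PhysicsMaterialResource.generate(friction: 0.2, restitution: 1.0)

    let ballShape = ShapeResource.generateSphere(radius: 0.12)
    let ball = ModelEntity(mesh: .generateSphere(radius: 0.12),
                           materials: [SimpleMaterial()])
    ball.components.set(CollisionComponent(shapes: [ballShape]))
    var ballBody = PhysicsBodyComponent(shapes: [ballShape],
                                        mass: 1.0,
                                        material: bouncy,
                                        mode: .dynamic)
    ballBody.linearDamping = 0
    ballBody.angularDamping = 0
    ball.components.set(ballBody)
    ball.position = [0, 1.0, 0]

    let groundShape = ShapeResource.generateBox(size: [2, 0.05, 2])
    let ground = ModelEntity(mesh: .generateBox(size: [2, 0.05, 2]),
                             materials: [SimpleMaterial()])
    ground.components.set(CollisionComponent(shapes: [groundShape]))
    ground.components.set(PhysicsBodyComponent(shapes: [groundShape],
                                               mass: 1.0,
                                               material: bouncy,
                                               mode: .static))
    return (ball, ground)
}
```

If a setup like this still loses most of its energy on impact, it is worth double-checking that the collision shapes match the visual meshes and that no parent entity applies a scale to the simulated bodies.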
visionOS: Simultaneous Drag & Rotate gestures
I have been trying to replicate the entity transform functionality present in the magnificent app Museum That Never Was (https://apps.apple.com/us/app/the-museum-that-never-was/id6477230794): it allows you to simultaneously rotate, magnify, and translate an entity using gestures with both hands (as opposed to the normal DragGesture(), which is a one-handed gesture). I am able to rotate and magnify simultaneously, but translating via drag does not activate while doing two-handed gestures. Any ideas? My setup is something like this:

Gestures:

```swift
var drag: some Gesture {
    DragGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureTranslation = value.convert(value.translation3D, from: .local, to: .scene)
        }
        .onEnded { value in
            itemTranslation += gestureTranslation
            gestureTranslation = .init()
        }
}

var rotate: some Gesture {
    RotateGesture3D()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureRotation = simd_quatf(value.rotation.quaternion).inverse
        }
        .onEnded { value in
            itemRotation = gestureRotation * itemRotation
            gestureRotation = .identity
        }
}

var magnify: some Gesture {
    MagnifyGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureScale = Float(value.magnification)
        }
        .onEnded { value in
            itemScale *= gestureScale
            gestureScale = 1.0
        }
}
```

RealityView modifiers:

```swift
.simultaneousGesture(drag)
.simultaneousGesture(rotate)
.simultaneousGesture(magnify)
```

RealityView update block:

```swift
entity.position = itemTranslation + gestureTranslation + exhibitDefaultPosition
entity.orientation = gestureRotation * itemRotation
entity.scaleAll(itemScale * gestureScale)
```
Replies: 2 · Boosts: 1 · Views: 982 · May ’24
Trying to traverse through a usdz file to copy materials from another usdz file to the traversed mesh
Hi all, I am using RealityKit along with ARKit and SwiftUI to develop an app where I am augmenting a USDZ model with complex geometry, like that of a car. I have some other USDZ files with simple plane geometry that have the material properties embedded in them, which I also load as model entities. I want to traverse my car USDZ so that I can pick the material from a simple USDZ and apply it to the car as car paint. I know the name of the mesh holding the car paint as well as the name of the material applied to it. I have tried to traverse the USDZ files using both RealityKit and SceneKit, but I have not managed to reach the lowest-level mesh and copy the material properties onto it. With RealityKit I tried to get the instance data via the model entity, using sourceModel?.model?.mesh.contents.instances, but this returns only the instance id, model name, and transform. Any help will be highly appreciated. Thank you.
Replies: 1 · Boosts: 0 · Views: 491 · Jul ’24
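One possible approach, sketched here as a starting point rather than a definitive answer: use findEntity(named:), which searches the child hierarchy recursively, to reach the named car-paint mesh, then copy the materials across through its ModelComponent. The function and entity names below are illustrative:

```swift
import RealityKit

// Sketch: copy a material embedded in a simple "source" model onto a named mesh
// deep inside a complex model. Names and structure are illustrative.
func firstModelEntity(in entity: Entity) -> ModelEntity? {
    if let model = entity as? ModelEntity, model.model != nil { return model }
    for child in entity.children {
        if let found = firstModelEntity(in: child) { return found }
    }
    return nil
}

func applyPaint(from source: Entity, to car: Entity, carPaintMeshName: String) {
    // Pull the embedded material out of the simple plane USDZ.
    guard let paintMaterial = firstModelEntity(in: source)?.model?.materials.first else { return }

    // findEntity(named:) walks the whole hierarchy, so the car-paint mesh can
    // sit arbitrarily deep inside the car USDZ.
    guard let target = car.findEntity(named: carPaintMeshName),
          var modelComponent = target.components[ModelComponent.self] else { return }

    // Replace the target's material slots with the paint material.
    modelComponent.materials = modelComponent.materials.map { _ in paintMaterial }
    target.components.set(modelComponent)
}
```

Depending on how the USDZ was authored, the entity that actually carries the ModelComponent may not have exactly the name you expect, so printing the hierarchy first can help when the lookup fails.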
RealityKit, DrawableQueue, and synchronizing scene updates
I have a visionOS app that uses DrawableQueue and CADisplayLink to update an Entity, a TextureResource tied to the drawable, and a Material that uses that TextureResource. The TextureResource gets updated when a video frame is ready; the Material properties can get updated from the video or from other sources. Current process: when each video frame is ready, we get the next drawable, render to it, present it, and make an Entity update (e.g. a transform change). However, I'm experiencing jitter in the rendered content, where the updates to the entity and the drawable being presented seem to be milliseconds off from each other. Should I be using Drawable.presentOnSceneUpdate() to ensure all updates happen in the same update cycle? And if so, do you have any additional details on how to use this function correctly (the docs are unclear)?
Replies: 0 · Boosts: 1 · Views: 436 · Jul ’24
How to optimise RealityKit performance with many similar objects
I have code such as the following. The performance on the Vision Pro seems to get quite bad once I hit a few thousand of these models. It feels like I should be able to optimise this somehow, perhaps using instancing. Is that possible with RealityKit in visionOS 2?

```swift
let material = UnlitMaterial(color: .white)
let sphereModel = ModelEntity(
    mesh: .generateSphere(radius: 0.001),
    materials: [material])

for index in 0..<5000 {
    let point = generatedPoints[index]
    let model = sphereModel.clone(recursive: false)
    model.position = [point.x, point.y, point.z]
    parent.addChild(model)
}
```
Replies: 0 · Boosts: 1 · Views: 551 · Jul ’24
How to reduce draw call count in RealityKit
I'm trying to render a large number of entities, and it looks like each ModelEntity causes a draw call, even if you share the ModelComponent so each Entity shares the mesh and materials. I tried to use the MeshInstanceCollection inside MeshResource to generate a large number of objects in the scene; the code works and draws many objects, but the draw count is still one call per instance. This seems strange: I would assume it should be only one draw call for the single entity, since I have specified instancing in the resource. Has anybody else successfully used instancing in RealityKit to draw a large number of Entities (maybe around 10,000), or drawn that many items at 60 fps any other way? Here is some sample code that draws 100 cubes using instancing but still causes 100 draw calls:

```swift
func instanceTest(scene: RealityKit.Scene) {
    let resource = MeshResource.generateBox(size: 0.2)
    var contents = MeshResource.Contents()
    contents.models = resource.contents.models

    var arr: [MeshResource.Instance] = []
    var matrix = matrix_identity_float4x4
    matrix[3, 0] = 0.5
    for i in 0..<100 {
        let inst = MeshResource.Instance(id: "\(i)", model: "MeshModel", at: matrix)
        arr.append(inst)
    }
    contents.instances = MeshInstanceCollection(arr)
    let updatedResource = try? MeshResource.generate(from: contents)

    let unlitMaterial = UnlitMaterial(color: .red)
    let modelEntity = ModelEntity(
        mesh: updatedResource!,
        materials: [unlitMaterial]
    )

    let anchor = AnchorEntity()
    anchor.addChild(modelEntity)
    scene.addAnchor(anchor)
}
```
Replies: 2 · Boosts: 1 · Views: 677 · Jun ’24
Unable to create PhysicsJoint using Entity's Geometric Pin
Hello, I'm trying to attach one entity to another via the new PhysicsFixedJoint. I have a USDZ that contains a skeletal pose which exposes the joints as pins, as desired. However, when I access a pin, it returns a GeometricPin instead of the EntityGeometricPin you would expect, and I can't use the returned GeometricPin to create the joint. Am I missing something? Shouldn't accessing the Entity's pins object return EntityGeometricPins instead of GeometricPin? Here is the code sample:

```swift
var body: some View {
    RealityView { content in
        if let scene = try? await Entity(named: "Scene", in: untitledBundle) {
            content.add(scene)

            let attack = try! Entity.load(named: "Attack01_SingleSword")
            let anchor = scene.findEntity(named: "Root")
            anchor?.addChild(attack)

            let sword = try! Entity.load(named: "OHS08_Sword")
            anchor?.addChild(sword)

            if let swordEntity = findModelComponentEntity(entity: sword) {
                let swordPin = swordEntity.pins.set(
                    named: "test",
                    position: SIMD3<Float>.zero
                )
                if let attackEntity = findModelComponentEntity(entity: attack) {
                    // This returns a GeometricPin instead of the EntityGeometricPin
                    // that the "pins" collection contains
                    let attackPin = attackEntity.pins["root/pelvis/spine_01/spine_02/spine_03/clavicle_r/upperarm_r/lowerarm_r/hand_r/weapon_r"]!
                    let joint = PhysicsFixedJoint(
                        pin0: swordPin,
                        pin1: attackPin // Compile error, since this is not an EntityGeometricPin
                    )
                    try! joint.addToSimulation()
                }
            }
        }
    }
}
```
Replies: 2 · Boosts: 0 · Views: 692 · Jun ’24
RealityKit on 2D devices with new betas
Hi all, I'm playing around with RealityKit to see if I can reuse the content for both iOS/macOS and visionOS with the new betas. For the 2D devices, I'm looking at a more traditional, non-AR setup. I've fallen over at the first hurdle: dragging an object around on a plane, just as a test of how things work. I was trying to unproject from a plane to the view/window coordinates and move the box around based on the result. The code below works if I angle the plane, weirdly, but not if the plane is (as I understand it) 'flat' on the ground. Am I doing this the wrong way? It behaves similarly with both the default PerspectiveCameraComponent and OrthographicCameraComponent.

```swift
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        RealityView { content in
            let cubemesh = MeshResource.generateBox(size: 0.2, cornerRadius: 0.05)
            let cubeModel = ModelEntity(mesh: cubemesh)
            cubeModel.generateCollisionShapes(recursive: false)
            cubeModel.components.set(InputTargetComponent())
            content.add(cubeModel)

            let cameraEntity = Entity()
            cameraEntity.components.set(OrthographicCameraComponent())
            //cameraEntity.components.set(PerspectiveCameraComponent())
            let cameraPosition: SIMD3<Float> = [10, 10, 5]
            let target: SIMD3<Float> = .zero
            cameraEntity.look(at: target, from: cameraPosition, relativeTo: nil)
            content.add(cameraEntity)
        }
        .gesture(DragGesture(coordinateSpace: .global)
            .targetedToAnyEntity()
            .onChanged() { value in
                let planeTransform = Transform(scale: SIMD3<Float>(1, 1, 1),
                                               rotation: simd_quatf(angle: 0, axis: SIMD3<Float>(0, 1, 0)),
                                               translation: SIMD3<Float>(0, 0, -1))
                print(planeTransform.matrix)
                #if !os(visionOS)
                if let placementPosition = value.unproject(value.location,
                                                           from: .global,
                                                           to: .scene,
                                                           ontoPlane: (planeTransform.matrix)) {
                    print("projected value:", placementPosition)
                    value.entity.position.x = placementPosition.x
                    value.entity.position.y = placementPosition.y
                    value.entity.position.z = 0
                }
                #endif
                print(value.location)
            })
    }
}

#if os(visionOS)
#Preview("3D Device", windowStyle: .volumetric) {
    if #available(visionOS 2.0, *) {
        ContentView()
            .volumeBaseplateVisibility(.visible)
            .frame(depth: 1300)
            .frame(width: 1280)
            .frame(height: 1280)
    } else {
        ContentView()
    }
}
#else
#Preview("2D Device") {
    ContentView()
}
#endif
```
Replies: 0 · Boosts: 0 · Views: 472 · Jun ’24
Can I get a point on a texture in RealityKit on visionOS?
Hi, I am currently considering porting my AR game from SceneKit to RealityKit so it appears in 3D on visionOS, but one crucial question for whether it can even be ported is: I need a tap on a 3D model to be translated into a tap on that model's texture. On visionOS this would be the gaze of the person, so the question is whether I can get the point on a texture the user is looking at (ideally continuously, so I can have a hover effect, but at least when tapping the fingers together). If that is not possible, is it possible to touch a RealityKit object and get the location of that touch on its texture? All the best, Christoph
Replies: 4 · Boosts: 0 · Views: 644 · Jun ’24
Triangle count and texture size budget for RealityKit on visionOS
In the past, Apple recommended restricting USDZ models to a maximum of 100,000 triangles and a texture size of 2048x2048 for Apple QuickLook (and, I think, for RealityKit on iOS in general). Does Apple have any recommended maximum polygon counts for visionOS? Are they the same for models running in a volumetric window in the Shared Space and in an ImmersiveSpace? What is the recommended texture size for visionOS? (I seem to recall 8192x8192, but I can't find it now.)
Replies: 2 · Boosts: 0 · Views: 971 · May ’24
SceneKit or RealityKit for Non-AR Game Development
Hi everyone, I'm choosing a framework for developing a game that doesn't involve augmented reality (AR), and I'm unsure whether to use SceneKit or RealityKit. I would like to hear from Apple engineers on this matter. Which of these frameworks is better suited for creating non-AR games? Additionally, is it possible to disable AR in RealityKit when using the updated RealityView? Thanks in advance for your insights and recommendations!
Replies: 2 · Boosts: 0 · Views: 1.2k · Jun ’24
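On the question of disabling AR: with the RealityView available on iOS 18 and macOS 15 you can render a purely virtual scene by adding your own camera entity, the same approach used in the 2D-device example earlier on this page. A minimal sketch with an illustrative scene:

```swift
import SwiftUI
import RealityKit

// Sketch of a non-AR RealityView for iOS/macOS: the scene is rendered through a
// virtual perspective camera rather than an AR camera feed. Content is illustrative.
struct NonARContentView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.3),
                                  materials: [SimpleMaterial()])
            content.add(box)

            // A virtual camera looking at the box from a short distance away.
            let camera = Entity()
            camera.components.set(PerspectiveCameraComponent())
            camera.look(at: .zero, from: [0.5, 0.5, 1.0], relativeTo: nil)
            content.add(camera)
        }
    }
}
```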