Is there a way to give a "Primitive Shape" entity created through Reality Composer Pro a ModelComponent?
I have a custom ShaderGraphMaterial assigned to a primitive shape in my RC Pro scene hierarchy, and I'd like to tweak the inputs of this material programmatically. I found a great example of the behavior I'm looking for here: https://developer.apple.com/videos/play/wwdc2023/10273/?time=1862
@State private var sliderValue: Float = 0.0

Slider(value: $sliderValue, in: (0.0)...(1.0))
    .onChange(of: sliderValue) { _, _ in
        guard let terrain = rootEntity.findEntity(named: "DioramaTerrain"),
              var modelComponent = terrain.components[ModelComponent.self],
              var shaderGraphMaterial = modelComponent.materials.first as? ShaderGraphMaterial
        else { return }
        do {
            try shaderGraphMaterial.setParameter(name: "Progress", value: .float(sliderValue))
            modelComponent.materials = [shaderGraphMaterial]
            terrain.components.set(modelComponent)
        } catch { }
    }
However, when I try applying this example to my use case, my project's equivalent of this line fails:
var modelComponent = terrain.components[ModelComponent.self]
The only difference I can see between my case and this example is my entity is a primitive shape, whereas the example uses a model reference to a .usdz file. Is there some way to update a primitive shape entity to contain this ModelComponent in its set of components so I can reference + update its materials programmatically?
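For reference, here's the kind of lookup I've been sketching as a workaround (names like "Cube" and firstModelEntity are my own placeholders), on the assumption that the ModelComponent may live on a child entity of the primitive rather than on the named entity itself:

// Recursively find the first descendant that actually carries a ModelComponent.
func firstModelEntity(in entity: Entity) -> Entity? {
    if entity.components.has(ModelComponent.self) { return entity }
    for child in entity.children {
        if let match = firstModelEntity(in: child) { return match }
    }
    return nil
}

// Usage sketch: look the primitive up by name, then drill down to the component holder.
if let shape = rootEntity.findEntity(named: "Cube"),          // "Cube" is a placeholder name
   let holder = firstModelEntity(in: shape),
   var modelComponent = holder.components[ModelComponent.self],
   var material = modelComponent.materials.first as? ShaderGraphMaterial {
    try? material.setParameter(name: "Progress", value: .float(0.5))
    modelComponent.materials = [material]
    holder.components.set(modelComponent)
}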
Hi folks!
I have been working with a team on a Vision Pro app using Reality Composer Pro. One thing we have found is that multiple developers editing the RC Pro scene is a continuous problem, much like multiple developers editing a storyboard.
RC Pro maintains a SceneMetadataList.json file that indexes the file contents of the project, and it gets updated even as the scene hierarchy is opened and closed, not to mention on other changes to scene content. We are getting frequent version control conflicts on this file as we each make changes and edits to the scene, or even just browse the scene without making any substantive changes.
It seems like it would be safe to add the SceneMetadataList.json file in an RC Pro project to .gitignore. Is that recommended? Any downsides to that?
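For context, this is the ignore rule we are considering; the wildcard path is just an example from our layout, not a recommendation:

# Example only: ignore RC Pro's scene index wherever it appears in the repository.
**/SceneMetadataList.json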
I want to drag EntityA while also dragging EntityB independently.
I've tried to separate them by entity, but only the latest drag gesture is recognized:
RealityView { content, attachments in
    ...
}
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in
            ...
        }
)
.gesture(
    DragGesture()
        .targetedToEntity(EntityB)
        .onChanged { value in
            ...
        }
)
I also tried using simultaneously(with:), but that didn't work either; maybe I'm missing something:
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in
            ...
        }
        .simultaneously(with:
            DragGesture()
                .targetedToEntity(EntityB)
                .onChanged { value in
                    ...
                }
        )
)
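For completeness, this is the variant I was about to try next (a sketch only; entityA and entityB stand in for my real entities): attaching the second drag with .simultaneousGesture instead of a second .gesture.

RealityView { content, attachments in
    // ...
}
.gesture(
    DragGesture()
        .targetedToEntity(entityA)
        .onChanged { value in
            // move EntityA here
        }
)
.simultaneousGesture(
    DragGesture()
        .targetedToEntity(entityB)
        .onChanged { value in
            // move EntityB here
        }
)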
let apple = try Entity.load(named: "apple", in: realityKitContentBundle)
works, but
let apple = try Entity.loadModel(named: "apple", in: realityKitContentBundle)
does not work
i.e. (error.localizedDescription = Failed to find resource with name "apple" in bundle)
I am unsure what is causing the problem; apple.usda was created in Reality Composer Pro from primitives and has a single apple object (no root). When I load with Entity.load and print apple, I get:
▿ 'apple' : Entity, children: 1
⟐ Transform
⟐ SynchronizationComponent
▿ 'apple' : ModelEntity
⟐ ModelComponent
⟐ Transform
⟐ CollisionComponent
⟐ PhysicsBodyComponent
⟐ SynchronizationComponent
This nested hierarchy seems redundant to me; is it preferred in ARKit to have such a structure? Why am I unable to load the usda directly as a ModelEntity?
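For now I work around it like this (just a sketch based on the hierarchy printed above): load with Entity.load and then pull the nested ModelEntity out of the wrapper entity.

let apple = try Entity.load(named: "apple", in: realityKitContentBundle)
if let appleModel = apple.children.first(where: { $0 is ModelEntity }) as? ModelEntity {
    // appleModel is the child that carries the ModelComponent, CollisionComponent, etc.
}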
How can I play a USDZ entity animation in reverse? I have tried setting a negative value on the speed, as I used to do in SceneKit to play an animation backwards, but it did not work. Here is my code:
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State var entity = Entity()
    @State var openDoor: Bool = true

    var body: some View {
        RealityView { content in
            if let mainDoor = try? await Entity(named: "Door.usdz") {
                if let frame = mainDoor.findEntity(named: "DoorFrame") {
                    frame.position = [0, 0, -8]
                    frame.orientation = simd_quatf(angle: (270 * (.pi / 180)), axis: SIMD3(x: 1, y: 0, z: 0))
                    content.add(frame)

                    entity = frame.findEntity(named: "Door")!
                    entity.components.set(InputTargetComponent(allowedInputTypes: .indirect))
                    entity.components.set(HoverEffectComponent())

                    let entityModel = entity.children[0]
                    entityModel.generateCollisionShapes(recursive: true)
                }
            }
        }
        .gesture(
            SpatialTapGesture()
                .targetedToEntity(entity)
                .onEnded { value in
                    print(value)
                    if openDoor == true {
                        let animController = entity.playAnimation(entity.availableAnimations[0], transitionDuration: 0, startsPaused: true)
                        animController.speed = 1.0
                        animController.resume()
                        openDoor = false
                    } else {
                        let animController = entity.playAnimation(entity.availableAnimations[0], transitionDuration: 0, startsPaused: true)
                        animController.speed = -1.0 // it does not work to reverse
                        animController.resume()
                        openDoor = true
                    }
                }
        )
    }
}
The door should open on the first tap, which already happens, and close on the second tap, which does not happen because the animation does not play in reverse.
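One approach I was considering but have not verified (a sketch, so the exact API use here is an assumption on my part): wrap the clip's definition in an AnimationView with a negative speed and generate a new resource from it, instead of flipping the controller's speed.

// Untested sketch: build a reversed copy of the clip and play that resource.
let original = entity.availableAnimations[0]
let reversedView = AnimationView(source: original.definition, speed: -1.0)
if let reversed = try? AnimationResource.generate(with: reversedView) {
    entity.playAnimation(reversed, transitionDuration: 0)
}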
When I play a video in immersive mode on an Apple Vision Pro device, I want to add a close button that dismisses the immersive video playback view. How can I add the button and handle its event?
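To make the question concrete, this is roughly what I have in mind (a sketch; the view name is a placeholder): an overlaid Button that dismisses the immersive space.

struct PlayerCloseButton: View {
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        Button("Close") {
            // Leaves the immersive space and returns to the regular window.
            Task { await dismissImmersiveSpace() }
        }
    }
}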
I am trying to implement a game where the character walks on the scene mesh. I am controlling the character with a game controller. I noticed there is a character controller component in Reality Composer Pro; I am aware that when this component is added, the player entity cannot also have a collision or a physics component.
I need an example that covers adding an entity with the character controller component to the scene and then moving the character using the moveCharacter function.
I was also looking at the documentation https://developer.apple.com/documentation/realitykit/entity/movecharacter(by:deltatime:relativeto:collisionhandler:)
Here it is also looking for deltaTime. Where do we get deltaTime from? Does it come from a system's update function, and does that also mean that the character controller needs to be moved in the update method? A sketch of what I mean follows.
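Here is the kind of system I imagine, just as a sketch (moveInput is a placeholder for whatever the game controller reports):

// Sketch: read deltaTime from the system's update context and feed it to moveCharacter.
struct CharacterMovementSystem: System {
    static let query = EntityQuery(where: .has(CharacterControllerComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime)      // seconds since the previous update
        let moveInput = SIMD3<Float>(0, 0, -1)        // placeholder for controller input
        for character in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            character.moveCharacter(by: moveInput * deltaTime,
                                    deltaTime: deltaTime,
                                    relativeTo: nil,
                                    collisionHandler: nil)
        }
    }
}

// Registered once, e.g. at app launch: CharacterMovementSystem.registerSystem()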
Thanks,
Sarang
I was able to add a spotlight effect to my entities using ImageBasedLightComponent and the sample code. However, I noticed that whenever you set ImageBasedLightComponent, the environmental lighting is completely turned off. Is it possible to merge them somehow?
So imagine you have a toy in the real world and you shine a flashlight on it. The environment light should still have an effect, right?
Following this thread, I'm able to render a simple picture on a plane material. However, I'm unable to scale it to show bigger than the window itself, or to move it behind the window.
Here's my relevant code so far:
var body: some View {
    ZStack {
        RealityView { content in
            var material = UnlitMaterial()
            material.color = try! .init(tint: .white,
                                        texture: .init(.load(named: "image",
                                                             in: nil)))
            let entity = Entity()
            let component = ModelComponent(
                mesh: .generatePlane(width: 1, height: 1),
                materials: [material]
            )
            entity.components.set(component)

            let currentTransform = entity.transform
            var newTransform = Transform(scale: currentTransform.scale,
                                         rotation: currentTransform.rotation,
                                         translation: SIMD3(0, 0, -0.2))
            entity.move(to: newTransform, relativeTo: nil)

            /*
            let scalingPivot = Entity()
            scalingPivot.position.y = entity.visualBounds(relativeTo: nil).center.y
            scalingPivot.addChild(entity)
            content.add(scalingPivot)
            scalingPivot.scale *= .init(x: 1, y: 1, z: 1)
            */
        }
    }
}
It belongs to an ImmersiveSpace I'm opening directly from my main window, but I have several issues:
The texture always shows in front of the window
I'm unable to scale it (scaling seems to affect the texture coordinates inside the material instead of scaling the mesh itself)
I can only see the texture in the canvas preview (not in the simulator)
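For reference, this is the kind of adjustment I expected to work (a sketch; the values are arbitrary):

// Sketch: scale the entity itself (not the material) and push it back, then add it to the content.
entity.scale = SIMD3<Float>(repeating: 3)       // plane three times larger
entity.position = SIMD3<Float>(0, 1.5, -2)      // two meters behind the space origin
content.add(entity)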
I'm developing a Vision Pro application. However, when the user takes off the Apple Vision Pro device, the application goes into the background. How can I prevent this behavior programmatically?
I'd like to pause/play video playback when the user taps on the immersive video playback view. How can I handle the tap event on visionOS?
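To be concrete, this is roughly what I'm picturing (a sketch; videoEntity and player stand in for my entity and AVPlayer):

// Sketch: make the video entity tappable, then toggle playback on tap.
videoEntity.components.set(InputTargetComponent())
videoEntity.generateCollisionShapes(recursive: true)

// ...and on the RealityView:
.gesture(
    SpatialTapGesture()
        .targetedToEntity(videoEntity)
        .onEnded { _ in
            if player.timeControlStatus == .playing {
                player.pause()
            } else {
                player.play()
            }
        }
)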
I'd like to map a SwiftUI view (in my case: a map) onto a 3D curved plane in an immersive view, so the user can literally immerse themselves in the map. The user should also be able to interact with the map by panning it around and selecting markers.
Is this possible?
In visionOS, I want to show 3D content. I can use RealityView or Model3D, but the effect they achieve looks similar. What is the difference between them, and which one should I use?
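For context, these are the two minimal usages I'm comparing (a sketch; "Scene" is just my asset name):

// Model3D: displays a model asset directly as a SwiftUI view.
Model3D(named: "Scene", bundle: realityKitContentBundle)

// RealityView: provides a content closure where entities can be added and updated.
RealityView { content in
    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
}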
Hi,
I'm working on a simple visionOS app and I'm testing on device.
For one part of the app, I load an object in and place it on the user's hand. If I use a primitive shape, like a sphere or cylinder, this works fine. However, now I'm trying to load an object from my RealityKitContent package. But every time I try this, I get an error message, resourceNotFound("Stone"), where "Stone" is one of my usda scenes.
This is what the guts of my function looks like that should return a ModelEntity:
do {
    let entity = try await ModelEntity(named: "Stone", in: realityKitContentBundle)
    entity.generateCollisionShapes(recursive: true)
    return entity
} catch {
    print("Error \(error)")
}
I can see "Stone" in my Xcode sidebar as part of the RealityKitContent package, and inside that scene there is a simple sphere, but alas I always get this in the Xcode console: Error resourceNotFound("Stone")
I'm probably doing something pretty silly, hopefully it's obvious to someone else.
Thanks for the help.
Ian
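For reference, the fallback I'm testing (a sketch with the return type relaxed to Entity): load the scene with Entity(named:in:) instead of ModelEntity(named:in:), on the assumption that the model initializer wants the model itself rather than a scene wrapping it.

func loadStone() async -> Entity? {
    do {
        // Loads the whole "Stone" scene; the sphere's ModelEntity sits in its children.
        let scene = try await Entity(named: "Stone", in: realityKitContentBundle)
        scene.generateCollisionShapes(recursive: true)
        return scene
    } catch {
        print("Error \(error)")
        return nil
    }
}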
Reality Composer Pro has a triplanar projection node that takes images as its inputs. Is there a way to feed a dynamic material into a triplanar projection instead?
I have a main app window that opens an immersive space in mixed reality. I am trying to determine the anchor/position of this glass window in 3D space and place a Sphere entity right next to it. The goal is to ensure that if the user moves the window, the Sphere entity remains attached to it. Does anyone have insights on how to achieve this?
The below code snippet provides the position of the device, and I have positioned it 0.5 meters away from the z-axis. However, my objective is to obtain the position of the glass window and anchor the sphere to it. Any guidance on achieving this would be appreciated.
import SwiftUI
import RealityKit
import RealityKitContent
import ARKit

struct ImmersiveView: View {
    let visionProPose = VisionProPose()

    var body: some View {
        RealityView { content in
            Task { await visionProPose.runArSession() }
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
        } update: { content in
            if let scene = content.entities.first {
                if let sphere = scene.findEntity(named: "Sphere") as? ModelEntity {
                    Task {
                        let transform = await visionProPose.getTransform()
                        sphere.position = [Float((transform?.columns.3.x)!),
                                           Float((transform?.columns.3.y)!),
                                           Float((transform?.columns.3.z)!) - 1]
                    }
                }
            }
        }
    }
}

@Observable class VisionProPose {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func runArSession() async {
        Task {
            try? await session.run([worldTracking])
        }
    }

    func getTransform() async -> simd_float4x4? {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: 1)
        else { return nil }
        let transform = deviceAnchor.originFromAnchorTransform
        return transform
    }
}
Let's say I've created a scene with 3 models placed side by side. Now, upon user interaction, I'd like to change these models to another model (that is also in the same Reality Composer Pro project). Is that possible? How can one do that?
One way I can think of is to just load all the individual models in RealityView and then just toggle the opacity to show/hide the models. But this doesn't seem like the right way for performance/memory reasons.
How do you swap in and out usdz models?
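To make the question concrete, this is the naive swap I have in mind (a sketch; names are placeholders):

// Sketch: remove the current model and load its replacement on demand.
func swap(_ current: Entity, withModelNamed newName: String, in content: RealityViewContent) async {
    let parent = current.parent
    let transform = current.transform
    current.removeFromParent()

    if let replacement = try? await Entity(named: newName, in: realityKitContentBundle) {
        replacement.transform = transform      // keep the old placement
        if let parent {
            parent.addChild(replacement)
        } else {
            content.add(replacement)
        }
    }
}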
Hey friends, I'm using a drag gesture to rotate a parent object that contains several child colliders. When I drag slowly, sometimes the child colliders don't rotate along with the parent. Any help would be appreciated, thanks!
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            let startLocation = value.convert(value.startLocation3D, from: .local, to: .scene)
            let currentLocation = value.convert(value.location3D, from: .local, to: .scene)
            let delta = currentLocation - startLocation
            let spinX = Double(delta.y)
            let spinY = Double(delta.x)
            let pitch = Transform(pitch: Float(spinX * -1)).matrix
            let roll = Transform(roll: Float(spinY * -1)).matrix
            value.entity.transform.matrix = roll * pitch
        })
Hello!
I’m trying to make a material in RealityKit that has a basic gradient. I am making an iPadOS app.
A few thoughts:
I cannot use Reality Composer Pro to do this because the Shader Graph tool only works for visionOS.
I cannot use a Metal file to create a shader because I am using a .swiftpm (app playgrounds) file targeted for Swift Playgrounds. Metal files don’t seem to work on Swift Playgrounds (it’s a Swift playground, after all).
I would prefer not to use image textures for a simple thing like this; that would take up storage. I wish it were as easy as applying a .baseColor with a UIColor, but UIColor does not support gradients.
What are my options? I know my requirements are likely not typical, but I really need to try to not break those.
I looked into CustomMaterial from RealityKit, but once again those take Metal shaders. Amazing tool, but I sadly cannot use it because I'm using a Swift Playground, which doesn't seem to work with Metal files, at least as far as I can tell.
I've also briefly looked into MetalKit. Could that help me out?
Let’s say I have a simple box in RealityKit. How would I apply a simple gradient to it given my constraints?
I really appreciate the help.
P.S. This is a SwiftUI project for reference.
EDIT:
Could I create a texture without images, perhaps by making a view into a texture and applying it? How would I do this? What are the pros and cons of this?
Another thought was could I just use MetalKit to create the gradient and apply it using CustomMaterial?
I will say that I'm kind of at a last resort — I am trying to create fairly straightforward materials. The two most important ones would be a gradient type of material with two colors and a transparent water material with a bit of refraction and perhaps maybe a little bit of reflection.
But, these are not meant to be photorealistic at all, not at all. They're meant to be fairly "2D"/simple which makes me wonder if I could just load in a texture. The only issue is that I don't know how I would do the water.
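In case it clarifies what I'm after, this is the kind of image-free approach I've been sketching (an untested sketch, and I'm assuming TextureResource.generate(from:options:) is usable from a Swift Playground): draw the gradient into a CGImage with Core Graphics at runtime and wrap it in a texture, so nothing has to ship as an image asset.

import UIKit
import RealityKit

// Sketch: build a two-color gradient texture in code and wrap it in a material.
func gradientMaterial(from top: UIColor, to bottom: UIColor) throws -> UnlitMaterial {
    let size = CGSize(width: 256, height: 256)
    let renderer = UIGraphicsImageRenderer(size: size)
    let image = renderer.image { ctx in
        let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                  colors: [top.cgColor, bottom.cgColor] as CFArray,
                                  locations: [0, 1])!
        ctx.cgContext.drawLinearGradient(gradient,
                                         start: .zero,
                                         end: CGPoint(x: 0, y: size.height),
                                         options: [])
    }
    let texture = try TextureResource.generate(from: image.cgImage!,
                                               options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    return material
}

// Usage sketch:
// let box = ModelEntity(mesh: .generateBox(size: 0.2),
//                       materials: [try gradientMaterial(from: .systemTeal, to: .systemBlue)])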
Dear Apple Developer Forum Community,
I hope this message finds you well. I am writing to seek assistance regarding an error I encountered while attempting to create a "Tic Tac Toe" application using Xcode.
Upon launching Xcode and starting a new project, I followed the standard procedure for creating a simple iOS application. However, during the process I ran into a problem: the code shows an error whenever a player wins the match.
I have attempted to troubleshoot the issue (see the two attached images), but unfortunately I have been unsuccessful in resolving it.
I am reaching out to the community in the hope that someone might have encountered a similar issue or have expertise in troubleshooting Xcode errors. Any guidance, suggestions, or solutions would be greatly appreciated.
Thank you very much for your time and assistance.
Sincerely,
Zipzy games