I want to place a ModelEntity at an AnchorEntity's location, but not as a child of the AnchorEntity. (I want to be able to raycast to it and have collisions work.)
I've placed an AnchorEntity in my scene like so:
AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
In my RealityView update closure, I print out this entity's position relative to "nil" like so:
wallAnchor.position(relativeTo: nil)
Unfortunately, this position doesn't make sense: it's very close to zero, even though the anchor appears several meters away.
I believe this is because an AnchorEntity has its own self-contained coordinate space, independent of the scene's coordinate space, and it is reporting its position relative to that space.
How can I bridge the gap between these two?
WorldAnchor has an originFromAnchorTransform property that helps with this, but I'm not seeing something similar for AnchorEntity.
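One approach I'm considering instead (a minimal sketch, not sure it's the intended way): skip AnchorEntity for this and run ARKit plane detection directly, since PlaneAnchor exposes originFromAnchorTransform in the app's origin space. Here rootEntity and modelEntity are placeholders for my own entities, and this assumes the app has world-sensing permission and a running immersive space:
import ARKit
import RealityKit

let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.vertical])

func placeModelOnFirstWall(rootEntity: Entity, modelEntity: ModelEntity) async throws {
    try await session.run([planeDetection])
    for await update in planeDetection.anchorUpdates {
        guard update.anchor.classification == .wall else { continue }
        // originFromAnchorTransform is expressed in the app's origin space,
        // so the model can live directly in the scene instead of under the anchor.
        modelEntity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        rootEntity.addChild(modelEntity)
        break
    }
}
Is something like this the right way to bridge the two spaces, or is there a way to read the transform from the AnchorEntity itself?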
Thank you
In my project I want to use the new ShaderGraphMaterial to do stereoscopic rendering, and I noticed there is a Camera Index Switch node that should do this. But when I tried it, I found that:
1. It can only output an Integer value; when I change it to Float, it switches back again, and I don't know if that is a bug.
2. So I tested this node together with an If node, and its output is weird.
Below is what outputting zero should look like: it is black.
But when I switch to the If node, it is grey, neither 0 nor 1 (my If node returns 1 for TRUE and 0 for FALSE).
I want to ask whether this is a bug, and whether this is the correct way to do stereoscopic rendering.
I have an immersive environment with a skybox that uses a PNG image inside a sphere. I added an IBL, but I am not sure what the best format / prep method is for the IBL image.
I have tried several different images for my IBL, and all are very different vibes from what I have in Blender.
My question is how can I create an IBL that's closest to Blender's Cycles rendering engine?
However, that's a rather difficult question to answer, so I want to ask some smaller questions first.
1. Does the IBL image need to be black and white (BW), or will colour work?
From my tests: colour works just as well. But why does Apple only show BW ones? Should we use BW?
2. What is the best file format for an IBL? Any pros/cons, or should we just test each format and check visually?
From my tests: PNG, OpenEXR (.exr), and Radiance HDR (.hdr) all work. But which format is recommended?
3. Will an IBL on visionOS create shadows for us? In Blender an HDRI gives shadows.
From my tests: no, the IBL does not produce shadows on the loaded environment/meshes. Is "shadow baking" the only solution for the time being?
4. Looking at a scene in Blender that uses an HDRI for global lighting, how can we best prep the IBL image so it gives lighting closest to Blender's Cycles rendering engine?
From my tests, I tried the following (A–D):
A) make a render of just the Blender HDRI (without meshes) via 360-degree panoramic camera.
→ Usage as IBL makes everything too bright.
B) make a render of the entire Blender scene via 360-degree panoramic camera.
→ Usage as IBL makes everything washed out and yellowish.
C) Use the Sunlight.png from the sample project.
→ With this IBL the scene is too dark.
D) Use the SystemIBL.exr from another sample project.
→ With this IBL the scene looks very flat and not realistic at all.
Here I show each IBL I described above (A–D) and sample screenshots from the simulator:
A)
B)
C)
D)
The atmosphere I'm aiming for as per Blender's Cycles rendering engine:
Can anyone help me with my questions 1–4 above?
It would give me some insight into how to create immersive environments with realistic lighting & shadows. :)
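For reference, here's roughly how I'm applying the IBL in code. This is a minimal sketch where "MyEnvironment" and sceneEntity are placeholders for my own asset and root entity:
if let environment = try? await EnvironmentResource(named: "MyEnvironment") {
    // Attach the light to the scene root and make that same entity a receiver,
    // so the whole hierarchy is lit by the image.
    sceneEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
    sceneEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: sceneEntity))
}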
Much appreciated!
— Luca
I have a RealityView and I want to add an Entity with an attachment.
Assume I have a viewModel that manages my entities, and that addEntityGesture() adds a new Entity under the rootEntity.
RealityView { content, attachments in
    // Load initial content
    content.add(viewModel.rootEntity)
} update: { updateContent, updateAttachments in
    //
} attachments: {
    //
}
.gesture(addEntityGesture())
I know that we can create attachments in the attachments closure and add those attachments as entities in the make closure, but what if I want to add an entity with an attachment on the fly? My current idea is sketched below.
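The pattern I've been sketching (assuming the Attachment(id:) API and a hypothetical viewModel.latestEntity property that the gesture sets) is to declare the attachment up front and only parent it in the update closure once the entity exists:
RealityView { content, attachments in
    content.add(viewModel.rootEntity)
} update: { updateContent, updateAttachments in
    // Once the gesture has added a new entity, look up its attachment by id
    // and parent it to that entity.
    if let newEntity = viewModel.latestEntity,
       let label = updateAttachments.entity(for: "label") {
        label.position = [0, 0.15, 0] // float the label just above the entity
        newEntity.addChild(label)
    }
} attachments: {
    Attachment(id: "label") {
        Text("New entity")
    }
}
.gesture(addEntityGesture())
Is that the intended approach, or is there a way to create a brand-new attachment outside the attachments closure?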
In Reality Composer Pro, when I import a USDZ model and insert it into the scene, Reality Composer Pro removes the model's own material by default, but I don't want that. How can I stop Reality Composer Pro from removing the model's material?
How can we move the player within a RealityKit/RealityView scene? I am not looking for any animation or gradual movement, just instantaneous position changes.
I am unsure how to access the player (the person wearing the headset) and their transform within the context of a RealityView.
The goal is to let the player enter a full space in immersive mode and explore it with various objects; they should be able to select an object and move closer to it.
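What I've pieced together so far (a sketch under my own assumptions, not a confirmed approach): the headset pose can be read through ARKit's WorldTrackingProvider, but since the wearer's transform can't be written, "moving the player" seems to come down to shifting the scene's root entity the opposite way. worldRoot is a placeholder for my immersive scene's root:
import ARKit
import QuartzCore
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Read the current head pose (origin-from-device transform); requires the
// session to already be running with `try await session.run([worldTracking])`.
func currentHeadTransform() -> simd_float4x4? {
    worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())?.originFromAnchorTransform
}

// Instantly "move the player" to a target point by shifting the world so the
// target ends up at the user's current location. No animation involved.
func moveUser(to target: SIMD3<Float>, worldRoot: Entity) {
    worldRoot.position -= target
}
Is this kind of world-offset workaround really the only option, or is there a supported way to reposition the user directly?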
All of a sudden (as of when Xcode 15.2 left beta yesterday?) I can't build attachments into my RealityView:
var body: some View {
    RealityView { content, attachments in
        // stuff
    } attachments: {
        // stuff
    }
}
Produces "No exact matches in call to initializer" on the declaration line (RealityView { content, attachments in).
So far as I can tell, this is identical to the sample code provided at the WWDC session, but I've been fussing with various syntaxes for an hour now and I can't figure out what the heck it wants.
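For comparison, this is the most stripped-down form I can think of (a sketch, and it assumes the Attachment(id:)-based attachments API; if the SDK now expects that form rather than tagged views, maybe that's the mismatch):
RealityView { content, attachments in
    if let label = attachments.entity(for: "label") {
        content.add(label)
    }
} attachments: {
    Attachment(id: "label") {
        Text("Hello")
    }
}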
Is there a SceneKit equivalent of the HoverEffectComponent used in RealityKit to highlight an entity as the user looks around a scene in a visionOS app?
Hi guys, I've been trying to get my model to react to light in the visionOS simulator by editing the component in Reality Composer Pro and also modifying it in code, but I can only get the shadow if I add the model as a USDZ file, and it's not as reflective as when I view it in Reality Converter or Reality Composer Pro. Does anyone else have this problem?
RealityView { content in
    if let bigDonut = try? await ModelEntity(named: "bigdonut", in: realityKitContentBundle) {
        print("LOADED")
        // Create an anchor for horizontal placement on a table
        let anchor = AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: [0, 0]))
        // Configure scale and position
        bigDonut.setScale([1, 1, 1], relativeTo: anchor)
        bigDonut.setPosition([0, 0.2, 0], relativeTo: anchor)
        // Parent the model to the anchor, then add the anchor to the scene
        anchor.addChild(bigDonut)
        content.add(anchor)
        // Enable shadow casting, but this does not work
        bigDonut.components.set(GroundingShadowComponent(castsShadow: true))
    }
}
Hi, I have a series of child entities in a USDZ file that I would like to rotate relative to a joint point between one another. I believe the functionality I am looking for was available in SceneKit as SCNPhysicsHingeJoint. How would I go about replicating this functionality in RealityKit?
Currently, any rotation I apply is relative to the model's origin as a whole (see below).
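What I've been considering as a workaround (a sketch, no physics joint involved): wrap each child in an empty pivot entity whose origin sits at the joint point, then rotate the pivot instead of the child. jointPositionInParent is a placeholder for the joint's position in the parent's local space:
func makePivot(for limb: Entity, jointPositionInParent: SIMD3<Float>) -> Entity {
    let pivot = Entity()
    pivot.position = jointPositionInParent
    limb.parent?.addChild(pivot)
    // Reparent the limb while keeping its world transform, so rotations applied
    // to the pivot happen about the joint point rather than the model origin.
    limb.setParent(pivot, preservingWorldTransform: true)
    return pivot
}

// e.g. pivot.orientation = simd_quatf(angle: .pi / 4, axis: [0, 0, 1])
Is there a proper joint/constraint API in RealityKit that replaces SCNPhysicsHingeJoint, or is this kind of pivot workaround the way to go?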
Thanks!
Freddie
Hi guys,
I thought I'd make a visionOS test app using Apple's native robot.usdz file.
My plan was to rotate the robot's limbs programmatically, but while I can see the bones in previous Xcode versions and in Blender, I somehow cannot reach them in Xcode 15.3 or Reality Composer Pro.
Has anyone any experience with that?
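In case it helps to describe what I've tried in code (a sketch; the entity name and joint name are just guesses at what the skeleton might contain): ModelEntity exposes the skeleton through jointNames and jointTransforms even when the bones don't show up in Reality Composer Pro:
if let robot = try? await ModelEntity(named: "robot") {
    print(robot.jointNames) // inspect what the skeleton actually contains
    if let index = robot.jointNames.firstIndex(where: { $0.hasSuffix("shoulder") }) {
        var transform = robot.jointTransforms[index]
        transform.rotation = simd_quatf(angle: .pi / 6, axis: [1, 0, 0])
        robot.jointTransforms[index] = transform
    }
}
But since I can't see the bones in Xcode 15.3 or Reality Composer Pro, I don't know what the joint names are, which is the part I'm stuck on.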
Is it possible to animate a property on a RealityKit component? For example, OpacityComponent has an opacity property that controls the opacity of the entity it's attached to. I would like to animate that property so the entity fades in and out.
I've been looking at the animation API for RealityKit, and it either assumes the animation comes from a USDZ (which this does not), or it lets properties of the entity itself be animated using a BindTarget. I'm not sure how either can be adapted to modify component properties.
Am I missing something?
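For concreteness, this is the shape I've been attempting (a sketch, assuming the .opacity bind target is what drives OpacityComponent; I'm not certain that's its purpose):
let fadeIn = FromToByAnimation<Float>(
    from: 0.0,
    to: 1.0,
    duration: 1.0,
    timing: .easeInOut,
    bindTarget: .opacity
)
if let resource = try? AnimationResource.generate(with: fadeIn) {
    entity.playAnimation(resource)
}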
Thanks
Hello all,
I am building for visionOS with another engineer and using Reality Composer Pro to validate usd files.
The starting position of my animated USDZ (its position when it's first loaded) is not the same as the first frame of the animation in the USDZ file.
For testing, I am using the AR Quick Look asset 'toy_biplane_idle.usdz' which demonstrates the same 'error' we're currently getting with our own usdz files.
When the USDZ is loaded, it sits on the ground plane:
But when the animation is played, the plane 'snaps' to the position of the first frame of the animation:
This 'snapping' behavior is giving us problems. We want the user to see this plane in its static 'load' position with the option to play the animation, but we don't want it to snap when the user presses play.
Is it possible to load the .usdz in the position specified by the first frame of the animation? What is the best way to fix this issue?
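One workaround we're experimenting with (a sketch; biplane is a placeholder for the loaded entity): start the animation immediately but paused, so the model is already posed at the first frame when it appears, and only resume playback when the user presses play:
if let animation = biplane.availableAnimations.first {
    let controller = biplane.playAnimation(animation, transitionDuration: 0, startsPaused: true)
    // Later, when the user taps play:
    controller.resume()
}
That avoids the visible snap for us, but it would still be good to know the recommended way to handle this.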
Thanks!
How can I play a video in an ImmersiveSpace, and how can I make the ImmersiveSpace reflect the video?
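To clarify the question, here's roughly what I have for the playback part (a sketch inside a RealityView make closure, using AVFoundation and RealityKit; videoURL is a placeholder). What I'm missing is how to get the surrounding ImmersiveSpace to pick up light/reflections from the video:
let player = AVPlayer(url: videoURL)
let videoMaterial = VideoMaterial(avPlayer: player)
let screen = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                         materials: [videoMaterial])
screen.position = [0, 1.5, -2] // place the screen in front of the user
content.add(screen)
player.play()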
In RealityKit in visionOS 1.0 I'm perplexed that PhysicallyBasedMaterial and CustomMaterial have faceCulling properties but ShaderGraphMaterial does not.
Is there some way to achieve front face culling in a shader graph without creating a separate mesh with reversed triangle vertex indices?
Activity Monitor reports that Reality Composer Pro uses 150% CPU and is always the number one energy user on my M3 Mac. Unfortunately, the high CPU usage continues when the app is hidden or minimized. I can understand the high usage when a scene is visible and being interacted with, but this appears to be a bug. Can anyone else confirm this, or is there a workaround?
Can the scene processing at least be paused when the app is hidden?
Or better yet, find out why the CPU usage is so high when the scene is not changing.
Reality Composer Pro Version 1.0 (409.60.6) on Sonoma 14.3
Thanks
Hi, I have a small question. Is it possible to place the entities from a RealityView (immersive space) at eye level on the Y axis? Is it enough to set the position to (x, 0, z)?
In the Diorama sample project:
let entity = try await Entity(named: "DioramaAssembled", in: RealityKitContent.realityKitContentBundle)
viewModel.rootEntity = entity
content.add(entity)
viewModel.updateScale()
// Offset the scene so it doesn't appear underneath the user or conflict with the main window.
entity.position = SIMD3<Float>(0, 0, -2)
The object doesn't move with the camera: using the WASD keys in the simulator walkthrough,
I can move around the object.
But with a different Reality Composer Pro file that I created,
let entity = try await Entity(named: "ImmersiveScene", in: realityKitContentBundle)
viewModel.rootEntity = entity
content.add(entity)
viewModel.updateScale()
the model moves along with the camera when I use the WASD keys in the simulator.
What configuration am I missing with the ImmersiveScene entity?
Hello Apple community,
I am currently working with Object Capture and would appreciate some guidance on extracting specific data from the scans. I have successfully scanned objects, but I am now looking to obtain the point cloud and facial measurements from these scans.
I have used https://developer.apple.com/documentation/RealityKit/guided-capture-sample as a reference for implementation.
Point Cloud:
How can I extract the point cloud data from my Object Capture scans?
Are there any specific tools or methods recommended for this purpose?
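For the point cloud part, here's what I've been sketching on the Mac side (under my own assumptions about the API; imagesDirectoryURL is a placeholder for the folder of captured images):
import RealityKit

let session = try PhotogrammetrySession(input: imagesDirectoryURL)
try session.process(requests: [.pointCloud])
for try await output in session.outputs {
    if case .requestComplete(_, .pointCloud(let cloud)) = output {
        // If I'm reading the API right, the points can then be inspected or exported.
        print("Point cloud with \(cloud.points.count) points")
    }
}
Is that the recommended route, or should the point cloud be pulled out some other way?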
Facial Measurements:
Is there a way to extract facial measurements accurately using Object Capture?
Are there any built-in features or third-party tools that can assist with this?
I've explored the documentation, but I would greatly benefit from any insights, tips, or recommended workflows from the community. Your expertise is highly appreciated!
Thank you in advance.
In my RealityKit-based app I was using DirectionalLightComponent and DirectionalLightComponent.Shadow to cast shadows.
As far as I can see, on visionOS only ImageBasedLightComponent is currently supported, so I transitioned from DirectionalLightComponent to ImageBasedLightComponent. The lighting is working fine, but I'm not able to cast shadows onto other entities (in my case, casting a shadow from a Moon onto a planet).
Looking at ImageBasedLightReceiverComponent, there's GroundingShadowComponent which isn't what I'm looking for.
Is there any way with ImageBasedLightComponent & ImageBasedLightReceiverComponent to cast shadows from an entity onto another entity?