Is there a SceneKit equivalent of the HoverEffectComponent used in RealityKit to highlight an entity as the user looks around a scene in a visionOS app?
All of a sudden (as of when Xcode 15.2 left beta yesterday?) I can't build attachments into my RealityView:
var body: some View {
    RealityView { content, attachments in
        // stuff
    } attachments: {
        // stuff
    }
}
Produces "No exact matches in call to initializer" on the declaration line (RealityView { content, attachments in).
So far as I can tell, this is identical to the sample code provided at the WWDC session, but I've been fussing with various syntaxes for an hour now and I can't figure out what the heck it wants.
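If memory serves, the attachments API changed between the WWDC23 betas and the visionOS 1.0 SDK: the attachments builder now takes explicit Attachment(id:) values instead of tagged views, which would explain the WWDC sample no longer compiling. A minimal sketch of the release-style call, with an illustrative attachment id of "label":

import SwiftUI
import RealityKit

struct AttachmentExample: View {
    var body: some View {
        RealityView { content, attachments in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            content.add(sphere)
            // Look up the entity created from the attachment declared below.
            if let label = attachments.entity(for: "label") {
                label.position = [0, 0.15, 0]
                sphere.addChild(label)
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Hello")
            }
        }
    }
}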
How can we move the player within a RealityKit/RealityView scene? I am not looking for any animation or gradual movement, just instantaneous position changes.
I am unsure of how to access the player (the person wearing the headset) and its transform within the context of a RealityView.
The goal is to allow the player to enter a full space in immersive mode and explore a space with various objects. They should be able to select an object and move closer to it.
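As far as I know the wearer's pose can't be set directly, so the usual workaround is to move the scene content instead; the wearer's current transform, though, can be read through ARKit's WorldTrackingProvider while an immersive space is open. A rough, untested sketch:

import ARKit
import QuartzCore

// Rough sketch: query the wearer's (device) pose while an immersive space is open.
final class HeadPoseReader {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async throws {
        try await session.run([worldTracking])
    }

    func currentDevicePosition() -> SIMD3<Float>? {
        guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return nil
        }
        // Translation column of the device's world transform.
        let m = device.originFromAnchorTransform
        return SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)
    }
}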
In Reality Composer Pro, when I import a USDZ model and insert it into the scene, Reality Composer Pro removes the model's own material by default, but I don't want that. How can I keep Reality Composer Pro from removing the model's material?
I have a RealityView and I want to add an Entity with an Attachment.
Assuming I have a viewModel that manages my entities, the addEntityGesture() will add a new Entity under the rootEntity.
RealityView { content, attachments in
    // Load initial content
    content.add(viewModel.rootEntity)
} update: { updateContent, updateAttachments in
    //
} attachments: {
    //
}
.gesture(addEntityGesture())
I know that we can create attachments in the attachments closure and add those attachments as entities in our make closure; however, what if I want to add an entity with an attachment on the fly?
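For reference, the documented pattern is to declare the attachment up front and then fetch and parent the resulting entity in make or update; it doesn't cover creating brand-new attachments at runtime. A minimal sketch using the viewModel and addEntityGesture() from the post, with an illustrative attachment id of "panel":

RealityView { content, attachments in
    content.add(viewModel.rootEntity)
} update: { content, attachments in
    // The entity built from the attachment declared below can be fetched here,
    // e.g. after a gesture has added a new child entity to rootEntity.
    if let panel = attachments.entity(for: "panel"), panel.parent == nil {
        viewModel.rootEntity.addChild(panel)
    }
} attachments: {
    Attachment(id: "panel") {
        Text("Attached panel")
    }
}
.gesture(addEntityGesture())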
I have an immersive environment with a skybox which uses a PNG image inside a sphere. I added an IBL, but I am not sure what the best format / prep method is for the IBL image.
I have tried several different images for my IBL, and all are very different vibes from what I have in Blender.
My question is how can I create an IBL that's closest to Blender's Cycles rendering engine?
However, that is a rather difficult question to answer, so I want to ask some smaller questions first.
1. Does the IBL need to be black and white (BW), or will colour work?
From my tests: colour works just as well. But why does Apple only show BW ones? Should we use BW?
2. What is the best file format for the IBL? Any pros/cons? Or should we just test each format and check visually?
From my tests: PNG, OpenEXR (.exr), Radiance HDR (.hdr) all work. But which format is recommended?
3. Will IBL on visionOS create shadows for us? In Blender an HDRI gives shadows.
From my tests: No, IBL does not provide shadows on your loaded environment/meshes. Is "shadow baking" the only solution for the time being?
4. Looking at a scene in Blender that uses an HDRI as global lighting, how can we best "prep" the IBL image so that it gives light closest to Blender's Cycles rendering engine?
From my tests: I tried (as shown below)
A) make a render of just the Blender HDRI (without meshes) via 360-degree panoramic camera.
→ Usage as IBL makes everything too bright.
B) make a render of the entire Blender scene via 360-degree panoramic camera.
→ Usage as IBL makes everything washed out and yellowish.
C) Use the Sunlight.png from the sample project.
→ With this IBL the scene is too dark.
D) Use the SystemIBL.exr from another sample project.
→ With this IBL the scene looks very flat and not realistic at all.
Here I show each IBL described above (A–D) and sample screenshots from the simulator:
A)
B)
C)
D)
The atmosphere I'm aiming for as per Blender's Cycles rendering engine:
Can anyone help me with questions 1–4 above?
It would give me some insight into how to create immersive environments with realistic lighting & shadows. :)
Much appreciated!
— Luca
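Not an answer to questions 1–4 above, but for anyone reproducing this setup, a minimal sketch of how an IBL image is typically applied in RealityKit on visionOS; the resource name is illustrative, and intensityExponent can be used to tame an image that reads too bright or too dark:

import RealityKit

// Minimal sketch: apply an image-based light to an entity hierarchy.
// "MyEnvironmentIBL" is an illustrative resource name in the app bundle / RCP package.
func applyIBL(to root: Entity) async throws {
    let resource = try await EnvironmentResource(named: "MyEnvironmentIBL")
    var ibl = ImageBasedLightComponent(source: .single(resource))
    ibl.intensityExponent = 0.5   // lower if the scene looks blown out
    root.components.set(ibl)
    root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: root))
}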
In my project I want to use the new ShaderGraphMaterial to do stereoscopic rendering, and I noticed there is a node called Camera Index Switch that should do this. But when I tried it, I found that:
1. It can only output an Integer value; when I change it to Float it changes back again. I don't know if this is a bug.
2. When I test this node with an If node, the output is weird. Below, where zero should be output, it is black, but when I change to the If node it is grey, neither 0 nor 1 (my If node outputs 1 for TRUE and 0 for FALSE).
I want to ask whether this is a bug, and whether this is the correct way to do stereoscopic rendering.
I want to place a ModelEntity at an AnchorEntity's location, but not as a child of the AnchorEntity. (I want to be able to raycast to it and have collisions work.)
I've placed an AnchorEntity in my scene like so:
AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
In my RealityView update closure, I print out this entity's position relative to "nil" like so:
wallAnchor.position(relativeTo: nil)
Unfortunately, this position doesn't make sense. It's very close to zero, even though it appears several meters away.
I believe this is because AnchorEntities have their own self contained coordinate spaces that are independent from the scene's coordinate space, and it is reporting its position relative to its own coordinate space.
How can I bridge the gap between these two?
WorldAnchor has an originFromAnchorTransform property that helps with this, but I'm not seeing something similar for AnchorEntity.
Thank you
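Not a definitive answer, but one workaround is to read the plane transforms directly from ARKit rather than from the AnchorEntity, since PlaneAnchor does expose originFromAnchorTransform. A rough sketch, assuming an ImmersiveSpace with world-sensing permission:

import ARKit

// Rough sketch: read wall-plane transforms in world space directly from ARKit.
final class WallTracker {
    private let session = ARKitSession()
    private let planes = PlaneDetectionProvider(alignments: [.vertical])

    func run() async throws {
        try await session.run([planes])
        for await update in planes.anchorUpdates {
            guard update.anchor.classification == .wall else { continue }
            // originFromAnchorTransform is relative to the world origin, so it can
            // be used to place an independent ModelEntity at the wall's location.
            let worldTransform = update.anchor.originFromAnchorTransform
            print(worldTransform)
        }
    }
}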
Hi,
I have a USDZ asset of a torus / hoop shape through which I would like to pass another RealityKit entity (a cube-like object) without touching the torus, in visionOS. Similar to how a basketball goes through a hoop.
Whenever I pass the cube through, I am getting a collision notification, even if the objects are not actually colliding. I want to be able to detect when the objects are actually colliding, vs when the cube passes cleanly through the opening in the torus.
I am using entity.generateCollisionShapes(recursive: true) to generate the collision shapes. I believe the issue is in the fact that the collision shape of the torus is a rectangular box, and not the actual shape of the torus. I know that the collision shape is a rectangular box because I can see this in the vision os simulator by enabling "Collision Shapes"
Does anyone know how to programmatically create a torus collision shape in SwiftUI / RealityKit for visionOS? As a follow-up, can I create a torus in RealityKit so that I don't even have to use a .usdz asset?
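Not from the original post, but one sketch of a workaround: approximate the torus with several small box shapes arranged in a ring, so the hole stays open for collision purposes. All numbers here are illustrative:

import Foundation
import RealityKit

// Sketch: approximate a torus collision shape with boxes arranged around a ring in the XY plane.
func torusCollisionShapes(ringRadius: Float, tubeRadius: Float, segments: Int = 16) -> [ShapeResource] {
    let segmentLength = 2 * Float.pi * ringRadius / Float(segments)
    return (0..<segments).map { i in
        let angle = 2 * Float.pi * Float(i) / Float(segments)
        // A box whose long axis is tangential to the ring at this angle.
        let box = ShapeResource.generateBox(size: [segmentLength, tubeRadius * 2, tubeRadius * 2])
        let rotation = simd_quatf(angle: angle + .pi / 2, axis: [0, 0, 1])
        let translation = SIMD3<Float>(cosf(angle) * ringRadius, sinf(angle) * ringRadius, 0)
        return box.offsetBy(rotation: rotation, translation: translation)
    }
}

// Usage (illustrative):
// torusEntity.components.set(
//     CollisionComponent(shapes: torusCollisionShapes(ringRadius: 0.3, tubeRadius: 0.05)))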
On the new visionOS platform, CustomMaterial in RealityKit cannot be used; ShaderGraphMaterial should be used instead, but I can't find a way to change the culling mode. The old CustomMaterial had a faceCulling property.
Is there a way to change the culling mode with the new ShaderGraphMaterial?
Hey guys
How can I fit RealityView content inside a volumetric window?
I have below simple example:
WindowGroup(id: "preview") {
    RealityView { content in
        if let entity = try? await Entity(named: "name") {
            content.add(entity)
            entity.setPosition(.zero, relativeTo: entity.parent)
        }
    }
}
.defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
.windowStyle(.volumetric)
I understand that we can resize a Model3D view automatically using .resizable() and .scaledToFit() after the model loaded.
Can we achieve the same result using a RealityView?
Cheers
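There doesn't seem to be a direct .resizable()/.scaledToFit() equivalent for RealityView, but here is a rough sketch of one approach: measure the loaded entity's visual bounds and scale it to the volume's 0.6 m default size. This builds on the snippet above and assumes the RealityView content units are meters:

RealityView { content in
    if let entity = try? await Entity(named: "name") {
        content.add(entity)
        // Scale the model so its largest dimension fits the 0.6 m volume.
        let bounds = entity.visualBounds(relativeTo: nil)
        let maxExtent = max(bounds.extents.x, bounds.extents.y, bounds.extents.z)
        if maxExtent > 0 {
            entity.scale *= SIMD3<Float>(repeating: 0.6 / maxExtent)
        }
        entity.setPosition(.zero, relativeTo: nil)
    }
}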
Why can't videoPlayerComponent.playerScreenSize get the correct size of the video? Printing it out, x and y are [0, 0]. Is this a bug?
Hello, I am very new here in the forum (and in iOS dev as well). I am trying to build an app that uses 3D face filters and I want to use Reality Composer. I knew Xcode 15 did not have it, so I downloaded the beta 8 version (as suggested in another post). This one actually has Reality Composer Pro (Xcode -> Developer Tools -> Reality Composer Pro), but the Experience.rcproject still does not appear. Is there a way to create one? When I use Reality Composer Pro it seems only able to create standalone projects, and it does not seem to be bundled in any way with Xcode. Thanks for your time, people!
I know that CustomMaterial in RealityKit can update textures using DrawableQueue, but on the new visionOS, CustomMaterial doesn't work anymore. How can I do the same thing? Can ShaderGraphMaterial do it? I can't find an example of how to do that. Looking forward to your reply, thank you!
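Not a confirmed answer, but a rough sketch of the approach people describe: back a TextureResource with a DrawableQueue and hand that texture to a ShaderGraphMaterial texture parameter. The names "placeholder" and "dynamicTexture" are illustrative; the parameter name must match the texture input exposed by the shader graph:

import Metal
import RealityKit

// Rough sketch: drive a ShaderGraphMaterial texture parameter from a DrawableQueue.
func attachDynamicTexture(to material: inout ShaderGraphMaterial) throws {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 512,
        height: 512,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)

    // Load any texture as a starting point, then back it with the drawable queue.
    let texture = try TextureResource.load(named: "placeholder")
    texture.replace(withDrawables: queue)

    try material.setParameter(name: "dynamicTexture", value: .textureResource(texture))
}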
Hello,
I was wondering if we could get the metersPerUnit metadata from a USDZ loaded as an Entity (or even an MDLAsset) using RealityKit?
Thanks
I'm developing a 3D scanner that works on iPad.
I'm using AVCapturePhoto and PhotogrammetrySession.
My photo capture delegate looks like this:
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [ .auxiliaryDepth: true, .properties: photo.metadata ])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [ .avDepthData: depthData ])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
But, Photogrammetry session spits warning messages:
Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
Sample 2 missing LiDAR point cloud!
Sample 3 missing LiDAR point cloud!
Sample 4 missing LiDAR point cloud!
Sample 5 missing LiDAR point cloud!
Sample 6 missing LiDAR point cloud!
Sample 7 missing LiDAR point cloud!
Sample 8 missing LiDAR point cloud!
Sample 9 missing LiDAR point cloud!
Sample 10 missing LiDAR point cloud!
The session creates a USDZ 3D model, but the scale is not correct.
I think the point cloud could help the PhotogrammetrySession find the right scale, but I don't know how to attach it.
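Not a verified fix, but one approach worth noting: instead of writing HEICs to disk, feed PhotogrammetrySamples directly and set depthDataMap on each one, which is how depth is usually supplied for scale. A rough sketch with error handling omitted:

import AVFoundation
import RealityKit

// Rough sketch: build samples with depth attached and feed them to the session.
func makeSample(id: Int, photo: AVCapturePhoto) -> PhotogrammetrySample? {
    guard let pixelBuffer = photo.pixelBuffer else { return nil }
    var sample = PhotogrammetrySample(id: id, image: pixelBuffer)
    // Attach the LiDAR-derived depth map so the reconstruction can recover scale.
    sample.depthDataMap = photo.depthData?
        .converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        .depthDataMap
    return sample
}

// let session = try PhotogrammetrySession(input: samples, configuration: .init())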
Hi,
I'm trying to use the ECS System class, and I'm noticing on hardware that update() is not called every frame as it is in the Simulator, or as described in the documentation.
To reproduce, simply create a System like so:
class MySystem: System {
    var count: Int = 0

    required init(scene: Scene) {
    }

    func update(context: SceneUpdateContext) {
        count = count + 1
        print("Update \(count)")
    }
}
Then, inside the RealityView, register the System with:
MySystem.registerSystem()
Notice that while it'll reliably be called every frame in the Simulator, on hardware it runs for a few seconds, freezes, and is then only called when indirectly doing something like moving a window or performing other visionOS actions, analogous to those that call "invalidate" to refresh a window in other operating systems.
Thanks in advance,
-Rob.
I am attempting to place images on wall anchors and be able to move their position using drag gestures. This seems pretty straightforward if the wall anchor is facing you when you start the app. But if you place an image on a wall anchor to the left, or on the wall behind the original position, the logic stops working properly. The problem seems to be that the anchor and the drag.location3D orientations don't coincide once you are dealing with wall anchors that are not facing the original user position (using Xcode beta 8).
Question:
How do I apply dragging gestures to an image regardless of where the wall anchor is located in relation to the user's original facing direction?
Using the following code:
var dragGesture: some Gesture {
    DragGesture(minimumDistance: 0)
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            let convertedPos = value.convert(value.location3D, from: .local, to: entity.parent!) * 0.1
            entity.position = SIMD3<Float>(x: convertedPos.x, y: 0, z: convertedPos.y * (-1))
        }
}
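For comparison, a sketch of a variant that converts the drag location into the shared scene space first, so the result doesn't depend on how the wall anchor happens to be oriented. This is not a verified fix, and the * 0.1 damping and axis remapping from the original are omitted:

var dragGesture: some Gesture {
    DragGesture(minimumDistance: 0)
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            // Convert the gesture location into scene (world) coordinates, then
            // back into the entity's parent space.
            let scenePos = value.convert(value.location3D, from: .local, to: .scene)
            if let parent = entity.parent {
                entity.position = parent.convert(position: scenePos, from: nil)
            }
        }
}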
I have some strange behavior in my app. When I set the position to .zero, you can see the sphere normally. But when I change it to any other value, no matter how small, the sphere isn't visible or in the view.
The RealityView
import SwiftUI
import RealityKit
import RealityKitContent
struct TheSphereOfDoomRV: View {
    @StateObject var viewModel: SphereViewModel = SphereViewModel()
    let sphere = SphereEntity(radius: 0.25, materials: [SimpleMaterial(color: .red, isMetallic: true)], name: "TheSphere")

    var body: some View {
        RealityView { content, attachments in
            content.add(sphere)
        } update: { content, attachments in
            sphere.scale = SIMD3<Float>(x: viewModel.scale, y: viewModel.scale, z: viewModel.scale)
        } attachments: {
            VStack {
                Text("The Sphere of Doom is one of the most powerful Objects. You can interact with him in every way you can imagine").multilineTextAlignment(.center)
                Button {
                } label: {
                    Text("Play Video!")
                }
            }.tag("description")
        }.modifier(GestureModifier()).environmentObject(viewModel)
    }
}
SphereEntity:
import Foundation
import RealityKit
import RealityKitContent
class SphereEntity: Entity {
    private let sphere: ModelEntity

    @MainActor
    required init() {
        sphere = ModelEntity()
        super.init()
    }

    init(radius: Float, materials: [Material], name: String) {
        sphere = ModelEntity(mesh: .generateSphere(radius: radius), materials: materials)
        sphere.generateCollisionShapes(recursive: false)
        sphere.components.set(InputTargetComponent())
        sphere.components.set(HoverEffectComponent())
        sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
        sphere.name = name
        super.init()
        self.addChild(sphere)
        self.position = .zero // .init(x: Float, y: Float, z: Float) and [Float, Float, Float] doesn't work ...
    }
}
Has anyone gotten their 3D models to render in separate windows? I tried following the code in the video for creating a separate window group, but I get a ton of obscure errors. I was able to get it to render in my 2D windows, but when I try making a separate window group I get errors.
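For comparison, a minimal sketch of a second, volumetric window that renders a model and is opened from the main window with openWindow. The window id "ModelWindow" and the model name "Robot" are illustrative:

import SwiftUI
import RealityKit

@main
struct ModelViewerApp: App {
    var body: some Scene {
        WindowGroup {
            LaunchView()
        }

        // A separate volumetric window group that shows a 3D model.
        WindowGroup(id: "ModelWindow") {
            Model3D(named: "Robot") { model in
                model.resizable().scaledToFit()
            } placeholder: {
                ProgressView()
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}

struct LaunchView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Show model") { openWindow(id: "ModelWindow") }
    }
}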