The WWDC24 video "Build a spatial drawing app with RealityKit" https://developer.apple.com/wwdc24/10104 at 12:04 includes a slide showing a Reality Composer Pro shader graph that features wonderful inline documentation comment boxes:
Are shader graph inline comments a new feature that Reality Composer Pro supports? This would be extraordinarily useful, as complex shader graphs can be challenging to decipher.
If so, how are inline shader graph comments created in Reality Composer Pro?
I'd like to create meshes in RealityKit (AR mode on iPad) in screen space, i.e. for UI.
I noticed a lot of useful new functionality in RealityKit for the next OS versions, including the OrthographicCameraComponent here:
https://developer.apple.com/documentation/realitykit/orthographiccameracomponent?changes=_3
I think this would help, but I need AR world tracking as well as a regular perspective camera to work with the 3D elements.
Firstly, can I have a camera attached selectively to a few entities, just for those entities? This could be the orthographic camera.
Secondly, can I make it so those entities are always rendered in front, in screen space? (They'd need to follow the camera.)
If I can't have multiple cameras, what can be done in that case?
Is it actually better to use a completely different view/API for layering on top of RealityKit? I would much rather keep everything in RealityKit, for simplicity.
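For the "always in front" part, one pattern worth sketching (an assumption on my part given the iPad/ARView context, rather than the new orthographic camera API) is to parent the UI entities to a camera anchor so they follow the view:

```swift
import RealityKit
import UIKit

// Minimal sketch: pin "UI" entities to the camera so they stay in view.
// Assumes an existing ARView running a world-tracking session.
func addScreenSpaceStyleUI(to arView: ARView) {
    // AnchorEntity(.camera) keeps its children fixed relative to the device camera.
    let cameraAnchor = AnchorEntity(.camera)

    let panel = ModelEntity(
        mesh: .generatePlane(width: 0.2, height: 0.1),
        materials: [UnlitMaterial(color: .white)]
    )
    // Push the panel slightly in front of the camera so it is visible.
    panel.position = [0, 0, -0.5]

    cameraAnchor.addChild(panel)
    arView.scene.addAnchor(cameraAnchor)
}
```

This keeps the entities glued to the camera, but it is not true screen-space rendering: they still use the perspective camera and can be occluded by closer geometry.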
I am trying to establish a workflow with using Reality Composer Pro to make scenes - I am grey boxing a scene using primitives at the moment.
I have set up a cube with a texture material and a simple animation to spin.
I am confused as to what I should be loading. I have created what I think is a scene asset in the package for the Reality Composer Pro project.
Here is a code snippet:
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        RealityView { content in
            do {
                // Attempt to load the asset exported from the Reality Composer Pro package.
                let scene = try await ModelEntity(named: "HOF")
                content.add(scene)
            } catch {
                print("Error loading scene: \(error.localizedDescription)")
            }
        }
    }
}
Here is the project layout in Reality Composer Pro:
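For comparison, here is a minimal sketch of the loading pattern typically used with a Reality Composer Pro package (the bundle name realityKitContentBundle and the scene name "Scene" are the template defaults and are assumptions here, not taken from the project above):

```swift
import SwiftUI
import RealityKit
import RealityKitContent // assumed name of the Reality Composer Pro package

struct SceneLoadingView: View {
    var body: some View {
        RealityView { content in
            // Entity(named:in:) loads a scene from the package's generated bundle
            // rather than from the app's main bundle.
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            } else {
                print("Could not load the scene from the Reality Composer Pro bundle.")
            }
        }
    }
}
```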
import SwiftUI
import RealityKit
import ARKit
import AVFoundation

struct ContentView: View {
    var body: some View {
        ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.session.delegate = context.coordinator

        let worldConfig = ARWorldTrackingConfiguration()
        worldConfig.planeDetection = .horizontal
        // worldConfig.providesAudioData = true // uncommenting this line produces the error below
        arView.session.run(worldConfig)

        addTestEntity(arView: arView)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    class Coordinator: NSObject, ARSessionDelegate, ARSessionObserver {
        func session(_ session: ARSession, didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer) {
        }
    }
}

func addTestEntity(arView: ARView) {
    let mesh = MeshResource.generatePlane(width: 0.5, depth: 0.35)
    guard let url = Bundle.main.url(forResource: "videoplayback", withExtension: "mp4") else { return }

    let player = AVPlayer(url: url)
    let videoMaterial = VideoMaterial(avPlayer: player)
    let model = ModelEntity(mesh: mesh, materials: [videoMaterial])
    model.transform.translation.y = 0.05

    let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
    anchor.children.append(model)
    player.play()
    arView.scene.anchors.append(anchor)
}
Error:
failed to update STS state: Error Domain=com.apple.STS-N Code=1396929899 "Error: failed to signal change" UserInfo={NSLocalizedDescription=Error: failed to signal change}
failed to update STS state: Error Domain=com.apple.STS-N Code=1396929899 "Error: failed to signal change" UserInfo={NSLocalizedDescription=Error: failed to signal change}
......
ARSession <0x125d88040>: did fail with error: Error Domain=com.apple.arkit.error Code=102 "Required sensor failed." UserInfo={NSLocalizedFailureReason=A sensor failed to deliver the required input., NSUnderlyingError=0x302922dc0 {Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.}}, NSLocalizedRecoverySuggestion=Make sure that the application has the required privacy settings., NSLocalizedDescription=Required sensor failed.}
iOS 17.5.1
Xcode 15.4
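One thing worth checking (an assumption about the cause on my part, not something the logs confirm): providesAudioData makes ARKit capture microphone audio, so the app needs an NSMicrophoneUsageDescription entry in Info.plist and granted record permission, which would line up with the "required privacy settings" hint in the error. A minimal sketch of gating audio capture on that permission:

```swift
import ARKit
import RealityKit
import AVFoundation

// Minimal sketch (assumption: the sensor failure comes from missing microphone access).
// Info.plist must also contain an NSMicrophoneUsageDescription string.
func runSessionWithAudio(on arView: ARView) {
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        DispatchQueue.main.async {
            let config = ARWorldTrackingConfiguration()
            config.planeDetection = .horizontal
            // Only ask ARKit for audio buffers if the user granted microphone access.
            config.providesAudioData = granted
            arView.session.run(config)
        }
    }
}
```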
We have been using attachment.bounds.extents to determine the size of a RealityView attachment at run time. It had been working fine until the visionOS 1.1 update. I wonder if we are doing something wrong, as the release notes suggest a visual-bounds calculation issue was fixed in the latest release; the funny thing is we did not have an issue before.
Below is how we access the height value:
let height = attachmentEntity.attachment.bounds.extents.y
Previously it returned the correct value. Now it returns 0.
I wonder if anyone else is having the same issue.
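In case it helps to compare, here is a sketch of an alternative measurement (an assumption on my part, not a confirmed workaround): read the attachment entity's visual bounds instead of the attachment component's bounds:

```swift
import RealityKit

// Minimal sketch (assumption: measuring visual bounds is an acceptable substitute
// for attachment.bounds when the latter reports zero extents).
func attachmentHeight(of attachmentEntity: Entity) -> Float {
    // visualBounds(relativeTo: nil) returns the world-space bounding box
    // of the entity and its descendants.
    let bounds = attachmentEntity.visualBounds(relativeTo: nil)
    return bounds.extents.y
}
```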
Hello, I have a virtual environment in which I have a mesh. I want the mesh to be mirrored onto a glass surface that is very close by.
I can't just duplicate it, because the reflection varies depending on the position you are looking from.
Is there a way to mirror a mesh via reflections? It shouldn't reflect real-world objects, just the virtual mesh.
Thank you guys
What is the current recommendation for creating high-quality 3D content?
The context is a hobbyist, specialised CAD app for macOS (with an iPadOS companion) that is mostly 2D but also offers a 3D visualization option (currently OpenGL).
Somewhere down the line there might be an AR view but at the moment - certainly for macOS - it's purely generated 3D visualization, all rendered content.
So, starting a rewrite of the 3D visualization in 2024 targeting macOS Sequoia/iPadOS 18, is RealityKit the suggested way forward?
Cheers,
Jay
I would think it would be common practice that when adding a new entity to your RealityView scene, it appears in front of the user, who then places the entity in the scene. Imagine a puzzle piece appearing in front of you that you drag to your puzzle board.
If you move around your puzzle board, you'd expect that wherever you are, the new piece appears in front of you.
That seems applicable to a lot of applications.
I can add a new entity using the head anchor, but as we all know that transform is reported as the identity, so reparenting the entity to something else (e.g. the puzzle board) won't work.
I've been trying to use world positioning and querying the device pose, which helps, but I'm stumped as to how to get the new entity to appear in front of me no matter which way I turn.
Looking for suggestions and guidance on this.
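For reference, here is a minimal sketch of the device-pose approach (assuming a visionOS app with an ARKitSession already running a WorldTrackingProvider; the function and parameter names are illustrative):

```swift
import ARKit
import RealityKit
import QuartzCore

// Minimal sketch: position an entity a fixed distance in front of the device,
// using the device pose from a running WorldTrackingProvider.
func place(_ entity: Entity, inFrontOf worldTracking: WorldTrackingProvider, distance: Float = 0.5) {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return
    }
    let deviceTransform = deviceAnchor.originFromAnchorTransform

    // The device's forward direction is -Z in its local space.
    let forward = -simd_make_float3(deviceTransform.columns.2.x,
                                    deviceTransform.columns.2.y,
                                    deviceTransform.columns.2.z)
    let devicePosition = simd_make_float3(deviceTransform.columns.3.x,
                                          deviceTransform.columns.3.y,
                                          deviceTransform.columns.3.z)

    // Place the entity in world (immersive-space) coordinates in front of the user.
    entity.setPosition(devicePosition + forward * distance, relativeTo: nil)
}
```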
In SceneKit, using SCNShape we can create SCN geometry from 2D shapes/Bézier paths: https://developer.apple.com/documentation/scenekit/scnshape
Is there an equivalent in RealityKit?
Could we use generate(from:) for that? https://developer.apple.com/documentation/realitykit/meshresource/3768520-generate
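For comparison, here is a minimal sketch of what generate(from:) can do with a MeshDescriptor built from 2D points (assumptions: a convex outline and a simple triangle fan; this is not a full SCNShape replacement and performs no extrusion):

```swift
import RealityKit

// Minimal sketch: turn a closed 2D outline into a flat RealityKit mesh.
// Assumes a convex outline so that a simple triangle fan is valid.
func flatMesh(from outline: [SIMD2<Float>]) throws -> MeshResource {
    var descriptor = MeshDescriptor(name: "flatShape")
    descriptor.positions = MeshBuffer(outline.map { SIMD3<Float>($0.x, $0.y, 0) })

    // Triangle-fan indices: (0, i, i + 1) for each consecutive pair of outline points.
    var indices: [UInt32] = []
    for i in 1..<(outline.count - 1) {
        indices.append(contentsOf: [0, UInt32(i), UInt32(i + 1)])
    }
    descriptor.primitives = .triangles(indices)

    return try MeshResource.generate(from: [descriptor])
}
```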
I've created an app for visionOS that uses a custom package that includes RealityKitContent as well (as a sub-package). I now want to turn this app into a multi-platform app that also supports iOS.
When I try to compile the app for this platform, I get this error message:
Building for 'iphoneos', but realitytool only supports [xros, xrsimulator]
Thus, I want to exclude the RealityKitContent from my package for iOS, but I don't really know how. The Apple docs are pretty complicated, and ChatGPT only gave me solutions that did not work at all.
I also tried to post this on the Swift forum, but no one could help me there either, so I am trying my luck here.
Here is my Package.swift file:
// swift-tools-version: 5.10
import PackageDescription

let package = Package(
    name: "Overlays",
    platforms: [
        .iOS(.v17), .visionOS(.v1)
    ],
    products: [
        .library(
            name: "Overlays",
            targets: ["Overlays"]),
    ],
    dependencies: [
        .package(
            path: "../BackendServices"
        ),
        .package(
            path: "../MeteorDDP"
        ),
        .package(
            path: "Packages/OverlaysRealityKitContent"
        ),
    ],
    targets: [
        .target(
            name: "Overlays",
            dependencies: ["BackendServices", "MeteorDDP", "OverlaysRealityKitContent"]
        ),
        .testTarget(
            name: "OverlaysTests",
            dependencies: ["Overlays"]),
    ]
)
Based on a recommendation in the Swift forum, I also tried this:
dependencies: [
    ...
    .package(
        name: "OverlaysRealityKitContent",
        path: "Packages/OverlaysRealityKitContent"
    ),
],
targets: [
    .target(
        name: "Overlays",
        dependencies: [
            "BackendServices", "MeteorDDP",
            .product(name: "OverlaysRealityKitContent", package: "OverlaysRealityKitContent", condition: .when(platforms: [.visionOS]))
        ]
    ),
    ...
]
but this won't work either.
The problem seems to be that the package is listed under dependencies, which makes realitytool kick in. Is there a way to avoid this? I definitely need the RealityKitContent package to be part of the Overlays package, since the latter depends on the content (on visionOS). And I would not want to split the package up into two parts (one for iOS and one for visionOS), if possible.
In RealityKit in visionOS 1.0 I'm perplexed that PhysicallyBasedMaterial and CustomMaterial have faceCulling properties but ShaderGraphMaterial does not.
Is there some way to achieve front face culling in a shader graph without creating a separate mesh with reversed triangle vertex indices?
I was reading through the cube-image node docs, and they talk about loading data from a cubemap file in KTX format. It wasn't clear whether that applies only to the original KTX format, or whether the node can also take advantage of the KTX2 format.
Is this shader node only relevant for files in the original (v1) KTX format?
I am working on an app where we are attempting to place large entities quite far away from the user, but when trying to recognise a tap gesture on them, the gesture isn't picked up over part of the model.
It seems as though the larger and further away a model is placed, the more offset the collision shape seems to be. It responds to taps in a region that shrinks towards the bottom right. The actual size of the collision shape appears to be correct when viewed with the collision-shape debug visualisation. I've been able to replicate this behaviour in the simulator and on a physical device.
It's hard to explain in words; there's a video in the README for the repo here
I've been able to replicate the issue in a simple sample app. Not sure if I might be using it wrong or if it is expected behaviour for tap gestures to be a bit off when placed a large distance from the user. Appreciate any help, thanks.
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State private var tapCount = 0

    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 50), materials: [UnlitMaterial(color: .red)])
            sphere.setPosition([500, 0, 0], relativeTo: nil)
            sphere.components.set([
                InputTargetComponent(),
                CollisionComponent(shapes: [.generateBox(width: 250, height: 250, depth: 250)]),
            ])
            content.add(sphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    tapCount += 1
                    print(tapCount)
                }
        )
    }
}
I am trying to create a demo of a spatial meeting using Personas. I have also referred to the Apple videos, but I am not getting a clear idea of how it works.
Could anyone please guide me through the process step by step? Any code for learning would be appreciated.
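Not a step-by-step guide, but here is a minimal sketch of the SharePlay piece such a demo builds on (the activity name and metadata are made up for illustration; spatial Personas then appear when participants join the session from a FaceTime call on visionOS):

```swift
import GroupActivities

// Minimal sketch of a custom SharePlay activity (illustrative names).
struct SpatialMeetingActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Spatial Meeting"
        metadata.type = .generic
        return metadata
    }
}

// Start the activity for the current FaceTime call.
func startMeeting() async throws {
    _ = try await SpatialMeetingActivity().activate()
}

// Receive incoming sessions and join them so participants share the experience.
func observeSessions() async {
    for await session in SpatialMeetingActivity.sessions() {
        session.join()
    }
}
```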
Hello,
In the documentation for ARView, we see a diagram showing that all entities are connected to the scene via AnchorEntity instances:
https://developer.apple.com/documentation/realitykit/arview
What happens when we are using a RealityView? Here the documentation suggests we directly add entities:
https://developer.apple.com/documentation/realitykit/realityview/
Three questions:
Do we need to add entities to an AnchorEntity first and then add that via content.add(...)?
Is an entity ignored by the physics engine if attached via an anchor?
If both the AnchorEntity and an attached entity are added via content.add(...), is the anchor's position ignored?
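For context, here is a minimal sketch of the two patterns the questions compare (assuming a visionOS RealityView; the names and anchor targets are illustrative):

```swift
import SwiftUI
import RealityKit

struct AnchoringComparisonView: View {
    var body: some View {
        RealityView { content in
            // Pattern 1: add an entity directly to the RealityView content.
            let freeBox = ModelEntity(mesh: .generateBox(size: 0.1))
            freeBox.position = [0, 1.2, -1]
            content.add(freeBox)

            // Pattern 2: parent the entity to an AnchorEntity and add the anchor.
            let tableAnchor = AnchorEntity(.plane(.horizontal,
                                                  classification: .table,
                                                  minimumBounds: [0.3, 0.3]))
            let anchoredBox = ModelEntity(mesh: .generateBox(size: 0.1))
            tableAnchor.addChild(anchoredBox)
            content.add(tableAnchor)
        }
    }
}
```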
I cannot figure out how SharePlay + spatial Personas place the origin of RealityKit's coordinate system.
I have a visionOS app with an ImmersiveSpace scene in mixed immersion. In the scene I am using ARKit to track my hand, creating a virtual object that follows the movement of my palm. Every frame I query positions from the HandAnchor to update the position of my object, using originFromAnchorTransform to correctly place it in the scene.
However, when I try to adopt that in a SharePlay experience with spatial Personas, the virtual object's position updates become a mess. With different seat templates (.sideBySide or .conversational), the origin of my space appears with no discernible pattern. The virtual object doesn't follow my hand; it ends up in a seemingly random place. It seems there is a difference/transform between the HandAnchor's origin and the immersive space's origin under spatial Persona + SharePlay mode. Is that right?
Or should I try something like convert(displacement.inverse.rotation, from: .immersiveSpace, to: .scene), as mentioned here: https://developer.apple.com/documentation/realitykit/realitycoordinatespaceconverting and https://developer.apple.com/documentation/swiftui/environmentvalues/immersivespacedisplacement, to compute the translation and apply it to my virtual object? I haven't gotten that working yet. Can someone tell me how to do this correctly?
Does anyone have guidance on, or experience with, using TriggerVolumes to detect collisions rather than the physics engine in RealityKit?
Aside from not participating in the physics simulation, are there any other downsides or upsides to using them?
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView. I have had no luck so far making it work and need some guidance to move on.
The image file is stored in the assets like below:
And below is the source code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @State var imageEntity: Entity = {
        let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
        return anchorEntity
    }()

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content as a child of the image anchor.
                // Using `try` (not `try?`) so the catch block below is actually reachable.
                let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
                imageEntity.addChild(scene)
                content.add(imageEntity)
            } catch {
                print("Error occurred when adding RealityView content: \(error)")
            }
        }
    }
}
I'm trying to detect when two entities collide. The following code shows a very basic set-up. How do I get the upperSphere to move? If I set its physicsBody.mode to .dynamic it moves with gravity (and the collision is reported), but when in kinematic mode it doesn't respond to the impulse:
import SwiftUI
import RealityKit

struct CollisionView: View {
    @State var subscriptions: [EventSubscription] = []

    var body: some View {
        RealityView { content in
            let upperSphere = ModelEntity(mesh: .generateSphere(radius: 0.04))
            let lowerSphere = ModelEntity(mesh: .generateSphere(radius: 0.04))

            upperSphere.position = [0, 2, -2]
            upperSphere.physicsBody = .init()
            upperSphere.physicsBody?.mode = .kinematic
            upperSphere.physicsMotion = PhysicsMotionComponent()
            upperSphere.generateCollisionShapes(recursive: false)

            lowerSphere.position = [0, 1, -2]
            lowerSphere.physicsBody = .init()
            lowerSphere.physicsBody?.mode = .static
            lowerSphere.generateCollisionShapes(recursive: false)

            let sub = content.subscribe(to: CollisionEvents.Began.self, on: nil) { _ in print("Collision!") }
            subscriptions.append(sub)

            content.add(upperSphere)
            content.add(lowerSphere)

            Task {
                try? await Task.sleep(for: .seconds(2))
                print("Impulse applied")
                upperSphere.applyLinearImpulse([0, -1, 0], relativeTo: nil)
            }
        }
    }
}
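For what it's worth, kinematic bodies in RealityKit are not driven by forces or impulses; they follow the transform or the velocities set on their PhysicsMotionComponent. Under that assumption, a sketch of moving the kinematic sphere (while still getting the collision event) would set a velocity instead of applying an impulse:

```swift
import RealityKit

// Sketch: drive a kinematic body by setting a velocity on its
// PhysicsMotionComponent instead of applying an impulse.
func startFalling(_ kinematicSphere: ModelEntity) {
    // Kinematic bodies ignore forces/impulses; they follow the transform
    // or the velocities you set here.
    kinematicSphere.physicsMotion?.linearVelocity = [0, -1, 0] // m/s, straight down
}
```

Called in place of applyLinearImpulse inside the Task above, this should move the sphere downward and still fire CollisionEvents.Began when it reaches the static sphere.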
When developing a Vision Pro application, I need to first move and then rotate an entity in a RealityView.
How can these two animations be executed sequentially? (I tested it, and executing them simultaneously results in incorrect animation positions.)
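One possible approach, sketched under the assumption that move(to:relativeTo:duration:timingFunction:) is acceptable for both steps: run the translation, wait out its duration, then start the rotation from the updated transform. The helper name below is illustrative:

```swift
import RealityKit

// Sketch: move the entity first, then rotate it, by waiting for the first
// animation's duration before starting the second one.
func moveThenRotate(_ entity: Entity) async throws {
    // Step 1: translate.
    var moved = entity.transform
    moved.translation += [0, 0, -0.5]
    entity.move(to: moved, relativeTo: entity.parent, duration: 1.0, timingFunction: .easeInOut)

    // Wait for the translation to finish before starting the rotation.
    try await Task.sleep(for: .seconds(1.0))

    // Step 2: rotate, starting from the transform the move ended on.
    var rotated = entity.transform
    rotated.rotation = simd_quatf(angle: .pi / 2, axis: [0, 1, 0]) * rotated.rotation
    entity.move(to: rotated, relativeTo: entity.parent, duration: 1.0, timingFunction: .easeInOut)
}
```

If the animations are instead built as FromToByAnimation resources, AnimationResource.sequence(with:) is another way to chain them.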