In the WWDC session titled "Deep dive into volumes and immersive spaces", the developers discussed adding a Spatial Tracking Session and an Anchor Entity to detect the floor, then glossed over some important details. They added a spatial tap gesture so the user can place content relative to the floor anchor, but they left out a lot of information.
.gesture(
    SpatialTapGesture(
        coordinateSpace: .immersiveSpace
    )
    .targetedToAnyEntity()
    .onEnded { value in
        handleTapOnFloor(value: value)
    }
)
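The session never shows handleTapOnFloor, so this is only my guess at what it does: convert the tap into the tapped (floor) entity's space and drop content there. A minimal sketch, not anything from the sample code:

// Hypothetical sketch of handleTapOnFloor, assuming the tap value can be
// converted into the tapped entity's space the same way drag values are.
func handleTapOnFloor(value: EntityTargetValue<SpatialTapGesture.Value>) {
    // location3D is in immersive-space coordinates; convert it into the floor entity's space.
    let location = value.convert(value.location3D, from: .immersiveSpace, to: value.entity)
    let marker = ModelEntity(
        mesh: .generateSphere(radius: 0.05),
        materials: [SimpleMaterial(color: .red, isMetallic: false)])
    marker.position = location
    value.entity.addChild(marker)
}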
My understanding is that an entity has to have an InputTargetComponent and a CollisionComponent for gestures like this to work. How can we add a collision shape to an AnchorEntity when we don't know its size or shape?
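The only workaround I can think of is to guess at the extent, something like this sketch that attaches a large, thin box collider at the anchor (the 3 m x 3 m size is an arbitrary assumption on my part, not anything from the session):

let floorAnchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: SIMD2(x: 0.1, y: 0.1)))
floorAnchor.components.set(InputTargetComponent())
// Arbitrary 3 m x 3 m x 2 cm slab; this does not reflect the real plane extents.
floorAnchor.components.set(CollisionComponent(
    shapes: [.generateBox(width: 3, height: 0.02, depth: 3)],
    isStatic: true))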
I've been trying for days to understand what is happening here and I just don't get it. It is even more frustrating that the example project that Apple released does not contain any of these features.
I would like to be able to:
Detect the floor plane
Get the position/transform of the floor plane
Add a collider to the floor plane
Enable collisions and physics on the floor plane
Enable gestures on the floor plane
It seems to me that the Anchor Entity is placed at an entirely arbitrary position. It has absolutely no relationship to the rectangle with the floor label that I can see in the Xcode visualization. It is just a point, not a plane or rect that I can use.
I've tried manually calculating the collision shape after the anchor is detected, but nothing I have tried works. I can't tap on the floor with gestures. I can't drop entities onto the floor. I can't seem to do anything at all with this floor anchor other than place an entity at a totally arbitrary location somewhere on the floor.
Is there any way at all, with Spatial Tracking Session and Anchor Entity, to get the actual plane that was detected?
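As far as I can tell, the lower-level ARKit API does expose the detected plane's transform and geometry; if nothing works with AnchorEntity I would expect to fall back to something like this sketch (assuming PlaneDetectionProvider is available and world sensing is authorized):

import ARKit

let arSession = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])

func watchForFloor() async throws {
    // Requires world-sensing authorization; run the provider before reading updates.
    try await arSession.run([planeDetection])
    for await update in planeDetection.anchorUpdates {
        let plane = update.anchor
        guard plane.classification == .floor else { continue }
        // Unlike AnchorEntity, PlaneAnchor exposes its transform and mesh geometry.
        print("floor transform:", plane.originFromAnchorTransform)
        print("floor geometry:", plane.geometry)
    }
}

Here is my current code for reference: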
struct FloorExample: View {
    @State var trackingSession: SpatialTrackingSession = SpatialTrackingSession()
    @State var subject: Entity?
    @State var floor: AnchorEntity?

    var body: some View {
        RealityView { content, attachments in
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
            _ = await session.run(configuration)
            self.trackingSession = session

            let floorAnchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: SIMD2(x: 0.1, y: 0.1)))
            floorAnchor.anchoring.physicsSimulation = .none
            floorAnchor.name = "FloorAnchorEntity"
            floorAnchor.components.set(InputTargetComponent())
            floorAnchor.components.set(CollisionComponent(shapes: .init()))
            content.add(floorAnchor)
            self.floor = floorAnchor

            // This is just here to let me see where visionOS decided to "place" the floor anchor.
            let floorPlaced = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .black, isMetallic: false)])
            floorAnchor.addChild(floorPlaced)

            if let scene = try? await Entity(named: "AnchorLabsFloor", in: realityKitContentBundle) {
                content.add(scene)

                if let subject = scene.findEntity(named: "StepSphereRed") {
                    self.subject = subject
                }

                // I can see when the anchor is added
                _ = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
                    event.anchor.generateCollisionShapes(recursive: true) // this doesn't seem to work
                    print("**anchor changed** \(event)")
                    print("**anchor** \(event.anchor)")
                }

                // place the reset button near the user
                if let panel = attachments.entity(for: "Panel") {
                    panel.position = [0, 1, -0.5]
                    content.add(panel)
                }
            }
        } update: { content, attachments in
        } attachments: {
            Attachment(id: "Panel", {
                Button(action: {
                    print("**button pressed**")
                    if let subject = self.subject {
                        subject.position = [-0.5, 1.5, -1.5]
                        // Remove the physics body and assign a new one - hack to remove momentum
                        if let physics = subject.components[PhysicsBodyComponent.self] {
                            subject.components.remove(PhysicsBodyComponent.self)
                            subject.components.set(physics)
                        }
                    }
                }, label: {
                    Text("Reset Sphere")
                })
            })
        }
    }
}
Reality Composer Pro question related to custom components
My custom component defines some properties to edit in RCP. Simple ones work fine, but SIMD3 and SIMD2 do not. I'd expect to see their default values, but instead they all show 0. If I try to run the scene like that, it doesn't load. Once I enter some values, then build and run again, it works fine.
More generally, does Apple have documentation on creating properties for components? The only examples I've seen show simple strings and floats. There are no details about vectors, conditional options, grouping properties, etc.
public struct EntitySpawnerComponent: Component, Codable {
    public enum SpawnShape: String, Codable {
        case domeUpper
        case domeLower
        case sphere
        case box
        case plane
        case circle
    }

    // These properties get their default values in RCP
    /// The number of clones to create
    public var Copies: Int = 12
    /// The shape to spawn entities in
    public var SpawnShape: SpawnShape = .domeUpper
    /// Radius for spherical shapes (dome, sphere, circle)
    public var Radius: Float = 5.0

    // These properties DO NOT get their default values in RCP. They all show 0
    /// Dimensions for box spawning (width, height, depth)
    public var BoxDimensions: SIMD3<Float> = SIMD3(2.0, 2.0, 2.0)
    /// Dimensions for plane spawning (width, depth)
    public var PlaneDimensions: SIMD2<Float> = SIMD2(2.0, 2.0)
    /// Track if we've already spawned copies
    public var HasSpawned: Bool = false

    public init() {
    }
}
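In case it matters, my understanding is that a custom component has to be registered before the scene loads so RealityKit can decode it from the RCP file. A minimal sketch of that registration (MyVisionApp and ContentView are placeholder names, not my actual app code):

import SwiftUI
import RealityKit

@main
struct MyVisionApp: App {
    init() {
        // Register the custom component so RealityKit can decode it from the RCP scene.
        EntitySpawnerComponent.registerComponent()
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}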
I have some questions about Apple's privacy manifest.
I have a visionOS app called Project Graveyard. I'm getting ready for the visionOS 2 release. Since my last update Apple has started requiring privacy manifest files, but the documentation is extremely vague and I can't tell if I actually need one or not.
My app stores two types of data for the user:
User Defaults - App settings: lights, rain, window placement etc.
SwiftData + CloudKit - User generated data: a list of project names and some optional text. User customization options for each item.
The data is stored on device or in CloudKit. I do not "collect" this data; it is simply there for the app to function. Do I need a privacy manifest for this type of data? If so, what do I "declare"?
SwiftUI in visionOS has a modifier called preferredSurroundingsEffect that takes a SurroundingsEffect.
From what I can tell, there is only a single effect available: .systemDark.
ImmersiveSpace(id: "MyView") {
    MyView()
        .preferredSurroundingsEffect(.systemDark)
}
I'd like to create another effect to tint the color of passthrough video if possible.
Does anyone know how to create custom SurroundingsEffects?
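If I understand correctly, a colorMultiply effect was added to SurroundingsEffect in a later SDK; if that's right, a passthrough tint would look roughly like this (unverified sketch):

ImmersiveSpace(id: "MyView") {
    MyView()
        // Assumes SurroundingsEffect.colorMultiply(_:) is available (visionOS 2+, unverified).
        .preferredSurroundingsEffect(.colorMultiply(.blue))
}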
In a RealityView, I have a scene loaded from Reality Composer Pro. The entity I'm interacting with has a PhysicallyBasedMaterial with a diffuse color. I want to change that color on a long press. I can get the entity and even get a reference to the material, but I can't seem to change anything about it.
What is the best way to change the color of a material at runtime?
var longPress: some Gesture {
    LongPressGesture(minimumDuration: 0.5)
        .targetedToAnyEntity()
        .onEnded { value in
            value.entity.position.y = value.entity.position.y + 0.01

            if var shadow = value.entity.components[GroundingShadowComponent.self] {
                shadow.castsShadow = true
                value.entity.components.set(shadow)
            }

            if let model = value.entity.components[ModelComponent.self] {
                print("material", model)
                if let mat = model.materials.first {
                    print("material", mat)
                    // I have a material here but I can't set any properties?
                    // mat.diffuseColor does not exist
                }
            }
        }
}
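The closest thing I can piece together from the docs is to cast the material to PhysicallyBasedMaterial, mutate a copy, and write the whole ModelComponent back (materials are value types). A sketch of what I mean, dropped into the .onEnded handler above in place of the print statements:

if var model = value.entity.components[ModelComponent.self],
   var pbr = model.materials.first as? PhysicallyBasedMaterial {
    // Change the tint, then re-assign the materials array and re-set the component.
    pbr.baseColor = PhysicallyBasedMaterial.BaseColor(tint: .red)
    model.materials[0] = pbr
    value.entity.components.set(model)
}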
Here is the full code:
struct Lab5026: View {
    var body: some View {
        RealityView { content in
            if let root = try? await Entity(named: "GestureLab", in: realityKitContentBundle) {
                root.position = [0, -0.45, 0]
                if let subject = root.findEntity(named: "Cube") {
                    subject.components.set(HoverEffectComponent())
                    subject.components.set(GroundingShadowComponent(castsShadow: false))
                }
                content.add(root)
            }
        }
        .gesture(longPress.sequenced(before: dragGesture))
    }

    var longPress: some Gesture {
        LongPressGesture(minimumDuration: 0.5)
            .targetedToAnyEntity()
            .onEnded { value in
                value.entity.position.y = value.entity.position.y + 0.01

                if var shadow = value.entity.components[GroundingShadowComponent.self] {
                    shadow.castsShadow = true
                    value.entity.components.set(shadow)
                }

                if let model = value.entity.components[ModelComponent.self] {
                    print("material", model)
                    if let mat = model.materials.first {
                        print("material", mat)
                        // I have a material here but I can't set any properties?
                        // mat.diffuseColor does not exist
                        // PhysicallyBasedMaterial
                    }
                }
            }
    }

    var dragGesture: some Gesture {
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                let newPosition = value.convert(value.location3D, from: .global, to: value.entity.parent!)
                let limit: Float = 0.175
                value.entity.position.x = min(max(newPosition.x, -limit), limit)
                value.entity.position.z = min(max(newPosition.z, -limit), limit)
            }
            .onEnded { value in
                value.entity.position.y = value.entity.position.y - 0.01
                if var shadow = value.entity.components[GroundingShadowComponent.self] {
                    shadow.castsShadow = false
                    value.entity.components.set(shadow)
                }
            }
    }
}
How can we move the player within a RealityKit/RealityView scene? I am not looking for any animation or gradual movement, just instantaneous position changes.
I am unsure of how to access the player (the person wearing the headset) and its transform within the context of a RealityView.
The goal is to allow the player to enter a full space in immersive mode and explore a space with various objects. They should be able to select an object and move closer to it.
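The only approach I can think of is the inverse one: read the headset pose with ARKit and move the scene content around the user, since the user themselves can't be repositioned. A rough sketch of that idea (teleport is a hypothetical helper name; it assumes the ARKitSession has already been run with the world-tracking provider):

import ARKit
import QuartzCore
import RealityKit

let arSession = ARKitSession()
let worldTracking = WorldTrackingProvider()
// Somewhere earlier: try await arSession.run([worldTracking])

// Hypothetical helper: shift the whole scene root so the selected object
// ends up near the headset, producing an instantaneous "teleport".
func teleport(sceneRoot: Entity, toward target: Entity) {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
    let headPosition = Transform(matrix: device.originFromAnchorTransform).translation
    // Move the scene so the target lands roughly 1 m below and in front of the head.
    let offset = headPosition - target.position(relativeTo: nil) + SIMD3<Float>(0, -1, -1)
    sceneRoot.position += offset
}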