I have a visionOS app that uses an immersive space with a RealityView. The app adds RealityKit entities to the app's Scene instance and uses scene.raycast to find CollisionCastHits.
I now want to write a unit test that checks whether the app finds the right hits.
To do so, I have to access the Scene instance, both to add entities and to check whether scene.raycast hits them.
But how can I access the scene instance?
In the app itself I can access it, e.g. via the content parameter of the RealityView make closure, or via @Environment(\.realityKitScene). But neither seems to be possible in a unit test.
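For example, inside a regular view both access paths work (a minimal illustration; the view name is just for the example):

import SwiftUI
import RealityKit

struct SomeView: View {
    // One way: the RealityKit scene from the SwiftUI environment.
    @Environment(\.realityKitScene) private var environmentScene

    var body: some View {
        RealityView { content in
            // Another way: once an entity has been added to the content,
            // its scene property points to the app's Scene instance.
            let entity = Entity()
            content.add(entity)
            let scene = entity.scene
            _ = scene
        }
    }
}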
I tried the following test function:
import Testing
import SwiftUI
import RealityKit

@MainActor @Test func test() async throws {
    var scene: RealityKit.Scene?
    await withCheckedContinuation { continuation in
        // The make closure is never called here, so the
        // continuation is never resumed.
        _ = RealityView(make: { content in
            print("make")
            let entity = Entity()
            content.add(entity)
            scene = entity.scene
            continuation.resume()
        })
    }
    #expect(scene != nil)
}
But this only logs:
◇ Test test() started.
SWIFT TASK CONTINUATION MISUSE: test() leaked its continuation!
The reason is apparently that the make closure of RealityView is only called when SwiftUI evaluates the view as part of a view hierarchy.
So, is it possible at all to access the app's scene in a unit test?
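One workaround I considered, but have not verified: host the RealityView in a window so that SwiftUI actually evaluates it. Whether a plain UIWindow plus UIHostingController behaves this way in a visionOS unit-test host is an open question; the following is only a sketch of the idea:

import Testing
import SwiftUI
import RealityKit
import UIKit

@MainActor @Test func testHostedRealityView() async throws {
    var scene: RealityKit.Scene?

    let view = RealityView { content in
        let entity = Entity()
        content.add(entity)
        scene = entity.scene
    }

    // Hosting the view is what should make SwiftUI call the make closure.
    // It is unclear whether a UIWindow created this way is functional in a
    // test host without a window scene.
    let window = UIWindow(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
    window.rootViewController = UIHostingController(rootView: view)
    window.makeKeyAndVisible()
    window.rootViewController?.view.layoutIfNeeded()

    // Crude: give SwiftUI a few run-loop turns to build the hierarchy.
    try await Task.sleep(for: .milliseconds(100))

    #expect(scene != nil)
}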
The setup:
I am developing a visionOS app that uses an immersive space.
The user sees a board with entities placed on it. My app positions the board in front of the default camera, and positions the entities with a certain position and orientation relative to the board. Placement and rotation should be animated.
The problem:
If I place the entities by assigning a Transform to the transform property of the entity directly, i.e. without animation, the result is correct.
However, I have to use the entity's move(to:) function to animate it, and move(to:) works in an unexpected way.
I thus wrote a little test app, based on Apple's visionOS immersive app template (below). It treats the following 5 cases (my reading of the documented move(to:) semantics is sketched right after this list):
1) Set the transform directly (without animation). This gives the correct result and works as expected (without animation).
2) Set the transform using move relative to world (without animation). This gives the correct result, although it does not work as expected: I expected "relative to world" to mean that translation and rotation are interpreted in world space. This seems to be wrong for the translation and right for the rotation.
3) Set the transform using move relative to parentEntity (without animation). This gives a wrong result, although translation and rotation are defined relative to parentEntity.
4) Set the transform using move relative to world, with animation. This also gives a wrong result, and without animation.
5) Set the transform using move relative to parentEntity, with animation. This also gives a wrong result, and without animation.
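For reference, this is how I read the documentation: the to transform is interpreted in the coordinate space of the relativeTo entity. If that reading is right, the following two placements should be equivalent, which case 3 seems to contradict:

import RealityKit

// My reading of the documented semantics of move(to:relativeTo:),
// which case 3 seems to contradict.
func placeDirectly(_ box: Entity, _ t: Transform) {
    // Sets the local transform, i.e. relative to the parent.
    box.transform = t
}

func placeViaMove(_ box: Entity, _ t: Transform) {
    // Should be equivalent: `t` is interpreted in the parent's space.
    box.move(to: t, relativeTo: box.parent)
}

// And with relativeTo: nil, `t` should be interpreted in world space, i.e.
// place the box at world position (0, y_up, 0), not on the board at z = -3.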
Here are the screenshots for cases 1...5:
[Screenshot: Cases 1 & 2]
[Screenshot: Case 3]
[Screenshot: Cases 4 & 5]
The question:
So, obviously, I don't understand what move(to:) does. I would be happy to get advice on what is wrong and how to do it right.
Here is the code:
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    let boardHeight: Float = 0.1
    let boxHeight: Float = 0.3

    var body: some View {
        RealityView { content in
            let boardEntity = makeBoard()
            content.add(boardEntity)
            let boxEntity = makeBox(parentEntity: boardEntity)
            boardEntity.addChild(boxEntity)
        }
    }

    func makeBoard() -> ModelEntity {
        let mesh = MeshResource.generateBox(width: 1.0, height: boardHeight, depth: 1.0)
        var material = UnlitMaterial()
        material.color.tint = .red
        let boardEntity = ModelEntity(mesh: mesh, materials: [material])
        boardEntity.transform.translation = [0, 0, -3]
        return boardEntity
    }

    func makeBox(parentEntity: Entity) -> ModelEntity {
        let mesh = MeshResource.generateBox(width: 0.3, height: boxHeight, depth: 0.3)
        var material = UnlitMaterial()
        material.color.tint = .green
        let boxEntity = ModelEntity(mesh: mesh, materials: [material])

        // Set position and orientation of the box.
        // To put the box onto the board, move it up by half the height
        // of the board plus half the height of the box.
        let y_up = boardHeight/2.0 + boxHeight/2.0
        let translation = SIMD3<Float>(0, y_up, 0)

        // Turn the box by 45 degrees around the y axis.
        let rotationY = simd_quatf(angle: Float(45.0 * .pi/180.0), axis: SIMD3(x: 0, y: 1, z: 0))
        let transform = Transform(rotation: rotationY, translation: translation)

        // Do the actual move.
        // 1) Set transform directly (without animation)
        boxEntity.transform = transform // Translation and rotation correct

        // 2) Set transform using move relative to world (without animation)
        // boxEntity.move(to: transform, relativeTo: nil) // Translation and rotation correct

        // 3) Set transform using move relative to parentEntity (without animation)
        // boxEntity.move(to: transform, relativeTo: parentEntity) // Translation incorrect, rotation correct

        // 4) Set transform using move relative to world, with animation
        // boxEntity.move(to: transform,
        //                relativeTo: nil,
        //                duration: 1.0,
        //                timingFunction: .linear) // Translation incorrect, rotation incorrect, no animation

        // 5) Set transform using move relative to parentEntity, with animation
        // boxEntity.move(to: transform,
        //                relativeTo: parentEntity,
        //                duration: 1.0,
        //                timingFunction: .linear) // Translation incorrect, rotation incorrect, no animation

        return boxEntity
    }
}
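In case move(to:) cannot be made to work, an explicit transform animation might be an alternative. This is only a sketch I am considering, not verified in the test app above:

import RealityKit

// Sketch of an alternative to move(to:): build the transform animation
// explicitly with FromToByAnimation.
func animate(_ entity: Entity, to target: Transform) {
    let fromTo = FromToByAnimation(
        from: entity.transform,   // current local transform
        to: target,               // target local transform
        duration: 1.0,
        timing: .linear,
        bindTarget: .transform
    )
    if let resource = try? AnimationResource.generate(with: fromTo) {
        entity.playAnimation(resource)
    }
}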
I am developing an immersive visionOS app based on RealityKit and SwiftUI.
This app has ModelEntities that have a PerspectiveCamera entity as a child.
I want to display the camera view in a 2D window in visionOS.
I create the camera and add it to the entity with:
let cameraEntity = PerspectiveCamera()
cameraEntity.camera.far = 10000
cameraEntity.camera.fieldOfViewInDegrees = 60
cameraEntity.camera.near = 0.01
entity.addChild(cameraEntity)
My app is not an AR app; the immersive view is generated programmatically.
On iOS, I could use an ARView with nonAR camera mode, but ARView is not available on visionOS.
How can I show the camera view in a SwiftUI 2D window in the immersive space?
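On the SwiftUI side I only have the plumbing so far: a 2D window that could display frames if I had a way to produce them. In this sketch, the frame property is a hypothetical stand-in for the missing offscreen rendering step:

import SwiftUI
import Observation

// Sketch: a 2D window view that displays frames. `frame` is a hypothetical
// stand-in; producing it from the PerspectiveCamera (offscreen rendering)
// is exactly the open question.
@Observable
final class CameraFeedModel {
    var frame: CGImage?   // would be filled by an offscreen renderer
}

struct CameraWindowView: View {
    let model: CameraFeedModel

    var body: some View {
        if let frame = model.frame {
            Image(decorative: frame, scale: 1)
                .resizable()
                .aspectRatio(contentMode: .fit)
        } else {
            Text("No camera frame yet")
        }
    }
}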
My use case is the following:
Every user of my app can, as an owner, create a set of items.
These items are private until the owner invites other users to share all of them as participants.
The participants can modify the shared items and/or add further items.
So, sharing is not done per individual item, but for all items of an owner.
I want to use CoreData & CloudKit to have local copies of private and shared items.
To my understanding, CoreData & CloudKit puts all mirrored items into a special zone "com.apple.coredata.cloudkit.zone".
So this zone should be shared, i.e. all items in it.
In the video it is said that NSPersistentCloudKitContainer can optionally use record zone sharing, in contrast to hierarchical record sharing via a root record.
But how is this done?
Maybe I can declare the zone "com.apple.coredata.cloudkit.zone" as a shared zone?
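My current guess at the API, pieced together from the documentation (an unverified sketch): you don't share the mirroring zone itself; instead you pass the owner's objects to share(_:to:), and Core Data moves them into a new shared record zone backed by a single CKShare:

import CoreData
import CloudKit

// Unverified sketch: share a set of the owner's items in one go.
// NSPersistentCloudKitContainer.share(_:to:) creates a CKShare; passing
// `nil` for the existing share should make Core Data move the objects
// into a new shared record zone (record zone sharing).
func shareAllItems(_ items: [NSManagedObject],
                   container: NSPersistentCloudKitContainer) async throws -> CKShare {
    let (_, share, _) = try await container.share(items, to: nil)
    share[CKShare.SystemFieldKey.title] = "All my items" as CKRecordValue
    return share
}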