Hello
I was wondering whether the keyboard awareness feature that came with visionOS 2 also works for a MacBook keyboard when someone is in an immersive .progressive custom environment, such as the "Garden" environment from "Construct an immersive environment for visionOS", in an app I'm currently developing, so that the keyboard remains visible. I haven't managed to achieve this so far.
Thank you very much in advance!
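For context, here is a rough sketch of the kind of scene setup I mean (GardenApp, GardenSpace, and GardenEnvironmentView are placeholder names of mine, not the sample's actual types):
import SwiftUI

@main
struct GardenApp: App { // placeholder name
    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        // The custom environment is opened as a progressive immersive space,
        // so the Digital Crown controls how much of the passthrough is replaced.
        ImmersiveSpace(id: "GardenSpace") {
            GardenEnvironmentView() // placeholder for the view that loads the "Garden" scene
        }
        .immersionStyle(selection: .constant(.progressive), in: .progressive)
    }
}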
Hello,
It's not entirely clear to me why my attachment, no matter how I position it, is always hidden/covered by my visionOS app window. I'm trying to display the attachment one layer above/in front of the window. When my head isn't directed towards the window I can see the attachment, but otherwise it is covered by it.
I appreciate any help!
ContentView.swift
import SwiftUI
import RealityKit

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    public var body: some View {
        VStack {
            Text("Hello World")
                .font(.largeTitle)
            Button("Start") {
                Task {
                    await openImmersiveSpace(id: "AppSpace")
                }
            }
        }
    }
}
ImmersiveView.swift
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    var loader: EnvironmentLoader

    public var body: some View {
        RealityView { content, attachments in
            content.add(try! await loader.getEntity())

            let headEntity = AnchorEntity(.head)
            content.add(headEntity)

            if let text = attachments.entity(for: "at01") {
                text.position = [0, 0, -0.25]
                headEntity.addChild(text)
            }
        } attachments: {
            Attachment(id: "at01") {
                Text("Hello World!")
                    .font(.extraLargeTitle)
                    .padding()
            }
        }
    }
}
App.swift
import SwiftUI

@main
private struct App: SwiftUI.App { // qualified so the struct name doesn't shadow the App protocol
    @State var loader = EnvironmentLoader()

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView(loader: loader)
        }
        .immersionStyle(selection: .constant(.progressive), in: .progressive)
    }
}
Hello,
Would it be possible to use any of the available visionOS environments when I use an app that requires me to be in an immersive space? I'm developing an app where users can start the immersive space experience by pressing a button. In my case, it would be helpful if the user could still choose a visionOS environment using the Digital Crown, but currently, it seems to be unavailable after opening an immersive space.
Thank you very much in advance!
Hi,
I'm currently working on messages that should appear in front of the user depending on the state of my visionOS app. How can I change the distance of the displayed message relative to the user if the message is shown as a View? Or is this only possible if I create an entity for the message and then apply .setPosition(_:relativeTo:), e.g. relative to the head anchor? Currently I can change the x and y coordinates of the view, since it works within a 2D space, but as I intend to display the view in my immersive space, it would be great if I could push the message a little further away from the user, as it is currently a bit too close in the user's view. If there is a solution that doesn't require entities, I would prefer that.
Thank you for your help!
Below is an example:
Feedback.swift
import SwiftUI
struct Feedback: View {
    let message: String

    var body: some View {
        VStack {
            Text(message)
        }
        .position(x: 0, y: -850) // how to adapt distance/depth relative to the user in the UI?
    }
}
ImmersiveView.swift
import SwiftUI
import RealityKit
struct ImmersiveView: View {
    @State private var feedbackMessage = "Hello World"

    public var body: some View {
        VStack {}
            .overlay(
                Feedback(message: feedbackMessage)
            )

        RealityView { content in
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            let spatialTrackingSession = SpatialTrackingSession()
            _ = await spatialTrackingSession.run(configuration)

            // Head
            let headEntity = AnchorEntity(.head)
            content.add(headEntity)
        }
    }
}
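For what it's worth, the entity-based variant I'm considering would look roughly like this: a sketch only, where FeedbackImmersiveView and the "FeedbackMessage" attachment ID are placeholder names of mine. The message is added as an attachment on the head anchor and pushed back along -z with setPosition(_:relativeTo:).
import SwiftUI
import RealityKit

struct FeedbackImmersiveView: View { // hypothetical view, just to illustrate the idea
    @State private var feedbackMessage = "Hello World"

    var body: some View {
        RealityView { content, attachments in
            let headEntity = AnchorEntity(.head)
            content.add(headEntity)

            if let message = attachments.entity(for: "FeedbackMessage") {
                // Attach the message to the head anchor and move it 1 m in front of
                // the user (negative z); a larger magnitude pushes it further away.
                headEntity.addChild(message)
                message.setPosition([0, 0, -1.0], relativeTo: headEntity)
            }
        } attachments: {
            Attachment(id: "FeedbackMessage") {
                Text(feedbackMessage)
                    .font(.extraLargeTitle)
                    .padding()
            }
        }
    }
}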
Hi,
I've tried to implement collision detection between a sphere attached to the left index finger and a simple 3D rectangular box. The sphere on my left index finger passes through the box, but no collision seems to be detected. What am I missing?
Thank you very much for your consideration!
Below is my code:
App.swift
import SwiftUI

@main
private struct TrackingApp: App {
    public init() {
        ...
    }

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView()
        }
    }
}
ImmersiveView.swift
import SwiftUI
import RealityKit
struct ImmersiveView: View {
    @State private var subscriptions: [EventSubscription] = []

    public var body: some View {
        RealityView { content in
            /* LEFT HAND */
            let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
            let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01), materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
            leftHandIndexFingerEntity.generateCollisionShapes(recursive: true)
            leftHandIndexFingerEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateSphere(radius: 0.01)])
            leftHandIndexFingerEntity.name = "LeftHandIndexFinger"
            content.add(leftHandIndexFingerEntity)

            /* 3D RECTANGLE */
            let width: Float = 0.7
            let height: Float = 0.35
            let depth: Float = 0.005
            let rectangleEntity = ModelEntity(mesh: .generateBox(size: [width, height, depth]), materials: [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)])
            rectangleEntity.transform.rotation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
            let rectangleAnchor = AnchorEntity(world: [0.1, 0.85, -0.5])
            rectangleEntity.generateCollisionShapes(recursive: true)
            rectangleEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateBox(size: [width, height, depth])])
            rectangleEntity.name = "Rectangle"
            rectangleAnchor.addChild(rectangleEntity)
            content.add(rectangleAnchor)

            /* Collision Handling */
            let subscription = content.subscribe(to: CollisionEvents.Began.self, on: rectangleEntity) { collisionEvent in
                print("Collision detected between \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
            }
            subscriptions.append(subscription)
        }
    }
}
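One thing I'm unsure about: whether the hand AnchorEntity needs an active SpatialTrackingSession with hand tracking for its transform (and therefore the attached collision shape) to actually follow the fingertip in the app's space. Below is a small sketch of what I mean, placed inside the RealityView closure before the anchor setup above; it's the same pattern I use elsewhere in my app, and I don't know whether it is required here.
// Sketch: request hand tracking via a SpatialTrackingSession before adding the
// hand-anchored entities. I'm not yet sure whether this is needed for the
// collision shape on the fingertip anchor to track the real finger.
let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
let spatialTrackingSession = SpatialTrackingSession()
_ = await spatialTrackingSession.run(configuration)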
Hi,
I'm experimenting with how my visionOS app interacts with the Mac Virtual Display while the immersive space is active. Specifically, I'm trying to find out if my app can detect key presses or trackpad interactions (like clicks) when the Mac Virtual Display is in use for work, and my app is running in the background with an active immersive space.
So far, I've tested a head-tracking system in my app that works when the app is open with an active immersive space, where I just moved the Mac Virtual Display in front of the visionOS app window.
Could my visionOS app listen to keyboard and trackpad events that happen in the Mac Virtual Display environment?
Hi,
Is there a way to create an AnchorEntity that is attached to the window / WindowGroup of a visionOS app, so that there would be a box that aligns with the window?
Thanks for your help!
Hi,
I was wondering whether there is a possibility, similar to connecting the AVP to e.g. a MacBook, to display the Mac's screen/content within my app's window after opening the immersive space.
Thank you very much in advance for your help!
Hello there,
I'm currently working on a hand-tracking system. I've already placed spheres on some joint points of the left and right hand. Now I want to access the translation/position values of these entities in the update(context: SceneUpdateContext) function. My question is: can I access them via handAnchors(at:), and which handSkeleton.joint(_:) names reference the same locations? (E.g. does AnchorEntity(.hand(.right, location: .indexFingerTip)) correspond to handSkeleton.joint(.indexFingerTip)?) The goal is to access the translation of the joints where a sphere has been placed for each hand and to update that data every frame through the update(context:) function.
I would very much appreciate any help!
See code example down below:
ImmersiveView.swift
import SwiftUI
import RealityKit
import ARKit
struct ImmersiveView: View {
    public var body: some View {
        RealityView { content in
            /* HEAD */
            let headEntity = AnchorEntity(.head)
            content.add(headEntity)

            /* LEFT HAND */
            let leftHandWristEntity = AnchorEntity(.hand(.left, location: .wrist))
            let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
            let leftHandWristSphere = ModelEntity(mesh: .generateSphere(radius: 0.02), materials: [SimpleMaterial(color: .red, isMetallic: false)])
            let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01), materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            leftHandWristEntity.addChild(leftHandWristSphere)
            content.add(leftHandWristEntity)
            leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
            content.add(leftHandIndexFingerEntity)
        }
    }
}
TrackingSystem.swift
import SwiftUI
import simd
import ARKit
import RealityKit
public class TrackingSystem: System {
    static let query = EntityQuery(where: .has(AnchoringComponent.self))

    private let arKitSession = ARKitSession()
    private let worldTrackingProvider = WorldTrackingProvider()
    private let handTrackingProvider = HandTrackingProvider()

    public required init(scene: RealityKit.Scene) {
        setUpSession()
    }

    private func setUpSession() {
        Task {
            do {
                try await arKitSession.run([worldTrackingProvider, handTrackingProvider])
            } catch {
                print("Error: \(error)")
            }
        }
    }

    public func update(context: SceneUpdateContext) {
        guard worldTrackingProvider.state == .running && handTrackingProvider.state == .running else { return }
        let currentTime = CACurrentMediaTime() // timestamp used for the queries below
        let _ = context.entities(matching: Self.query, updatingSystemWhen: .rendering)
        if let avp = worldTrackingProvider.queryDeviceAnchor(atTimestamp: currentTime) {
            let hands = handTrackingProvider.handAnchors(at: currentTime)
            ...
        }
    }
}
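To make the correspondence I'm asking about concrete, here is a rough sketch (based on my understanding, which may be wrong) of how the left index fingertip's world-space position could be derived from the HandAnchor inside update(context:). This should match the sphere placed via AnchorEntity(.hand(.left, location: .indexFingerTip)):
// Sketch (my assumption): derive the left index fingertip's world-space position
// from the HandAnchor returned by handAnchors(at:).
let hands = handTrackingProvider.handAnchors(at: currentTime)
if let leftHand = hands.leftHand,
   let indexTip = leftHand.handSkeleton?.joint(.indexFingerTip),
   indexTip.isTracked {
    // origin -> hand anchor -> joint
    let worldTransform = leftHand.originFromAnchorTransform * indexTip.anchorFromJointTransform
    let worldPosition = SIMD3<Float>(worldTransform.columns.3.x,
                                     worldTransform.columns.3.y,
                                     worldTransform.columns.3.z)
    print("Left index fingertip (world): \(worldPosition)")
}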
Hi,
Is there a way for researchers to get or request access to eye tracking data from the Apple Vision Pro within the boundaries of the app to be developed? I'm currently involved in research where access to eye tracking data would be beneficial (I'm aware of many existing discussions about this, but I was still wondering if there might be an option, particularly for research labs.)
Thank you for your answer.
Hi,
While developing for visionOS, I was wondering why, when I use queryDeviceAnchor() with a WorldTrackingProvider() in the update(context: SceneUpdateContext) function after opening the immersive space, it initially provides DeviceAnchor data every frame but stops at some point (about 5-10 seconds after pressing the button that opens the immersive space). After that it no longer updates continuously and only updates sporadically when I move my head abruptly to the left, right, etc. The tracking doesn't seem to work as it should directly on the AVP device.
Any help would be greatly appreciated!
See my code down below:
ContentView.swift
import SwiftUI
struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        VStack {
            Text("Head Tracking Prototype")
                .font(.largeTitle)
            Button("Start Head Tracking") {
                Task {
                    await openImmersiveSpace(id: "appSpace")
                }
            }
        }
        .onChange(of: scenePhase) { _, newScenePhase in
            switch newScenePhase {
            case .active:
                print("...")
            case .inactive:
                print("...")
            case .background:
                break
            @unknown default:
                print("...")
            }
        }
    }
}
HeadTrackingApp.swift
import SwiftUI
@main
struct HeadTrackingApp: App {
    init() {
        HeadTrackingSystem.registerSystem()
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "appSpace") {
        }
    }
}
HeadTrackingSystem.swift
import SwiftUI
import ARKit
import RealityKit
class HeadTrackingSystem: System {
    let arKitSession = ARKitSession()
    let worldTrackingProvider = WorldTrackingProvider()

    required public init(scene: RealityKit.Scene) {
        setUpSession()
    }

    func setUpSession() {
        Task {
            do {
                try await arKitSession.run([worldTrackingProvider])
            } catch {
                print("Error: \(error)")
            }
        }
    }

    public func update(context: SceneUpdateContext) {
        guard worldTrackingProvider.state == .running else { return }
        let avp = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
        print(avp!)
    }
}
Hi guys,
I'm currently working on a head-tracking application for visionOS and was wondering whether there are any properties or APIs to access the position of the app window while in an immersive space. I was planning to determine whether the window is within the AVP's orientation (through queryDeviceAnchor()) or "visible space". Or is there a property or other data that tells me whether the app window is within the user's field of view, e.g. when the user turns around and the window ends up behind them?
I would be extremely thankful for any helpful input!
import SwiftUI

@main
struct HeadTrackingApp: App {
    init() {
        HeadTrackingSystem.registerSystem()
    }

    var body: some Scene {
        WindowGroup { // Basically getting the spatial coordinates of this
            ContentView()
        }

        ImmersiveSpace(id: "appSpace") {
        }
    }
}
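To make the idea more concrete, here is a rough sketch of the check I have in mind. It assumes I already had a world-space position for the app window, which is exactly the part I don't know how to obtain; the function name and parameters are placeholders of mine.
import Foundation
import simd

// Sketch only: windowPosition is a hypothetical world-space position of the app
// window; obtaining it is the actual open question.
func isWindowInView(deviceTransform: simd_float4x4,
                    windowPosition: SIMD3<Float>,
                    maxAngle: Float = .pi / 4) -> Bool {
    // Device (head) position taken from the DeviceAnchor's originFromAnchorTransform.
    let devicePosition = SIMD3<Float>(deviceTransform.columns.3.x,
                                      deviceTransform.columns.3.y,
                                      deviceTransform.columns.3.z)
    // The device looks along its local -Z axis.
    let forward = simd_normalize(-SIMD3<Float>(deviceTransform.columns.2.x,
                                               deviceTransform.columns.2.y,
                                               deviceTransform.columns.2.z))
    let toWindow = simd_normalize(windowPosition - devicePosition)
    // Angle between the viewing direction and the direction to the window.
    let cosine = max(-1, min(1, simd_dot(forward, toWindow)))
    return acos(cosine) < maxAngle
}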
Hey guys,
I was wondering if anyone could help me. I'm currently trying to run an ARKitSession() with a WorldTrackingProvider() that makes use of DeviceAnchor. In the simulator everything seems to work fine and the WorldTrackingProvider runs, but when I try to run the app on my AVP, the WorldTrackingProvider pauses after initialization. I'm new to Apple development and would be thankful for any helpful input!
Below my current code:
HeadTrackingApp.swift
import SwiftUI
@main
struct HeadTrackingApp: App {
init() {
HeadTrackingSystem.registerSystem()
}
var body: some Scene {
WindowGroup {
ContentView()
}
}
}
ContentView.swift
import SwiftUI
struct ContentView: View {
    var body: some View {
        VStack {
            Text("Head Tracking Prototype")
                .font(.largeTitle)
        }
    }
}
HeadTrackingSystem.swift
import SwiftUI
import ARKit
import RealityKit
class HeadTrackingSystem: System {
    let arKitSession = ARKitSession()
    let worldTrackingProvider = WorldTrackingProvider()
    var avp: DeviceAnchor?

    required public init(scene: RealityKit.Scene) {
        setUpSession()
    }

    func setUpSession() {
        Task {
            do {
                print("Starting ARKit session...")
                try await arKitSession.run([worldTrackingProvider])
                print("Initial World Tracking Provider State: \(worldTrackingProvider.state)")
                self.avp = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
                if let avp = getAVPPositionOrientation() {
                    print("AVP data: \(avp)")
                } else {
                    print("No AVP position and orientation available.")
                }
            } catch {
                print("Error: \(error)")
            }
        }
    }

    func getAVPPositionOrientation() -> DeviceAnchor? {
        return avp
    }
}