I am currently developing an app for visionOS and have encountered an issue with a component and system that move an entity up and down within a specific Y-axis range. The system works as expected until I introduce sound playback using AVAudioPlayer.
Whenever I use AVAudioPlayer to play a sound, the entity exhibits unexpected behavior, such as freezing or becoming unresponsive. The freeze in the entity's movement is most noticeable the first time the audio plays; after that it becomes subtler, but you can still feel it, especially when the audio is played in quick succession.
The issue is also more noticeable on a real device than in the simulator.
//
// IssueApp.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//

import SwiftUI

@main
struct IssueApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}
//
// ContentView.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State var enlarge = false

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content.
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                if let sphere = scene.findEntity(named: "Sphere") {
                    sphere.components.set(UpAndDownComponent(speed: 0.03, minY: -0.05, maxY: 0.05))
                }
                if let button = attachments.entity(for: "Button") {
                    button.position.y -= 0.3
                    scene.addChild(button)
                }
                content.add(scene)
            }
        } attachments: {
            Attachment(id: "Button") {
                VStack {
                    Button {
                        SoundManager.instance.playSound(filePath: "apple_en")
                    } label: {
                        Text("Play audio")
                    }
                    .animation(.none, value: 0)
                    .fontWeight(.semibold)
                }
                .padding()
                .glassBackgroundEffect()
            }
        }
        .onAppear {
            UpAndDownSystem.registerSystem()
        }
    }
}
//
// SoundManager.swift
// LinguaBubble
//
// Created by Zhendong Chen on 1/14/25.
//

import Foundation
import AVFoundation

class SoundManager {
    static let instance = SoundManager()
    private var audioPlayer: AVAudioPlayer?

    func playSound(filePath: String) {
        guard let url = Bundle.main.url(forResource: filePath, withExtension: "mp3") else { return }
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: url)
            audioPlayer?.play()
        } catch {
            print("Error playing sound. \(error.localizedDescription)")
        }
    }
}
//
// UpAndDownComponent+System.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//

import RealityKit

struct UpAndDownComponent: Component {
    var speed: Float
    var axis: SIMD3<Float>
    var minY: Float
    var maxY: Float
    var direction: Float = 1.0 // 1 for up, -1 for down
    var initialY: Float?

    init(speed: Float = 1.0, axis: SIMD3<Float> = [0, 1, 0], minY: Float = 0.0, maxY: Float = 1.0) {
        self.speed = speed
        self.axis = axis
        self.minY = minY
        self.maxY = maxY
    }
}

struct UpAndDownSystem: System {
    static let query = EntityQuery(where: .has(UpAndDownComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime) // Time between frames

        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var component: UpAndDownComponent = entity.components[UpAndDownComponent.self] else { continue }

            // Capture the starting height the first time this entity is updated.
            if component.initialY == nil {
                component.initialY = entity.transform.translation.y
            }

            // Advance the entity along the Y axis.
            let currentY = entity.transform.translation.y
            let newY = currentY + (component.speed * component.direction * deltaTime)

            // Reverse direction at the edges of the allowed range.
            if newY >= component.initialY! + component.maxY {
                component.direction = -1.0 // Move down
            } else if newY <= component.initialY! + component.minY {
                component.direction = 1.0 // Move up
            }

            // Apply the new position.
            entity.transform.translation = SIMD3<Float>(entity.transform.translation.x, newY, entity.transform.translation.z)

            // Write the updated direction back to the component.
            entity.components[UpAndDownComponent.self] = component
        }
    }
}
Could someone help me with this?
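One plausible cause, offered as a guess rather than a confirmed diagnosis: AVAudioPlayer(contentsOf:) reads the file from disk and primes playback synchronously, and doing that inside the button action puts the work on the main thread, which can stall the RealityKit render loop on first play. A minimal workaround sketch under that assumption; the class name, queue label, and preload-at-launch flow are illustrative, not from the post:

import Foundation
import AVFoundation

// A minimal sketch, assuming the stall comes from AVAudioPlayer's first-use setup.
// (Property access is left unsynchronized for brevity.)
final class PreloadingSoundManager {
    static let instance = PreloadingSoundManager()
    private var audioPlayer: AVAudioPlayer?
    private let loadingQueue = DispatchQueue(label: "sound-loading", qos: .userInitiated)

    // Call this early (e.g. when the view appears), not from the button action.
    func preload(filePath: String) {
        loadingQueue.async { [weak self] in
            guard let url = Bundle.main.url(forResource: filePath, withExtension: "mp3") else { return }
            do {
                let player = try AVAudioPlayer(contentsOf: url)
                player.prepareToPlay() // Pull the audio data into buffers now.
                self?.audioPlayer = player
            } catch {
                print("Error preloading sound. \(error.localizedDescription)")
            }
        }
    }

    // By play time, the expensive work should already be done.
    func play() {
        audioPlayer?.play()
    }
}

If the stall persists, RealityKit's own audio is another avenue worth trying: load an AudioFileResource and call playAudio(_:) on an entity, which keeps playback inside RealityKit's pipeline.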
visionOS: Discuss developing for spatial computing and Apple Vision Pro.
Posts under visionOS tag
Currently I only see the right-eye view when running my test app in the visionOS simulator. But to evaluate whether what I'm doing is actually possible on the device before buying one to develop my app, I'd like to be able to switch between the right-eye and left-eye views in the simulator.
I have a huge sphere with the camera staying inside it, and I turn on front-face culling in the ShaderGraphMaterial applied to that sphere so that I can place other 3D content inside. However, when it comes to attachments, object occlusion never works as I expect: my attachments are occluded by my sphere (some are not, so the behavior is not deterministic).
I then suspected a depth-testing issue, so I started using ModelSortGroup to reorder the rendering sequence. However, it doesn't work. Searching the internet, one post's comments show that ModelSortGroup simply doesn't work on attachments.
So how should I tackle this issue now, to let my attachments appear inside my sphere?
OS/Sys: visionOS 2.3 / Xcode 16.3
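For reference, this is how ModelSortGroup is typically applied (the entity names below are placeholders); as the post notes, it reportedly has no effect on attachment entities, so this is context rather than a fix:

import RealityKit

// Draw the sphere first, then the attachment, within the same sort group.
// `sphereEntity` and `attachmentEntity` stand in for the post's entities.
let group = ModelSortGroup(depthPass: nil)
sphereEntity.components.set(ModelSortGroupComponent(group: group, order: 0))
attachmentEntity.components.set(ModelSortGroupComponent(group: group, order: 1))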
Hello!
I have a simple app that opens a sheet, and when you press a button on the sheet it opens a Quick Look preview of a picture. That works great, but when I exit the Quick Look preview, it closes the sheet too. This seems like unexpected behavior because it doesn't happen on iOS.
Any help is appreciated, thank you.
Here is a simple repro:
import QuickLook
import SwiftUI

struct ContentView: View {
    @State private var pictureURL: URL?
    @State private var openSheet = false

    var body: some View {
        Button("Open Sheet") {
            openSheet = true
        }
        .sheet(isPresented: $openSheet) {
            Button("Open Picture") {
                pictureURL = URL(fileURLWithPath: "someImagePath")
            }
            // When Quick Look closes, it closes the sheet too.
            .quickLookPreview($pictureURL)
        }
    }
}
And here is a quick video:
Hey! I'm facing an issue with Equipment collision when adding and moving TabletopKit equipment with different pose rotations.
Let me share a very simple TabletopKit setup as an example:
Table
struct Table: Tabletop {
    var shape: TabletopShape = .rectangular(width: 1, height: 1, thickness: 0.01)
    var id: EquipmentIdentifier = .tableID
}
Board
struct Board: Equipment {
    let id: EquipmentIdentifier = .boardID
    var initialState: BaseEquipmentState {
        .init(
            parentID: .tableID,
            seatControl: .restricted([]),
            pose: .init(position: .init(), rotation: .zero),
            boundingBox: .init(center: .zero, size: .init(1.0, 0, 1.0))
        )
    }
}
Equipment
struct Object: EntityEquipment {
    var id: ID
    var size: SIMD2<Float>
    var position: SIMD2<Double>
    var rotation: Float
    var entity: Entity
    var initialState: BaseEquipmentState

    init(id: Int, size: SIMD2<Float>, position: SIMD2<Double>, rotation: Float) {
        self.id = EquipmentIdentifier(id)
        self.size = size
        self.position = position
        self.rotation = rotation
        self.entity = objectEntity
        self.initialState = .init(
            parentID: .boardID,
            seatControl: .any,
            pose: .init(
                position: .init(x: position.x, z: position.y),
                rotation: .degrees(Double(rotation))
            ),
            entity: entity
        )
    }
}
Setup
class GameSetup {
    var setup: TableSetup

    init(root: Entity) {
        setup = TableSetup(tabletop: Table())
        setup.add(equipment: Board())
        setup.add(seat: PlayerSeat())

        let object1 = Object(
            id: 2,
            size: .init(x: 0.1, y: 0.1),
            position: .init(x: 0.1, y: -0.1),
            rotation: 0
        )
        let object2 = Object(
            id: 3,
            size: .init(x: 0.2, y: 0.1),
            position: .init(x: -0.1, y: -0.1),
            rotation: 90
        )
        setup.add(equipment: object1)
        setup.add(equipment: object2)
    }
}
The issue
When I add two equipment entities with different rotation poses, the collisions between them behave oddly. If one is at 90º and the other at 0º, for example, the former will intersect with the latter as if its bounding box were not rotated, as you can see below:
But if both pieces of equipment have the same rotation (e.g. 0º or 90º), then there's no collision issue at all, which seems to indicate their bounding boxes were correctly rotated:
I'd really appreciate some help understanding if this is a bug or if I'm just missing something.
Thanks in advance!
Create an empty visionOS app like this. The app starts in windowed mode; when I enter immersive mode and then exit back to windowed mode, I notice that the window appears dimmer. I started a simple project with the settings shown in the image below, took screenshots of my window before and after entering and quitting the immersive space, and compared them: the color values did become dimmer. The issue is reliably repeatable in a given room. When it occurs, adjusting the display brightness to the maximum value and then back to the initial setting restores the colors to the correct state; force-quitting the app and reopening it does the same.
https://drive.google.com/file/d/1m-a4ghNlSkHhAQuvOCF_IAfcdYeJA14j/view?usp=sharing
We’re using the enterprise API for spatial barcode/QR code scanning in the Vision Pro app, but we often get invalid values for the barcode anchor from the API, leading to jittery barcode positions in the UI. The code we’re using is attached below.
import SwiftUI
import RealityKit
import ARKit
import Combine

struct ImmersiveView: View {
    @State private var arkitSession = ARKitSession()
    @State private var root = Entity()
    @State private var fadeCompleteSubscriptions: Set<AnyCancellable> = []

    var body: some View {
        RealityView { content in
            content.add(root)
        }
        .task {
            // Check if barcode detection is supported; otherwise handle this case.
            guard BarcodeDetectionProvider.isSupported else { return }

            // Specify the symbologies you want to detect.
            let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code128, .qr, .upce, .ean13, .ean8])

            do {
                try await arkitSession.requestAuthorization(for: [.worldSensing])
                try await arkitSession.run([barcodeDetection])
                print("Barcode scanning started")

                for await update in barcodeDetection.anchorUpdates where update.event == .added {
                    let anchor = update.anchor
                    // Play an animation to indicate the system detected a barcode.
                    playAnimation(for: anchor)
                    // Use the anchor's decoded contents and symbology to take action.
                    print(
                        """
                        Payload: \(anchor.payloadString ?? "")
                        Symbology: \(anchor.symbology)
                        """)
                }
            } catch {
                // Handle the error.
                print(error)
            }
        }
    }

    // Define this function in ImmersiveView.
    func playAnimation(for anchor: BarcodeAnchor) {
        guard let scene = root.scene else { return }

        // Create a plane sized to match the barcode.
        let extent = anchor.extent
        let entity = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z), materials: [UnlitMaterial(color: .green)])
        entity.components.set(OpacityComponent(opacity: 0))

        // Position the plane over the barcode.
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        root.addChild(entity)

        // Fade the plane in and out.
        do {
            let duration = 0.5
            let fadeIn = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 0,
                to: 1.0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity)
            )
            let fadeOut = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 1.0,
                to: 0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeAnimation = try AnimationResource.sequence(with: [fadeIn, fadeOut])
            _ = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity, { _ in
                // Remove the plane after the animation completes.
                entity.removeFromParent()
            }).store(in: &fadeCompleteSubscriptions)
            entity.playAnimation(fadeAnimation)
        } catch {
            print("Error")
        }
    }
}
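An aside not from the post: if the decoded payloads are valid and only the pose is noisy, one mitigation sketch is to low-pass filter the anchor translation across updates before applying it (the blend factor is illustrative). Note the loop above only consumes .added events; a filter like this would also need the .updated anchor events.

import simd

// Blend a new anchor transform toward the previous one to damp jitter.
// Translation is smoothed; rotation is taken from the new sample for brevity.
func smoothed(_ current: simd_float4x4,
              previous: simd_float4x4?,
              factor: Float = 0.2) -> simd_float4x4 {
    guard let previous else { return current }
    var result = current
    let prevTranslation = previous.columns.3
    let newTranslation = current.columns.3
    result.columns.3 = prevTranslation + (newTranslation - prevTranslation) * factor
    return result
}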
In visionOS, there are existing modifiers that can completely conceal the hands. However, I am interested in learning how to achieve the effect of only one hand disappearing while the other hand remains visible.
.upperLimbVisibility(.hidden)
I implemented a ShaderGraphMaterial and tried to load it from my usda scene with ShaderGraphMaterial.init(name:from:in:). I want to dynamically set a TextureResource on that material, so I wanted to expose the texture as a uniform input of the ShaderGraphMaterial. But RCP's Shader Graph apparently doesn't support a texture input as a parameter, as the image shows:
And at the code level, ShaderGraphMaterial doesn't expose a way to set TextureResources either; its parameterNames is an empty array if I didn't set any custom input parameters. The texture comes from my backend, so it really can't be saved to a file and loaded again (that would be too weird).
Is there something I am missing?
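For reference, this is the shape of the API being asked about, sketched under the assumption that the graph does end up exposing a promoted texture input. The material path, file name, and the "baseTexture" parameter name are all hypothetical, and whether RCP will promote such an input is exactly the open question above.

import SwiftUI
import RealityKit
import RealityKitContent

// Hypothetical names throughout: "/Root/MyMaterial", "Scene.usda", "baseTexture".
func applyBackendTexture(_ image: CGImage, to model: ModelEntity) async throws {
    var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    // Wrap the in-memory image without ever writing it to a file.
    let texture = try await TextureResource(image: image, options: .init(semantic: .color))
    try material.setParameter(name: "baseTexture", value: .textureResource(texture))
    model.model?.materials = [material]
}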
PLATFORM AND VERSION
visionOS
Development environment: Xcode 16.2, macOS 15.2
Run-time configuration: visionOS 2.3 (on a real device, not the simulator)
Please, someone confirm I'm not crazy and that this issue is actually out of my control. I spent hours trying to fix my app and running profiles because I thought it was an issue with my app's performance. I finally considered the chance that it was an issue with the API itself and made a sample app to isolate the problem, and the issue still existed there.
The issue: when a model entity moves around in a full space that was launched while the system environment immersion was turned up, the entity looks very choppy as it moves. If you take off the headset while still in the space and put it back on, this fixes it, and the entity then moves smoothly as it should. Alternatively, you can leave the space and turn the system environment immersion all the way down before launching the full space again; this also makes the entity move smoothly. If you launch with the mixed immersion style instead of the full style, the issue never arises. The issue only arises if you launch the space with either the full or progressive style while the system immersion level is turned up.
STEPS TO REPRODUCE
https://github.com/nathan-707/ChoppyEntitySample
Open my test project; it's a small, modified visionOS project template that shows the issue clearly.
Otherwise:
Create an immersive space with either the full or progressive immersion style.
Set up an entity in kinematic mode and apply a velocity to it so it passes over your head when the space appears.
If you opened the space while the Apple Vision Pro's system environment was turned up, the entity will look choppy.
If you take the headset off while in the space and put it back on, the issue is fixed and the motion looks smooth.
Alternatively, if you open the space with the system immersion environment all the way down, you will not run into the issue. Again, the issue also does not happen if the space is launched in the mixed style.
Hi,
On visionOS, to manage entity rotation we can rely on RotateGesture3D. With the constrainedToAxis parameter we can even authorize rotation only on the x, y, or z axis, or on combinations of them.
What I want to know is whether it is possible to constrain the rotation to an axis automatically.
Let me explain: the functionality I would like to implement is to constrain the rotation to one axis only once the user has started their gesture. The initial gesture the user makes should tell us which axis they want to rotate on.
This would be equivalent to automatically activating a constraint on one of the axes, as if we had defined the gesture on that axis.
RotateGesture3D(constrainedToAxis: .x)
RotateGesture3D(constrainedToAxis: .y)
RotateGesture3D(constrainedToAxis: .z)
Is it possible to do this?
If so, what would be the best way to do it?
A code example would be greatly appreciated.
Regards
Tof
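Not an authoritative answer, but one conceivable approach, sketched below: run the gesture unconstrained, infer the dominant axis from the first significant rotation, then rebuild the rotation on that axis only. The 5-degree threshold and the orientation handling are simplifications.

import SwiftUI
import RealityKit
import Spatial

struct AxisLockedRotationView: View {
    // The axis chosen from the user's initial gesture; nil until it is inferred.
    @State private var lockedAxis: RotationAxis3D?

    var body: some View {
        RealityView { content in
            // ... add a rotatable entity with InputTarget and Collision here ...
        }
        .gesture(
            RotateGesture3D()
                .targetedToAnyEntity()
                .onChanged { value in
                    let rotation = value.rotation
                    // Infer the dominant axis once the rotation is large enough
                    // to signal intent (the 5-degree threshold is arbitrary).
                    if lockedAxis == nil, abs(rotation.angle.degrees) > 5 {
                        let a = rotation.axis
                        if abs(a.x) >= abs(a.y), abs(a.x) >= abs(a.z) {
                            lockedAxis = .x
                        } else if abs(a.y) >= abs(a.z) {
                            lockedAxis = .y
                        } else {
                            lockedAxis = .z
                        }
                    }
                    guard let lockedAxis else { return }
                    // Keep the full angle but discard the other axis components.
                    // (A real implementation would compose this with the
                    // orientation captured at gesture start.)
                    let constrained = Rotation3D(angle: rotation.angle, axis: lockedAxis)
                    let q = constrained.quaternion // simd_quatd
                    value.entity.orientation = simd_quatf(
                        ix: Float(q.imag.x), iy: Float(q.imag.y),
                        iz: Float(q.imag.z), r: Float(q.real)
                    )
                }
                .onEnded { _ in lockedAxis = nil }
        )
    }
}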
Apologies that this is probably a simple problem.
I've started from a sample code provided by Apple and changed it quite significantly. However, I'm not able to Archive the app.
The original visionOS sample code has the same issue, so hopefully someone will be able to spot the problem:
https://developer.apple.com/documentation/visionos/creating-stereoscopic-image-in-visionos
The problems shown in the log are:
Undefined symbol: _main
Linker command failed with exit code 1 (use -v to see invocation)
The first error seems to say that there's no "main" but there is indeed a @main in the EntryPoint.swift file.
Any ideas? I have archived other apps (built from scratch) successfully, but clearly there's something different about this sample code.
Many thanks!
Hi,
We are trying to port our Unity app from other XR devices to Vision Pro. Thus it's way easier for us to use the Metal rendering layer, fully immersive. And to stay true to the platform, we want to keep the gaze/pinch interaction system.
But we just noticed that, unlike PolySpatial XR apps, visionOS XR in Metal does not provide gaze info unless the user is actively pinching, which forbids any attempt to give visual feedback on what they are looking at (buttons, etc.).
Is this planned on Apple's roadmap?
Thanks
In my visionOS app, which starts in windowed mode, when I enter immersive mode and then exit back to windowed mode, I notice that the window appears dimmer. I started a simple project with the settings shown in the image below, took screenshots of my window before and after entering and quitting the immersive space, and compared them: the color values did become dimmer. How can I fix this issue? Or is there an operation I may have missed that leads to this situation?
In visionOS, once an immersive space is opened, the background color is solid black, is it possible to make this background transparent?
FYI, immersive spaces on visionOS use Compositor Services for drawing 3D content.
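For what it's worth, a passthrough (non-black) background is what the mixed immersion style provides when rendering with RealityKit; a minimal sketch of declaring it follows (the space identifier is a placeholder). Whether this helps depends on whether the app can use RealityKit instead of a fully immersive Compositor Services layer.

import SwiftUI

@main
struct PassthroughApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "MySpace") {
            // RealityView content goes here.
        }
        // .mixed composites content over camera passthrough
        // rather than a fully immersive black background.
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}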
I want to create a screenshot (static image) of the current view on the Apple Vision Pro using written code in visionOS. Unfortunately, I currently can’t find a way to achieve this. The only option I’ve found so far is through Reality Composer Pro. However, since I want to accomplish this directly through code, this approach is not an option for me.
Looking for help getting "On Tap" to work inside RCP for my AVP project. I can get it to work when using "On Added to Scene", but if I switch to "On Tap", the audio will not play when attaching the audio to an entity in my scene. I'm using the same entity for the tap gesture that the audio is using for the emitter. Here is my workflow for the "On Added to Scene" setup that works correctly, to help troubleshoot my non-working "On Tap":
Behaviors: "On Added to Scene", action: Timeline
Input Target: check mark enabled, allowed all
Collision: set to default
Audio Library: source MP3 file
Channel Audio: resource MP3 file above
Timeline: Play Audio with the MP3 file added
This setup in RCP allows my AVP project to launch correctly with audio on "On Added to Scene". But when I switch the behavior to "On Tap", the audio no longer plays, and I cannot figure out why. I've tried several different options and nothing works. Please help!
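One hedged suggestion, not from the post: to my understanding, an RCP "On Tap" trigger does not fire on its own; the app has to forward taps from SwiftUI to the entity, and RealityKit's applyTapForBehaviors() exists for this. A sketch:

import SwiftUI
import RealityKit
import RealityKitContent

struct TapBehaviorView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Forwards the tap to the entity's behaviors, which should
                    // run the "On Tap" timeline authored in Reality Composer Pro.
                    value.entity.applyTapForBehaviors()
                }
        )
    }
}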
Hi everyone,
I've been exploring an idea that involves using virtual light sources in visionOS/RealityKit to interact with real-world objects. Specifically, I'd like to simulate a scenario where a virtual spotlight or other light source casts light or shadows onto the real-world environment, creating the effect of virtual lighting interacting with physical surroundings. Is this currently feasible within visionOS/RealityKit?
Thank you!
Hi,
In a visionOS application I have an entity that is displayed. I can give it a certain velocity by making it collide with another entity.
I would also like to be able to drag the entity and give it a certain velocity via the drag.
I searched in the project examples and found nothing. I also searched on the Internet without finding anything clear on the subject.
Looking at the drag gesture information, I found gestureValue.velocity, but I have no idea how to use this property. I'm not even sure it is useful to me, because it's a CGSize and so, a priori, not intended for a 3D gesture.
If you have any information that will help me implement what I am trying to do I would be grateful. 🙏🏻
DragGesture()
    .targetedToAnyEntity()
    .onChanged { pValue in
        // Some code
    }
    .onEnded { pValue in
        // pValue.gestureValue.velocity
    }
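A sketch of one possible approach, not from the post: since gestureValue.velocity is a 2D CGSize, compute a 3D velocity yourself by differencing recent drag positions, then hand it to the physics system on release. This assumes the entity has a dynamic PhysicsBodyComponent plus InputTarget and Collision components; all names below are illustrative.

import SwiftUI
import RealityKit

struct ThrowableEntityView: View {
    @State private var lastPosition: SIMD3<Float>?
    @State private var lastTime: Date?
    @State private var dragVelocity: SIMD3<Float> = .zero

    var body: some View {
        RealityView { content in
            // ... add a draggable entity with InputTarget, Collision, and
            // a dynamic PhysicsBodyComponent here ...
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard let parent = value.entity.parent else { return }
                    let position = value.convert(value.location3D, from: .local, to: parent)
                    let now = Date()
                    // Estimate velocity from the two most recent samples.
                    if let lastPosition, let lastTime {
                        let dt = Float(now.timeIntervalSince(lastTime))
                        if dt > 0 { dragVelocity = (position - lastPosition) / dt }
                    }
                    value.entity.position = position
                    lastPosition = position
                    lastTime = now
                }
                .onEnded { value in
                    // Release the entity with the velocity it had while dragged.
                    value.entity.components.set(
                        PhysicsMotionComponent(linearVelocity: dragVelocity)
                    )
                    lastPosition = nil
                    lastTime = nil
                }
        )
    }
}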
Hi,
I am trying to bring an existing Unity app to Vision Pro, and am trying to make all of the libraries compatible (the project loads native libs at runtime).
For some of them, there is an arm64 iOS .framework file that seems to build and be found easily on the device, but for one of them I only have a .dylib.
When building in Xcode, it tells me it can't find it. So I added it to the library search path in Build Settings, and it built. But on the device, it still can't seem to find the .dylib:
Library not loaded: ./libpdfium.dylib
Referenced from: <59B1ACCC-FFFD-3448-B03D-69AE95604C77> /private/var/containers/Bundle/Application/0606D884-CB09-44CA-8E4F-4A309D2E7053/[...].app/Frameworks/UnityFramework.framework/UnityFramework
Reason: tried: '/usr/lib/system/introspection/libpdfium.dylib' (no such file, not in dyld cache), './libpdfium.dylib' (no such file), '/usr/lib/system/introspection/libpdfium.dylib' (no such file, not in dyld cache), '//libpdfium.dylib' (no such file)
I am not used to the Apple environment; is there a way to correctly reference this .dylib? (Not talking about compatibility here, just the first "lib found" step.)
Thanks.