I am working on adding synchronized physical properties to EntityEquipment in TabletopKit, allowing seamless coordination during GroupActivities sessions between players.
Setting EntityEquipment's state to DieState is not an option, because DieState doesn't support custom collision shapes.
I have also tried adding a PhysicsBodyComponent and CollisionComponent to EntityEquipment's Entity. However, the main issue is that the position of the EntityEquipment itself does not stay synchronized with the Entity's physics body, resulting in two separate instances of one object.
struct PlayerPawn: EntityEquipment {
    let id: ID
    let entity: Entity
    var initialState: BaseEquipmentState

    init(id: ID, entity: Entity) {
        self.id = id

        let massProperties = PhysicsMassProperties(mass: 1.0)
        let material = PhysicsMaterialResource.generate(friction: 0.5, restitution: 0.5)
        let shape = ShapeResource.generateBox(size: [0.4, 0.2, 0.2])
        let physicsBody = PhysicsBodyComponent(massProperties: massProperties, material: material, mode: .dynamic)
        let collisionComponent = CollisionComponent(shapes: [shape])
        entity.components.set(physicsBody)
        entity.components.set(collisionComponent)

        self.entity = entity
        initialState = .init(parentID: .tableID, pose: .init(position: .init(), rotation: .zero), entity: self.entity)
    }
}
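As a workaround I have been sketching a per-frame mirror of the simulated transform back toward whatever drives the equipment's pose (a minimal sketch; mirrorPhysicsTransform is my own helper name, and the onPoseChange closure stands in for the TabletopKit-side state update I don't know how to express):

import RealityKit

// Sketch: observe the scene every frame and hand the physics-simulated transform
// to a caller-provided closure. `onPoseChange` is a placeholder for the code that
// would push the new pose back into the TabletopKit equipment state.
func mirrorPhysicsTransform(of entity: Entity,
                            in content: RealityViewContent,
                            onPoseChange: @escaping (Transform) -> Void) -> EventSubscription {
    content.subscribe(to: SceneEvents.Update.self) { _ in
        // Transform produced by the physics simulation on this frame.
        onPoseChange(entity.transform)
    }
}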
I’d appreciate any guidance on the recommended approach to adding synchronized physical properties to EntityEquipment.
I'm having trouble resetting the position of a child entity on app reload, even though I appear to be obtaining and persisting the correct translation values after a drag gesture.
The problem occurs when I drag a child element to a new location, persist those new values, and then reload the app to force repositioning from the persisted translation values.
I notice that the parent relationship changes during interaction (tap or drag), which can be seen in the debug statements. I'm wondering whether this is related to the problem, or whether the parent change is normal during re-rendering and unrelated.
My thought process is: since we care about relative translation values when persisting, if the parent relationship changes just before persistence, are we persisting and restoring the wrong values?
Project Link: Private
STEPS TO REPRODUCE
Run the app.
Drag the pre-loaded stage down the Y axis so that the floor of the stage is more visible to your eye (in order to better visualize the problem).
Tap the button in the timeline to create a new project.
Drag the only visible element from the left panel onto the timeline (element is labeled f_works_entity_1).
There should now be a green 3D model added to the stage.
Drag this green element to a new location (be careful to hover over the green element so that you don't inadvertently drag the stage).
Re-run the app to see that the green element is offset to a new location, not the last dragged location.
To reset and try again, delete the project canvas next to the project name (trash button) then restart the app.
Areas of concern:
RealityKitView is the only file you may need.
Line 119 is where we create new child entities.
Lines 185-219 are where we persist and apply persisted values.
You can also search FIXME in the file to see areas of concern.
Tip:
I have a tap gesture on each entity that produces a debug statement with info about the entity and its parent including IDs.
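One thing I'm testing while debugging (a minimal sketch; stageRoot stands in for my stage entity): persist and restore the translation relative to a single stable ancestor, so a transient parent change cannot skew the values.

import RealityKit

// Sketch: always read and write the child's translation relative to one stable
// reference entity instead of relying on entity.position, which is expressed in
// the coordinate space of whatever the current parent happens to be.
func persistedTranslation(of child: Entity, relativeTo stageRoot: Entity) -> SIMD3<Float> {
    child.position(relativeTo: stageRoot)
}

func restoreTranslation(_ translation: SIMD3<Float>, on child: Entity, relativeTo stageRoot: Entity) {
    child.setPosition(translation, relativeTo: stageRoot)
}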
I'm building a streaming app on visionOS that can play sound from audio buffers each frame. The audio format has a sample rate of 48,000 Hz, and each buffer has 480 samples.
I noticed that when calling
audioPlayerNode.scheduleBuffer(audioBuffer)
the memory keeps increasing at a rate of about 0.1 MB per second. At around 4 minutes, the node seems to be full of buffers and does a hard reset, at which point the audio stops temporarily and the memory usage changes (see the attached screenshot).
However, if I call
audioPlayerNode.scheduleBuffer(audioBuffer, at: nil, options: .interrupts)
the memory growth is gone, but the audio is broken (it sounds as if it has been truncated).
Below is the full code snippet. Does anyone know how to fix it?
@Observable
final class MyAudioPlayer {
    private var audioEngine: AVAudioEngine = .init()
    private var audioPlayerNode: AVAudioPlayerNode = .init()
    private var audioFormat: AVAudioFormat?

    init() {
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try? AVAudioSession.sharedInstance().setActive(true)
        audioEngine.prepare()
        try? audioEngine.start()
        audioPlayerNode.play()
    }

    // more code...

    /// callback every frame
    private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48000, channels: 2, interleaved: false),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)

        if let data = audioBuffer.floatChannelData {
            for channel in 0 ..< Int(format.channelCount) {
                for frame in 0 ..< Int(audioBuffer.frameLength) {
                    data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                }
            }
        }

        // memory leak here
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }
}
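One mitigation I'm experimenting with (a sketch of extra members I'd add inside MyAudioPlayer, not a confirmed fix; the cap of 8 buffers is an arbitrary assumption, and the counter would need real synchronization): cap the number of buffers in flight by counting playback completions, so the player node's internal queue cannot grow without bound.

// Sketch: additional members for MyAudioPlayer that throttle scheduling.
private let maxBuffersInFlight = 8   // assumption: roughly 80 ms of audio at 480 frames per buffer
private var buffersInFlight = 0      // note: needs a lock or serial queue in real code

private func scheduleWithBackpressure(_ audioBuffer: AVAudioPCMBuffer) {
    // Drop (or enqueue elsewhere) when the node already holds enough audio.
    guard buffersInFlight < maxBuffersInFlight else { return }
    buffersInFlight += 1
    audioPlayerNode.scheduleBuffer(audioBuffer, completionCallbackType: .dataPlayedBack) { [weak self] _ in
        self?.buffersInFlight -= 1
    }
}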
I'm building a streaming app on visionOS that can play sound from audio buffers each frame. The source audio buffer has 2 channels and is in a Float32 interleaved format.
However, when setting up the AVAudioFormat with interleaved set to true, the app crashes with a memory access issue:
AURemoteIO::IOThread (35): EXC_BAD_ACCESS (code=1, address=0x3)
But if I set up the AVAudioFormat with interleaved set to false and manually fill the AVAudioPCMBuffer, it plays audio as expected.
Could you please help me fix it? Below is the code snippet.
@Observable
final class MyAudioPlayer {
    private var audioEngine: AVAudioEngine = .init()
    private var audioPlayerNode: AVAudioPlayerNode = .init()
    private var audioFormat: AVAudioFormat?

    init() {
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try? AVAudioSession.sharedInstance().setActive(true)
        audioEngine.prepare()
        try? audioEngine.start()
        audioPlayerNode.play()
    }

    // more code...

    /// This crashes
    private func audioFrameCallback_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 480000, channels: 2, interleaved: true),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)

        if let data = audioBuffer.floatChannelData?[0] {
            data.update(from: buf, count: samples * Int(format.channelCount))
        }

        audioPlayerNode.scheduleBuffer(audioBuffer)
    }

    /// This works
    private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 480000, channels: 2, interleaved: false),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)

        if let data = audioBuffer.floatChannelData {
            for channel in 0 ..< Int(format.channelCount) {
                for frame in 0 ..< Int(audioBuffer.frameLength) {
                    data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                }
            }
        }

        audioPlayerNode.scheduleBuffer(audioBuffer)
    }
}
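What I plan to try next (a sketch only; I have not verified that this removes the crash): connect the player node with the interleaved format explicitly instead of format: nil, so the engine does not assume the mixer's deinterleaved layout when I schedule interleaved buffers.

// Sketch: reconnect the player node using the same interleaved format that the
// scheduled buffers use (sample rate and channel count match the callback above).
private func connectWithInterleavedFormat() {
    guard let interleavedFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                                sampleRate: 480000,
                                                channels: 2,
                                                interleaved: true) else { return }
    audioEngine.disconnectNodeOutput(audioPlayerNode)
    audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: interleavedFormat)
}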
If I apply an image texture with alpha to a model created in Blender and render it in Reality Composer Pro or on visionOS, the front-to-back rendering of the alpha areas produces unintended results. Details are below.
I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro.
When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent
・The transparent areas become transparent together with the image behind them
・The draw order of the images becomes incorrect
Best regards.
I use the piece of code below in Unity to measure how far my model penetrates into another model. I set collision markers at the tip and the end of the model and perform raycasting, but Unity currently does not support object tracking on visionOS, so I plan to use SwiftUI for native development. In Reality Composer Pro, I haven't seen a collision-editing feature like Unity's; I can only set the size of the collision body, but I cannot manually adjust or visualize its shape and size.
I want to achieve similar functionality with SwiftUI: to calculate and display the distance by which my model A (something like a needle or a ruler) penetrates into another model or into a physical object's interior. Is similar functionality available, or are there other coding approaches to achieve this?
void CalculateLengthInsideOrgan()
{
    // Direction from the base of the probe to the tip
    Vector3 direction = probeTip.position - probeBase.position;
    float probeLength = direction.magnitude;

    // Raycasting
    RaycastHit[] hits = Physics.RaycastAll(probeBase.position, direction, probeLength, organLayerMask);
    if (hits.Length > 0)
    {
        // Calculate the length entering the organ
        float distanceToFirstHit = hits[0].distance;
        lengthInsideOrgan = probeLength - distanceToFirstHit;
    }
    else
    {
        lengthInsideOrgan = 0f;
    }
}
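For reference, this is the RealityKit counterpart I'm sketching (assumptions: probeBase and probeTip are entities at the two ends of the probe, and the target model carries a CollisionComponent; I have not tested this against a real organ model):

import RealityKit
import simd

// Sketch: cast a ray from the probe's base to its tip and report how much of the
// probe lies beyond the first surface hit, mirroring the Unity code above.
func lengthInsideOrgan(probeBase: Entity, probeTip: Entity) -> Float {
    guard let scene = probeBase.scene else { return 0 }

    // World-space positions of the probe's ends.
    let start = probeBase.position(relativeTo: nil)
    let end = probeTip.position(relativeTo: nil)
    let probeLength = simd_length(end - start)

    // Nearest hit between base and tip, like hits[0] in the Unity version.
    let hits = scene.raycast(from: start, to: end, query: .nearest, relativeTo: nil)
    guard let firstHit = hits.first else { return 0 }
    return probeLength - firstHit.distance
}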
We use real-time object tracking, and with enterprise permissions we can raise the update rate to 30 Hz, but there are still noticeable delays. On one hand, we want to know why this delay occurs; is it due to performance considerations? We found that the delay in hand tracking, by comparison, is very low.
On the other hand, we considered that it may be due to the complexity of 3D objects, so we looked into image tracking. However, we found even more serious delays in image tracking and QR code tracking, and we hope these can be optimized. Currently, the frame rate for recognizing images for tracking seems to be about one frame per second, and we hope it can be increased, because object recognition and tracking are very smooth on other Apple platforms such as iOS.
Additionally, is there an appropriate interface for depth sensing that we could use to obtain depth data?
We want to know what accuracy Vision Pro can achieve in measuring the physical world, as well as the accuracy of rendering to the displays. We wonder whether this is related to hardware such as the LiDAR sensor. Also, what accuracy can we achieve when tracking how far an object has moved?
In the visionOS beta, when using ARKit for image detection, the initially detected AnchorUpdate status is .add, and subsequent detections of the same image are marked as .update. However, after toggling the immersive space, the same image is detected with the status .add again. After updating to visionOS 2.1, the first detection status remains .add, and subsequent detections of the same image remain .update, even after toggling the immersive space. Could this be due to a change in the processing flow?
Hello,
Is there a way to have the attachments of a RealityView always face the user?
For example, in a visionOS app, in an immersive space, we have an attachment. When the user either walks around the attachment, or rotates the parent entity, we would like the attachment to automatically rotate to face the user.
How do we do this?
I anticipated this would be a trivial feature to implement, since I thought I remembered seeing it as a built-in, opt-in option for attachments, but I cannot find that feature.
All and any recommendations are appreciated, thanks.
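The direction I'm currently exploring (a sketch, assuming the BillboardComponent I believe RealityKit gained in visionOS 2; "infoPanel" is a placeholder attachment ID, and I haven't confirmed this is the intended mechanism for attachments):

import SwiftUI
import RealityKit

struct FacingAttachmentView: View {
    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "infoPanel") {
                // BillboardComponent keeps the entity oriented toward the viewer.
                panel.components.set(BillboardComponent())
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "infoPanel") {
                Text("Hello")
            }
        }
    }
}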
Since visionOS 2.0 we can access Apple Vision Pro's main camera, but only with an Enterprise account, as it is an enterprise-only API. I have a normal developer account, and I want to use the main camera to add a video-call feature to my app. Is it possible to do this with a developer account only? Currently, using that account, I am not able to create the entitlement certificate because the option does not appear.
Hello everyone,
I’m currently developing my first visionOS app in Xcode, starting with the default "Hello World" code provided when creating a new visionOS Mixed Reality app. However, I’m facing some performance and preview issues that I can’t seem to resolve.
When I load the preview, it takes an extremely long time, and sometimes it doesn’t load at all. Even when I try to run the app in the visionOS Simulator, the simulator shows an endless black screen and never displays the intended view. I’ve made no changes to the code, so it’s purely the base setup.
Here are my system details:
Xcode version: 16.1, visionOS 2.0
macOS version: 15.0.1
Hardware: MacBook Air 2020 M1
I’ve tried restarting Xcode and my machine, but the issue persists. Has anyone else faced similar problems or have any suggestions for fixing this? Or is my hardware simply too weak? Any help would be greatly appreciated!
Thank you in advance!
Hi!
I read this page, about mirroring Vision Pro to another device.
Mirror your Apple Vision Pro to another device
I want to know whether it's possible to mirror a Vision Pro to other Vision Pros this way (showing a 2D mirrored screen, like video playback or spatial video playback), or whether there are other ways.
By applying for the Enterprise APIs, we can obtain the video frames captured by Vision Pro, and we then process those frames to remove a certain object. But we have not found a way to insert the processed video frames back into the data source captured by the system camera.
So we would like to ask whether there is any API that can insert processed video frames into the original data and present them to the user.
The effect would be similar to turning the Digital Crown on the right side of Vision Pro, which lets the physical world and digital space blend smoothly as it rotates. Is there a related API that can solve this problem?
STEPS TO REPRODUCE
Obtain the video frames.
Process the obtained video frames.
Insert the processed video frames into the visionOS system camera feed.
System: visionOS 2.0
APIs used: Enterprise APIs (main camera access permission)
I'm trying to buy ASA (Apple Search Ads) for my Vision Pro-only app, but when I enter my app it says it cannot find my app. When I click on "Can't find your app?", it brings me to a page that says "To set up an Apple Search Ads account, you must have an app for iPhone or iPad currently on the App Store". So I'm wondering: is ASA available for visionOS?
How do I find the main (left) camera transform from a world anchor? (Enterprise API)
From CameraFrameProvider I can get a frame sample that has an "extrinsics" parameter. How is it defined? Relative to what point or anchor?
Hi!
I'm trying to play a video on a monitor 3D model, as a material.
I want to know whether this is possible. I searched for information about it but couldn't find enough. Thank you in advance.
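The direction I'm exploring is RealityKit's VideoMaterial (a minimal sketch; monitorEntity, the "Screen" part name, and "video.mp4" are placeholders for my own assets):

import RealityKit
import AVFoundation

// Sketch: swap the screen part's material for a VideoMaterial backed by AVPlayer.
func playVideo(on monitorEntity: Entity) {
    guard let url = Bundle.main.url(forResource: "video", withExtension: "mp4"),
          let screen = monitorEntity.findEntity(named: "Screen") as? ModelEntity else { return }
    let player = AVPlayer(url: url)
    screen.model?.materials = [VideoMaterial(avPlayer: player)]
    player.play()
}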
Hi!
I'm using the timeline in Reality Composer Pro. I tried to use the 'Enable Entities' action to enable, partway through Timeline playback, entities that were disabled at the beginning of the scene, but it didn't work as I expected (the entities kept appearing before the Timeline started).
How do I solve this problem? Are there good solutions for it?
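The workaround I'm testing in the meantime (a sketch; "HiddenProp" is a placeholder for my entity's name): force the entity off in code right after loading the scene, so it can only appear once the timeline's 'Enable Entities' action runs.

import RealityKit

// Sketch: disable the entity immediately after the Reality Composer Pro scene loads.
func hideUntilTimeline(in sceneRoot: Entity) {
    sceneRoot.findEntity(named: "HiddenProp")?.isEnabled = false
}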
Hi!
I want to know whether it's possible to mirror a Vision Pro to other Vision Pros.
If it is possible, how do I go about it? Can I get some hints?
On iOS, a screenshot can be taken with view.layer.render(in: UIGraphicsGetCurrentContext()!). What should replace view.layer on visionOS in order to call .render(in: UIGraphicsGetCurrentContext()!)?
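What I'm considering instead of layer rendering (a sketch; I'm not sure it is the recommended replacement on visionOS): SwiftUI's ImageRenderer, which rasterizes a view hierarchy without touching CALayer at all.

import SwiftUI
import UIKit

// Sketch: rasterize any SwiftUI view into a UIImage.
@MainActor
func snapshot<V: View>(of view: V, scale: CGFloat = 2.0) -> UIImage? {
    let renderer = ImageRenderer(content: view)
    renderer.scale = scale   // assumption: a fixed scale, since there is no screen scale to query here
    return renderer.uiImage
}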
I tried using the GameController APIs for keyboard and mouse input, but they didn't seem to work. Is that the recommended API for handling keyboard/mouse? The notifications for mouse and keyboard connect/disconnect don't seem to be defined for visionOS.
visionOS 2.0 touts keyboard and mouse support, and the simulator can even forward keyboard/mouse input to the app, but there doesn't seem to be any sample code showing how to programmatically receive either of these. The game controller works fine (on device, not in the Simulator).
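For the keyboard side, the fallback I'm testing (a sketch; I still don't know whether this is the recommended path) is SwiftUI's onKeyPress, which reports hardware key presses to the focused view.

import SwiftUI

// Sketch: a focusable view that records the characters of the last key press.
struct KeyCatcherView: View {
    @State private var lastKey = ""

    var body: some View {
        Text("Last key: \(lastKey)")
            .focusable()   // the view must have focus to receive key events
            .onKeyPress(phases: .down) { press in
                lastKey = press.characters
                return .handled
            }
    }
}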