Rendering the scene onto a RenderTarget with twice the resolution of the Drawable, and then downsampling to the Drawable, causes the image to appear distorted.
The code is based on the Xcode visionOS template, with modifications.
Foveation is enabled by default:
struct ContentStageConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities, configuration: inout LayerRenderer.Configuration) {
        configuration.depthFormat = .depth32Float
        configuration.colorFormat = .bgra8Unorm_srgb

        let foveationEnabled = capabilities.supportsFoveation
        configuration.isFoveationEnabled = foveationEnabled

        let options: LayerRenderer.Capabilities.SupportedLayoutsOptions = foveationEnabled ? [.foveationEnabled] : []
        let supportedLayouts = capabilities.supportedLayouts(options: options)
        configuration.layout = supportedLayouts.contains(.layered) ? .layered : .dedicated
    }
}
To avoid errors, rasterizationRateMap is not set.
let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = self.renderTarget.currentFrameColor
renderPassDescriptor.renderTargetWidth = self.renderTarget.currentFrameColor.width
renderPassDescriptor.renderTargetHeight = self.renderTarget.currentFrameColor.height
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)
renderPassDescriptor.depthAttachment.texture = self.renderTarget.currentFrameDepth
renderPassDescriptor.depthAttachment.loadAction = .clear
renderPassDescriptor.depthAttachment.storeAction = .store
renderPassDescriptor.depthAttachment.clearDepth = 0.0
// renderPassDescriptor.rasterizationRateMap = drawable.rasterizationRateMaps.first
if layerRenderer.configuration.layout == .layered {
    renderPassDescriptor.renderTargetArrayLength = drawable.views.count
}
The rendering process is: the scene is first rendered into the RenderTarget at twice the Drawable's resolution, and the result is then downsampled onto the Drawable.
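For comparison, here is a minimal sketch (not from the project above; an assumption based on the Compositor Services drawable API) of rendering directly into the drawable's own textures, where the rasterization rate map can be attached because it presumably matches the drawable's resolution:

// Sketch: render pass targeting the drawable's textures directly.
// Identifiers follow the Compositor Services API (drawable.colorTextures,
// drawable.depthTextures, drawable.rasterizationRateMaps).
let directDescriptor = MTLRenderPassDescriptor()
directDescriptor.colorAttachments[0].texture = drawable.colorTextures[0]
directDescriptor.colorAttachments[0].loadAction = .clear
directDescriptor.colorAttachments[0].storeAction = .store
directDescriptor.depthAttachment.texture = drawable.depthTextures[0]
directDescriptor.depthAttachment.loadAction = .clear
directDescriptor.depthAttachment.storeAction = .store
directDescriptor.depthAttachment.clearDepth = 0.0
if layerRenderer.configuration.isFoveationEnabled {
    // The rate map is sized for the drawable's textures, so it can be set here.
    directDescriptor.rasterizationRateMap = drawable.rasterizationRateMaps.first
}
if layerRenderer.configuration.layout == .layered {
    directDescriptor.renderTargetArrayLength = drawable.views.count
}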
We used real-time object tracking, and with enterprise permissions we can improve the smoothness to 30 Hz, but there are still noticeable delays. On one hand, we want to know why this delay occurs; is it due to performance considerations? We ask because the delay in hand tracking is actually very low.
On the other hand, we thought the delay might be due to the complexity of the 3D objects, so I considered using image tracking instead. However, we found that image tracking and QR code tracking have even more serious delays. We hope to optimize this: currently the frame rate for recognizing images for tracking appears to be about one frame per second, and we would like to increase it, because object recognition and tracking can be very smooth on other Apple platforms such as iOS.
Additionally, is there an appropriate interface for obtaining depth data?
We want to know what accuracy Vision Pro can achieve in measuring the physical world, as well as the accuracy of rendering on the screen, and whether this is related to hardware such as the LiDAR scanner. Also, what accuracy can we achieve in tracking the movement distance of objects?
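For reference, a minimal sketch of the ARKit image-tracking setup in question (the resource group name "AR Resources" is an assumption); the roughly one-update-per-second behaviour shows up in how often anchorUpdates are delivered:

import ARKit

// Sketch of the image-tracking setup; error handling omitted.
let session = ARKitSession()
let referenceImages = ReferenceImage.loadReferenceImages(inGroupNamed: "AR Resources")
let imageTracking = ImageTrackingProvider(referenceImages: referenceImages)

func runImageTracking() async throws {
    guard ImageTrackingProvider.isSupported else { return }
    try await session.run([imageTracking])
    for await update in imageTracking.anchorUpdates where update.anchor.isTracked {
        // originFromAnchorTransform is the image's pose in world space;
        // updates arrive only as often as the system re-localizes the image.
        let pose = update.anchor.originFromAnchorTransform
        _ = pose // move the corresponding entity here
    }
}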
Despite being enrolled, I am unable to locate any option to download the 2.2 beta on my Vision Pro. All I see in the system update screen is that I'm up to date with 2.1.
How do I locate the beta download option?
Thanks.
I'm having trouble resetting the position of a child entity during app reload, even though it appears that I am correctly obtaining and persisting the translation values after a drag gesture.
The problem occurs when I drag a child element to a new location (persisting the new values) and then reload the app to force re-positioning from the persisted translation values.
I notice that the parent relationship changes during interaction (tap or drag), which can be seen in the debug statements. I'm wondering if this is related to the problem, or if the parent change is normal during re-rendering and unrelated to my problem.
My thought process: since we persist relative translation values, if the parent relationship changes just before persistence, are we persisting and later applying the wrong values?
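For illustration, a minimal sketch of one way to make the persisted values independent of reparenting (stageRoot is a placeholder for a stable ancestor entity): persist and restore the translation relative to that fixed reference rather than the current parent.

import RealityKit

// Sketch: convert through the hierarchy so the saved value does not depend
// on which intermediate parent the entity happens to have at save time.
func savedTranslation(of entity: Entity, relativeTo stageRoot: Entity) -> SIMD3<Float> {
    entity.position(relativeTo: stageRoot)
}

func restoreTranslation(of entity: Entity, to saved: SIMD3<Float>, relativeTo stageRoot: Entity) {
    entity.setPosition(saved, relativeTo: stageRoot)
}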
Project Link: Private
STEPS TO REPRODUCE
Run the app.
Drag the pre-loaded stage down the Y axis so that the floor of the stage is more visible to your eye (in order to better visualize the problem).
Tap the button in the timeline to create a new project.
Drag the only visible element from the left panel onto the timeline (element is labeled f_works_entity_1).
There should now be a green 3d model added to the stage.
Drag this green element to a new location (be careful to hover over the green element so that you don't inadvertently drag the stage).
Re-run the app to see that the green element is offset to a new location, not the last dragged location.
To reset and try again, delete the project canvas next to the project name (trash button) then restart the app.
Areas of concern:
RealityKitView is the only file you may need.
Line 119 is where we create new child entities
Lines 185-219 are where we persist and apply persisted values.
You can also search FIXME in the file to see areas of concern.
Tip:
I have a tap gesture on each entity that produces a debug statement with info about the entity and its parent including IDs.
When using a trackpad (or a screen-shared Mac) with the Vision Pro, moving your attention to a new window or app immediately refocuses the mouse cursor, which in many circumstances is really useful. But when there is a viewer-only window, that window jumping gets in the way. Imagine a 3D object editor of some sort, with a live viewer in a second window, maybe a browser. Manipulating the 3D object with the mouse in the editor gets continually interrupted when looking at the live viewer, because the cursor jumps to the viewer window.
Is there any way to reject that focus?
We would like to create an Immersive video and store the video file locally in Vision Pro for viewing.
By Immersive video, I mean the video that is played at the end of the Vision Pro experience at the Apple Store (LeBron's dunk, Curry's 3-point shot, tightrope walk, etc.). It is unclear if a way is currently provided to view Immersive video locally.
I can find some information about Spatial video on the Dev site, but I can't find any information about Immersive video. My understanding is:
Spatial video:
A video window appears in space and plays video with depth. Up to 4K side-by-side video can be converted to MV-HEVC format using Xcode and played back in the Photos app.
Immersive video:
180VR video, but I’m not sure how it was created. Similar to Spatial video, I converted a side-by-side 180VR video to MV-HEVC format using Xcode, but it could not be played back in the Photos app as expected.
Vision Pro's Photos app features an Immersive button during video playback, but this appears to be for zooming in on Spatial video to the full field of view, which seems different from Immersive video.
The demo video provided by Apple is streamed from Apple TV, and there are no local files available.
We are currently considering creating an app that displays different videos to each eye, but we prefer not to go this route due to licensing and distribution issues.
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted and attempting to restart the session and audio engine has no effect. I need to dismiss the app, then reopen it, which reopens the main window, in order for audio to start playing again.
This is in all visionOS 2 betas. Note that I have background audio enabled for my app.
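For context, the restart being attempted is roughly the standard interruption-recovery path sketched below (audioEngine stands for the app's AVAudioEngine instance); as described above, on visionOS 2 this has no effect once the window is closed:

import AVFAudio

// Sketch of the usual recovery path: reactivate the session and restart
// the engine when the interruption ends. (audioEngine is a placeholder.)
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let typeValue = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue),
          type == .ended else { return }
    try? AVAudioSession.sharedInstance().setActive(true)
    try? audioEngine.start()
}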
I am developing object tracking functionality for Apple Vision Pro, but each model needs to go into Create ML for training, and the training time is very long. Are there other ways to shorten the training time while obtaining reference files in the same format?
Additionally, can the delay in object tracking be further optimized? Although the refresh rate has been optimized, there is still a noticeable delay.
Hello,
Is there a way to have the attachments of a RealityView always face the user?
For example, in a visionOS app, in an immersive space, we have an attachment. When the user either walks around the attachment, or rotates the parent entity, we would like the attachment to automatically rotate to face the user.
How do we do this?
I expected this to be a trivial feature to implement, since I thought I remembered seeing it as a built-in/opt-in option for attachments, but I cannot find that option.
All and any recommendations are appreciated, thanks.
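One manual approach, sketched below under the assumption that the app runs a WorldTrackingProvider and can query the device pose each frame, is to re-orient the attachment entity toward the viewer with look(at:from:relativeTo:). (Newer RealityKit releases also have a BillboardComponent, which may be the built-in option being remembered.)

import ARKit
import RealityKit

// Sketch: rotate an attachment entity so it faces the viewer. deviceAnchor
// comes from WorldTrackingProvider.queryDeviceAnchor(atTimestamp:).
func billboard(_ attachment: Entity, toward deviceAnchor: DeviceAnchor) {
    let viewerPosition = Transform(matrix: deviceAnchor.originFromAnchorTransform).translation
    // look(at:) points the entity's -Z axis at the target; depending on which
    // side of the attachment should face the viewer, a 180° flip around Y may be needed.
    attachment.look(at: viewerPosition,
                    from: attachment.position(relativeTo: nil),
                    relativeTo: nil)
}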
From visionOS 2.0 we can access Apple Vision Pro's main camera, but only with an Enterprise account, since it is an enterprise-only API. I have a normal Developer account, and I want to use the main camera to build a video-call feature in my app. Is it possible to do this with a Developer account only? Currently, with that account, I am not able to create the entitlement, as there is no option for it.
Hi!
I read this page, about mirroring Vision Pro to another device.
Mirror your Apple Vision Pro to another device
I want to know if it's possible to mirror a Vision Pro to other Vision Pros this way (showing the 2D mirrored screen, like video playback or spatial video playback), or whether there are other ways.
The new Mac virtual display feature on visionOS 2 offers a curved/panoramic window. I was wondering if this is simply a property that can be applied to a window, or if it involves an immersive mode or SceneKit/RealityKit?
Hey there,
Just tried the visionOS 2.2 ultra-wide remote desktop feature and it's just a-m-a-z-i-n-g!!
I was curious whether there's an API we can use to set up our windows in a similar fashion (curved + ultra-wide)?
Thank you!
Hi!
I'm planning to make a visionOS multiplayer app for people in the same space (a room). I want to know whether it's possible to use TabletopKit and Group Activities to create an app that becomes multiplayer (synchronized) with the people who are using it as soon as the app is opened, without using SharePlay.
Hi!
I want to know if it's possible to mirror a Vision Pro to other Vision Pros.
If it's possible, how do I go about it? Can I get some hints?
My visionOS app uses an immersive view. If the app encounters an error, I want to present an alert.
I tried in a demo app to present such an alert, but it is not shown. Nearly the same code on iOS presents an alert window.
Here is my demo code, based on Apple's Immersive Environment App template:
import SwiftUI
import RealityKit
import RealityKitContent

struct ErrorInfo: LocalizedError, Equatable {
    var errorDescription: String?
    var failureReason: String?
}

struct ImmersiveView: View {
    @State private var presentAlert = false

    let error = ErrorInfo(
        errorDescription: "My error",
        failureReason: "No reason"
    )

    var body: some View {
        RealityView { content, attachments in
            let mesh = MeshResource.generateBox(width: 1.0, height: 0.05, depth: 1.0)
            var material = UnlitMaterial()
            material.color.tint = .red
            let boardEntity = ModelEntity(mesh: mesh, materials: [material])
            boardEntity.transform.translation = [0, 0, -3]
            content.add(boardEntity)
        } update: { content, attachments in
            // …
        } attachments: {
            // …
        }
        .onAppear {
            presentAlert = true
        }
        .alert(
            isPresented: $presentAlert,
            error: error,
            actions: { error in
            },
            message: { error in
                Text(error.failureReason!)
            }
        )
    }
}
Since I cannot see any alert, is something wrong with my code? How should an alert be presented in immersive space?
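For what it's worth, a hedged sketch of an alternative (an assumption, not a confirmed fix): keep the .alert on the regular window's root view and share the presentation flag with the immersive view. ContentView and the ImmersiveView binding are hypothetical.

import SwiftUI

@main
struct DemoApp: App {
    @State private var presentAlert = false
    private let error = ErrorInfo(errorDescription: "My error", failureReason: "No reason")

    var body: some Scene {
        WindowGroup {
            // ContentView is a placeholder for the app's regular window content.
            ContentView()
                .alert(isPresented: $presentAlert,
                       error: error,
                       actions: { _ in },
                       message: { error in Text(error.failureReason ?? "") })
        }
        ImmersiveSpace(id: "Immersive") {
            // ImmersiveView would take a Binding<Bool> and set it
            // instead of presenting the alert itself.
            ImmersiveView(presentAlert: $presentAlert)
        }
    }
}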
I'm using hand tracking to detect collisions between fingertips and entities that I have placed in the scene. I'm using the .mixed environment.
However, I want to detect when a fingertip touches a real-world object such as a wall.
No matter what I try, I can't get the collision to fire. I'm using the SceneReconstructionProvider to give me world meshes, which I use to create ModelEntity objects to which I add a CollisionComponent with the shape of the object.
I can render the meshes just fine, but nothing I do seems to allow collisions to work.
Surely this is possible, what am I missing?
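For reference, a sketch that stays close to Apple's scene-reconstruction sample (meshEntities and contentRoot are placeholder names): each MeshAnchor gets a static collision shape, and collisions are then observed by subscribing to CollisionEvents.Began, with the fingertip entities carrying their own CollisionComponents.

import ARKit
import RealityKit

var meshEntities: [UUID: ModelEntity] = [:]   // placeholder storage
let sceneReconstruction = SceneReconstructionProvider()

func processReconstruction(into contentRoot: Entity) async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            meshEntities[meshAnchor.id] = entity
            contentRoot.addChild(entity)
        case .updated:
            meshEntities[meshAnchor.id]?.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            meshEntities[meshAnchor.id]?.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities[meshAnchor.id] = nil
        }
    }
}

If the shapes are created but events still never fire, the usual culprits are collision filters/groups that don't overlap, or fingertip collision shapes that are too small to register against the mesh.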
We were having an issue where the system rotate and scale gestures (two-handed gestures / RotateGesture3D and MagnifyGesture) were extremely difficult to register (make work) in the visionOS simulator.
The solution we found was to:
Launch your app in the simulator
Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
Press and hold the Option key to display touch points (i.e., the two-handed gesture points).
While keeping the Option key pressed, release the pointer and re-engage it. I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-engage it" simply means lifting the three fingers and placing them on the trackpad again.
If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.
Context if you are interested:
Our issue was also occurring in Apple's own gesture sample project, "Transforming RealityKit entities using gestures" (linked below).
On Apple's article "Interacting with your app in the visionOS simulator" at the below link, for two-handed gestures it states "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points."
This simply did not work anymore for rotation and scaling gestures.
These gestures used to be a lot more responsive in Sonoma. Either the article should be updated to what I described above, or there is an issue. Our colleague who is using macOS Sonoma 14.6.1 with the latest release of Xcode is not having these issues.
Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:
macOS Sequoia 15.1 Beta, Xcode 16.1 RC with visionOS 2.1
macOS Sequoia 15.1 Beta, Xcode 16.1 RC with visionOS 2.0
macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.1
macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.0
macOS Sequoia 15.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1)
macOS Sequoia 15.1 Beta, Xcode 16.0 with visionOS 2.0
Completely wiped and reset the entire development machine, then re-installed the latest releases of Sequoia (15.1) and Xcode (16.1)
Throughout this troubleshooting I often:
restarted both Xcode and the simulator
erased all derived data
erased all content and settings from the simulators
performed fresh git clones
None of the above worked; only the workaround described above works at the moment. As you can probably deduce, it was very time-consuming to find the workaround, and we also wasted some development effort thinking our gesture code was at fault.
Hopefully this will help other devs.
Article Link:
https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link:
https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
We have discovered that our UIViewRepresentable view isn't being dismantled after its window is dismissed via dismissWindow().
This seems to result in a leak of our custom Coordinator class. Every time the user opens a new window, a new Coordinator is created; if the user then dismisses the window manually, or we dismiss it programmatically, the Coordinator remains in memory with no way to destroy it.
Is this expected behavior? How can we be sure to clean up our Coordinator when the view's window is closed? Thanks.
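For illustration, a minimal sketch of the teardown hook in question (the attach/tearDown helpers are hypothetical): dismantleUIView is the documented place to release Coordinator resources, and the leak described above suggests it never runs after dismissWindow().

import SwiftUI
import UIKit

struct PlayerView: UIViewRepresentable {
    final class Coordinator {
        func attach(to view: UIView) { /* add observers, timers, etc. (hypothetical) */ }
        func tearDown() { /* remove observers, invalidate timers (hypothetical) */ }
    }

    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> UIView {
        let view = UIView()
        context.coordinator.attach(to: view)
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {}

    // Expected teardown point; in this scenario it does not appear to run
    // after the window is dismissed via dismissWindow().
    static func dismantleUIView(_ uiView: UIView, coordinator: Coordinator) {
        coordinator.tearDown()
    }
}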
We developed an app for visionOS 2.0 Beta in Xcode 16 Beta. The development was done on the beta versions since we needed features for our app that were not available in visionOS 1.0, the most recent stable release at the time. The app was fully functional on our AVP running visionOS 2.0 Beta Version 5, and we never had any errors. We did not publish the app to the App Store since we are a research lab using the app for teleoperating a custom robot.
Last week, we upgraded the AVP from VisionOS 2.0 Beta Version 5 to VisionOS 2.0 (stable release). Unfortunately, once we upgraded to 2.0, we began to have an issue with the app. While the app is running, at seemingly random times, without any new functionality being used within the app (no new buttons being pressed, etc), we encounter the following console error:
assertion failure: 'index < m_size' (operator[]:line 1011) Index out of range. index = 18446744073709551615, size = 0
We could re-upload the app to the AVP and successfully operate it for several minutes until the same error occurred again. We thought to use Apple Configurator to flash visionOS 2.0 Beta 5 back onto the AVP, since the error wasn't happening on that firmware, but we were unable to flash a beta version of visionOS via Apple Configurator. So we simply performed a factory reset of the device (on visionOS 2.0, by pressing Restore in Apple Configurator with the AVP connected via the developer strap) to see if this might fix the issue.
After doing the factory reset, we thought the console error had completely gone away. We were able to operate the app for ~3 hours on Sunday with no issues. Then, yesterday (Monday), we operated the app for another 2 hours, and at the very end of the session it crashed with the same error. We re-uploaded the app with Xcode, and the error occurred again after about 20 minutes of use. This cycle repeated, and every time we re-uploaded the app, the time it took for the error to occur decreased, until we uploaded the app and the error occurred in under 20 seconds.
We decided to test our hypothesis by upgrading visionOS to 2.1 and using Xcode 16. Similarly, we were able to run the app on the AVP for 2 hours, then the error occurred. The next time we ran the app, the error occurred within 20 minutes, then after reloading within 5 minutes, then 2 minutes, and so on.
We are pretty stumped on why the app would work after a factory reset or a firmware upgrade for hours, then fail faster and faster every time we re-upload it from Xcode. We are not experienced in debugging Swift and Objective-C, so we wanted to ask whether this is an issue you have run into before, to point us in the right direction. We think it could be a problem with cached memory that persists on the device across uploads from Xcode, but that's the extent of our understanding.
P.S., we also experienced this error during some of the app failures, but the one above is the most common:
assertion failure: Index out of range (operator[]:line 858) index = 576460752303423487, max = 1