With this sample code here:
import SwiftUI

struct ContentView: View {
    var body: some View {
        Text("Hello world")
            .hoverEffect(isEnabled: true)
    }
}

private extension View {
    func hoverEffect(isEnabled: Bool) -> some View {
        if #available(iOS 17.0, *) {
            // VisionOS 2.0 goes in here?
            return self
                .hoverEffect(.automatic, isEnabled: isEnabled)
        } else {
            return self
        }
    }
}
You would expect that when the destination is visionOS it would fall into the else block, but it doesn't. That seems incorrect, since the condition should only be true when the platform is iOS 17.0 or later.
Also, I had similar code that was distributed via an XCFramework, and when that view was used in an app consuming the XCFramework and running on visionOS, there was a runtime crash (EXC_BAD_ACCESS). The crash could only be reproduced when using the view from the XCFramework, not from the local source code. The problem was fixed by adding visionOS 1.0 to that availability check, but this shouldn't have been a crash in the first place.
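For reference, a minimal sketch of the availability check after the fix described above, listing visionOS explicitly instead of relying on the * wildcard:

private extension View {
    // Listing visionOS explicitly; with only `*`, visionOS matches the wildcard
    // at its deployment target and takes the iOS 17 branch.
    func hoverEffect(isEnabled: Bool) -> some View {
        if #available(iOS 17.0, visionOS 1.0, *) {
            return self
                .hoverEffect(.automatic, isEnabled: isEnabled)
        } else {
            return self
        }
    }
}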
Does anyone have thoughts on this or possibly an explanation?
Thank you!
visionOS
Discuss developing for spatial computing and Apple Vision Pro.
Posts under the visionOS tag
I have been experimenting with some experiences in which I would like to use SharePlay to allow the app to be used by multiple users.
Currently I have achieved sharing a volume containing a Reality Composer Pro scene; the scene contains some entities with an animation.
So far I have been able to correctly share the volume and its content, with the animation playing without problems, but once I activate SharePlay different users see different moments of the animation, or no animation at all.
Is there a way to synchronize animations between all the users, no matter when someone entered the SharePlay session, aside from communicating the animation time once someone joins?
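For clarity, this is roughly what the fallback of communicating the animation time looks like; a minimal sketch using GroupSessionMessenger, where the message type and the two closures are placeholders rather than real API:

import Foundation
import GroupActivities

// Hypothetical message carrying the current animation time so a late joiner can seek.
struct AnimationSyncMessage: Codable {
    let playbackTime: TimeInterval
}

func configureAnimationSync<Activity: GroupActivity>(
    session: GroupSession<Activity>,
    currentTime: @escaping () -> TimeInterval,
    seek: @escaping (TimeInterval) -> Void
) {
    let messenger = GroupSessionMessenger(session: session)

    // Late joiners receive the time and seek their local copy of the animation.
    Task {
        for await (message, _) in messenger.messages(of: AnimationSyncMessage.self) {
            seek(message.playbackTime)
        }
    }

    // Whenever the participant set changes, broadcast the current time.
    Task {
        for await _ in session.$activeParticipants.values {
            try? await messenger.send(AnimationSyncMessage(playbackTime: currentTime()))
        }
    }
}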
There is flickering and slight dimming occurring specifically on the skysphere, at initial load of the scene, when using an Attachment. This is observed both in the simulator and on the real device.
Since we cannot upload a video illustrating the undesirable behaviour, I have to describe how to set up the project for you to observe it.
To replicate the issue, follow these steps:
Create a new visionOS app using Xcode template, see image.
Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist), see image.
Replace all swift files with those you will find in the attached texts.
Add the skysphere image asset Skydome_8k found in the Apple sample app "Presenting an artist's scene".
Launch the app in debug mode via Xcode onto the AVP device or simulator.
Continuously open and dismiss the skysphere by pressing the Open Skysphere and Close buttons.
Observe the skysphere flicker and dim upon display of the skysphere.
The current workaround is commented in file ThreeSixtySkysphereRealityView at lines 65, 70, 71, and 72. Uncomment these lines, and the flickering and dimming do not occur.
Are we using attachments wrongly?
Is this behavior known and documented?
Or, is there really a bug in visionOS?
AppModel
InitialImmersiveView
MainImmersiveView
TestSkysphereAttachmentFlickerApp
ThreeSixtySkysphereRealityView
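For context, this is roughly how the attachment is wired into the RealityView; a minimal sketch with placeholder names, not the attached files:

import SwiftUI
import RealityKit

struct SkysphereAttachmentSketch: View {
    var body: some View {
        RealityView { content, attachments in
            // Skysphere entity setup omitted; add the attachment's entity once it resolves.
            if let panel = attachments.entity(for: "controls") {
                panel.position = [0, 1.2, -1.5]
                content.add(panel)
            }
        } update: { _, _ in
            // No-op; included only to use the attachments-capable initializer.
        } attachments: {
            Attachment(id: "controls") {
                Button("Close") { /* dismiss the skysphere here */ }
            }
        }
    }
}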
When I first install and run the app, it requests authorization for hand tracking data. But if I then go to Settings and disable hand tracking for the app, it no longer requests it. The output of the requestAuthorization(for:) method just says [handTracking: denied].
Any idea why the permission prompt only shows up once and then never again?
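For reference, the check I'd expect to make around this; a minimal sketch assuming that once access is denied, requestAuthorization(for:) reports .denied without prompting again:

import ARKit

func ensureHandTrackingAccess() async -> Bool {
    let session = ARKitSession()
    let results = await session.requestAuthorization(for: [.handTracking])
    switch results[.handTracking] {
    case .allowed:
        return true
    case .denied:
        // The system will not re-prompt; the user has to re-enable hand tracking
        // for the app in Settings.
        return false
    default:
        // .notDetermined or a future case.
        return false
    }
}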
We are using the ARKit image tracking feature on visionOS 2.0 with three pre-registered images. The image tracking works, but only one image is actively tracked at a time. When more than one target image is visible to the camera, it has difficulty detecting and tracking the other images.
Is this the expected behavior in visionOS, or is there something we need to do to resolve this issue?
We are currently working with the Enterprise APIs for visionOS 2 and have successfully obtained the necessary entitlements for passthrough camera access. Our goal is to capture images of external real-world objects using the passthrough camera of the Vision Pro, not just take screenshots or screen captures.
Our specific use case involves:
1. Accessing the raw passthrough camera feed.
2. Capturing high-resolution images of objects in the real world through the camera.
3. Processing and saving these images for further analysis within our custom enterprise app.
We would greatly appreciate any guidance, tutorials, or sample code that could help us achieve this functionality. If there are specific APIs or best practices for handling real-world image capture via passthrough cameras with the Enterprise APIs, please let us know.
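For anyone else with the entitlement, this is the rough direction we are exploring with CameraFrameProvider; treat the exact calls and types as our assumptions about the Enterprise API shape rather than verified code:

import ARKit

// Rough sketch: read passthrough frames from the enterprise CameraFrameProvider.
// Requires the main-camera-access enterprise entitlement and license file.
func capturePassthroughFrames() async throws {
    let session = ARKitSession()
    let provider = CameraFrameProvider()
    try await session.run([provider])

    // Pick a supported format for the left main camera (assumed API shape).
    guard let format = CameraVideoFormat
            .supportedVideoFormats(for: .main, cameraPositions: [.left])
            .first,
          let updates = provider.cameraFrameUpdates(for: format)
    else { return }

    for await frame in updates {
        if let sample = frame.sample(for: .left) {
            // Hand the CVPixelBuffer off to image processing / saving.
            let pixelBuffer = sample.pixelBuffer
            _ = pixelBuffer
        }
    }
}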
I am new to learning about concurrency and I am working on an app that uses the HandTrackingProvider class.
In the Happy Beam sample code, there is a HeartGestureModel which has a reference to a HandTrackingProvider() and seems to write to a struct called HandUpdates inside the HeartGestureModel class through the publishHandTrackingUpdates() function. On another thread, there is a function called computeTransformOfUserPerformedHeartGesture() which reads the values of HandUpdates to determine whether the user is making the appropriate gesture.
My question is, how is the code handling the constant read and write to the HandUpdates struct?
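For discussion, the pattern I would expect to serialize that access is confining the struct to a single actor; a minimal sketch of that idea (not the actual Happy Beam code, and it assumes the provider has been run on an ARKitSession elsewhere):

import ARKit

struct HandUpdates {
    var left: HandAnchor?
    var right: HandAnchor?
}

@MainActor
final class GestureModelSketch {
    var latestHandTracking = HandUpdates()
    private let handTracking = HandTrackingProvider()

    // Writer: every assignment happens on the main actor.
    func publishHandTrackingUpdates() async {
        for await update in handTracking.anchorUpdates {
            switch update.anchor.chirality {
            case .left:  latestHandTracking.left = update.anchor
            case .right: latestHandTracking.right = update.anchor
            }
        }
    }

    // Reader: also main-actor isolated, so it can never race the writer above.
    func isMakingGesture() -> Bool {
        guard let left = latestHandTracking.left, let right = latestHandTracking.right else {
            return false
        }
        return left.isTracked && right.isTracked // gesture math omitted
    }
}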
In WWDC24, visionOS hand tracking gained a new option that lets an entity track the hand faster (at the expense of some accuracy). The video only explains how to implement it with ARKit, so I would like to know how to implement the AnchorEntity version in a RealityView.
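In case it helps frame the question, this is the RealityView-side pattern I am asking about; a sketch that assumes the faster mode is the trackingMode parameter (.predicted) on AnchorEntity's hand target:

import SwiftUI
import RealityKit

struct HandAnchoredSketch: View {
    var body: some View {
        RealityView { content in
            // Assumption: .predicted trades some accuracy for lower latency.
            let anchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .predicted)
            anchor.addChild(ModelEntity(mesh: .generateSphere(radius: 0.02),
                                        materials: [SimpleMaterial()]))
            content.add(anchor)
        }
    }
}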
In visionOS, virtual content is occluded by the user's hands by default. In a mixed immersion space, if an entity is positioned behind a real object in the room, how can that real object occlude the virtual content in the same way the hands do?
There is flickering occurring on 3D assets when switching immersive spaces, which is not the nicest user experience. The flickering occurs both when loading the scenes directly from the RealityKitContent package and when loading them from memory (pre-loaded assets).
Since we cannot upload a video illustrating the undesirable behaviour, I have to describe how to set up the project for you to observe it.
To replicate the issue, follow these steps:
Create a new visionOS app using Xcode template, see image.
Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist), see image.
Replace all swift files with those you will find in the attached texts.
In the RealityKitContent package, create a scene named YellowSpheres as illustrated below.
In the RealityKitContent package, create a scene named RedSpheres as illustrated below.
Launch the app in debug mode via Xcode onto the AVP device or simulator.
Continuously switch immersive spaces by pressing the Show RedSpheres and Show YellowSpheres buttons.
Observe the 3D assets flicker upon opening of the immersive spaces.
AppModel
RedSpheresImmersiveView
YellowSpheresImmersiveView
TestFlickeringBetweenImmersiveSpacesApp
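For context, the switching itself is just the standard open/dismiss pattern; a minimal sketch (not the attached files):

import SwiftUI

struct SpaceSwitcherSketch: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        HStack {
            Button("Show RedSpheres") { switchTo("RedSpheres") }
            Button("Show YellowSpheres") { switchTo("YellowSpheres") }
        }
    }

    private func switchTo(_ id: String) {
        Task {
            // Dismiss whichever space is open, then open the requested one;
            // the flicker appears while the new space's assets come in.
            await dismissImmersiveSpace()
            _ = await openImmersiveSpace(id: id)
        }
    }
}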
While trying to control the following two scenes in 1 ImmersiveSpace, we found the following memory leak when we background the app while a stereoscopic video is playing.
ImmersiveView's two scenes:
Scene 1 has one toggle button.
Scene 2 has the same toggle button, plus a 180-degree skysphere playing a stereoscopic video.
Attached are the files and images of the memory leak as captured in Xcode.
To replicate this memory leak, follow these steps:
Create a new visionOS app using Xcode template as illustrated below.
Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist).
Replace all swift files with those you will find in the attached texts.
In ImmersiveView, replace the stereoscopic video to play with a large 3d 180 degree video of your own bundled in your project.
Launch the app in debug mode via Xcode onto the AVP device or simulator.
Display the memory use by pressing Command+7 and selecting Memory in order to view the live memory graph.
Press on the first immersive space's button "Open ImmersiveView"
Press on the second immersive space's button "Show Immersive Video"
Background the app
When the app tray appears, foreground the app by selecting it
The first immersive space should appear
Repeat steps 7, 8, 9, and 10 multiple times
Observe the memory use going up; the graph should look similar to the illustration below.
In ImmersiveView, upon backgrounding the app, I do:
call a reset method to clear the video's memory
dismiss the Immersive Space containing the video (even though, upon execution, visionOS raises the purple warning "Unable to dismiss an Immersive Space since none is opened". It appears visionOS dismisses any ImmersiveSpace upon backgrounding, which makes sense.)
Am I not releasing the memory correctly?
Or, is there really a memory leak issue in either SwiftUI's ImmersiveSpace or in AVFoundation's AVPlayer upon background of an app?
App file TestVideoLeakOneImmersiveView
First ImmersiveSpace file InitialImmersiveView
Second ImmersiveSpace File ImmersiveView
Skysphere Model File Immersive180VideoViewModel
File AppModel
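For completeness, the reset on backgrounding is essentially the following; a sketch with placeholder names, not the attached files:

import SwiftUI
import AVFoundation

struct ImmersiveVideoContainerSketch: View {
    @Environment(\.scenePhase) private var scenePhase
    let player: AVPlayer

    var body: some View {
        // The RealityView hosting the 180° skysphere would go here.
        Color.clear
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase == .background {
                    // Stop playback and drop the large current item so its
                    // buffers can be released while the app is in the background.
                    player.pause()
                    player.replaceCurrentItem(with: nil)
                }
            }
    }
}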
When using the PlaneDetectionProvider in visionOS I seem to have hit a limitation: regardless of where the headset is in the space, planes are only detected if they are (as far as I can tell) less than 5 m from the world origin. Mapping a room becomes very tricky as a result, because some walls often end up outside that radius, even if you're standing two feet away from a ten-foot wall; it just won't see it. I've picked my way through the documentation but I cannot see any way to extend this distance. Am I missing something?
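For reference, a minimal sketch of the kind of ARKitSession/PlaneDetectionProvider flow in question; the anchors observed all sit within roughly 5 m of the world origin:

import ARKit

func runPlaneDetection() async {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
    do {
        try await session.run([planes])
        for await update in planes.anchorUpdates {
            // Each anchor's transform is expressed relative to the world origin;
            // anchors further than ~5 m from that origin never seem to arrive.
            print(update.event, update.anchor.originFromAnchorTransform.columns.3)
        }
    } catch {
        print("Failed to run plane detection: \(error)")
    }
}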
Hi, I have a video player app that lost its audio spatialization since the visionOS 2 update. I am using the VideoPlayerComponent (https://developer.apple.com/documentation/realitykit/videoplayercomponent) to implement my videos as entities, as I want a custom look and custom controls for my player.
In visionOS 1, there was automatic audio spatialization: depending on where my video entity was, the app automatically enabled head-tracked audio spatialization. Since visionOS 2, however, I cannot get my video entities to play Spatial Audio. I've looked into DestinationVideo and even set up AVAudioSessionSpatialExperience, but Spatial Audio is still not working.
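For reference, the AVAudioSessionSpatialExperience setup mentioned above is roughly the following; a sketch, and the exact call shape is my assumption:

import AVFAudio

// Sketch: request head-tracked spatial audio for the app's audio session.
func configureSpatialExperience() {
    do {
        try AVAudioSession.sharedInstance().setIntendedSpatialExperience(
            .headTracked(soundStageSize: .automatic, anchoringStrategy: .automatic)
        )
    } catch {
        print("Failed to set spatial experience: \(error)")
    }
}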
Appreciate any help. Thanks.
I'm setting:
.immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))
In UnityVisionOSSettings.swift before building out in Xcode.
I'm having an issue where this only works on occasion; it seems random. I'll either get no immersion level available (the crown dial is greyed out and no changes can be made), or it will only allow 0.5-1.0 immersion (the dial will go below 0.5 but springs back to 0.5 when released).
With no changes to my setup or to how I'm setting immersionStyle, I've been able to get this to work as I would expect, so I'm wondering if there is some bug causing it to fail. I've tested a simple native SDK progressive immersion style with the same code for the custom setting and it works every time, so it appears to be something related to Unity.
Here is the entire UnityVisionOSSettings file that, as far as I can tell, controls this:
// GENERATED BY BUILD
import Foundation
import SwiftUI
import PolySpatialRealityKit
import UnityFramework

let unityStartInBatchMode = false

extension UnityPolySpatialApp {
    func initialWindowName() -> String { return "Unbounded" }

    func getAllAvailableWindows() -> [String] { return ["Bounded-0.500x0.500x0.500", "Unbounded"] }

    func getAvailableWindowsForMatch() -> [simd_float3] { return [] }

    func displayProviderParameters() -> DisplayProviderParameters { return .init(
        framebufferWidth: 1830,
        framebufferHeight: 1600,
        leftEyePose: .init(position: .init(x: 0, y: 0, z: 0),
                           rotation: .init(x: 0, y: 0, z: 0, w: 1)),
        rightEyePose: .init(position: .init(x: 0, y: 0, z: 0),
                            rotation: .init(x: 0, y: 0, z: 0, w: 1)),
        leftProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1),
        rightProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1)
        )
    }

    @SceneBuilder
    var mainScenePart0: some Scene {
        ImmersiveSpace(id: "Unbounded", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(1.000, 1.000, 1.000), maxSize: .init(1.000, 1.000, 1.000))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Unbounded", .init(1.000, 1.000, 1.000)))
                .onImmersionChange() { oldContext, newContext in
                    PolySpatialWindowManagerAccess.onImmersionChange(oldContext.amount, newContext.amount)
                }
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() } .upperLimbVisibility(.automatic)
            .immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))

        WindowGroup(id: "Bounded-0.500x0.500x0.500", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(0.100, 0.100, 0.100), maxSize: .init(0.500, 0.500, 0.500))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Bounded-0.500x0.500x0.500", .init(0.500, 0.500, 0.500)))
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() } .windowStyle(.volumetric).defaultSize(width: 0.500, height: 0.500, depth: 0.500, in: .meters).windowResizability(.contentSize) .upperLimbVisibility(.automatic) .volumeWorldAlignment(.gravityAligned)
    }

    @SceneBuilder
    var mainScene: some Scene {
        mainScenePart0
    }

    struct LifeCycleHandlerModifier: ViewModifier {
        func body(content: Content) -> some View {
            content
                .onOpenURL(perform: { url in
                    UnityLibrary.instance?.setAbsoluteUrl(url.absoluteString)
                })
        }
    }
}
Hello. I have a spatial painting application for AR/VR, and the new system gesture introduced in visionOS 2 for bringing up the navigation menu constantly interferes with the hand gestures used for painting. How can I deactivate that system gesture within the application? It's really a big issue.
Last month we published an AVP app: https://apps.apple.com/us/app/visionpainter/id6505053450. Everything is correct on the App Store Connect page, but I can only access it by searching for it in the store. It hasn't appeared in 'News' or within its category. Is any special action required to make it visible?
Hi,
My app allows users to share and view spatial photos.
For viewing spatial photos, I'm using a plane in a RealityView that has a camera index switch material node, which takes the stereo images as the inputs.
For sharing native spatial photos taken on the vision pro, prior to visionOS 2.0, I extract the stereo image pair and merge them into a single side-by-side image to upload to the app's backend.
However, since visionOS 2.0 introduced generating spatial photos from normal photos, I've been seeing some unexpected behaviours in my app, while on the other hand, they can be viewed correctly in the system Photos app:
Sometimes the extracted images have different sizes; the right image is smaller than the left image. See the first image in the Google Drive below, taken with an iPhone 15 Pro.
Even when the image pair has the same size, when viewed in my app it shows some artefacts, especially around the edges of objects that are closer to the camera. See the second image in the Google Drive below, taken with an iPhone 11.
Google drive link here:
https://drive.google.com/drive/folders/1UTfpxvO3-ChqshwfyzY5E_KCgk8VgUaa
I know that the Quick Look preview application now supports viewing spatial photos, but I would like to keep the way I implemented it in the app, for compatibility reasons.
Below is a code snippet that deals with the extraction. Please point out the correct way to extract stereo image pair from a generated spatial photo.
Happy to submit a code-level support request if more information is needed.
import UIKit
import ImageIO
import CoreImage

// The data is from the photos picker item; both calls below return optionals.
guard let data = try await photo.loadTransferable(type: Data.self),
      let source = CGImageSourceCreateWithData(data as CFData, nil)
else { return nil }
let sbsImage = source.extractSpatialPhoto()
extension CGImageSource {
    func extractSpatialPhoto() -> UIImage? {
        guard let leftCIImage = extractSpatialImage(at: 0),
              let rightCIImage = extractSpatialImage(at: 1)
        else {
            return nil
        }
        let leftImage = UIImage(ciImage: leftCIImage)
        let rightImage = UIImage(ciImage: rightCIImage)

        guard leftImage.size == rightImage.size else {
            return nil
        }

        // Merge left + right into a single side-by-side image.
        let size = CGSize(width: leftImage.size.width * 2, height: leftImage.size.height)
        UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
        leftImage.draw(in: CGRect(x: 0, y: 0, width: leftImage.size.width, height: leftImage.size.height))
        rightImage.draw(in: CGRect(x: leftImage.size.width, y: 0, width: rightImage.size.width, height: rightImage.size.height))
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return mergedImage
    }

    // Not sure if this actually works.
    func extractSpatialImage(at index: Int) -> CIImage? {
        guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
            return nil
        }
        var ciImage = CIImage(cgImage: cgImage)
        if let properties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [String: Any],
           let heifDictionary = properties[kCGImagePropertyHEIFDictionary as String] as? [String: Any],
           let extrinsics = heifDictionary[kIIOMetadata_CameraExtrinsicsKey as String] as? [String: Any],
           let position = extrinsics[kIIOCameraExtrinsics_Position as String] as? [Double]
        {
            // Default baseline is 64mm (0 for left camera, 0.064m for right camera).
            let standardBaseline = 0.064
            // Check if it's the right image (should be at [0.064, 0, 0]).
            let isRightImage = (index == 1)
            let expectedPosition = isRightImage ? standardBaseline : 0.0
            // Calculate the translation needed to align to the standard baseline.
            let positionDelta = position[0] - expectedPosition
            // Apply a translation only if there's a mismatch in position.
            if positionDelta != 0 {
                let transform = CGAffineTransform(translationX: CGFloat(positionDelta), y: 0)
                ciImage = ciImage.transformed(by: transform)
            }
        }
        return ciImage
    }
}
I'm working on a school project that allows users to open a .USDZ file (using Quick Look) on a webpage while using Apple Vision Pro, to place the object in their physical environment; the project is deployed on Vercel. I'm testing the page with my Apple Vision Pro: when I tap to open the .USDZ file, I see a triangle with an exclamation mark while it's trying to load, but it never loads. Does anybody know how to troubleshoot this issue?
A timeline in RCP will post a notification "Identifier: Completed" when it finishes playing
I am trying to receive this in the following way:
extension Notification.Name {
    static let notifyOnAnimationCompleted = Notification.Name("Completed")
}

// In the view:
private let AnimationCompleted = NotificationCenter.default.publisher(for: .notifyOnAnimationCompleted)

RealityView {...}
    .onReceive(AnimationCompleted) { _ in
        print("End")
    }
This was working back in July, but now it never prints "End".
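For comparison, some Apple samples receive these through the RealityKit.NotificationTrigger name and filter on the identifier in userInfo; a sketch of that pattern, on the assumption that the timeline posts through this channel rather than through a Notification.Name equal to the identifier:

import SwiftUI
import RealityKit

private let timelineNotifications = NotificationCenter.default
    .publisher(for: Notification.Name("RealityKit.NotificationTrigger"))

struct TimelineListenerSketch: View {
    var body: some View {
        RealityView { content in
            // Load and add the RCP scene that owns the timeline here.
        }
        .onReceive(timelineNotifications) { notification in
            // The identifier configured in Reality Composer Pro arrives in userInfo.
            let identifier = notification.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String
            if identifier == "Completed" {
                print("End")
            }
        }
    }
}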
Hello,
I'm currently working on a project that requires real-world object recognition and labeling. I understand that, due to security and privacy restrictions, we are unable to access the Vision Pro camera feed. Is there any other, external way to solve this problem?
Thank you!