In iOS, to display a RealityView, you can assign a value to the content.camera property:
content.camera = .virtual
However, how can this be implemented in macOS and tvOS?
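For reference, a minimal sketch of the iOS setup described above (assuming RealityViewCameraContent, which exposes the camera property); the macOS/tvOS equivalent is exactly what this question is about:
import SwiftUI
import RealityKit

// Minimal iOS sketch: a RealityView whose content exposes `camera`.
struct VirtualCameraView: View {
    var body: some View {
        RealityView { content in
            content.camera = .virtual   // as in the snippet above
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            content.add(box)
        }
    }
}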
The setup:
I am developing a visionOS app that uses an immersive space.
The user sees a board with entities placed on it. My app places the board in front of the default camera, and places the entities at a certain position and orientation relative to the board. Placement and rotation should be animated.
The problem:
If I place the entities by assigning a Transform to the entity's transform property directly, i.e. without animation, the result is correct.
However, I have to use the entity's move(to:relativeTo:duration:timingFunction:) function to animate it, and move(to:) behaves in an unexpected way.
I therefore wrote a little test app based on Apple's visionOS immersive app template (below). It covers the following 5 cases:
Set transform directly (without animation). This gives the correct result and works as expected (without animation).
Set transform using move relative to world (without animation). This gives the correct result, although not in the way I expected. I expected "relative to world" to mean that both translation and rotation are relative to the world; this seems to be wrong for translation and right for rotation.
Set transform using move relative to parentEntity (without animation). This gives a wrong result, although translation and rotation are defined relative to the parentEntity.
Set transform using move relative to world with animation. This also gives a wrong result, and no animation plays.
Set transform using move relative to parentEntity with animation. This also gives a wrong result, and no animation plays.
Here are the screenshots for cases 1...5 (attached): Cases 1 & 2, Case 3, Cases 4 & 5.
The question:
So, obviously, I don't understand what move(to:) does. I would be happy to get any advice on what is wrong and how to do it right.
Here is the code:
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    let boardHeight: Float = 0.1
    let boxHeight: Float = 0.3

    var body: some View {
        RealityView { content in
            let boardEntity = makeBoard()
            content.add(boardEntity)
            let boxEntity = makeBox(parentEntity: boardEntity)
            boardEntity.addChild(boxEntity)
        }
    }

    func makeBoard() -> ModelEntity {
        let mesh = MeshResource.generateBox(width: 1.0, height: boardHeight, depth: 1.0)
        var material = UnlitMaterial(); material.color.tint = .red
        let boardEntity = ModelEntity(mesh: mesh, materials: [material])
        boardEntity.transform.translation = [0, 0, -3]
        return boardEntity
    }

    func makeBox(parentEntity: Entity) -> ModelEntity {
        let mesh = MeshResource.generateBox(width: 0.3, height: boxHeight, depth: 0.3)
        var material = UnlitMaterial(); material.color.tint = .green
        let boxEntity = ModelEntity(mesh: mesh, materials: [material])

        // Set position and orientation of the box.
        // To put the box onto the board, move it up by half the height of the board and half the height of the box.
        let y_up = boardHeight/2.0 + boxHeight/2.0
        let translation = SIMD3<Float>(0, y_up, 0)

        // Turn the box by 45 degrees around the y axis.
        let rotationY = simd_quatf(angle: Float(45.0 * .pi/180.0), axis: SIMD3(x: 0, y: 1, z: 0))
        let transform = Transform(rotation: rotationY, translation: translation)

        // Do the actual move.

        // 1) Set transform directly (without animation)
        boxEntity.transform = transform // Translation and rotation correct

        // 2) Set transform using move relative to world (without animation)
        // boxEntity.move(to: transform, relativeTo: nil) // Translation and rotation correct

        // 3) Set transform using move relative to parentEntity (without animation)
        // boxEntity.move(to: transform, relativeTo: parentEntity) // Translation incorrect, rotation correct

        // 4) Set transform using move relative to world with animation
        // boxEntity.move(to: transform,
        //                relativeTo: nil,
        //                duration: 1.0,
        //                timingFunction: .linear) // Translation incorrect, rotation incorrect, no animation

        // 5) Set transform using move relative to parentEntity with animation
        // boxEntity.move(to: transform,
        //                relativeTo: parentEntity,
        //                duration: 1.0,
        //                timingFunction: .linear) // Translation incorrect, rotation incorrect, no animation

        return boxEntity
    }
}
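One variation worth testing (just an assumption on my part, not a confirmed explanation): call move(to:relativeTo:) only after the box has been parented to the board, so the relative-space computation sees the final hierarchy. A sketch of that ordering inside the RealityView closure, reusing makeBoard() and makeBox(parentEntity:) from above:
// Variation (assumption): parent the box first, then animate.
RealityView { content in
    let boardEntity = makeBoard()
    content.add(boardEntity)
    let boxEntity = makeBox(parentEntity: boardEntity)   // with the move/transform lines removed
    boardEntity.addChild(boxEntity)

    // Animate only after the box is a child of the board, so that
    // "relativeTo: boardEntity" refers to the actual parent.
    let y_up = boardHeight/2.0 + boxHeight/2.0
    let rotationY = simd_quatf(angle: Float(45.0 * .pi/180.0), axis: SIMD3(x: 0, y: 1, z: 0))
    let target = Transform(rotation: rotationY, translation: [0, y_up, 0])
    boxEntity.move(to: target, relativeTo: boardEntity, duration: 1.0, timingFunction: .linear)
}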
In my visionOS app I am attempting to get the location of a finger press (not a tap, but when the user first presses their fingers together). As far as I can tell, the only way to get this event is to use a SpatialEventGesture.
I've currently got a DragGesture and I am able to use the convert functions in the passed-in EntityTargetValue to convert the location3D from the DragEvent to my hit-tested entity. But as far as I can tell, SpatialEventGesture doesn't use an EntityTargetValue. I've tried using the convert functions on my targeted entity (i.e., myEntity.convert(position:from:)), but these do not return valid values.
My questions are:
Is SpatialEventGesture the correct way to get notified of finger presses?
How do I convert the location3D in the SpatialEventGesture to my entity space?
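For reference, a minimal sketch of the working DragGesture path described above (my own reconstruction, assuming a targetedToAnyEntity() gesture and the EntityTargetValue convert API); the open question remains how to do the equivalent conversion for SpatialEventGesture:
import SwiftUI
import RealityKit

// Sketch of the DragGesture approach described above. onChanged fires as
// soon as the pinch begins, and EntityTargetValue provides
// convert(_:from:to:) into the hit-tested entity's space.
struct PressLocationView: View {
    var body: some View {
        RealityView { content in
            // Add interactive entities (with InputTargetComponent and
            // CollisionComponent) here.
        }
        .gesture(
            DragGesture(minimumDistance: 0)
                .targetedToAnyEntity()
                .onChanged { value in
                    let pointInEntity = value.convert(value.location3D,
                                                      from: .local,
                                                      to: value.entity)
                    print("Press location in entity space: \(pointInEntity)")
                }
        )
    }
}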
Hello,
Would it be possible to use any of the available visionOS environments when I use an app that requires me to be in an immersive space? I'm developing an app where users can start the immersive space experience by pressing a button. In my case, it would be helpful if the user could still choose a visionOS environment using the Digital Crown, but currently, it seems to be unavailable after opening an immersive space.
Thank you very much in advance!
When opening the Game Center dashboard via the Access Point, the dashboard appears BEHIND any content in the window that has z depth (a default window type, not volumetric). The window content obscures the dashboard, which makes it unusable.
Alerts have the same placement.
The new defaultWindowPlacement would probably suffice, but I don't think there's a way to apply that to the Game Center window.
What to do?
Thanks.
Hi. I recently added SwiftUI context menus and picker menus to my app, but when they are activated they flicker rapidly, and it is impossible to select anything (there is no hover effect either). When these menus are activated, the console prints lots of warning messages similar to this:
[NetworkComponent] Cannot find component's entity (guid=14395713952467043328, typeID=295756909031380028, type=CustomComponentRCPInputTargetComponent, entity=0x1047c6750).
This issue doesn't seem to happen on visionOS 1.2 simulator, but is reliably reproducible on visionOS 2.0 simulator and device.
Any idea what this might be related to? I am attempting to narrow down on the issue but it's challenging to do so without knowing what the error is about. Thanks!
In Xcode 16 beta 6, we want to start the app with an Alert advising the user that they are about to enter an immersive space.
To achieve this, I use an empty VStack (let's name it View1) with an alert modifier. Then, in the alert's OK button action, we have the statement openWindow(id: "ContentView"). View1 is in the first WindowGroup in the App file.
When pressing OK, the Alert and View1 dismiss themselves, and then ContentView displays itself shifted vertically towards the top. ContentView is in a secondary WindowGroup. We would expect ContentView to display itself front and center to the user, like every other window.
What is wrong with my code? Or is there a bug in visionOS?
Attached are images of my code, and a video illustrating the bad behavior.
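Since the code is only attached as images, here is a rough reconstruction of the setup as described (the app, view, and window identifiers are placeholders of mine, not necessarily the originals):
import SwiftUI

@main
struct ImmersiveAlertApp: App {
    var body: some Scene {
        // First WindowGroup: the empty view that shows the alert.
        WindowGroup {
            View1()
        }
        // Secondary WindowGroup that should appear front and center.
        WindowGroup(id: "ContentView") {
            ContentView()
        }
    }
}

struct View1: View {
    @Environment(\.openWindow) private var openWindow
    @State private var showAlert = true

    var body: some View {
        VStack {}
            .alert("You are about to enter an immersive space.", isPresented: $showAlert) {
                Button("OK") {
                    openWindow(id: "ContentView")
                }
            }
    }
}

struct ContentView: View {
    var body: some View {
        Text("Main content")
    }
}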
Does anyone have a fix for this, or is this a bug? I just updated to Xcode 16 and the 2.0 simulator yesterday; previously, running my app on 1.2 took just a few seconds to load. With 2.0 it takes several minutes. Even if I launch any of the small developer apps available in the Vision samples section, they all take at least 5 minutes to run.
How do I fix this? If I run on the device it still launches in less than 10 seconds, but that's not always convenient for me.
MacBook Pro, M3, 18G, Sonoma 14.6.1
I don't get a cameraFrame from cameraFrameUpdates in my Vision Pro app. Why is nothing delivered, and where am I going wrong in the code? Please guide me.
for await cameraFrame in cameraFrameUpdates { print("cameraFrame:: \(cameraFrame)") }
var body: some View {
    VStack {
        // `image`, `finalImage`, `pixelBuffer`, `arkitSession`, and `setImage()`
        // are properties/methods defined elsewhere in this view.
        image
            .resizable()
            .scaledToFit()
        if self.finalImage != nil {
            self.finalImage!
                .resizable()
                .scaledToFit()
        } else {
            image
                .resizable()
                .scaledToFit()
        }
    }
    .task {
        if #available(visionOS 2.0, *) {
            guard CameraFrameProvider.isSupported else {
                print("CameraFrameProvider not supported.")
                return
            }
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [CameraFrameProvider.CameraPosition.left])
            let cameraFrameProvider = CameraFrameProvider()
            do {
                try await arkitSession.run([cameraFrameProvider])
            } catch {
                guard let sessionError = error as? ARKitSession.Error else {
                    preconditionFailure("ARKitSession.run() returned a non-session error: \(error)")
                }
                print("ARKitSession.run() failed with session error: \(sessionError)")
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                preconditionFailure("Failed to get an async sequence for the first format.")
            }
            print("cameraFrameUpdates:: \(cameraFrameUpdates)")
            for await cameraFrame in cameraFrameUpdates {
                print("cameraFrame:: \(cameraFrame)")
                print("Camera Frame ::: LEFT :: \(cameraFrame.sample(for: .left))")
                guard let leftSample = cameraFrame.sample(for: .left) else {
                    print("CameraFrameProviderSample - Nil camera frame left sample")
                    continue
                }
                self.pixelBuffer = leftSample.pixelBuffer
                print(" ======== PIXEL BUFFER ::: \(self.pixelBuffer) ========")
                self.finalImage = self.setImage()
            }
        } else {
            // Fallback on earlier versions
        }
    }
}
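One thing worth double-checking (an assumption on my part, not a confirmed cause): whether camera access was actually granted before the provider starts running; as far as I know, main camera access on visionOS also requires the corresponding enterprise entitlement. A minimal sketch of requesting authorization up front:
import ARKit

// Sketch: request camera authorization explicitly before starting the
// CameraFrameProvider, and bail out early if it was not granted.
func startCameraFeed(session: ARKitSession, provider: CameraFrameProvider) async throws {
    let authorization = await session.requestAuthorization(for: [.cameraAccess])
    guard authorization[.cameraAccess] == .allowed else {
        print("Camera access not granted: \(String(describing: authorization[.cameraAccess]))")
        return
    }
    try await session.run([provider])
}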
When I load a certain USDZ file, it crashes 100% of the time. Why?
It crashes in the simulator, but not on Vision Pro.
-[MTLDebugDevice newBufferWithBytes:length:options:]:723: failed assertion `Buffer Validation
newBufferWith*:length 0x100fff80 must not exceed 256 MB.
Hi! I am making a visionOS program. I have an idea to embed spatial videos or pictures into my UI, but I have run into problems and have no way to implement it. I have tried the following:
Using AVPlayerViewController to play a spatial video, but it only displays the spatial video when modalPresentationStyle = .fullscreen. Once embedded in a SwiftUI view, it shows as a normal 2D image.
I also tried the method from https://developer.apple.com/forums/thread/733813, using a shader graph to display spatial images, but the material can only be attached to an entity, and I don't know how to make it show up in a view.
I also tried using CAMetalLayer and writing a custom shader to display spatial images, but I couldn't find anything like Unity's unity_StereoEyeIndex to handle per-eye rendering.
Does anyone have a good solution to my problem? Thank you!
I’m encountering a 1-meter size limit on the visual presentation of objects presented in an immersive environment in visionOS, both in the simulator and on the device.
For example, if I load a USDZ object that’s 1.0 x 0.5 x 0.05 meters, all of the 1.0 x 0.5 meter side is visible.
If I scale it by a factor of 2.0, only a 1.0 x 1.0 viewport onto the object is shown, even though the object's size reads out as scaled when queried by
usdz.visualBounds(relativeTo: nil).extents
and if the USDZ is animated, the animation reflects the motion of the entire object.
I haven’t been able to determine why this is the case, nor any way to adjust or mitigate it.
Is this a hard-wired constraint of the system, or is there a workaround?
The target environment is visionOS 1.2.
I want to see the Vision Pro camera view in my application window. I adapted some code from Apple, but I'm stuck on the CVPixelBuffer. How do I convert the pixel buffer into a video frame?
Button("Camera Feed") {
Task{
if #available(visionOS 2.0, *) {
let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions:[.left])
let cameraFrameProvider = CameraFrameProvider()
var arKitSession = ARKitSession()
var pixelBuffer: CVPixelBuffer?
await arKitSession.queryAuthorization(for: [.cameraAccess])
do {
try await arKitSession.run([cameraFrameProvider])
} catch {
return
}
guard let cameraFrameUpdates =
cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
return
}
for await cameraFrame in cameraFrameUpdates {
guard let mainCameraSample = cameraFrame.sample(for: .left) else {
continue
}
//====
print("=========================")
print(mainCameraSample.pixelBuffer)
print("=========================")
// self.pixelBuffer = mainCameraSample.pixelBuffer
}
} else {
// Fallback on earlier versions
}
}
}
I want to convert "mainCameraSample.pixelBuffer" into video. Could you please guide me?
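For displaying the frames, one common approach (a sketch under the assumption that showing each frame as a SwiftUI Image is what's wanted, rather than writing a video file) is to render the CVPixelBuffer through Core Image:
import CoreImage
import CoreVideo
import SwiftUI

// Sketch: convert a CVPixelBuffer (e.g. mainCameraSample.pixelBuffer) into
// a SwiftUI Image by rendering it through Core Image.
enum PixelBufferConverter {
    // Reuse one CIContext; creating it per frame is expensive.
    static let ciContext = CIContext()

    static func makeImage(from pixelBuffer: CVPixelBuffer) -> Image? {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        return Image(decorative: cgImage, scale: 1.0, orientation: .up)
    }
}
If the goal really is a video file rather than on-screen display, AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor is the usual route, but that is a larger topic.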
Hi,
I have a RealityKit app that I am building with Xcode 16. The app has a minimum deployment target of iOS 17. If I run it on an iOS 17 device the app crashes:
dyld[15716]: Symbol not found: _$s10RealityKit13ShapeResourceC14generateConvex4fromAcA04MeshD0C_tYaKFZ
Referenced from: …
Expected in: …/System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
My code looks something like this:
@available(iOS, introduced: 13.0, obsoleted: 18.0)
@MainActor @preconcurrency func generateNonAsyncConvexShapeResource(from meshResource: MeshResource) throws -> ShapeResource {
    ShapeResource.generateConvex(from: meshResource)
}

@available(iOS 18.0, *)
func generateConvexShapeAsync(from meshResource: MeshResource) async throws -> ShapeResource {
    // This will only be available for iOS 18 and above
    return try await ShapeResource.generateConvex(from: meshResource)
}

if let meshResource = try? modelEntity.model?.mesh.applying(transform: transform.matrix) {
    if #available(visionOS 1.0, iOS 18.0, *) {
        try? await generateConvexShapeAsync(from: meshResource) // await shapeResources.append(.generateConvex(from: meshResource))
    } else {
        try? generateNonAsyncConvexShapeResource(from: meshResource)
    }
}
So I do check for the OS version and only call the async variant on iOS 18.
Any hints on how to fix this?
Thanks!
In visionOS 2 beta, I have a character loaded from a Reality Composer Pro scene standing on the floor, but he isn't casting a shadow on the floor.
I added a GroundingShadowComponent in RealityView, and he does cast shadows on himself (e.g., his hands cast shadows on his shoes), but I don't see any shadow on the floor.
Do I need to enable something to have my character cast a shadow on the real-world floor?
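In case it helps, this is roughly how I would apply the component across the character (a sketch, assuming the component must be set on every model entity in the hierarchy rather than only on the root; whether shadows land on the real-world floor may still depend on the system):
import RealityKit

// Sketch: set GroundingShadowComponent(castsShadow: true) on every entity
// with a ModelComponent in the loaded character's hierarchy.
func enableGroundingShadows(for entity: Entity) {
    if entity.components.has(ModelComponent.self) {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    }
    for child in entity.children {
        enableGroundingShadows(for: child)
    }
}
This would be called once on the character's root entity right after loading the Reality Composer Pro scene.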
I started a visionOS app using Apple's new "App Environment" template, and when I looked at the UV mapping for the half SkyDome, the bottom edge had a UV 'Y' value of 0.318.
Naively, I had assumed the bottom edge of a half dome would have a UV 'Y' value of 0.5 (halfway up the texture map).
Is this the standard UV mapping for half a SkyDome?
It has caused some issues when I've applied some HDRIs.
I am working on a small side project for Apple Vision Pro. One thing I'm trying to figure out: can I open another app while the immersive space from my original app is open? As an example, I want to present a fully immersive view displaying a 360-degree photo, then allow the user to open Safari or any other app of their choice and use the immersive environment as a background. Is this possible? Everything I've read so far seems to say no, but I wasn't sure if someone has found a way to make this possible.
I have searched everywhere for examples to replicate this awesome feature, specifically the tab overview in Safari: how they achieve the tilt/angle of the windows/views towards the user and how they are placed. Is this a volumetric window? Some sort of spatial lazy grid?
Does anyone know how they achieved this?
Thanks
Hi everyone,
I'm currently developing an app for Vision Pro using SwiftUI, and I've encountered an issue when testing on the Vision Pro device. The app works perfectly fine on the Vision Pro simulator in Xcode, but when I run it on the actual device, it gets stuck on the loading screen. The logo appears and pulsates when it loads, as expected, but it never progresses beyond that point.
Issue Details:
The app doesn't crash, and I don't see any major errors in the console. However, in the debug logs, I encounter an exception:
Thread 1: "*** -[NSProxy doesNotRecognizeSelector:plane] called!"
I’ve searched through my project, but there’s no direct reference to a selector named plane. I suspect it may be related to a framework or system call failing on the device.
There’s also this warning:
NSBundle file:///System/Library/PrivateFrameworks/MetalTools.framework/ principal class is nil because all fallbacks have failed.
What I’ve Tried:
Verified that all assets and resources are properly bundled and loading (since simulators tend to be more forgiving with file paths).
Tested the app with minimal UI to isolate potential causes, but the issue persists.
Checked the app's Info.plist configuration to ensure it’s properly set up for Vision Pro.
No crashes, just a loading screen hang on the device, while the app works fine in the Vision Pro simulator.
Additional Info:
The app’s UI consists of a loading animation (pulsating logo) before transitioning to the main content.
Using Xcode 16.1 beta with the visionOS SDK.
The app is based on SwiftUI, with Vision Pro optimizations for immersive experience.
Has anyone experienced something similar when moving from the simulator to the Vision Pro hardware? Any help or guidance would be appreciated, especially with regards to the exception or potential resource loading issues specific to the device.
Thanks in advance!
Using OcclusionMaterial on macOS and iOS works fine in Non-AR mode when I set the background to just a simple color (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/color) but when I set a custom skybox (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/background-swift.struct/skybox(_:)) the OcclusionMaterial renders as fully black. I would expect it to properly occlude the content and show through the skybox behind it.
This happens with both ARView and RealityView, on the current iOS/macOS betas as well as on older systems, e.g. iOS 17 and macOS Sonoma.
Feedback ID: FB15081053
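For reference, a minimal iOS-flavored repro sketch of the setup described (the environment resource name is a placeholder):
import RealityKit
import UIKit

// Sketch: non-AR ARView with a skybox background and a sphere using
// OcclusionMaterial. With a .color background the occluder behaves as
// expected; with .skybox it renders fully black, as described above.
func makeOcclusionTestView() throws -> ARView {
    let arView = ARView(frame: .zero, cameraMode: .nonAR, automaticallyConfigureSession: false)

    // Placeholder environment resource name.
    let skybox = try EnvironmentResource.load(named: "MySkybox")
    arView.environment.background = .skybox(skybox)
    // arView.environment.background = .color(.gray)   // occlusion works as expected with this

    let occluder = ModelEntity(mesh: .generateSphere(radius: 0.2),
                               materials: [OcclusionMaterial()])
    let anchor = AnchorEntity(world: [0, 0, -1])
    anchor.addChild(occluder)
    arView.scene.addAnchor(anchor)
    return arView
}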