Hi, could anyone share some insight on how to get and track the 3D coordinates of real objects in the environment from a playground? I searched for resources and noticed that ARKit and RealityKit may be helpful, but I'm not sure how to use them for this.
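For illustration, here is a minimal sketch of one common approach on iOS: run an ARKit world-tracking session in an ARView and raycast from a screen tap to get a tracked world position. The class and configuration details are assumptions, not the only way to do it.
import ARKit
import RealityKit
import UIKit

final class ObjectCoordinateTracker: NSObject {
    let arView = ARView(frame: .zero)

    func start() {
        let config = ARWorldTrackingConfiguration()
        // Scene reconstruction needs a LiDAR device; it improves raycasts against real geometry.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        arView.session.run(config)
        arView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: arView)
        // Raycast from the tapped screen point into the real world.
        guard let result = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any).first else { return }
        let position = result.worldTransform.columns.3
        print("Tapped world position:", position.x, position.y, position.z)
        // An anchor placed at this transform keeps tracking that real-world point.
        arView.scene.addAnchor(AnchorEntity(world: result.worldTransform))
    }
}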
Hi,
Is there a way to create an AnchorEntity that is attached to the window / WindowGroup of a visionOS app, so that a box could be kept aligned with the window?
Thanks for your help!
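As far as I know there is no documented AnchorEntity target for a window, but entities added to a RealityView inside a volumetric WindowGroup are positioned in the window's coordinate space, so they stay aligned with it. A minimal sketch, with placeholder ids and sizes:
import SwiftUI
import RealityKit

@main
struct BoxWindowApp: App {
    var body: some Scene {
        WindowGroup(id: "boxVolume") {
            RealityView { content in
                // Entities added here are laid out relative to the volume,
                // so this box moves with the window and stays aligned with it.
                let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                      materials: [SimpleMaterial(color: .blue, isMetallic: false)])
                content.add(box)
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}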
Lately I've been getting a lot of crashes on iOS 18 devices (mostly iPhone 13 Pro, but also an iPhone 16 Pro Max). Has anyone encountered similar issues, and is there more information about offlineFloorPlanGeneration?
com.apple.RoomScanCore.offlineFloorPlanGeneration
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000f5cab18e03c0
RoomScanCore RSFrameFromDictionary + 110992
How do we author a .reality file like the animated ones under Examples at https://developer.apple.com/augmented-reality/quick-look/ ?
For example, "The Hab" : https://developer.apple.com/augmented-reality/quick-look/models/hab/hab_en.reality
Tapping on various buttons in this experience triggers various complex animations. I don't see any way to accomplish this in Reality Composer.
And I don't see any way to export/compile to a "reality file" from within Xcode.
How can I use multiple animations within a single GLTF file?
How can I set up multiple "tap targets" on a single object, where each one triggers a different action?
How do we author something similar? What tools do we use?
Thanks
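For the multiple-tap-targets part, one RealityKit-side sketch (not necessarily how the Quick Look samples were authored) is to give each tappable child entity its own collision and input-target components and branch on which entity a targeted tap gesture hit; the entity names here are hypothetical:
import SwiftUI
import RealityKit

struct TapTargetsView: View {
    var body: some View {
        RealityView { content in
            let parent = Entity()
            // Two hypothetical tap targets attached to the same parent object.
            for (name, x) in [("buttonA", Float(-0.15)), ("buttonB", Float(0.15))] {
                let child = ModelEntity(mesh: .generateSphere(radius: 0.05),
                                        materials: [SimpleMaterial(color: .orange, isMetallic: false)])
                child.name = name
                child.position = [x, 0, 0]
                child.components.set(InputTargetComponent())
                child.generateCollisionShapes(recursive: false)
                parent.addChild(child)
            }
            content.add(parent)
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Each tapped child triggers a different action (e.g. a different animation).
                    switch value.entity.name {
                    case "buttonA": print("trigger animation A")
                    case "buttonB": print("trigger animation B")
                    default: break
                    }
                }
        )
    }
}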
Hello! I’ve got USDZ exports from a Maya pipeline working with animation, and they load up nicely on the Vision Pro.
I’ve been checking out the animated sample files on the Augmented Reality / Quick Look sample page, specifically the first three at the top of the page.
I would like to know how they are created. I’m a 3D modeler and animator, not a programmer, so I'm dipping my toe into RCP and Xcode/SwiftUI, but I could use some informative tutorials on the proper workflow. For example, in the Lunar Rover sample, there are lines emanating from the model, then text windows appear. Would I need to create all these extras inside Reality Composer Pro? I’d like to start creating immersive, narrative experiences (both in a volume and fully immersive), but for prototyping I want to learn the proper way to add this type of functionality. I think I remember seeing something to do with “schemas” involved. I’m assuming there might be some setup in RCP, plus some coding, for when items are selected so that an associated animation is triggered. Can anyone point me towards the relevant documentation to help me get started? Remember, I don’t code. ;)
Here are my recent Vision Pro experimentations.
https://youtube.com/playlist?list=PLCH753rZ9r6eqXxpIemaSlcyYxjFgR210&si=P_7AY2aL97Upm61i
I’m also proficient with Unreal Engine, but getting content packaged and over to the AVP is still not ready for prime time, so I’m exploring the native approach.
Thanks for helping point me in the right direction!
I just downloaded and opened the sample code for Creating 3D models as movable windows. (Link: https://developer.apple.com/documentation/visionos/creating-a-volumetric-window-in-visionos ).
I opened the main view in the simulator (canvas), placed the volumetric window somewhere, and then moved the camera a bit.
Expected:
I would expect volumetric windows to stay in place when I place them somewhere and then move around or look in a different direction.
Actually:
They don't stay in place. They slightly move with the camera.
Question:
Is this behavior expected?
Is this just a thing with the simulator and will not happen with real hardware?
After creating a ".plist" file in the app's Documents folder, then uninstalling and reinstalling the app, FileManager.default.fileExists(atPath: folderURL.path()) still returns true, even though I verified that the ".plist" file is no longer present in the app's Documents folder. Is this a bug in visionOS?
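For reference, a minimal sketch of the kind of check in question (the file name is a placeholder), which would normally be expected to return false after the app and its container are deleted:
import Foundation

let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
// Hypothetical file name standing in for the real .plist.
let plistURL = documentsURL.appendingPathComponent("Settings.plist")

// After an uninstall/reinstall the container is recreated, so this should be false.
let exists = FileManager.default.fileExists(atPath: plistURL.path())
print("Plist exists after reinstall:", exists)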
I have a class with an Entity, to which I added a Spatial Audio component. Furthermore, I have a function that uses the playAudio() method to start the spatial audio. During the first call of the function, everything is fine. If I call it again, the audio volume drops abruptly after half a second. It is very quiet.
My code looks approximately like this:
class VoiceOutputPlayer: NSObject, ObservableObject, AVAudioPlayerDelegate {
    private var speechEntity = Entity()

    func play() {
        Task {
            // urlWave points to the audio file to play (defined elsewhere in the class).
            let audioResource = try await AudioFileResource(contentsOf: urlWave)
            self.speechEntity.playAudio(audioResource)
        }
    }

    func initSpatialAudio() -> Entity {
        // Position and orient the audio source relative to its parent.
        speechEntity.transform.translation.y = -0.37
        speechEntity.transform.translation.z = 0.09
        speechEntity.spatialAudio = SpatialAudioComponent(gain: Double.zero)
        speechEntity.spatialAudio?.reverbLevel = -2
        speechEntity.spatialAudio?.directivity = .beam(focus: 0.9)
        speechEntity.orientation = .init(angle: .pi, axis: [0, 1, 0])
        speechEntity.spatialAudio?.distanceAttenuation = .rolloff(factor: 1)
        return speechEntity
    }
}
I have visionOS 2.2 on the Apple Vision Pro and use Xcode 16.1.
I created an app where the user:
first scans a room using RoomCaptureView (RoomPlan)
then taps on physical elements (objects, walls...) using an ARView to record some 3D positions
I can handle taps in an ARView using a UITapGestureRecognizer and the ARView raycast(from:, allowing:, alignment:) method. This works fine, so I thought I could do the same using the ARView used by RoomCaptureView, so the user can scan a room and record some 3D positions at the same time. Sadly, this approach does not work, as the raycast method always returns nil.
What I actually need is mapping a tap on screen to a real-world position during RoomCaptureSession.
Does anyone know how to do this?
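One possibility is a sketch along these lines, under the assumption that RoomCaptureSession exposes its underlying ARSession via the arSession property (iOS 17 and later), raycasting through the session itself instead of an ARView:
import ARKit
import RoomPlan

final class CaptureTapHandler {
    let roomCaptureSession = RoomCaptureSession()

    /// `normalizedPoint` is in normalized image coordinates (0...1, origin top-left);
    /// converting a screen tap into this space needs the current frame's displayTransform.
    func worldTransform(at normalizedPoint: CGPoint) -> simd_float4x4? {
        let arSession = roomCaptureSession.arSession
        guard let frame = arSession.currentFrame else { return nil }
        let query = frame.raycastQuery(from: normalizedPoint,
                                       allowing: .estimatedPlane,
                                       alignment: .any)
        return arSession.raycast(query).first?.worldTransform
    }
}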
Copy of Feedback:
FB15969432
Improve window management with immersive spaces.
It is hard to manage windows from code when entering an immersive space.
Look for instance at the sample:
https://developer.apple.com/documentation/visionos/displaying-a-3d-environment-through-a-portal
The window displayed before entering the immersive space stays there once the immersive space is entered: this window is too big but can't be resized by the program.
One could say this big window could be closed and a smaller window opened by the program with the "Exit" button, but then this small window should be closed and the main window reopened when leaving the immersive space.
In the immersive space, closing the "Exit" window with the X does not let you leave the immersive space.
If the Digital Crown is then used, we go back to the Vision Pro main menu. If the app is chosen again, we can see that it wasn't closed: the "Exit" window is now displayed, but we are not in the immersive space!
Don't say "this is just a sample app", because all developers face those issues.
Please try to find the right solutions with your team to enhance this sample and share the right way to solve those issues. You could find that the specifications need to be enhanced.
You can also see that there is no way to exit an app programmatically, even though this could be useful for some apps (when ending a SharePlay game, for instance).
Thank you very much for your time and consideration.
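For context, the window-management hooks currently available from code are the SwiftUI environment actions; the juggling the sample forces you into looks roughly like this sketch, with placeholder ids:
import SwiftUI

struct ControlPanel: View {
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        Button("Enter") {
            Task {
                // Open the immersive space, then swap the big main window for a small exit window.
                await openImmersiveSpace(id: "Portal")
                openWindow(id: "ExitWindow")
                dismissWindow(id: "MainWindow")
            }
        }
        Button("Exit") {
            Task {
                await dismissImmersiveSpace()
                openWindow(id: "MainWindow")
                dismissWindow(id: "ExitWindow")
            }
        }
    }
}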
Is it possible to access the raw LiDAR measurements before the sceneDepth calculation combines the LiDAR measurements with visual data? In low-light environments the LiDAR scanner should still work and provide depth info, but I cannot figure out how to access those pure LiDAR depth measurements. I am currently using:
guard let frame = arView.session.currentFrame,
      let depthData = frame.sceneDepth?.depthMap else {
    print("Depth data is unavailable.")
    return
}
but this is the depth data after sensor fusion occurs, and it fails in low-light conditions.
Hi, everyone!
I'm trying to bind a timeline (animation + audio) and behaviors to an entity in Reality Composer Pro.
In Xcode, I need to clone this entity and use the behavior, but I found that the behaviors are not cloned (the notification is sent but never received by the Reality Composer Pro behavior, so the timeline does not execute).
How can I solve this problem? Thanks!
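For reference, the usual way to fire a Reality Composer Pro notification trigger from code looks roughly like the sketch below, based on Apple's RCP samples; treat the userInfo keys as something to verify, and note that the scene passed in must be the one containing the (cloned) entity:
import Foundation
import RealityKit

func triggerTimeline(on entity: Entity, identifier: String) {
    // The entity must already be part of a scene for the behavior to receive the notification.
    guard let scene = entity.scene else { return }
    NotificationCenter.default.post(
        name: NSNotification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}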
FYI.
The source code of the FindSurface demo app for Apple Vision Pro (visionOS) is available now.
The Swift package of FindSurface™ library is required to build the source code into the demo app.
https://github.com/CurvSurf/FindSurface-visionOS
After starting the app, the floating panels will appear on your right side, and you will see wireframe meshes that approximately describe your environment. Performing a spatial tap (pinching your thumb and index finger) while gazing at a location on the meshes will invoke FindSurface, and an indicator (a blue disk) will appear on the surface you gazed at.
Voice commands:
“Tap” – Spatial tap (gazing & pinching). Invoke FindSurface.
“Tap plane” – Plane selection.
“Tap sphere” or “Tap ball” – Sphere selection.
“Tap cylinder” – Cylinder selection.
“Tap cone” – Cone selection.
“Tap torus” or “Tap donut” – Torus selection.
“Tap accuracy” or “Tap measurement accuracy” – Accuracy selection.
“Tap mean distance”, “Tap average distance”, or “Tap distance” – Avg. Distance selection.
“Tap touch radius” or “Tap seed radius” – Touch Radius selection.
“Tap Inlier” – “Show inlier points” toggle.
“Tap outline” – “Show geometry outline” toggle.
“Tap clear” – “Clear Scene” click.
I found some snapshot APIs in the developer documentation, like the ones below:
RealityKit / Views and attachments / ARView / snapshot(saveToHDR:completion:)
SceneKit / SCNView / snapshot()
Is there a similar API in visionOS? And if not, how can I implement a snapshot for RealityView and USDZ content?
I am currently developing a game that runs on visionOS using RealityKit and Swift.
I have a question regarding particle emitters.
It seems that there is a sorting order (render queue) between particle emitters themselves, but there doesn’t appear to be a render queue between particle emitters and regular model entities.
If such a feature exists, could you please provide a simple example?
Thank you!
Hi, in visionOS 2, I wonder if there is a method to add/remove the referenceObjects of an ObjectTrackingProvider after the ARKitSession has started, without stopping it.
Right now I must stop the session and run it again, which interrupts the current tracking.
Thanks.
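For reference, the restart workaround looks roughly like this sketch; I'm not aware of a documented way to mutate the reference-object set of a running provider:
import ARKit

// Assumes `session` is an existing ARKitSession and `referenceObjects`
// already contains the updated set of ReferenceObject instances.
func restartObjectTracking(session: ARKitSession,
                           referenceObjects: [ReferenceObject]) async throws {
    // Stopping the session also interrupts tracking of currently detected objects.
    session.stop()
    let provider = ObjectTrackingProvider(referenceObjects: referenceObjects)
    try await session.run([provider])
}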
I am working on adding synchronized physical properties to EntityEquipment in TableTopKit, allowing seamless coordination during GroupActivities sessions between players.
Current Approach and Limitations
I have tried setting EntityEquipment's state to DieState and treating it as a TossableRepresentation object. This approach achieves basic physics synchronization across players. However, it has several limitations:
No Collision Detection Between Dice: Multiple dice do not collide with each other.
Shape Limitations: Custom shapes, like parallelepipeds, cannot be configured.
Below is my existing code for the base EntityEquipment without physical properties:
struct CubeWithPhysics: EntityEquipment {
    let id: ID
    let entity: Entity
    var initialState: BaseEquipmentState

    init(id: ID, entity: Entity) {
        self.id = id
        self.entity = entity
        initialState = .init(parentID: .tableID, pose: .init(position: .zero, rotation: .zero), entity: self.entity)
    }
}
I’d appreciate any guidance on the recommended approach to adding synchronized physical properties to EntityEquipment.
Hello, I am creating a visionOS application where a simple web browser is implemented in an immersive view using WKWebView. Presentations cause a crash with "Presentations are not permitted within volumetric window scenes."
I want to open a view in my app that contains a Model3D view, and I want that object to rotate continuously around the Y axis while it is visible. Is it possible to animate the rotation of a Model3D view?
I've tried this code, but the object just sits there and doesn't rotate.
import RealityKit
import RealityKitContent
import SwiftUI

struct QuantumComputerArea: View {
    @State var degreesRotating = 0.0

    var body: some View {
        VStack {
            Model3D(named: "quantumComputer") { phase in
                switch phase {
                case .empty:
                    ProgressView()
                case let .failure(error):
                    Text(error.localizedDescription)
                case let .success(model):
                    model
                        .resizable()
                        .scaledToFit()
                        .offset(x: -75, y: 0)
                        .rotation3DEffect(.degrees(degreesRotating), axis: (x: 0, y: 1, z: 0))
                @unknown default:
                    fatalError()
                } // phase
            } // Model3D
            .onAppear {
                withAnimation(Animation.linear(duration: 10).repeatForever(autoreverses: false)) {
                    degreesRotating = 360
                }
            }
        } // VStack
    } // body
} // View
I'm probably missing something simple but if anyone has any suggestions (including use a RealityView) I'd be grateful for the advice.
Hello, I've recently received the entitlements to access the main camera stream for a project on the Apple Vision Pro.
What happens :
When executing code from this WWDC tutorial, I'm getting this error when trying to use a CameraFrameProvider:
ar_camera_frame_provider_t <0x300d58870>: Failed to start camera stream with error: <ar_error_t: 0x303fcc4c0 Error Domain=com.apple.arkit Code=100 "App not authorized." UserInfo={NSLocalizedFailureReason=Using camera frame provider requires an entitlement., NSLocalizedRecoverySuggestion=, NSLocalizedDescription=App not authorized.}
What I've tried :
I followed the instructions given by mail, by:
adding the .license file at the root of my project,
adding the .entitlements file by adding capabilities in the project (Main Camera Access & Passthrough in screen capture are there).
I've added NSCameraDescription, NSEnterpriseMCAMUsageDescription and NSWorldSensingUsageDescription (they all have a value assigned).
I've also followed the advice from related forum posts.
When checking the account settings, I do see the capabilities in the "additional capabilities" section.
On first launch, I'm also getting prompted to accept the NSEnterpriseMCAMUsageDescription, so I assume the Info.plist file is valid?
What did I miss to get the entitlements working?
Here's the code:
import ARKit
import SwiftUI
import Vision
import RealityKit

class MainCameraAccess {
    var arKitSession = ARKitSession()
    var cameraFrameProvider = CameraFrameProvider()
    var pixelBuffer: CVPixelBuffer?

    func startCameraSession() async {
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])

        // Request authorization
        await arKitSession.requestAuthorization(for: [.cameraAccess])

        // Start the session
        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        // Get camera frame updates
        guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
            return
        }

        // Process frames
        for await cameraFrame in cameraFrameUpdates {
            guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                continue
            }
            self.pixelBuffer = mainCameraSample.pixelBuffer
        }
    }

    func saveLatestImage() {
        guard let pixelBuffer = self.pixelBuffer else {
            print("No image available to save.")
            return
        }

        // Convert CVPixelBuffer to UIImage
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            print("Failed to create CGImage.")
            return
        }
        let uiImage = UIImage(cgImage: cgImage)

        // Save UIImage to Photos Album
        UIImageWriteToSavedPhotosAlbum(uiImage, nil, nil, nil)
        print("Image saved to photo library.")
    }
}
Thanks in advance for the help,
Jeremy