Background: This is a simple, empty visionOS application. After the app launches, the user can enter an ImmersiveSpace by tapping a button. Another button loads a 33.9 MB USDZ model, and a final button exits the ImmersiveSpace.
Below is the memory usage scenario for this application:
After the app initializes, the memory usage is 56.8 MB.
After entering the empty ImmersiveSpace, the memory usage increases to 64.1 MB.
After loading a 33.9 MB USDZ model, the memory usage reaches 92.2 MB.
After exiting the ImmersiveSpace, the memory usage slightly decreases to 90.4 MB.
Question: While using a memory analysis tool, I noticed that the model's resources are not released after exiting the ImmersiveSpace. How should I address this issue?
import SwiftUI
import RealityKit

struct EmptDemoApp: App {
    @State private var appModel = AppModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(appModel)
        }

        ImmersiveSpace(id: appModel.immersiveSpaceID) {
            ImmersiveView()
                .environment(appModel)
                .onAppear {
                    appModel.immersiveSpaceState = .open
                }
                .onDisappear {
                    appModel.immersiveSpaceState = .closed
                }
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}
struct ContentView: View {
    @Environment(AppModel.self) private var appVM

    var body: some View {
        HStack {
            VStack {
                ToggleImmersiveSpaceButton()
            }
            if appVM.immersiveSpaceState == .open {
                Button {
                    Task {
                        if let url = Bundle.main.url(forResource: "Robot", withExtension: "usdz") {
                            if let model = try? await ModelEntity(contentsOf: url, withName: "Robot") {
                                model.setPosition(.init(x: .random(in: 0...1.0), y: .random(in: 1.0...1.6), z: -1), relativeTo: nil)
                                appVM.root?.add(model)
                                print("Robot: \(Unmanaged.passUnretained(model).toOpaque())")
                            }
                        }
                    }
                } label: {
                    Text("Add A Robot")
                }
            }
        }
        .padding()
    }
}
struct ImmersiveView: View {
    @Environment(AppModel.self) private var appVM

    var body: some View {
        RealityView { content in
            appVM.root = content
        }
    }
}
struct ToggleImmersiveSpaceButton: View {
    @Environment(AppModel.self) private var appModel
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button {
            Task { @MainActor in
                switch appModel.immersiveSpaceState {
                case .open:
                    appModel.immersiveSpaceState = .inTransition
                    appModel.root = nil
                    await dismissImmersiveSpace()
                case .closed:
                    appModel.immersiveSpaceState = .inTransition
                    switch await openImmersiveSpace(id: appModel.immersiveSpaceID) {
                    case .opened:
                        break
                    case .userCancelled, .error:
                        fallthrough
                    @unknown default:
                        appModel.immersiveSpaceState = .closed
                    }
                case .inTransition:
                    break
                }
            }
        } label: {
            Text(appModel.immersiveSpaceState == .open ? "Hide Immersive Space" : "Show Immersive Space")
        }
        .disabled(appModel.immersiveSpaceState == .inTransition)
        .animation(.none, value: 0)
        .fontWeight(.semibold)
    }
}
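One thing I plan to try before dismissing the space is to explicitly empty the RealityViewContent and drop my own strong references, so nothing in my code keeps the USDZ's meshes and textures alive. A rough sketch (unloadImmersiveContent is a hypothetical helper; AppModel.root is the RealityViewContent stored by ImmersiveView):
@MainActor
func unloadImmersiveContent(_ appModel: AppModel) {
    // Remove every entity that was added to the RealityViewContent,
    // so no entity hierarchy keeps the loaded model's resources referenced.
    appModel.root?.entities.removeAll()
    // Drop the strong reference to the content itself before dismissal.
    appModel.root = nil
}
Even with this, I understand RealityKit may keep some resources cached internally, so the reported footprint might not drop all the way back.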
The RoomPlan API makes it possible to serialize and de-serialize CapturedRoom objects. This opens up the possibility of modifying a CapturedRoom (e.g. deleting surfaces/objects) in its de-serialized state and serializing it as a new CapturedRoom. All modified attributes are loaded accordingly, so far so good.
My problem starts with the StructureBuilder and its merge function capturedStructure().
This function ignores any modifications to the attributes of a CapturedRoom. The only data that is considered is encoded in the CoreModel attribute (which is not mentioned in the official documentation).
If anyone has more information or a working solution for modifying CapturedRooms, please let me know.
Additionally, if there is any documentation on the CoreModel attribute, please post a link here.
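For reference, the round trip I'm describing looks roughly like this (a simplified sketch relying on CapturedRoom's Codable conformance; the JSON edit in the middle is where my modifications happen):
import RoomPlan
import Foundation

// Serialize a CapturedRoom, edit the JSON, and decode it back as a new CapturedRoom.
func roundTrip(_ room: CapturedRoom) throws -> CapturedRoom {
    let data = try JSONEncoder().encode(room)        // CapturedRoom is Codable
    // ... modify `data` here (e.g. remove surfaces/objects from the JSON) ...
    return try JSONDecoder().decode(CapturedRoom.self, from: data)
}

// Merge the (modified) rooms; this is the step that seems to ignore my changes.
func merge(_ rooms: [CapturedRoom]) async throws -> CapturedStructure {
    let builder = StructureBuilder(options: [])
    return try await builder.capturedStructure(from: rooms)
}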
I have a quite big USDZ file containing my 3D model that I load in a RealityView in my Swift project, and it takes some time before the model appears on screen. I was wondering if there is a way to know how much time is left before the RealityKit/RealityView model finishes loading, or a percentage I can put on a progress bar so the user knows how long until the full model is visible, and if so, how to drive a progress bar with that while loading.
Something like that
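I haven't found a RealityKit API that reports fractional loading progress for a USDZ, so the fallback I'm considering is an indeterminate indicator that stays up until the async load finishes. A rough sketch (the asset name is a placeholder):
import SwiftUI
import RealityKit

struct ModelLoadingView: View {
    @State private var loadedModel: Entity? = nil

    var body: some View {
        RealityView { content in
            // The async load; the overlay below stays visible until it completes.
            if let model = try? await Entity(named: "MyBigModel") {   // placeholder asset name
                content.add(model)
                loadedModel = model
            }
        }
        .overlay {
            if loadedModel == nil {
                ProgressView("Loading model…")   // indeterminate, since no percentage is exposed
            }
        }
    }
}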
I have read the Converting side-by-side 3D video to multi-view HEVC and spatial video article, and now I want to convert back to side-by-side 3D video. On an iPhone 15 Pro Max, the conversion time is roughly 1:1 with the original video length.
I do almost the same as the article describes; the only difference is that I get the frames from the spatial video and merge them into side-by-side. My current frame-merging code is below. Are there any suggestions for speeding up the process? Or, within the approach from the official article, is there anything that can be done to speed up the conversion?
// Merge frame
let leftCI = resizeCVPixelBufferFill(bufferLeft, targetSize: targetSize)
let rightCI = resizeCVPixelBufferFill(bufferRight, targetSize: targetSize)
let lbuffer = convertCIImageToCVPixelBuffer(leftCI!)!
let rbuffer = convertCIImageToCVPixelBuffer(rightCI!)!
pixelBuffer = mergeFrames(lbuffer, rbuffer)
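One thing I'm experimenting with to speed this up is to stop allocating a new CIContext and new CVPixelBuffers inside convertCIImageToCVPixelBuffer for every frame, and instead reuse a single context plus a CVPixelBufferPool. A sketch of that idea (not measured yet):
import CoreImage
import CoreVideo

let ciContext = CIContext()   // created once and reused for every frame

func makePixelBufferPool(width: Int, height: Int) -> CVPixelBufferPool? {
    let attributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey as String: width,
        kCVPixelBufferHeightKey as String: height,
        kCVPixelBufferIOSurfacePropertiesKey as String: [:]
    ]
    var pool: CVPixelBufferPool?
    CVPixelBufferPoolCreate(nil, nil, attributes as CFDictionary, &pool)
    return pool
}

func render(_ image: CIImage, into pool: CVPixelBufferPool) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(nil, pool, &buffer)
    guard let buffer else { return nil }
    ciContext.render(image, to: buffer)   // render straight into the pooled buffer
    return buffer
}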
Hello!
I'm trying to play an animation with a toggle button. When the button is toggled the animation either plays forward from the first frame (.speed = 1) OR plays backward from the last frame (.speed = -1), so if the button is toggled when the animation is only halfway through, it 'jumps' to the first or last frame. The animation is 120 frames, and I want the position in playback to be preserved when the button is toggled - so the animation reverses or continues forward from whatever frame the animation was currently on.
Any tips on implementation? Thanks!
import SwiftUI
import RealityKit
import RealityKitContent

struct ModelView: View {
    var isPlaying: Bool

    @State private var scene: Entity? = nil
    @State private var unboxAnimationResource: AnimationResource? = nil

    var body: some View {
        RealityView { content in
            // Specify the name of the Entity you want
            scene = try? await Entity(named: "TestAsset", in: realityKitContentBundle)
            scene!.generateCollisionShapes(recursive: true)
            scene!.components.set(InputTargetComponent())
            content.add(scene!)
        }
        .installGestures()
        .onChange(of: isPlaying) {
            if isPlaying {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = 1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            } else {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = -1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            }
        }
    }
}
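One idea I'm considering instead of regenerating the animation each time: keep the AnimationPlaybackController that playAnimation returns and just flip its speed, assuming the speed can be changed mid-playback, so the current frame is preserved. A fragment sketch of that:
@State private var playback: AnimationPlaybackController? = nil

// When starting the animation, keep the controller that playAnimation returns:
playback = scene!.playAnimation(playerAnimation)

// Then in .onChange(of: isPlaying), instead of generating a new AnimationResource,
// flip the speed on the existing controller so playback continues from the current frame:
playback?.speed = isPlaying ? 1 : -1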
Thanks!
Hello!
We're having an issue in our app, which implements multi-room scanning via RoomPlan, where the ARSession world origin is shifted to wherever the RoomCaptureSession is run again (e.g. in the next room).
To clarify a few points:
We are using RoomCaptureView, starting a new room using roomCaptureView.captureSession.run(configuration: captureSessionConfig) and stopping the room scan via roomCaptureView.captureSession.stop(pauseARSession: false).
We are re-using the same ARSession, which is passed into the RoomCaptureView like so:
arSession = ARSession()
roomCaptureView = RoomCaptureView(frame: .zero, arSession: arSession)
Any clue why the AR world origin is reset? I need it to be consistent for storing frame camera positions.
Thanks!
Hi,
I've tried to implement collision detection between my left index finger (represented by a sphere) and a simple 3D rectangular box. The sphere on my left index finger passes through the object, but no collision seems to take place. What am I missing?
Thank you very much for your consideration!
Below is my code:
App.swift
import SwiftUI
@main
private struct TrackingApp: App {
    public init() {
        ...
    }

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }
        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView()
        }
    }
}
ImmersiveView.swift
import SwiftUI
import RealityKit
struct ImmersiveView: View {
    @State private var subscriptions: [EventSubscription] = []

    public var body: some View {
        RealityView { content in
            /* LEFT HAND */
            let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
            let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01), materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
            leftHandIndexFingerEntity.generateCollisionShapes(recursive: true)
            leftHandIndexFingerEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateSphere(radius: 0.01)])
            leftHandIndexFingerEntity.name = "LeftHandIndexFinger"
            content.add(leftHandIndexFingerEntity)

            /* 3D RECTANGLE */
            let width: Float = 0.7
            let height: Float = 0.35
            let depth: Float = 0.005
            let rectangleEntity = ModelEntity(mesh: .generateBox(size: [width, height, depth]), materials: [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)])
            rectangleEntity.transform.rotation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
            let rectangleAnchor = AnchorEntity(world: [0.1, 0.85, -0.5])
            rectangleEntity.generateCollisionShapes(recursive: true)
            rectangleEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateBox(size: [width, height, depth])])
            rectangleEntity.name = "Rectangle"
            rectangleAnchor.addChild(rectangleEntity)
            content.add(rectangleAnchor)

            /* Collision Handling */
            let subscription = content.subscribe(to: CollisionEvents.Began.self, on: rectangleEntity) { collisionEvent in
                print("Collision detected between \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
            }
            subscriptions.append(subscription)
        }
    }
}
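To rule out the event filter itself, one thing I'm also trying is a scene-wide subscription (without the on: entity) inside the same RealityView closure, just to see whether any collision events are generated at all:
// Subscribe to every CollisionEvents.Began in the scene, without filtering on an entity.
let allCollisions = content.subscribe(to: CollisionEvents.Began.self) { event in
    print("Began: \(event.entityA.name) <-> \(event.entityB.name)")
}
subscriptions.append(allCollisions)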
I have a scene setup that uses images placed on planes to mimic an RPG-style character interaction. There's a large scene background image and a smaller character image in the foreground. Both are added as content to a RealityView. There's one attachment, a dialogue window for interacting with the character, and it is attached to the character image. When the scene changes, I need the images and the dialogue window to refresh. My current approach has been to remove everything from the scene and add the new content in the update closure.
@EnvironmentObject var narrativeModel: NarrativeModel
@EnvironmentObject var dialogueModel: DialogueViewModel

@State private var sceneChange = false

private let dialogueViewID = "dialogue"

var body: some View {
    RealityView { content, attachments in
        // at start, generate background image only and no characters
        if narrativeModel.currentSceneIndex == -1 {
            content.add(generateBackground(image: narrativeModel.backgroundImage!))
        }
    } update: { content, attachments in
        print("update called")
        if narrativeModel.currentSceneIndex != -1 {
            print("sceneChange: \(sceneChange)")
            if sceneChange {
                // remove old entities
                if narrativeModel.currentSceneIndex != 0 {
                    content.remove(attachments.entity(for: dialogueViewID)!)
                }
                content.entities.removeAll()
                // generate the background image for the scene
                content.add(generateBackground(image: narrativeModel.scenes[narrativeModel.currentSceneIndex].backgroundImage))
                // generate the characters for the scene
                let character = generateCharacter(image: narrativeModel.scenes[narrativeModel.currentSceneIndex].characterImage)
                content.add(character)
                print(content)
                if let character_attachment = attachments.entity(for: "dialogue") {
                    print("attachment clause executes")
                    character_attachment.position = [0.45, 0, 0]
                    character.addChild(character_attachment)
                }
            }
        }
    } attachments: {
        Attachment(id: dialogueViewID) {
            DialogueView()
                .environmentObject(dialogueModel)
                .frame(width: 400, height: 600)
                .glassBackgroundEffect()
        }
    }
    // load scene images
    .onChange(of: narrativeModel.currentSceneIndex) {
        print("SceneView onChange called")
        DispatchQueue.main.async {
            self.sceneChange = true
        }
        print("SceneView onChange toggle - sceneChange = \(sceneChange)")
    }
}
If I don't use the dialogue window, this all works just fine. If I do, then when I click the next button (in another view), which increments the current scene index, I enter some kind of loop where the sceneChange value gets toggled to true but never gets toggled back to false (even though it is changed in the update closure). The reason I have the sceneChange value is that I need to update the content and attachments whenever the scene index changes, and I need a state variable to trigger the update closure to do this. My questions are:
Why might I be entering this loop? Why would it only happen if I send a message in the dialogue view attachment, which is a whole separate view?
Is there a better way to be doing this?
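One alternative I'm considering, to avoid the extra @State flag entirely, is to have the update closure compare the current scene index against the last index it actually rendered, stored in a plain reference type so the closure can write to it without touching SwiftUI state. A rough sketch (RenderedSceneTracker is a hypothetical helper):
final class RenderedSceneTracker {
    var lastRenderedIndex: Int = -1
}

@State private var tracker = RenderedSceneTracker()

// In the update closure, rebuild only when the index actually changed:
// if narrativeModel.currentSceneIndex != tracker.lastRenderedIndex {
//     tracker.lastRenderedIndex = narrativeModel.currentSceneIndex
//     // remove old entities and rebuild, as above
// }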
Hello everyone,
I'm working on developing an app that allows users to share and enjoy experiences together while they are in the same physical location. Despite trying several approaches, I haven't been able to achieve the desired functionality. If anyone has insights on how to make this possible or is interested in joining the project, I would greatly appreciate your help!
I seem to be running into an issue in an app I am working on where I am unable to update the IBL for an entity more than once in a RealityKit scene. The app is being developed for visionOS.
I have a scene with a model the user interacts with and 360 panoramas as a skybox. These skyboxes can change based on user interaction. I have created an IBL for each of the skyboxes and was intending to swap out the ImageBasedLightComponent and ImageBasedLightReceiverComponent components when updating the skybox in the RealityView's update closure.
The first update works as expected but updating the components after that has no effect. Not sure if this is intended or if I'm just holding it wrong. Would really appreciate any guidance. Thanks
Simplified example
// Task spun up from the update closure in RealityView
Task {
    if let information = currentSkybox.iblInformation, let resource = try? await EnvironmentResource(named: information.name) {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
        if let iblEntity = content.entities.first(where: { $0.name == "ibl" }) {
            content.remove(iblEntity)
        }

        let newIBLEntity = Entity()
        var iblComponent = ImageBasedLightComponent(source: .single(resource))
        iblComponent.inheritsRotation = true
        iblComponent.intensityExponent = information.intensity
        newIBLEntity.transform.rotation = .init(angle: currentPanorama.rotation, axis: [0, 1, 0])
        newIBLEntity.components.set(iblComponent)
        newIBLEntity.name = "ibl"
        content.add(newIBLEntity)

        parentEntity.components.set([
            ImageBasedLightReceiverComponent(imageBasedLight: newIBLEntity),
            EnvironmentLightingConfigurationComponent(environmentLightingWeight: 0),
        ])
    } else {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
    }
}
Hello.
When displaying a simple app like this:
struct ContentView: View {
    var body: some View {
        EmptyView()
    }
}
And when I run the Leaks tool from the developer tools in Xcode, I see a memory leak which I don't see when running the same application on iOS.
You can simply run the app and it will show a memory leak. And this is what I see in the Leaks application.
Any ideas on what is going on?
Thanks!
VStack(spacing: 8) {
}
.padding(20)
.frame(width: 320)
.glassBackgroundEffect()
.cornerRadius(10)
Hi,
I'm experimenting with how my visionOS app interacts with the Mac Virtual Display while the immersive space is active. Specifically, I'm trying to find out if my app can detect key presses or trackpad interactions (like clicks) when the Mac Virtual Display is in use for work, and my app is running in the background with an active immersive space.
So far, I've tested a head-tracking system in my app that works when the app is open with an active immersive space, where I just moved the Mac Virtual Display in front of the visionOS app window.
Could my visionOS app listen to keyboard and trackpad events that happen in the Mac Virtual Display environment?
Hi,
I was wondering if there is any possibility, similar to connecting the AVP to e.g. a MacBook, to somehow have the Mac screen/content displayed within the app's window after opening the immersive space.
Thank you very much in advance for your help!
I have followed every step of all the instructions, but nothing happens.
I did a factory reset of both my MacBook Pro and Vision Pro, and both use the same Apple ID.
The Vision Pro still doesn't appear.
I am using Model3D to display an RCP scene/model in my UI.
How can I get to the entities so I can set material properties to adjust the appearance?
I looked at interfaces for Model3D and ResolvedModel3D and could not find a way to get access to the RCP scene or RealityKit entity.
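The workaround I'm experimenting with is to load the RCP scene with RealityView instead of Model3D, since the make closure hands me the loaded Entity directly and I can walk it to adjust materials. A rough sketch (the scene and entity names are placeholders):
import SwiftUI
import RealityKit
import RealityKitContent

struct AdjustableModelView: View {
    var body: some View {
        RealityView { content in
            // Load the same RCP scene that Model3D was showing ("MyScene" is a placeholder name).
            if let scene = try? await Entity(named: "MyScene", in: realityKitContentBundle) {
                // Walk to the model entity and adjust its material ("MyModel" is a placeholder name).
                if let model = scene.findEntity(named: "MyModel") as? ModelEntity {
                    var material = SimpleMaterial(color: .blue, isMetallic: false)
                    material.roughness = 0.8
                    model.model?.materials = [material]
                }
                content.add(scene)
            }
        }
    }
}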
I would like to drag two different objects simultaneously using each hand.
In the following session (6:44), it was mentioned that such an implementation could be achieved using SpatialEventGesture():
https://developer.apple.com/jp/videos/play/wwdc2024/10094/
However, since targetedEntity.location3D obtained from SpatialEventGesture is of type Point3D, I'm having trouble converting it for moving objects. It seems like the convert method in the protocol linked below could be used for this conversion, but I'm not quite sure how to implement it:
https://developer.apple.com/documentation/realitykit/realitycoordinatespaceconverting/
How should I go about converting the coordinates?
Additionally, is it even possible to drag different objects with each hand?
.gesture(
    SpatialEventGesture()
        .onChanged { events in
            for event in events {
                if event.phase == .active {
                    switch event.kind {
                    case .indirectPinch:
                        if event.targetedEntity == cube1 {
                            let pos = RealityViewContent.convert(event.location3D, from: .local, to: .scene) // This doesn't work
                            dragCube(pos, for: cube1)
                        }
                    case .touch, .directPinch, .pointer:
                        break
                    @unknown default:
                        print("unknown default")
                    }
                }
            }
        }
)
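What I'm now trying for the conversion is to keep the RealityViewContent from the make closure in a @State property and call convert on that instance rather than on the type, mirroring the same from/to spaces as above. A fragment sketch:
@State private var realityContent: RealityViewContent? = nil

// In the RealityView make closure:
// realityContent = content

// In the gesture handler, call convert on the stored instance:
if let content = realityContent {
    let pos = content.convert(event.location3D, from: .local, to: .scene)
    dragCube(pos, for: cube1)
}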
Hey, I was reading through the Happy Beam intro page (https://developer.apple.com/documentation/visionos/happybeam) and I stumbled upon the info about the Persona Preview Profile, which is supposed to help with testing SharePlay on the device.
However, the link on that page leads to a 404, and I was curious if anyone knows what the Persona Preview Profile is and how exactly it can help with testing SharePlay?
Where can I find more info about it?
Hi,
Is there a way to create an AnchorEntity that is attached to the window / WindowGroup of a visionOS app, so that there would be a box that aligns with the window?
Thanks for your help!
In RealityView, I want to move entity A to the position of entity B, but I can't determine the coordinates of entity B (for example, when entity B is tracking the hand). What's the solution?
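A rough sketch of what I'd expect to work, assuming entity B's transform is actually being updated in the scene (for hand-anchored entities that may require hand-tracking data from an ARKitSession rather than a plain AnchorEntity): read B's world-space position and apply it to A, both relative to nil.
// entityA and entityB are placeholders for the two entities in the scene.
let worldPositionOfB = entityB.position(relativeTo: nil)   // B's position in world space
entityA.setPosition(worldPositionOfB, relativeTo: nil)     // move A to that same world position
To keep A following B, the same two lines could run every frame, for example from a SceneEvents.Update subscription.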