visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,229 Posts
Post not yet marked as solved
1 Reply
147 Views
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they are great if you're standing still and want to move the window an exaggerated amount around the environment, but if you start walking while dragging, the amplified gesture sends the entity flying off into the distance. Apple seems to transition quickly from one coordinate system to another depending on whether the user is physically moving: if you drag a window and start walking, the movement suddenly matches your speed, and when you stop moving, you can push and pull the windows around again like a superhero. Am I missing something obvious about how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off. Are they tracking the head movement and, once it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this before I try to hack my own solution?
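For reference, here is a minimal sketch of the kind of heuristic described above, not Apple's actual implementation: amplify the drag while the head is nearly stationary, and fall back to 1:1 movement once the device has moved past a threshold since the drag began. The 0.25 m threshold and the 3x amplification factor are arbitrary assumptions.

import SwiftUI
import RealityKit
import ARKit
import QuartzCore

struct AmplifiedDragView: View {
    @State private var session = ARKitSession()
    @State private var worldTracking = WorldTrackingProvider()
    @State private var dragStart: SIMD3<Float>?   // entity position at drag start
    @State private var headStart: SIMD3<Float>?   // head position at drag start

    var body: some View {
        RealityView { content in
            // Add the draggable entity (with collision and input target components) here.
        }
        .task { try? await session.run([worldTracking]) }   // start head (device) tracking
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard let parent = value.entity.parent else { return }
                    let start = value.convert(value.startLocation3D, from: .local, to: parent)
                    let current = value.convert(value.location3D, from: .local, to: parent)
                    if dragStart == nil {
                        dragStart = value.entity.position
                        headStart = headPosition()
                    }
                    // Hypothetical heuristic: once the head has moved more than
                    // 0.25 m since the drag began, switch to 1:1 movement;
                    // otherwise amplify the drag.
                    let headDelta = distance(headPosition() ?? .zero, headStart ?? .zero)
                    let factor: Float = headDelta > 0.25 ? 1.0 : 3.0
                    value.entity.position = dragStart! + (current - start) * factor
                }
                .onEnded { _ in
                    dragStart = nil
                    headStart = nil
                }
        )
    }

    // Queries the current device (head) position from ARKit.
    private func headPosition() -> SIMD3<Float>? {
        guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return nil }
        let t = anchor.originFromAnchorTransform.columns.3
        return SIMD3<Float>(t.x, t.y, t.z)
    }
}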
Post not yet marked as solved
0 Replies
115 Views
I'm building a visionOS app which loads a Reality Composer scene with a large number of models. The app includes several of these scenes and allows the user to switch between them. Because the scenes have a large number of models, I want to unload the currently loaded scene before loading a different one. So far I have been unable to reclaim all of the used memory by removing the entities from the scene.

I've made a few small changes to the Mixed Immersive app template which demonstrate this behavior, included below (apparently I'm unable to upload a zip file with the entire project). Using just the two spheres included in the RealityKit content, the leaked memory is fairly small, but if you add a couple of larger models to the scene (I was able to easily find free ones online) then the memory leak becomes much more obvious.

When the immersive space is initially opened, I'm seeing roughly 44MB of used memory (as shown in the Xcode Debug navigator). Each time I tap the "Load Models" and then "Unload Models" buttons, the memory use decreases but does not get back down to the initial amount. Subsequent loads and unloads will continue to increase the used memory (the amount of increase will depend on the models that you add to the scene).

Also note that I've seen similar memory increases when dynamically creating the entities. Inside ViewModel.loadModels I've included some commented-out code that dynamically creates entities instead of loading a Reality Composer scene.

Is there a way to fully reclaim the used memory? I've tried many different ways to clear the RealityKit entities but so far have been unsuccessful.

struct RKMemTestApp: App {
    private var viewModel = ViewModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(viewModel)
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environment(viewModel)
        }
    }
}

Add this above the body in ContentView:

@Environment(ViewModel.self) private var viewModel

The ContentView body should be:

VStack {
    Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
        .font(.title)
        .frame(width: 360)
        .padding(24)
        .glassBackgroundEffect()

    Button("Load Models") {
        viewModel.loadModels()
    }

    Button("Unload Models") {
        viewModel.unloadModels()
    }
}

ImmersiveView:

struct ImmersiveView: View {
    @Environment(ViewModel.self) private var viewModel

    var body: some View {
        RealityView { content in
            if let rootEntity = viewModel.rootEntity {
                content.add(rootEntity)
            }
        } update: { content in
            if viewModel.rootEntity == nil && !content.entities.isEmpty {
                content.entities.removeAll()
            } else if let rootEntity = viewModel.rootEntity, content.entities.isEmpty {
                content.add(rootEntity)
            }
        }
    }
}

ViewModel:

import Foundation
import Observation
import RealityKit
import RealityKitContent

@Observable
class ViewModel {
    var rootEntity: Entity?

    init() {
    }

    func loadModels() {
        Task {
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                Task { @MainActor in
                    if rootEntity == nil {
                        rootEntity = Entity()
                    }
                    rootEntity!.addChild(scene)
                }
            }
        }

        /*if rootEntity == nil {
            rootEntity = Entity()
        }
        for _ in 0..<1000 {
            let mesh = MeshResource.generateSphere(radius: 0.1)
            let material = SimpleMaterial(color: .blue, roughness: 0, isMetallic: true)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            entity.position = [Float.random(in: 0.0..<1.0), Float.random(in: 0.5..<1.5), -Float.random(in: 1.5..<2.5)]
            rootEntity!.addChild(entity)
        }*/
    }

    func unloadModels() {
        rootEntity?.children.removeAll()
        rootEntity?.removeFromParent()
        rootEntity = nil
    }
}
Posted by KGraus.
Post marked as solved
1 Reply
109 Views
I am trying to launch openImmersiveSpace, but it seems like there is an issue with the openImmersiveSpace Task.

Error: Static method 'buildExpression' requires that 'Task<OpenImmersiveSpaceAction.Result, Never>' conform to 'View'

Here is the code; the error shows up on the "Task" line.

import SwiftUI
import RealityKit
import RealityKitContent

struct TestView: View {
    @Environment(\.openImmersiveSpace) var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace

    var body: some View {
        VStack {
            Text("Open Full Immersive & switch to NextViewArea")
            NavigationLink {
                Task {
                    await openImmersiveSpace(id: "ImmersiveSpace")
                }
                NextViewArea()
            } label: {
                Label(" Enter Full Immersive Space")
            }
        }
    }
}

How can I move on to the next view area in the floating window while also launching the full immersive space? Any help would be much appreciated.
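One way around this error, as a sketch (the NavigationStack wiring and the showNextArea state are additions for illustration; NextViewArea comes from the post above): a view builder cannot contain a bare Task, so the side effect has to move into an action, for example a Button that opens the space and then triggers navigation.

import SwiftUI

struct TestView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @State private var showNextArea = false   // hypothetical navigation trigger

    var body: some View {
        NavigationStack {
            VStack {
                Text("Open Full Immersive & switch to NextViewArea")
                Button("Enter Full Immersive Space") {
                    Task {
                        // Side effects run here, in the action, not in a view builder.
                        await openImmersiveSpace(id: "ImmersiveSpace")
                        showNextArea = true
                    }
                }
            }
            .navigationDestination(isPresented: $showNextArea) {
                NextViewArea()
            }
        }
    }
}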
Post not yet marked as solved
4 Replies
600 Views
Hi,

My app launches with a mixed immersive space. The Preferred Default Scene Session Role is set to Immersive Space Application Session Role.

ImmersiveSpace(id: "sceneSpace") {
    ImmersiveView()
        .environmentObject(modelObject)
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)

Other WindowGroups are opened too.

Problem: When the x button (bottom left corner) is tapped on any WindowGroup, the immersive space is dismissed. When the user opens the app again, the immersive space is gone. The same happens when the user opens the Home Screen.

How can I keep the same immersive space when the app is opened again? Thank you!
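One possible direction, sketched under the assumption that reopening the space on reactivation is acceptable (this is not a confirmed fix): observe the scene phase in a view of the remaining window and reopen the immersive space when the app becomes active again. The view name and the spaceShouldBeOpen state are hypothetical.

import SwiftUI

struct ReopenImmersiveSpaceView: View {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @State private var spaceShouldBeOpen = true   // hypothetical app state

    var body: some View {
        Text("Main window")   // placeholder for the window's real content
            .onChange(of: scenePhase) { _, newPhase in
                guard newPhase == .active, spaceShouldBeOpen else { return }
                Task {
                    // If the space is already open, this call should just fail
                    // with a .error result, which is ignored here.
                    await openImmersiveSpace(id: "sceneSpace")
                }
            }
    }
}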
Posted by gebs.
Post not yet marked as solved
0 Replies
86 Views
Hi guys, I am preparing to develop a Vision Pro app with Unity. Play to Device, which connects the Unity Engine and Vision Pro, worked well, and there was no problem with the connection to the Vision Pro simulator. But when I tried to connect Xcode and Vision Pro, I couldn't see the Vision Pro itself in the device list. (An iPhone 11 connected by cable as a test is recognized fine.) I looked it up on the forum, and connecting was supposed to be simple. The link to the post I found is below.

https://forums.developer.apple.com/forums/thread/746464

I don't know why it's not working even after looking it up on YouTube. I'm leaving my work environment details below, and I'd appreciate a helpful answer.

MacBook: M2 MacBook
Xcode version: 15.3
visionOS version: 1.1.2
Developer accounts: all use the same Apple developer account
Post not yet marked as solved
1 Reply
144 Views
I need to obtain data through an MQTT subscription. Are there any ideas or frameworks for this? Thank you.
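There is no first-party MQTT framework from Apple as far as I know; a commonly used open-source client is CocoaMQTT (available via Swift Package Manager). A minimal sketch, assuming a public test broker and a hypothetical topic filter:

import CocoaMQTT

// Connects to a broker, subscribes on connect, and logs incoming messages.
let mqtt = CocoaMQTT(clientID: "visionos-client-\(UUID().uuidString.prefix(6))",
                     host: "test.mosquitto.org",   // public test broker (assumption)
                     port: 1883)

mqtt.didConnectAck = { mqtt, ack in
    guard ack == .accept else { return }
    mqtt.subscribe("sensors/#")   // hypothetical topic filter
}

mqtt.didReceiveMessage = { _, message, _ in
    print("topic: \(message.topic), payload: \(message.string ?? "<binary>")")
}

_ = mqtt.connect()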
Posted by junp.
Post not yet marked as solved
0 Replies
164 Views
Environment:

Apple Silicon M1 Pro
macOS 14.4
Xcode 15.3 (15E204a)
visionOS simulator 1.1

Steps: Create a new visionOS app project and compile it through xcodebuild:

xcodebuild -destination "generic/platform=visionOS"

It fails on RealityAssetsCompile with the log:

error: Failed to find newest available Simulator runtime

But if I open the Xcode IDE and start building, it works fine. This error only occurs with xcodebuild.

More: I noticed that in xcrun simctl list the Vision Pro simulator is in an unavailable state:

-- visionOS 1.1 --
Apple Vision Pro (6FB1310A-393E-4E82-9F7E-7F6D0548D136) (Booted) (unavailable, device type profile not found)

And I can't find the Vision Pro device type in xcrun simctl list devicetypes; does that matter? I have tried to completely reinstall Xcode and the simulator runtime, but still get the same error.
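Since the runtime shows as unavailable with "device type profile not found", it may be worth inspecting the installed runtimes and re-downloading the platform from the command line (hedged suggestions, not a confirmed fix):

xcrun simctl runtime list
xcodebuild -downloadPlatform visionOS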
Posted by LZephyr.
Post not yet marked as solved
3 Replies
212 Views
Today I tried to add a second archive action for visionOS. I had added a visionOS destination to my app target a while back, and I can build and archive my app for visionOS in Xcode 15.3 locally and also run it on the device. Xcode Cloud is giving me the following errors in the Archive - visionOS action (Archive - iOS works):

Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle MyApp.app is invalid.

Invalid sdk value. The value provided for the sdk portion of LC_BUILD_VERSION in MyApp.app/MyApp is 17.4 which is greater than the maximum allowed value of 1.2.

This bundle is invalid. The value provided for the key MinimumOSVersion '17.0' is not acceptable.

Type Mismatch. The value for the Info.plist key CFBundleIcons.CFBundlePrimaryIcon is not of the required type for that key. See the Information Property List Key Reference at https://developer.apple.com/library/ios/documentation/general/Reference/InfoPlistKeyReference/Introduction/Introduction.html#//apple_ref/doc/uid/TP40009248-SW1

All four errors are annotated with "Prepare Build for App Store Connect", and I get them for both the "TestFlight (Internal Testing Only)" and "TestFlight and App Store" deployment preparation options. I have tried to remove the visionOS destination and add it back, but this does not change the project at all. Any ideas what I am missing?
Posted by RK123.
Post not yet marked as solved
0 Replies
129 Views
Hi guys, I would like to ask if anyone knows the FPS of screen recording and AirPlay on Vision Pro. AirPlay here refers to mirroring the Vision Pro view to a MacBook/iPhone/iPad. Also, is there any way to record the screen at the raw FPS of the Vision Pro (i.e., 90)?
Posted by felixYS.
Post not yet marked as solved
1 Reply
285 Views
Hello. I have a model of a CD record and box, and I would like to change its artwork via an external image URL. My 3D knowledge is limited, but what I can say is that the RealityView contains the USDZ of the record, which in turn contains multiple materials: ArtBack, ArtFront, PlasticBox, CD. How do I target an artwork material and change it to another image? Here is the code so far.

RealityView { content in
    do {
        let entity = try await Entity.init(named: "VinylScene", in: realityKitContentBundle)
        entity.scale = SIMD3<Float>(repeating: 0.6)
        content.add(entity)
    } catch {
        ProgressView()
    }
}
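A sketch of one possible approach, with several assumptions: that the artwork lives on a ModelEntity you can find by name, that replacing its material with a SimpleMaterial is acceptable (a Reality Composer Pro ShaderGraphMaterial would instead expose parameters to set), and that artworkIndex matches the material slot you want to change.

import RealityKit
import UIKit

// Downloads an image and swaps it onto one material slot of a model.
// The artworkIndex parameter is a hypothetical way to address the slot.
func applyArtwork(from url: URL, to model: ModelEntity, at artworkIndex: Int) async throws {
    let (data, _) = try await URLSession.shared.data(from: url)
    guard let cgImage = UIImage(data: data)?.cgImage else { return }
    let texture = try TextureResource.generate(from: cgImage, options: .init(semantic: .color))

    var material = SimpleMaterial()
    material.color = .init(tint: .white, texture: .init(texture))

    if var component = model.model, component.materials.indices.contains(artworkIndex) {
        component.materials[artworkIndex] = material
        model.model = component
    }
}

Finding the right ModelEntity could look like entity.findEntity(named: "ArtFront") as? ModelEntity, though the node name inside the USDZ is an assumption; inspect the file in Reality Composer Pro to see the actual hierarchy.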
Posted by mdkBsenA.
Post not yet marked as solved
1 Reply
185 Views
Since camera access is not allowed right now, does Apple have the same restriction on screenshots? What I am trying to do is have my user take a screenshot, which my app will then detect and read automatically to process its information (without making the user select and upload it manually). But I did not find Vision Pro documentation about this; should I check SwiftUI or other developer documentation?
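I don't know of a documented way to be notified of screenshots on visionOS, but if Photos library access is acceptable, a sketch along these lines could find the most recent screenshot for processing (assumptions: the user grants read authorization, and the app polls or checks at an appropriate moment):

import Photos

// Fetches the most recently captured screenshot from the photo library.
func latestScreenshot() -> PHAsset? {
    let options = PHFetchOptions()
    options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
    let assets = PHAsset.fetchAssets(with: .image, options: options)

    var found: PHAsset?
    assets.enumerateObjects { asset, _, stop in
        // mediaSubtypes is an option set; screenshots carry .photoScreenshot.
        if asset.mediaSubtypes.contains(.photoScreenshot) {
            found = asset
            stop.pointee = true
        }
    }
    return found
}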
Posted by JPluie.
Post not yet marked as solved
1 Reply
147 Views
I am working on a team developing solutions for HMDs (Meta and others). We are exploring the feasibility of developing solutions for Apple Vision Pro from India. Could you suggest the prerequisites to begin development? Also, please confirm whether there are any regional constraints on visionOS development.
Posted by sparxgk.
Post not yet marked as solved
0 Replies
122 Views
I'm on the visionOS 1.2 beta, and Instruments will capture everything but RealityKit information: the RealityKit Frames and RealityKit Metrics instruments capture no data. This used to work, though I'm not sure in which version it did. Unbelievably frustrating.
Post marked as solved
2 Replies
162 Views
Hi team, I'm running into the following issue, for which I don't seem to find a good solution. I would like to be able to drag and drop items from a view into empty space to open a new window that displays detailed information about this item. Now, I know something similar has been flagged already in this post (FB13545880: Support drag and drop to create a new window on visionOS). HOWEVER, all this does is launch the app again with the SAME WindowGroup and display ContentView in a different state (showing a selected product, e.g.). What I would like to do instead is launch ONLY the new WindowGroup, without a new instance of ContentView. This is the closest I got so far. It opens the desired window, but in addition it also displays the ContentView WindowGroup.

WindowGroup {
    ContentView()
        .onContinueUserActivity(Activity.openWindow, perform: handleOpenDetail)
}

WindowGroup(id: "Detail View", for: Reminder.ID.self) { $reminderId in
    ReminderDetailView(reminderId: reminderId!)
}

.onDrag({
    let userActivity = NSUserActivity(activityType: Activity.openWindow)
    let localizedString = NSLocalizedString("DroppedReminterTitle", comment: "Activity title with reminder name")
    userActivity.title = String(format: localizedString, reminder.title)
    userActivity.targetContentIdentifier = "\(reminder.id)"
    try? userActivity.setTypedPayload(reminder.id)
    // When setting the identifier
    let encoder = JSONEncoder()
    if let jsonData = try? encoder.encode(reminder.persistentModelID),
       let jsonString = String(data: jsonData, encoding: .utf8) {
        userActivity.userInfo = ["id": jsonString]
    }
    return NSItemProvider(object: userActivity)
})

func handleOpenDetail(_ userActivity: NSUserActivity) {
    guard let idString = userActivity.userInfo?["id"] as? String else {
        print("Invalid or missing identifier in user activity")
        return
    }
    if let jsonData = idString.data(using: .utf8) {
        do {
            let decoder = JSONDecoder()
            let persistentID = try decoder.decode(PersistentIdentifier.self, from: jsonData)
            openWindow(id: "Detail View", value: persistentID)
        } catch {
            print("Failed to decode PersistentIdentifier: \(error)")
        }
    } else {
        print("Failed to convert string to data")
    }
}
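One direction that might be worth trying (an assumption, not a verified fix): scope which scene handles the drag-created user activity with the handlesExternalEvents(matching:) scene modifier, so the activity targets the detail scene instead of relaunching the main WindowGroup.

WindowGroup {
    ContentView()
        .onContinueUserActivity(Activity.openWindow, perform: handleOpenDetail)
}
.handlesExternalEvents(matching: [])   // keep drag activities away from the main scene

WindowGroup(id: "Detail View", for: Reminder.ID.self) { $reminderId in
    ReminderDetailView(reminderId: reminderId!)
}
.handlesExternalEvents(matching: [Activity.openWindow])   // let only this scene claim them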
Post not yet marked as solved
0 Replies
87 Views
Is there a maximum distance at which an entity will register a TapGesture()? I'm unable to interact with entities farther than 8 or 9 meters away. The below code generates a series of entities progressively farther away; after about 8 meters, the entities no longer respond to tap gestures.

import SwiftUI
import RealityKit
import RealityKitContent
import AudioToolbox   // for AudioServicesPlaySystemSound

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            for i in 0..<10 {
                if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)
                    immersiveContentEntity.position = SIMD3<Float>(x: Float(-i * i), y: 0.75, z: Float(-1 * i) - 3)
                }
            }
        }
        .gesture(tap)
    }

    var tap: some Gesture {
        TapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                AudioServicesPlaySystemSound(1057)
                print(value.entity.name)
            }
    }
}
Posted by smicker.
Post not yet marked as solved
0 Replies
127 Views
Hi! I was trying to port our SDK to visionOS. I was going through the documentation and saw this video: https://developer.apple.com/videos/play/wwdc2023/10089/ Is there any working code sample for it? The same goes for the ARKit C API; I couldn't find any links. Thanks in advance. Sahil
Posted by saagn.
Post not yet marked as solved
0 Replies
127 Views
I am developing an immersive application featuring hands interacting with my virtual objects. When my hand passes through an object, the rendered color of my hand looks like the hand color and the object's color blended together, both semi-transparent. I wonder if it is possible to make my hand always "opaque", i.e. the alpha value of the rendered hand (since it's video see-through) is always 1, while the object's alpha value can vary depending on whether it is interacting with the hand. (I was thinking this kind of feature might be supported by a specific component, just like HoverEffectComponent, but I didn't find one.)
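I'm not aware of a per-entity component for this. The closest system-level control I know of is the scene-level upperLimbVisibility modifier, which influences whether the user's hands are rendered over virtual content, though it is a visibility preference rather than per-pixel alpha control, so it may not fully achieve "opaque hands" (a sketch, not a confirmed solution):

ImmersiveSpace(id: "HandsSpace") {   // hypothetical space identifier
    ImmersiveView()
}
.upperLimbVisibility(.visible)   // request that the user's hands stay visible over content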
Posted by milanowth.
Post not yet marked as solved
0 Replies
100 Views
Good day. I'm inquiring whether there is a way to test functionality between the Apple Pencil Pro and Apple Vision Pro. I'm trying to work on an idea that would require a tool like the Pencil as an input device. Will there be an SDK for this kind of connectivity?
Posted by MrDanger.