Posts

Post not yet marked as solved
1 Reply
258 Views
Hello, I've noticed that when I have my ARKitSession run the scene reconstruction provider and the world tracking provider at the same time, I receive no scene reconstruction mesh updates. My catch closure doesn't receive any errors; nothing ever arrives on the anchor-updates async sequence. If I run just the scene reconstruction provider by itself, then I do get mesh updates. Is this a bug? Is it expected that it's not possible to do this? Thank you
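For reference, here's roughly the setup, reduced to a sketch (the session and provider names are placeholders for my actual code):

    import ARKit

    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()
    let worldTracking = WorldTrackingProvider()

    func runProviders() async {
        do {
            // Running both providers together; mesh updates stop arriving
            // in this configuration. Dropping worldTracking brings them back.
            try await session.run([sceneReconstruction, worldTracking])
            for await update in sceneReconstruction.anchorUpdates {
                // update.anchor is a MeshAnchor with the reconstructed geometry.
                print("Mesh update: \(update.event) for \(update.anchor.id)")
            }
        } catch {
            print("ARKitSession error: \(error)")
        }
    }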
Posted by J0hn.
Post not yet marked as solved
1 Reply
309 Views
How persistent is the storage of the WorldTrackingProvider and its underlying world map reconstruction? The documentation mentions town-to-town anchor recovery and recovery between sessions, but does that include device restarts and app quits? There are no clues about how persistent it all is.
Posted by J0hn.
Post not yet marked as solved
0 Replies
324 Views
I want to place a ModelEntity at an AnchorEntity's location, but not as a child of the AnchorEntity (I want to be able to raycast to it and have collisions work). I've placed an AnchorEntity in my scene like so:

    AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)

In my RealityView update closure, I print out this entity's position relative to nil like so:

    wallAnchor.position(relativeTo: nil)

Unfortunately, this position doesn't make sense. It's very close to zero, even though the anchor appears several meters away. I believe this is because AnchorEntities have their own self-contained coordinate spaces that are independent of the scene's coordinate space, and it is reporting its position relative to its own coordinate space. How can I bridge the gap between these two? WorldAnchor has an originFromAnchorTransform property that helps with this, but I'm not seeing something similar for AnchorEntity. Thank you
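One workaround I'm considering (an untested sketch: it reads plane anchors from ARKit directly instead of using AnchorEntity, since PlaneAnchor's originFromAnchorTransform is expressed in world space):

    import ARKit
    import RealityKit

    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.vertical])

    func placeMarkers(in content: RealityViewContent) async {
        do {
            try await session.run([planeDetection])
            for await update in planeDetection.anchorUpdates where update.anchor.classification == .wall {
                // originFromAnchorTransform is world-space, so an independent
                // (non-child) entity can be positioned from it directly.
                let marker = ModelEntity(mesh: .generateSphere(radius: 0.05))
                marker.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                content.add(marker)
            }
        } catch {
            print("ARKitSession error: \(error)")
        }
    }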
Posted by J0hn.
Post not yet marked as solved
2 Replies
331 Views
Hello, I’ve got a few questions about drag gestures in immersive scenes on visionOS. Once a user initiates a drag gesture, are their eyes still involved in the gesture? If not, and the user is dragging something farther away, how far can they move it using indirect gestures? I assume the user’s range of motion is limited because their hands are in their lap, so could they move something multiple meters along a distant wall? How can the user cancel the gesture if they don’t like the anticipated / telegraphed result? I’m trying to craft a good experience, and it’s difficult without some of these details. I have still not heard back on my devkit application. Thank you for any help.
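For context, here's the kind of indirect drag I'm experimenting with, boiled down to a sketch (the parent-space conversion is the pattern I've seen in sample code; the rest of the scene setup is assumed):

    import SwiftUI
    import RealityKit

    struct ImmersiveDragView: View {
        var body: some View {
            RealityView { content in
                let box = ModelEntity(mesh: .generateBox(size: 0.2))
                box.components.set(InputTargetComponent())
                box.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
                content.add(box)
            }
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        // Follow the drag by converting the gesture location
                        // into the entity's parent space.
                        value.entity.position = value.convert(value.location3D,
                                                              from: .local,
                                                              to: value.entity.parent!)
                    }
            )
        }
    }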
Posted by J0hn.
Post not yet marked as solved
1 Reply
298 Views
Is it possible to edit a SwiftData document in an immersive scene? If so... how? At the moment I see that the modelContext is available in the contentView of a DocumentGroup, but can document data be made available to an immersive scene's content?
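The only idea I've had so far (an untested sketch: SharedDocumentState and both views are hypothetical, and WindowGroup stands in for my DocumentGroup) is to bridge the two scenes with a shared observable object:

    import SwiftUI
    import Observation

    // Hypothetical state shared by the document scene and the immersive scene.
    @Observable
    final class SharedDocumentState {
        var selectedItemName: String = ""
    }

    @main
    struct DocumentImmersiveApp: App {
        @State private var shared = SharedDocumentState()

        var body: some Scene {
            // The document window writes into the shared state...
            WindowGroup {
                DocumentEditorView()
                    .environment(shared)
            }
            // ...and the immersive scene reads from it.
            ImmersiveSpace(id: "immersive") {
                ImmersiveContentView()
                    .environment(shared)
            }
        }
    }

    struct DocumentEditorView: View {
        @Environment(SharedDocumentState.self) private var shared
        var body: some View {
            @Bindable var shared = shared
            TextField("Item name", text: $shared.selectedItemName)
        }
    }

    struct ImmersiveContentView: View {
        @Environment(SharedDocumentState.self) private var shared
        var body: some View {
            Text(shared.selectedItemName)
        }
    }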
Posted by J0hn.
Post not yet marked as solved
0 Replies
347 Views
On Xcode 15.1.0b2, when raycasting to a collision surface, the collisions appear to be inconsistent. Here are my results: green cylinders are hits, and red cylinders are raycasts that returned no collision results. NOTE: This raycast is triggered by a tap gesture recognizer registering on the cube, so it's strange to me that the tap would work but the raycast wouldn't collide with anything. Is this something that just performs poorly in the simulator? My raycasting command is:

    guard let pose = self.arSessionController.worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        print("FAILED TO GET POSITION")
        return
    }
    let transform = Transform(matrix: pose.originFromAnchorTransform)
    let locationOfDevice = transform.translation
    let raycastResult = scene.raycast(from: locationOfDevice, to: destination, relativeTo: nil)

where destination is retrieved in a tap gesture handler via:

    let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)

Any findings would be appreciated.
Posted by J0hn.
Post marked as solved
5 Replies
558 Views
The Location3D that is returned by a SpatialTapGesture does not include normal-vector information. This can make it difficult to orient an object placed at that location. Am I misusing this gesture, or is this indeed the case? As an alternative, I was thinking I could manually raycast toward the location the user tapped, but to do that I need two points. One of those points needs to be the location of the device / user's head in world space, and I'm not familiar with how to get that information. Has anyone achieved something like this?
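For the head-position half, one approach I've come across (a sketch assuming a running WorldTrackingProvider; I haven't confirmed it solves the normal-vector problem):

    import ARKit
    import QuartzCore
    import simd

    func deviceWorldPosition(worldTracking: WorldTrackingProvider) -> SIMD3<Float>? {
        // DeviceAnchor describes the headset's pose in world space.
        guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return nil
        }
        let m = device.originFromAnchorTransform
        // The fourth column holds the translation, i.e. the device position.
        return SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)
    }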
Posted by J0hn.
Post marked as solved
2 Replies
573 Views
The Goal
My goal is to place an item where the user taps on a plane, and have that item match the outward-facing normal vector where the user tapped. In beta 3 a 3D SpatialTapGesture now returns an accurate Location3D, so determining the position to place an item is working great. I simply do:

    let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)

The Problem
Now, I notice that my entities aren't oriented correctly: the placed item always 'faces' the camera. So if the camera isn't looking straight at the target plane, then the orientation of the new entity is off. If I retrieve the transform of my newly placed item, it says the rotation relative to nil is 0,0,0, which doesn't look correct. I know I'm dealing with the different coordinate systems of the plane being tapped, the world coordinate system, and the item being placed, and I'm getting a bit lost in it all. Not to mention my API intuition is still pretty low, so quaternions are still new to me. So I'm curious: what rotation information can I use to "correct" the placed entity's orientation?

What I tried
I've investigated the tap-target entity like so:

    let rotationRelativeToWorld = value.entity.convert(transform: value.entity.transform, to: nil).rotation

I believe this returns the rotation of the "plane entity" the user tapped, relative to the world. While that gets me the following, I'm not sure if it's useful:

    rotationRelativeToWorld: ▿ simd_quatf(real: 0.7071068, imag: SIMD3<Float>(-0.7071067, 6.600024e-14, 6.600024e-14))
      ▿ vector : SIMD4<Float>(-0.7071067, 6.600024e-14, 6.600024e-14, 0.7071068)

If anyone has better intuition than me about the coordinate spaces involved, I would appreciate some help. Thanks!
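One idea I'm testing (an untested sketch: it assumes the tapped entity sits flush on the plane, so adopting its world-space orientation should orient the placed item correctly):

    import SwiftUI
    import RealityKit

    // Inside the SpatialTapGesture handler; value is the EntityTargetValue.
    func placeOriented(value: EntityTargetValue<SpatialTapGesture.Value>, content: RealityViewContent) {
        let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
        let marker = ModelEntity(mesh: .generateBox(size: 0.1),
                                 materials: [SimpleMaterial(color: .blue, isMetallic: true)])
        marker.position = worldPosition
        // Take the tapped plane's rotation relative to the world, rather than
        // its local transform.rotation, so the marker matches the surface.
        marker.orientation = value.entity.orientation(relativeTo: nil)
        content.add(marker)
    }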
Posted by J0hn.
Post marked as solved
2 Replies
828 Views
I am trying to make a world anchor where a user taps a detected plane. How am I trying this? First, I add an entity to a RealityView like so:

    let anchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
    anchor.transform.rotation *= simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))
    let interactionEntity = Entity()
    interactionEntity.name = "PLANE"
    let collisionComponent = CollisionComponent(shapes: [ShapeResource.generateBox(width: 2.0, height: 2.0, depth: 0.02)])
    interactionEntity.components.set(collisionComponent)
    interactionEntity.components.set(InputTargetComponent())
    anchor.addChild(interactionEntity)
    content.add(anchor)

This:
- Declares an anchor that requires a 2m-by-2m wall to appear in the scene, with continuous tracking
- Makes an empty entity and gives it a 2m-by-2m-by-2cm collision box
- Attaches the collision entity to the anchor
- Finally adds the anchor to the scene

It appears to sit right on the wall. Great! I then add a tap gesture recognizer like this:

    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            guard value.entity.name == "PLANE" else { return }
            let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
            let pose = Pose3D(position: worldPosition, rotation: value.entity.transform.rotation)
            let worldAnchor = WorldAnchor(originFromAnchorTransform: simd_float4x4(pose))
            let model = ModelEntity(mesh: .generateBox(size: 0.1, cornerRadius: 0.03), materials: [SimpleMaterial(color: .blue, isMetallic: true)])
            model.transform = Transform(matrix: worldAnchor.originFromAnchorTransform)
            realityViewContent?.add(model)
        }

I ASSUME this:
- Makes a world position from where the tap connects with the collision entity
- Integrates that position and the collision plane's rotation to create a Pose3D
- Makes a world anchor from that pose (so it can be persisted in a world tracking provider)
- Then makes a basic cube entity and gives it that transform

Weird stuff: It doesn't appear on the plane... it appears behind it. Why? What have I done wrong? The X and Y of the tap location appear spot on, but something is off about the Z position. Also, is there a recommended way to debug this with the available tools? I'm guessing I'll have to file a DTS about this, because feedback on the forum has been pretty low since the labs started.
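One alternative I'm going to try (an untested sketch: it composes the plane's world-space rotation with the tapped world position, and the half-depth nudge along the plane's normal is purely a guess at the Z offset):

    import SwiftUI
    import RealityKit
    import simd

    func anchorTransform(value: EntityTargetValue<SpatialTapGesture.Value>) -> simd_float4x4 {
        let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
        // World-space rotation of the tapped plane entity, not its local rotation.
        let worldRotation = value.entity.orientation(relativeTo: nil)
        // Guess: offset by half the 0.02 m collision-box depth along the plane normal.
        let normal = worldRotation.act(SIMD3<Float>(0, 0, 1))
        let corrected = worldPosition + normal * 0.01
        return Transform(scale: .one, rotation: worldRotation, translation: corrected).matrix
    }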
Posted by J0hn.
Post marked as solved
1 Reply
492 Views
Hello, I'm curious if anyone has some useful debug tools for out-of-bounds issues with volumes. I am opening a volume with a size of 1m x 1m x 10cm. I am adding a RealityView with a ModelEntity that is 0.5m tall, and I am seeing the model clip at the top and bottom. I find this odd, because it should be within the size of the volume. I was curious what size SwiftUI says the volume is, so I tried using a GeometryReader3D to tell me:

    GeometryReader3D { proxy in
        VStack {
            Text("\(proxy.size.width)")
            Text("\(proxy.size.height)")
            Text("\(proxy.size.depth)")
        }
        .padding()
        .glassBackgroundEffect()
    }

Unfortunately I get 680, 1360, and 68. I'm guessing these units are in points, but that's not very helpful. The documentation says to use real-world units for volumes, but none of the SwiftUI frame setters and getters appear to support different units. Is there a way to convert between the two? I'm not clear if this is a bug or a feature suggestion.
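One conversion route I found that might bridge points and meters (a sketch; I haven't verified it against my volume's clipping yet):

    import SwiftUI

    struct VolumeSizeReadout: View {
        // Converts between SwiftUI points and physical lengths for this scene.
        @Environment(\.physicalMetrics) private var physicalMetrics

        var body: some View {
            GeometryReader3D { proxy in
                // Convert the point-based proxy size back into meters.
                let widthInMeters = physicalMetrics.convert(proxy.size.width, to: .meters)
                Text("\(widthInMeters) m wide")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }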
Posted by J0hn.
Post not yet marked as solved
1 Reply
319 Views
Hello, with the advent of widget interactivity, and in order to support state management, I'd like to differentiate one widget from another, even if they share the same configuration. Is this possible? Many of my search results are turning up iOS 15-era information, and I am not sure if that's still valid. Thank you
Posted by J0hn.
Post not yet marked as solved
0 Replies
577 Views
Attempting to launch a widget in Debug mode on Sonoma from Xcode 15 fails with the following message:

    attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)

Looking in Console I see this message:

    macOSTaskPolicy: (com.apple.debugserver) may not get the task control port of (MacGalleryWidget) (pid: 1851): (MacGalleryWidget) is hardened, (MacGalleryWidget) doesn't have get-task-allow, (com.apple.debugserver) is a declared debugger, (com.apple.debugserver) is not a declared read-only debugger

What Xcode settings should I be looking at to rectify this? I suspect I may have something that's out of whack.
Posted by J0hn.
Post not yet marked as solved
1 Reply
634 Views
First, I add the provider to the session and run it:

    do {
        if WorldTrackingProvider.isSupported {
            try await session.run([worldTracking])
            print("World Tracking Provider Started.")
        } else {
            print("World Tracking not supported >.>")
        }
    } catch {
        print("ARKitSession error:", error)
    }

Then I try to add a world anchor:

    var task: Task<Void, Never>?

    func trackAnchor(_ anchor: WorldAnchor) {
        task = Task {
            do {
                try await self.worldTracking.addAnchor(anchor)
                print("Added anchor to tracking provider!")
            } catch {
                print("Error: \(error)")
            }
        }
    }

The awaited call never finishes. A breakpoint is not hit and no errors are thrown. As such, when the app is quit and restarted, the system does not recover the tracked world anchor. Any ideas?
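One thing I still need to rule out (a sketch of a guess, not a confirmed diagnosis: addAnchor may simply be waiting on a provider that never reached its running state):

    import ARKit

    func trackAnchorWhenReady(_ anchor: WorldAnchor, worldTracking: WorldTrackingProvider) async {
        // If the provider isn't actually running, addAnchor has nothing to act on.
        guard worldTracking.state == .running else {
            print("Provider state is \(worldTracking.state), not .running")
            return
        }
        do {
            try await worldTracking.addAnchor(anchor)
            print("Added anchor \(anchor.id)")
        } catch {
            print("addAnchor failed: \(error)")
        }
    }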
Posted by J0hn.
Post not yet marked as solved
0 Replies
550 Views
World Anchor from SpatialTapGesture? At 19:56 in the video, it's mentioned that we can use a SpatialTapGesture to "identify a position in the world" to make a world anchor. Which API calls are utilized to make this happen? World anchors are created with 4x4 matrices, and a SpatialTapGesture doesn't seem to generate one of those. Any ideas?
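The closest chain of calls I can piece together from the public API (an untested sketch with an identity rotation; I don't know if this is what the video intends):

    import SwiftUI
    import RealityKit
    import ARKit

    // Inside a SpatialTapGesture handler targeted to an entity.
    func makeWorldAnchor(value: EntityTargetValue<SpatialTapGesture.Value>) -> WorldAnchor {
        // Tap location converted into world (scene) coordinates.
        let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
        // Wrap it in a Transform to get the 4x4 matrix WorldAnchor wants.
        let matrix = Transform(translation: worldPosition).matrix
        return WorldAnchor(originFromAnchorTransform: matrix)
    }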
Posted by J0hn.