Turn off camera in RealityView for iOS?
I am using RealityView in an iOS app. Is it possible to turn off the camera passthrough so that only my virtual content is showing? I am looking to create a VR experience. I have a workaround where I turn off occlusion and then create a sphere around me (e.g., with a black texture), but in the pre-RealityView days I think I used something like this:

arView.environment.background = .color(.black)

Is there something similar in RealityView for iOS? Here are some snippets of my current workaround inside RealityView. First, create the sphere that surrounds the user:

// Create sphere
let blackMaterial = UnlitMaterial(color: .black)
let sphereMesh = MeshResource.generateSphere(radius: 100)
let sphereModelComponent = ModelComponent(mesh: sphereMesh, materials: [blackMaterial])
let sphereEntity = Entity()
sphereEntity.components.set(sphereModelComponent)
sphereEntity.scale *= .init(x: -1, y: 1, z: 1)
content.add(sphereEntity)

Then turn off occlusion:

// Turn off occlusion
let configuration = SpatialTrackingSession.Configuration(
    tracking: [],
    sceneUnderstanding: [],
    camera: .back)
let session = SpatialTrackingSession()
await session.run(configuration)
1 reply · 0 boosts · 322 views · Sep ’24
Casting shadows on the ground
In the visionOS 2 beta, I have a character loaded from a Reality Composer Pro scene standing on the floor, but he isn't casting a shadow on the floor. I added a GroundingShadowComponent in RealityView, and he does cast shadows on himself (e.g., his hands cast shadows on his shoes), but I don't see any shadow on the floor. Do I need to enable something to have my character cast a shadow on the real-world floor?
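For reference, here is roughly how I am attaching the component — a minimal sketch, where characterEntity stands in for the entity loaded from my Reality Composer Pro scene:

// Apply a grounding shadow to every model entity in the character's hierarchy
func applyGroundingShadow(to entity: Entity) {
    if entity.components.has(ModelComponent.self) {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    }
    for child in entity.children {
        applyGroundingShadow(to: child)
    }
}

// Inside RealityView { content in ... }
applyGroundingShadow(to: characterEntity)
content.add(characterEntity)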
1 reply · 0 boosts · 345 views · Sep ’24
RealityView in macOS, Skybox, and lighting issue
I am testing RealityView on a Mac, and I am having trouble controlling the lighting. I initially add a red cube, and everything is fine (see Figure 1). I then activate a skybox with a star field; the star field appears, and the red cube is then lit only by the star field. When I deactivate the skybox, expecting the original lighting to return, the cube continues to be lit by the skybox: the background no longer shows the skybox, but the cube is never lit the way it originally was. Is there a way to return the model's lighting to what I had before adding the skybox? I seem to recall ARView's environment property had both a lighting.resource and a background, but I don't see both of those properties in RealityViewCameraContent's environment.

Sample code for macOS 15.1 Beta (24B5024e), Xcode 16.0 beta (16A5171c):

struct MyRealityView: View {
    @Binding var isSwitchOn: Bool
    @State private var blueNebulaSkyboxResource: EnvironmentResource?

    var body: some View {
        RealityView { content in
            // Create a red cube 10cm on a side
            let mesh = MeshResource.generateBox(size: 0.1)
            let simpleMaterial = SimpleMaterial(color: .red, isMetallic: false)
            let model = ModelComponent(
                mesh: mesh,
                materials: [simpleMaterial]
            )
            let redBoxEntity = Entity()
            redBoxEntity.components.set(model)
            content.add(redBoxEntity)

            // Load skybox
            let blueNeb2Name = "BlueNeb2"
            blueNebulaSkyboxResource = try? await EnvironmentResource(named: blueNeb2Name)
        } update: { content in
            if (blueNebulaSkyboxResource != nil) && (isSwitchOn == true) {
                content.environment = .skybox(blueNebulaSkyboxResource!)
            } else {
                content.environment = .default
            }
        }
        .realityViewCameraControls(CameraControls.orbit)
    }
}

Figure 1: default lighting before adding the skybox.
Figure 2: after activating the skybox with the star field; the cube is lit by / reflects the skybox.
Figure 3: after removing the skybox by setting content.environment to .default; the cube still reflects the skybox (it is hard to see).
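One workaround I'm considering, in case it clarifies what I'm after: lighting the cube with its own image-based light so that changing content.environment only affects the background. This is only a sketch, under the assumption that ImageBasedLightComponent and ImageBasedLightReceiverComponent behave on macOS 15 the way they do on visionOS; "SomeLightingEnvironment" is a placeholder resource name. It would go inside the make closure above:

// Hypothetical: give the cube its own IBL, independent of the skybox background
if let lightingResource = try? await EnvironmentResource(named: "SomeLightingEnvironment") {
    let lightSource = Entity()
    lightSource.components.set(ImageBasedLightComponent(source: .single(lightingResource)))
    redBoxEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightSource))
    content.add(lightSource)
}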
1 reply · 0 boosts · 373 views · Aug ’24
WKWebView for general purpose web browser
I created a simple web browser using WKWebView, but as far as I can tell, there is not a way to auto-populate credentials or save credentials a user enters into a login form at a third-party website like Netflix (i.e., not my own app domain). Is this correct? If this is wrong, what are the APIs to support this? My use case is that I want to create an immersive app in visionOS that includes a window that lets the user surf the web (among other things). Ideally, I could just use a Safari window in my immersive app, but I don't think this is possible either. My workaround is to create my own web browser... which works, minus the credential issue. Is it possible to bring a Safari window into an immersive visionOS app's experience? (IMHO, that would be a great feature.)
0 replies · 0 boosts · 424 views · Jul ’24
App Environment SkyDome's UV values
I started a visionOS app using Apple's new "App Environment" template, and when I looked at the UV mapping for the half SkyDome, the bottom edge had a UV 'Y' value of 0.318. Naively, I had assumed the bottom edge of a half dome would have a UV 'Y' value of 0.5 (halfway up the texture map). Is this the standard UV mapping for half a SkyDome? It has caused some issues when I've applied some HDRIs.
1 reply · 0 boosts · 460 views · Jul ’24
Getting the Wi-Fi's SSID on macOS
I want to extend an existing macOS app distributed through the Mac App Store with the ability to track, over time, the Wi-Fi noise and signal strength along with the SSID it is connected to. Using CWWiFiClient.shared().interface(), I can get noiseMeasurement() and rssiValue() fine, but ssid() always returns nil. I am assuming this is a privacy issue (?). Are there specific entitlements I can request, or ways to prompt the user to grant the app the privilege to access the SSID values?
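For reference, this is roughly the sampling code I'm using — a minimal sketch:

import CoreWLAN

// Read the current Wi-Fi interface's measurements
if let interface = CWWiFiClient.shared().interface() {
    let noise = interface.noiseMeasurement()   // returns a value fine
    let rssi = interface.rssiValue()           // returns a value fine
    let ssid = interface.ssid()                // always nil in my Mac App Store build
    print("noise: \(noise) dBm, rssi: \(rssi) dBm, ssid: \(ssid ?? "nil")")
}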
1 reply · 0 boosts · 622 views · Jul ’24
Xbox controller and visionOS 2
I am having problems getting button input from an Xbox game controller. I have the visionOS 2 beta on my Apple Vision Pro, and I am trying to use an Xbox game controller with a RealityView, following the instructions from the WWDC session Explore game input in visionOS. The game-controller connection notification does pick up the controller and finds GCInputButtonA, and I am setting closures for touchedChangedHandler, pressedChangedHandler, and valueChangedHandler that just print an os_log statement:

buttonA.valueChangedHandler = { button, value, pressed in
    os_log("Got valueChangedHandler")
}

At the end of the RealityView, I have the modifier:

RealityView { content in
    // stuff
}
.handlesGameControllerEvents(matching: .gamepad)

But I never see the log message appear in the console when I press the 'A' button (or any other button). Any ideas what I might be doing wrong? The Xbox controller is pretty old; Settings reports it as version 9.0.3.
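Here is roughly how I'm wiring up the connection notification — a minimal sketch, with the handler bodies simplified:

import GameController
import os

// Observe controller connections and attach handlers to button A
NotificationCenter.default.addObserver(
    forName: .GCControllerDidConnect, object: nil, queue: .main
) { notification in
    guard let controller = notification.object as? GCController,
          let buttonA = controller.physicalInputProfile.buttons[GCInputButtonA] else { return }
    buttonA.pressedChangedHandler = { _, _, _ in
        os_log("Got pressedChangedHandler")
    }
    buttonA.valueChangedHandler = { _, _, _ in
        os_log("Got valueChangedHandler")
    }
}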
1 reply · 1 boost · 696 views · Jun ’24
Triangle count and texture size budget for RealityKit on visionOS
In the past, Apple recommended restricting USDZ models to a maximum of 100,000 triangles and a texture size of 2048x2048 for Apple QuickLook (and I think for RealityKit on iOS in general). Does Apple have any recommended maximum polygon counts for visionOS? Is it the same for models running in a volumetric window in the Shared Space as for models in an ImmersiveSpace? What is the recommended texture size for visionOS? (I seem to recall 8192x8192, but I can't find it now.)
2 replies · 0 boosts · 966 views · May ’24
Portion of Canvas that is visible in ScrollView?
I have a Canvas inside a ScrollView on a Mac. The Canvas's size is determined by a model (for the example below, I am simply drawing a grid of circles of a given radius). Everything appears to work fine. However, I am wondering if it is possible for the Canvas rendering code to know what portion of the Canvas is actually visible in the ScrollView? For example, if the Canvas is large but the visible portion is small, I would like to avoid drawing content that is not visible. Is this possible?

Example of a Canvas in a ScrollView that I am using for testing:

struct MyCanvas: View {
    @ObservedObject var model: MyModel

    var body: some View {
        ScrollView([.horizontal, .vertical]) {
            Canvas { context, size in
                // Placeholder rendering code
                for row in 0..<model.numOfRows {
                    for col in 0..<model.numOfColumns {
                        let left: CGFloat = CGFloat(col * model.radius * 2)
                        let top: CGFloat = CGFloat(row * model.radius * 2)
                        let size: CGFloat = CGFloat(model.radius * 2)
                        let rect = CGRect(x: left, y: top, width: size, height: size)
                        let path = Circle().path(in: rect)
                        context.fill(path, with: .color(.red))
                    }
                }
            }
            .frame(width: CGFloat(model.numOfColumns * model.radius * 2),
                   height: CGFloat(model.numOfRows * model.radius * 2))
        }
    }
}
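One approach I'm experimenting with — assuming the macOS 15 onScrollGeometryChange modifier is available to me, and that ScrollGeometry.visibleRect is reported in the content's coordinate space — is to track the visible rect in state and skip circles that fall outside it. A rough, self-contained sketch with hard-coded stand-ins for the model values:

import SwiftUI

struct VisibleAwareCanvas: View {
    // Stand-ins for model.numOfRows, model.numOfColumns, and model.radius
    let rows = 200
    let columns = 200
    let radius: CGFloat = 10

    @State private var visibleRect: CGRect = .zero

    var body: some View {
        ScrollView([.horizontal, .vertical]) {
            Canvas { context, _ in
                for row in 0..<rows {
                    for col in 0..<columns {
                        let rect = CGRect(x: CGFloat(col) * radius * 2,
                                          y: CGFloat(row) * radius * 2,
                                          width: radius * 2,
                                          height: radius * 2)
                        // Skip circles outside the scroll view's viewport
                        guard visibleRect.isEmpty || rect.intersects(visibleRect) else { continue }
                        context.fill(Circle().path(in: rect), with: .color(.red))
                    }
                }
            }
            .frame(width: CGFloat(columns) * radius * 2,
                   height: CGFloat(rows) * radius * 2)
        }
        .onScrollGeometryChange(for: CGRect.self) { geometry in
            geometry.visibleRect
        } action: { _, newValue in
            visibleRect = newValue
        }
    }
}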
0 replies · 0 boosts · 311 views · May ’24
visionOS 3D tap location offset by ~0.35m?
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors. I can look at a location on the floor or ceiling, tap, and place an object at that location (I am currently placing a small cube with X-Y-Z axes sticking out at the location). The tap locations are consistently about 0.35m off along the horizontal plane (they are never off vertically) from where I was looking. Has anyone else run into the issue of a spatial tap gesture resulting in a location offset from where they are looking? If I move to different locations, the offset is the same in real space, so it doesn't appear to be associated with the orientation of the Apple Vision Pro (e.g., it isn't off a little to the left of where the headset was looking).

Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in RealityView, extracted the location, and placed a purple cube at that location. I stood in 4 different locations (where the orange squares are), looked at the corner of the rug (yellow circle), and tapped. All 4 purple cubes are placed at about the same location, roughly 0.35m away from the look location.

Here is how I captured the tap gesture and extracted the 3D location:

var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            let location3D = event.convert(event.location3D, from: .global, to: .scene)
            let entity = event.entity
            model.handleTap(location: location3D, entity: entity)
        }
}

Here is how I set the position of the purple cube:

func handleTap(location: SIMD3<Float>, entity: Entity) {
    let positionEntity = Entity()
    positionEntity.setPosition(location, relativeTo: nil)
    ...
}
5 replies · 0 boosts · 1.1k views · Apr ’24
Getting to MeshAnchor.MeshClassification from MeshAnchor?
I am working with MeshAnchors, and I am having trouble getting to the classification of the triangles/faces. This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there? I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
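The closest I've gotten is reading the raw values out of the classifications GeometrySource buffer myself. This is only a sketch, based on my assumption that the source stores one UInt8 per face whose value maps onto MeshAnchor.MeshClassification's raw values:

import ARKit

// Assumed layout: one UInt8 classification raw value per face
func faceClassifications(for meshAnchor: MeshAnchor) -> [MeshAnchor.MeshClassification] {
    guard let source = meshAnchor.geometry.classifications else { return [] }
    let base = source.buffer.contents().advanced(by: source.offset)
    return (0..<source.count).map { index in
        let rawValue = base.advanced(by: index * source.stride)
            .assumingMemoryBound(to: UInt8.self).pointee
        return MeshAnchor.MeshClassification(rawValue: Int(rawValue)) ?? MeshAnchor.MeshClassification.none
    }
}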
2 replies · 0 boosts · 881 views · Mar ’24
SpatialTapGesture and collision surface's normal?
I see example code converting the result of a SpatialTapGesture to a SIMD3 location. For example, from the WWDC session Meet ARKit for spatial computing:

let location3D = value.convert(value.location3D, from: .global, to: .scene)

What I really want is a simd_float4x4 that includes the orientation of the surface the tap gesture / ray cast collided with. My goal is to place an object with its Y-axis along the normal of the surface that was tapped. For example, in the referenced WWDC session, they create a CollisionComponent from the MeshAnchor data. If that mesh data covers a curved couch cushion, I would like the normal from that curved cushion (i.e., from the closest triangle approximating it). Is this possible?

My planned fallback is to only use planes as collision surfaces for tap gestures, extract the tap gesture value's entity (which I am hoping is the plane), and grab its transform for the orientation information. I am hoping Apple has a simple function call that is more general than my fallback approach.
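A minimal sketch of that fallback, assuming the tapped entity is the detected plane (model.place(at:alignedTo:) is just a placeholder for my own placement code):

SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        let location3D = value.convert(value.location3D, from: .global, to: .scene)
        // Grab the tapped entity's world transform and treat its Y axis as the surface normal
        let surfaceTransform = value.entity.transformMatrix(relativeTo: nil) // simd_float4x4
        let surfaceUp = SIMD3<Float>(surfaceTransform.columns.1.x,
                                     surfaceTransform.columns.1.y,
                                     surfaceTransform.columns.1.z)
        model.place(at: location3D, alignedTo: surfaceUp)
    }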
1 reply · 0 boosts · 574 views · Mar ’24
Occlusion material and progressive ImmersiveSpace
In a progressive ImmersiveSpace, I created an object (a cylinder) and applied an OcclusionMaterial to it. It does hide my virtual content behind it, but it does not show the content of my room; the cylinder just appears black. In a progressive (or full?) ImmersiveSpace, is it possible to apply an occlusion material (or something else) so I can see the room behind the virtual content? Basically, I want to punch a hole through the virtual content and see the room behind it. As a practical example, imagine being in a progressive ImmersiveSpace but having a plane with an occlusion mesh applied to it above your Apple Magic Keyboard, so you can see your keyboard. Is this possible?
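For reference, this is essentially how I'm creating the cylinder — a minimal sketch, with arbitrary dimensions:

// The cylinder with OcclusionMaterial that currently renders black in the progressive space
let cylinder = ModelEntity(
    mesh: .generateCylinder(height: 1.0, radius: 0.25),
    materials: [OcclusionMaterial()]
)
content.add(cylinder)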
0 replies · 0 boosts · 482 views · Feb ’24
Portals and ImmersiveSpace?
I've added a simple visionOS portal to an app's initial WindowGroup (a window with an attached portal is all that is displayed), but I've had trouble adding a portal to an ImmersiveSpace. For example, using the boilerplate code that Xcode creates for a mixed spatial experience, I'd like to turn on and off the ImmersiveSpace, which has a portal in it. So far, the portal isn't showing up. Is it possible to add a portal to an ImmersiveSpace? Are there any restrictions on where portals can be added?
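Here is roughly how I'm building the portal inside the ImmersiveSpace's RealityView — a minimal sketch, where the plane size and the world's contents are placeholders:

// World to show through the portal
let world = Entity()
world.components.set(WorldComponent())
// ... add the skybox / content for the portal's world here ...

// Plane that acts as the portal surface
let portalPlane = Entity()
portalPlane.components.set(ModelComponent(
    mesh: .generatePlane(width: 1.0, height: 1.0),
    materials: [PortalMaterial()]
))
portalPlane.components.set(PortalComponent(target: world))

content.add(world)
content.add(portalPlane)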
1 reply · 0 boosts · 555 views · Feb ’24