In visionOS, once an immersive space is opened, the background color is solid black. Is it possible to make this background transparent?
FYI, the immersive spaces on visionOS use Compositor Services for drawing 3D content.
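For reference, in a plain RealityKit immersive space the black background comes from the .full immersion style; a .mixed immersion style composites content over passthrough instead. I am not sure this carries over to a Compositor Services layer, so treat this as a minimal sketch (the app and content view names are hypothetical):

import SwiftUI

@main
struct MyApp: App { // hypothetical app name
    @State private var immersionStyle: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "ImmersiveSpace") {
            MyImmersiveContent() // placeholder for your content
        }
        // .mixed shows your content over passthrough rather than over black.
        .immersionStyle(selection: $immersionStyle, in: .mixed, .full)
    }
}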
AR / VR
Discuss augmented reality and virtual reality app capabilities.
I have recently started testing ARKit on an iPhone 16 Pro, and I have noticed that autofocus reacts much more slowly on this device than on others. For example, if I point the camera at a close object, autofocus takes 4-5 seconds to stabilize; the focal length adjusts very slowly. In some cases (although this is rare) autofocus seems almost stuck and requires a bit of device movement to trigger.
This is quite problematic when using ARKit features like image and object detection, as the detection algorithms struggle with out-of-focus images.
This problem is limited to ARKit. Autofocus is significantly more responsive when the standard AVFoundation camera API is used.
This behavior is easy to reproduce with any of the ARKit samples like https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/tracking_and_visualizing_planes
Is anybody else experiencing this problem?
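For anyone who wants to experiment, the snippet below is only a sketch of something I am trying, not a confirmed fix: since iOS 16, ARKit exposes its capture device, so you can try forcing continuous autofocus yourself once the session is running.

import ARKit

// Sketch: force continuous autofocus on the camera ARKit is using.
// configurableCaptureDeviceForPrimaryCamera is available from iOS 16;
// call this after the ARSession has started running.
func forceContinuousAutoFocus() {
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else { return }
    do {
        try device.lockForConfiguration()
        if device.isFocusModeSupported(.continuousAutoFocus) {
            device.focusMode = .continuousAutoFocus
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not configure capture device: \(error)")
    }
}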
Hi team, I don't know why this error has popped up. Any suggestions, please?
Thanks
Zippy Games
Suppose there is an immersive space, and an Entity() is added to the space as a child entity of the content. This entity is responsible for playing background music by calling prepareAudio, obtaining a controller, and playing the music (see the basic code below).
While it is playing music, a .plain window and an immersive space are both presented. I believed the immersive space holds the controller, so that as long as the immersive space is open, the music won't stop.
However, if I close the .plain window (via the system-level close button), the music just stops, even though the immersive space is still open. If I then check the value of controller.isPlaying, it is still true, but you cannot hear the music anymore.
To reproduce, open a visionOS template app project, select Volume and Full Immersive, and replace some code in ImmersiveView.swift with the code below. Also drag in any .mp3 file and replace the AudioFileResource's name. You should then be able to reproduce this bug.
RealityView { content in
    // Add the initial RealityKit content
    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)

        // Put skybox here. See example in World project available at
        // https://developer.apple.com/
        if let audioResource = try? await AudioFileResource(named: "anyMP3file.mp3") {
            let ent = Entity()
            immersiveContentEntity.addChild(ent)
            let controller = ent.prepareAudio(audioResource)
            controller.play()
        }
    }
}
I wonder why this happens? How should I keep the music playing when I close the .plain window?
Thanks!
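One workaround I am considering (untested, and assuming the problem is that the playback handle is tied to the window's view lifecycle): keep the AudioPlaybackController in app-level state that outlives the window and try resuming it when the scene phase changes. AudioModel here is a hypothetical helper, and the .mp3 name is a placeholder.

import SwiftUI
import RealityKit

@Observable
final class AudioModel {
    var controller: AudioPlaybackController?
}

struct ImmersiveView: View {
    @Environment(AudioModel.self) private var audio
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        RealityView { content in
            let ent = Entity()
            content.add(ent)
            if let resource = try? await AudioFileResource(named: "anyMP3file.mp3") {
                let controller = ent.prepareAudio(resource)
                controller.play()
                // Keep the handle in state that outlives this window.
                audio.controller = controller
            }
        }
        .onChange(of: scenePhase) { _, newPhase in
            // Speculative recovery: if the system silenced playback when the
            // window closed, try resuming from the retained controller.
            if newPhase == .active, let controller = audio.controller, !controller.isPlaying {
                controller.play()
            }
        }
    }
}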
Platform: iOS 18
Tech: RealityView
Hi! I was wondering whether RealityView now provides a way for its session to persist anchor data in a world, so that anchor locations saved in one session can be loaded in another session at exactly the same positions.
I know that ARWorldMap in ARKit does that, but I was not able to find a way to use it with RealityView. I think that's because RealityView has ARKit under the hood but does not expose the ARKit session info publicly to client code.
So I was wondering if there's a SwiftUI + RealityView approach that can help me achieve a similar goal: come back to the same location and see the object in exactly the same place.
Thanks!
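For reference, here is a minimal sketch of the classic route with ARView, which does expose its ARSession (unlike RealityView, as far as I can tell); the file URL handling is up to you.

import ARKit
import RealityKit

// Save the current world map so saved ARAnchors can be restored later.
func saveWorldMap(from arView: ARView, to url: URL) {
    arView.session.getCurrentWorldMap { worldMap, error in
        guard let worldMap else { return }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}

// Relocalize against the saved map in a later session.
func restoreWorldMap(into arView: ARView, from url: URL) throws {
    let data = try Data(contentsOf: url)
    let worldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}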
Hey Everyone, Happy New Year!
I wanted to see if you have seen this before. I have added an attachment to the RealityView as a child of an entity that has a Billboard component set on it. I wanted to create the effect that the attachment is offset by 0.5 meters from center and follows the device as you move around it. It works great until you try to click a button.
The attachment moves with the billboard, but the collision box around the attachment does not follow it. If I position myself perfectly, it works.
Video Example: https://youtu.be/4d9Vx7K8MmU
//
// ImmersiveView.swift
// Billboard Attachment
//
// Created by Justin Leger on 1/3/25.
//
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {

    var rootEntity = Entity()

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            let sphereEntity = ModelEntity(mesh: .generateSphere(radius: 0.1), materials: [SimpleMaterial(color: .red, roughness: 1, isMetallic: false)])
            sphereEntity.position = [0.0, 1.0, -2.0]

            let controlsPivotEntity = Entity()
            controlsPivotEntity.components[BillboardComponent.self] = .init()

            // Extract the attachment entity and disable it before it's used.
            if let controlsViewAttachmentEntity = attachments.entity(for: PlacedThingControls.attachmentId) {
                controlsViewAttachmentEntity.position.z = 0.5
                controlsPivotEntity.addChild(controlsViewAttachmentEntity)
                sphereEntity.addChild(controlsPivotEntity)
            }

            content.add(sphereEntity)
        }
        attachments: {
            Attachment(id: PlacedThingControls.attachmentId) {
                PlacedThingControls()
            }
        }
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}

struct PlacedThingControls: View {

    static let attachmentId = "placed-thing-3D-controls"

    var body: some View {
        VStack {
            HStack(spacing: 0) {
                Button {
                    print("🗺️🗺️🗺️ Map selected pieces")
                } label: {
                    Text("\(Image(systemName: "plus.square.dashed")) Manage Mesh Maps")
                        .fontWeight(.semibold)
                        .frame(maxWidth: .infinity)
                }
                .padding(.leading, 20)

                Spacer()

                Button(role: .destructive) {
                    print("🗑️🗑️🗑️ Delete selected pieces")
                } label: {
                    Label {
                        Text("Delete")
                    } icon: {
                        Image(systemName: "trash")
                    }
                    .labelStyle(.iconOnly)
                }
                .padding(.trailing, 20)
            }
            .padding(.vertical)
            .frame(minWidth: 320, maxWidth: 480)
        }
        .glassBackgroundEffect()
    }
}
Hi everyone,
I’m currently developing an app using Apple’s RoomPlan framework, and so far, everything is working great! However, I’d like to extend the functionality to include scanning smaller objects, such as light switches or power outlets, in addition to the walls and larger furniture that RoomPlan already supports.
From what I understand, based on the documentation, RoomPlan doesn’t natively support the detection or measurement of smaller objects like these. Is that correct?
If that’s the case, does anyone have suggestions or ideas on how this could be achieved? Perhaps by integrating another framework or technology alongside RoomPlan?
I’d appreciate any insights or advice from those who have worked on similar use cases.
Thanks in advance!
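One direction I have been sketching (unverified, and the Core ML model is hypothetical): RoomCaptureSession exposes its underlying ARSession, so in principle you can watch its frames and run your own Vision detector for small objects alongside RoomPlan. Note that taking over the session delegate may interfere with RoomPlan itself.

import ARKit
import RoomPlan
import Vision

final class SmallObjectSpotter: NSObject, ARSessionDelegate {
    // Hypothetical Core ML model trained to detect switches/outlets.
    private let request: VNCoreMLRequest

    init(model: VNCoreMLModel) {
        self.request = VNCoreMLRequest(model: model)
        super.init()
    }

    func attach(to captureSession: RoomCaptureSession) {
        // RoomPlan exposes the ARSession it drives; observe its frames.
        captureSession.arSession.delegate = self
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage)
        try? handler.perform([request])
        // From here, raycast the detected 2D boxes into the scene
        // to place the switches/outlets in 3D.
    }
}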
"Although Xcode generates loading methods for all Reality Composer files in your Xcode project"
I do not find this to be true, sadly.
Does anyone have any luck or insight on how one can build just a simple MacOS app that will import a scene from a Reality File?
The documentation suggests that the simple act of bringing a .Reality File in (What about .realitycomposerpro?) will generate code, but that doesn't seem to happen.
The sample code (Spaceship) does not compile for MacOS.
I'd really love just the most generic template of an Xcode Project that compiles with a button that pops open a scene., Like the VisionOS default immersive project.
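In the meantime, a minimal sketch of loading the file manually, assuming macOS 15 (where RealityView is available on the Mac) and a bundled file named Scene.reality as a placeholder:

import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // Load the .reality file by URL instead of relying on
            // Xcode's generated loading methods.
            if let url = Bundle.main.url(forResource: "Scene", withExtension: "reality"),
               let scene = try? await Entity(contentsOf: url) {
                content.add(scene)
            }
        }
    }
}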
Hi,
When I attach a BillboardComponent to anchor entities, I am no longer able to retrieve the tapped entity, because the entity's collision shapes are thrown off by it always orienting toward the camera. The collision shapes are not updated either: if I tap somewhere that is not my model entity, I get a hit out of nowhere.
I tried updating the collision shapes of the entity every frame:
for child in existingPassport.mainEntity!.children {
    child.generateCollisionShapes(recursive: true)
}
However, nothing comes of it, and it is not a smart solution in the first place, because recreating the shapes every frame is too heavy.
I am using the usual ARView controller setup, which works just fine when I comment out the BillboardComponent line:
private func setupTapRecognizer() {
    let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap))
    arView.addGestureRecognizer(tapRecognizer)
}

@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    print("handle tap URL 1")
    let location = recognizer.location(in: arView)
    if let entity = arView.entity(at: location) {
        print("handle tap URL 2")
        // Assuming each entity has a URL stored in a component
        if let urlComponent = entity.components[URLComponent.self] {
            webViewPresenter?.presentFullScreenWebView(url: urlComponent.url)
            print("handle tap URL: \(urlComponent.url)")
        }
    }
}
How should we tackle this issue on iOS 18?
Thanks!
I’m developing an app using RealityKit and RealityView. On newer iPhones, such as the iPhone 15 Pro, Object Occlusion appears to be enabled by default, which causes 3D entities to be hidden behind real-world objects in the scene. However, I need to disable this behavior to ensure proper rendering of my 3D content.
This issue does not occur on older devices like the iPhone 13, where the app works as intended. I haven’t been able to find a solution to explicitly disable object occlusion on the newer devices for RealityView.
Any guidance or suggestions to resolve this issue would be greatly appreciated! Thanks!
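For what it's worth, with ARView the switch is the scene-understanding options; I am not sure RealityView on iOS 18 exposes an equivalent, so treat this as a sketch of the ARView route only.

import RealityKit

func disableOcclusion(in arView: ARView) {
    // Occlusion is driven by scene-understanding options; removing the
    // option stops real-world geometry from hiding virtual content.
    arView.environment.sceneUnderstanding.options.remove(.occlusion)
}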
Hi guys, I have set the size of a cube to 20 cm. Then I want to set its transform scale to the values x: 0.048, y: 1, z: 0.048, but strangely the panel resets all values to x: 1, y: 1, z: 1. I then tried other decimals, such as x: 1.2, y: 2, z: 1.2, and the panel automatically resets them to x: 1, y: 1, z: 1. My question is: how do I set a transform scale that accepts decimal numbers?
Hi, I added a DockingRegion to my scene in Reality Composer Pro, and I am able to load the scene, but the DockingRegion is ignored and the scene renders with no change to the AVPlayerViewController window. As can be seen in the Reality Composer Pro screenshot below, I set the width of the player to 666 and moved it back by 300 cm, but the actual result does not reflect the position I set in Reality Composer Pro.
Is there anything else I should do besides loading the entity and adding it to the RealityView? Specifically, do I have to get the DockingRegion from within the .usda file and somehow enable it?
Hello everyone,
Since last night, the Object Capture feature in my app has stopped working. Whenever I try to use it, a blank screen is displayed instead of the expected functionality.
I’ve also tested several other apps that rely on Object Capture, and they are experiencing the same issue. This makes me think it might not be a problem specific to my device or app.
I’ve already tried restarting my device and ensuring all apps are up to date, but the issue persists.
Does anyone have more information about this issue? If so, is there any update on when it might be resolved?
Thank you in advance for your help!
Best regards
Hi, everyone!
I'm trying to bind a timeline (animation + audio) and behaviors to an entity in Reality Composer Pro.
In Xcode, I need to clone this entity and use the behavior, but I found that the behaviors are not cloned (the notification is sent but never received, so the timeline does not execute).
How can I solve this problem? Thanks!
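For context, the notification pattern I know for firing a Reality Composer Pro behavior looks like the sketch below (the identifier is whatever you set on the Notification trigger in RCP). My speculation is that the clone's behavior still targets the original entity's scene or identifier, which would explain why the clone never reacts.

import Foundation
import RealityKit

// Fire a Reality Composer Pro "Notification" trigger for an entity's scene.
func fireTrigger(named identifier: String, on entity: Entity) {
    guard let scene = entity.scene else { return }
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}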
Ever since updating to Xcode 16, my AR app doesn't compile because Xcode doesn't recognize the .rcproject files used to load the AR experiences in the iOS app. The .rcproject files were authored in Reality Composer on iPadOS.
The expected behavior is described in this official Apple documentation article: https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
How do I submit a ticket to Apple?
When using RealityKit's ARView, I can write code like let results = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any) to get the 3D position of where I tap on a plane. In iOS 18 we can use RealityView, and I found that unproject(_:from:to:ontoPlane:) may implement the same function, but I don't know how to set the ontoPlane parameter.
Can someone help me with some code snippets?
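I have not verified this on device, but my reading is that ontoPlane takes a float4x4 whose xz-plane defines the plane to hit, so a horizontal plane at a known height might look like this sketch:

import SwiftUI
import RealityKit
import simd

// Sketch: unproject a screen point onto a horizontal plane at planeHeight.
// Assumption: ontoPlane is a transform whose xz-plane defines the plane.
func point(on content: RealityViewCameraContent,
           at screenPoint: CGPoint,
           planeHeight: Float) -> SIMD3<Float>? {
    var planeTransform = matrix_identity_float4x4
    planeTransform.columns.3.y = planeHeight
    return content.unproject(screenPoint,
                             from: .local,
                             to: .scene,
                             ontoPlane: planeTransform)
}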
I have this code to make an AR/VR stereo view to be used in a VR box or Google Cardboard. It uses the new RealityView in iOS 18, but for some reason the left side shows the entity (box) nearer to the camera than the right side, which makes the two views not identical. I wonder, is this a bug that needs to be fixed, or what? Thanks.
Here is the code:
import SwiftUI
import RealityKit

struct ContentView: View {

    let anchor1 = AnchorEntity(.camera)
    let anchor2 = AnchorEntity(.camera)

    var body: some View {
        HStack(spacing: 0) {
            MainView(anchor: anchor1)
            MainView(anchor: anchor2)
        }
        .background(.black)
    }
}

struct MainView: View {

    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
            let item = ModelEntity(mesh: .generateBox(size: 0.25), materials: [SimpleMaterial()])
            anchor.addChild(item)
            content.add(anchor)
            anchor.position.z = -1.0
            anchor.orientation = .init(angle: .pi / 4, axis: [0, 1, 1])
        }
    }
}
And here is the view:
I have an issue using RealityView to show two AR views side by side; I did succeed in making it work as non-AR, but now my code is not working.
It also works using Storyboard and Swift with SceneKit, so why is it not working in RealityView?
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        HStack(spacing: 0) {
            MainView()
            MainView()
        }
        .background(.black)
    }
}

struct MainView: View {

    @State var anchor = AnchorEntity()

    var body: some View {
        RealityView { content in
            let item = ModelEntity(mesh: .generateBox(size: 0.2), materials: [SimpleMaterial()])
            content.camera = .spatialTracking
            anchor.addChild(item)
            anchor.position = [0.0, 0.0, -1.0]
            anchor.orientation = .init(angle: .pi / 4, axis: [0, 1, 1])
            // Add the anchor to the scene
            content.add(anchor)
        }
    }
}
I want to use SwiftUI views as RealityKit entities to display AR labels within a RealityKit scene, and the labels could be more complicated than just text in a window, as they might include images, dynamic text, animations, WebViews, etc. visionOS enables this through RealityView attachments, and there is RealityView support on iOS 18.
I tried running RealityView attachments code samples from visionOS on iOS 18. However, the code below gives errors on iOS 18:
import SwiftUI
import RealityKit

struct PassportRealityView: View {

    let qrCodeCenter: SIMD3<Float>
    let assetID: String

    var body: some View {
        RealityView { content, attachments in
            // Setup your AR content, such as markers or 3D models
            if let qrAnchor = try? await Entity(named: "QRAnchor") {
                qrAnchor.position = qrCodeCenter
                content.add(qrAnchor)
            }
        } attachments: {
            Attachment(id: "passportTextAttachment") {
                Text(assetID)
                    .font(.title3)
                    .foregroundColor(.white)
                    .background(Color.black.opacity(0.7))
                    .padding(5)
                    .cornerRadius(5)
            }
        }
        .frame(width: 300, height: 400)
    }
}
When I remove the attachments parameter and its block, the errors mostly go away. That does not help me, though, as I want to attach SwiftUI views to anchor entities in RealityKit.
As I understand it, RealityView attachments are not supported on iOS 18. I wonder if there is any way of showing SwiftUI views as entities on iOS 18 at this point, or am I forced to use text meshes and 3D planes to build the UI? I checked out the RealityUI plugin, but it's too simple for my use case of building complex AR labels. Any advice would be appreciated. Thanks!
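One fallback I can think of (only a sketch, and not equivalent to live attachments, since the result is a static snapshot that must be re-rendered whenever the view changes) is rasterizing the SwiftUI view with ImageRenderer and putting it on an unlit plane:

import SwiftUI
import RealityKit

// Render a SwiftUI view to a texture and show it on a plane entity.
@MainActor
func makeLabelEntity<V: View>(from view: V, width: Float = 0.3) throws -> ModelEntity? {
    let renderer = ImageRenderer(content: view)
    renderer.scale = 3 // render at higher resolution for legibility
    guard let cgImage = renderer.cgImage else { return nil }

    let texture = try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))

    let aspect = Float(cgImage.height) / Float(cgImage.width)
    let mesh = MeshResource.generatePlane(width: width, height: width * aspect)
    return ModelEntity(mesh: mesh, materials: [material])
}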
In visionOS, ARKit's job is to integrate the virtual and the real. However, most of its functions can be easily implemented with RealityKit alone (except for scene reconstruction, room tracking, and the enterprise APIs), so do I still need to use ARKit? Is there any difference between them?
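The way I understand the split (hedged): RealityKit does rendering, physics, and high-level anchoring, while ARKit on visionOS hands you the raw tracking data through data providers. A minimal sketch of something only ARKit gives you:

import ARKit

// visionOS: ARKit data providers expose raw anchor data (hand joints,
// planes, world anchors) that a high-level AnchorEntity does not surface.
func trackHands() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        // Raw joint transform, e.g. the index fingertip in world space:
        if let joint = anchor.handSkeleton?.joint(.indexFingerTip) {
            _ = anchor.originFromAnchorTransform * joint.anchorFromJointTransform
        }
    }
}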