I'm developing an AR app using RealityKit and ARKit, and I want my buttons to match the visionOS button style: thin, with a transparent background and rounded corners. Following is the code I have written and need help with.
func createButton(label: String, position: SIMD3<Float>) -> ModelEntity {
    // Thin, rounded slab for the button body. Dimensions and cornerRadius are in meters,
    // so the radius must stay small relative to the box (a value like 10 just gets clamped).
    let button = ModelEntity(
        mesh: .generateBox(size: [0.3, 0.1, 0.02], cornerRadius: 0.02),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )
    button.generateCollisionShapes(recursive: true)
    button.position = position

    // Add the button label.
    let buttonText = ModelEntity(
        mesh: .generateText(label, extrusionDepth: 0.005, font: .systemFont(ofSize: 0.05))
    )
    buttonText.model?.materials = [SimpleMaterial(color: .white, isMetallic: true)]
    buttonText.position = [-0.07, -0.02, 0.01]
    button.addChild(buttonText)

    return button
}
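For the transparent body, here is a minimal sketch of one approach, assuming a PhysicallyBasedMaterial with transparent blending is an acceptable approximation of the visionOS glass look (the opacity and roughness values below are placeholders to tune, not Apple-recommended settings):

import RealityKit

func makeGlassButtonMaterial() -> PhysicallyBasedMaterial {
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .white)                              // light, neutral tint
    material.blending = .transparent(opacity: .init(floatLiteral: 0.25))  // see-through body
    material.roughness = .init(floatLiteral: 0.3)                         // soft highlights
    material.metallic = .init(floatLiteral: 0.0)
    return material
}

Passing [makeGlassButtonMaterial()] instead of the SimpleMaterial above gives the box a frosted, see-through body; it will not reproduce visionOS's blurred glass exactly, but it gets much closer to the thin, translucent look.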
RealityKit
Simulate and render 3D content for use in your augmented reality apps using RealityKit.
Posts under RealityKit tag
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView.
I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085
I've included a crash log with the feedback.
If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround. It's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior.
Please let me know if there's anything I can do to help.
Thank you.
"Although Xcode generates loading methods for all Reality Composer files in your Xcode project"
I do not find this to be true, sadly.
Does anyone have any luck or insight on how one can build just a simple macOS app that imports a scene from a .reality file?
The documentation suggests that the simple act of bringing a .reality file in (what about .realitycomposerpro?) will generate code, but that doesn't seem to happen.
The sample code (Spaceship) does not compile for macOS.
I'd really love just the most generic template of an Xcode project that compiles, with a button that pops open a scene, like the visionOS default immersive project.
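A minimal sketch of such a template, assuming a macOS 15+ target and a bundled file named "MyScene.reality" (the file name and view structure are placeholders, not the Spaceship sample):

import SwiftUI
import RealityKit

struct ContentView: View {
    @State private var showScene = false

    var body: some View {
        VStack {
            Button("Open Scene") { showScene = true }
            if showScene {
                RealityView { content in
                    // Load the whole .reality file as one entity hierarchy and add it to the scene.
                    if let url = Bundle.main.url(forResource: "MyScene", withExtension: "reality"),
                       let scene = try? await Entity(contentsOf: url) {
                        content.add(scene)
                    }
                }
            }
        }
    }
}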
Hi,
When I attach a BillboardComponent to my anchor entities, I can no longer retrieve the tapped entity, because the collision shapes get thrown off by the constant reorientation toward the camera and are never updated to match. If I tap almost anywhere that is not my model entity, I still get a hit out of nowhere.
I tried regenerating the collision shapes of the entity every frame:

for child in existingPassport.mainEntity!.children {
    child.generateCollisionShapes(recursive: true)
}

However, nothing comes of it, and it is not a smart solution in the first place, because recreating the shapes every frame is too heavy.
I am using my usual ARView view controller, which works fine when I comment out the BillboardComponent line:

private func setupTapRecognizer() {
    let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap))
    arView.addGestureRecognizer(tapRecognizer)
}

@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    print("handle tap URL 1")
    let location = recognizer.location(in: arView)
    if let entity = arView.entity(at: location) {
        print("handle tap URL 2")
        // Assuming each entity has a URL stored in a component
        if let urlComponent = entity.components[URLComponent.self] {
            webViewPresenter?.presentFullScreenWebView(url: urlComponent.url)
            print("handle tap URL: \(urlComponent.url)")
        }
    }
}
How should we tackle this issue on iOS 18?
Thanks!
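One hedged idea (an assumption, not a confirmed iOS 18 fix): keep the CollisionComponent and the custom URLComponent on a parent entity that never billboards, and attach the BillboardComponent only to the visual child, so the hit-testing shape stays put while the visuals rotate. The names and the collision box size below are placeholders based on the code above:

// Parent that owns the stable collision shape and the tap metadata.
let tappableRoot = Entity()
tappableRoot.components.set(CollisionComponent(shapes: [.generateBox(size: [0.3, 0.3, 0.05])]))
tappableRoot.components.set(URLComponent(url: passportURL))   // the custom component from the code above

// Child that carries the visuals and the billboard behavior.
let visual = passportModelEntity
visual.components.set(BillboardComponent())
tappableRoot.addChild(visual)

Since arView.entity(at:) returns the entity that owns the collision shape, the tap handler above would then receive tappableRoot and still find the URLComponent on it.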
Hi, I am having some trouble creating "nested" RealityView content using a MapKit attachment.
I am building a visionOS app that has a horizontal MapKit map as an attachment to a RealityView. I want to display 3D pins on that map, so I am using native map annotations, and inside each annotation I create a new RealityView just for the 3D pin. This worked completely fine, until I wanted those RealityViews to interact with each other.
By interaction I mean that I wanted to group entities from the first, "main" RealityView's content with the 3D pins using ModelSortGroupComponent.
Why do I want this? Making the map circular is not a problem. The problem is that when I move the map with 3D pins, the pins live in their own RealityView space and are only bounded by the volumetric window dimensions, so they float next to the map (shown in the attached image). So I came up with this solution: create a custom "toroid"-like 3D entity model that occludes the pins that go outside the map region. In order to occlude only the pins, I need to use ModelSortGroupComponent to group the "toroid" entity with the 3D pin entities (as described in another forum thread).
To summarize: I need the content of the outer RealityView to interact with the map attachment annotations' RealityView content in order to group them. There might of course be another, better way to achieve my overall goal, so I would naturally appreciate any help or guidance.
The image below shows the 3D pins on the circular map. Since the pins' RealityView does not know anything about the other RealityViews, it just overflows and hangs in space until it is cropped by the volumetric window boundary.
Simplified code:
var body: some View {
    let modelSortGroup = ModelSortGroup(depthPass: .prePass)

    RealityView { content, attachments in
        let mainEntity = Entity()
        // My other entities here...

        if let mapAttachment = attachments.entity(for: "mapAttachment") {
            // Edit map properties, position, horizontal layout etc.
            mainEntity.addChild(mapAttachment)
        }

        // Create and add to content the mask "toroid" entity mapMaskEntity, using an OcclusionMaterial().
        mapMaskEntity.components.set(ModelSortGroupComponent(group: modelSortGroup, order: 0))

        // For all pins, somehow also set the group:
        // 3DPinEntity.components.set(ModelSortGroupComponent(group: modelSortGroup, order: 1))

        content.add(mainEntity)
    } attachments: {
        Attachment(id: "mapAttachment") {
            Map {
                ForEach(mapViewModel.clusters, id: \.id) { cluster in
                    Annotation("", coordinate: cluster.coordinate) {
                        MapPin3DView(cluster: cluster)
                    }
                }
            }
            .clipShape(Circle())
        }
    }
}

// MapPin3DView is a map annotation view that shows the 3D pin model plus some details (image etc.); it uses its own RealityView.
struct MapPin3DView: View {
    var body: some View {
        RealityView { content in
            // 3D pin entities...
        }
    }
}
I’m developing an app using RealityKit and RealityView. On newer iPhones, such as the iPhone 15 Pro, Object Occlusion appears to be enabled by default, which causes 3D entities to be hidden behind real-world objects in the scene. However, I need to disable this behavior to ensure proper rendering of my 3D content.
This issue does not occur on older devices like the iPhone 13, where the app works as intended. I haven’t been able to find a solution to explicitly disable object occlusion on the newer devices for RealityView.
Any guidance or suggestions to resolve this issue would be greatly appreciated! Thanks!
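A hedged sketch (an assumption, not a documented RealityView switch): if the content can be hosted in an ARView instead, occlusion can be turned off explicitly through scene understanding and frame semantics.

// Remove mesh-based occlusion of virtual content (arView is the hosting ARView).
arView.environment.sceneUnderstanding.options.remove(.occlusion)

// If person occlusion was enabled via frame semantics, drop it as well.
if let configuration = arView.session.configuration as? ARWorldTrackingConfiguration {
    configuration.frameSemantics.remove(.personSegmentationWithDepth)
    arView.session.run(configuration)
}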
In RealityView, physics components only apply to rigid solids. How can I simulate the physical behavior of water and cloth?

entity.components.set(PhysicsBodyComponent())
We have a plane model (baseCircle) without physics or rigid body components, and no gestures are implemented. However, when tapped, the model unexpectedly falls into infinity.
func fetchEnvResource() {
    var simpleMaterial = SimpleMaterial()

    env = try! Entity.loadModel(named: "bgMain5")
    env.position += [0, 0, -10]

    envTexture = PhysicallyBasedMaterial()
    envTextureUnlit = UnlitMaterial()
    envTexture.baseColor = .init(texture: .init(try! .load(named: "bgMain5")))
    envTextureUnlit.color = .init(texture: .init(try! .load(named: "bgMain5")))
    env.isEnabled = false

    let anchor = AnchorEntity(world: [0, 0, -3])

    baseCircle = ModelEntity(
        mesh: .generatePlane(width: 1.5, depth: 1.5, cornerRadius: 0.75),
        materials: [SimpleMaterial(color: .green, isMetallic: false)]
    )
    env.components.set(InputTargetComponent())

    baseMaterial = PhysicallyBasedMaterial()
    baseMaterialUnlit = UnlitMaterial()
    baseMaterial.baseColor = .init(texture: .init(try! .load(named: "groundTexture")))
    baseMaterial.baseColor.tint = UIColor(white: 1.0, alpha: CGFloat(textureOpacity))
    baseCircle.model?.materials = [baseMaterial]

    baseCircle.generateCollisionShapes(recursive: false)
    baseCircle.components.set(InputTargetComponent())
    baseCircle.components[PhysicsBodyComponent.self] = .init(PhysicsBodyComponent(massProperties: .default, mode: .static))
    baseCircle.physicsBody = PhysicsBodyComponent(mode: .kinematic)

    anchorEntity.addChild(baseCircle)
    baseCircle.position = [0, 0, -3]
    baseCircle.isEnabled = false

    let cylinder = ModelEntity(
        mesh: .generateCylinder(height: 0.2, radius: 0.5),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )
    cylinder.position = [0, -0.1, 0]
    cylinder.generateCollisionShapes(recursive: false)
    cylinder.components[PhysicsBodyComponent.self] = .init(PhysicsBodyComponent(massProperties: .default, mode: .static))
    cylinder.physicsBody = nil
    cylinder.scale = [500, 100, 100]
    anchor.addChild(cylinder)
}
The plane model in this issue is baseCircle. Any suggestions on how to solve this or potential fixes would be greatly appreciated.
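One hedged observation, not a confirmed diagnosis: the .static PhysicsBodyComponent set on baseCircle is immediately replaced by a .kinematic one, so the two assignments fight each other. A sketch of one consistent setup for a tappable surface that should never move (mirroring the names and values in the post):

baseCircle.generateCollisionShapes(recursive: false)
baseCircle.components.set(InputTargetComponent())
// Set the physics body once and keep it static so gravity and gestures cannot move it.
baseCircle.components.set(PhysicsBodyComponent(massProperties: .default, mode: .static))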
Hey there, I am working on an app that displays environmental data using PNG color channels to represent data ranges, which gets overlaid on a map. The sampled values aren't what I'm expecting though... for example, an RGB value of 0x7f0000 (R = 0.5, G = 0, B = 0) comes through as 0.21, 0, 0 in the shader. This basically makes it unusable if I'm trying to show scientific data... I'm half wondering if I am completely misunderstanding how sampling works in RealityKit / Reality Composer Pro. Anybody have any idea why it works like this?
Actual result (chart labels added in photoshop):
Expected:
Red > 0.1 Shader Graph
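One possible explanation (an assumption, not a confirmed answer): the texture is treated as sRGB, so the sampler converts each channel to linear color before the shader graph sees it, and the standard sRGB transfer function maps 0x7f (≈ 0.498) to roughly 0.21:

import Foundation

// Standard sRGB-to-linear transfer function.
func srgbToLinear(_ c: Double) -> Double {
    c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
}

let stored = Double(0x7f) / 255.0    // ≈ 0.498, the value authored into the PNG
print(srgbToLinear(stored))          // ≈ 0.212, matching the 0.21 seen in the shader

If that is the cause, keeping the data in a texture marked as non-color (raw/linear) data, or inverting the transfer function inside the shader graph, should preserve the original values.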
A large number of crashes were detected in the background when users were using Object Capture
Crash TXT
We're developing a visionOS application where we would like to do product recognition (for example, food items).
We have enterprise entitlements and therefore main camera access on visionOS. We send the live camera frames to a trained Core ML model and receive 2D coordinates from the model's detection prediction.
Now we would like to create a 3D anchor on the detected items so it is visible to the user. The 3D anchor is going to show the class name of the detected item.
How do we transform this 2D coordinate from the model prediction into a 3D anchor?
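One way to frame it (a sketch of the math only, under the assumption that you have the camera intrinsics and the camera-to-world transform of the frame that produced the prediction; none of the names below are visionOS API): unproject the 2D pixel into a world-space ray, then raycast that ray against scene geometry to get a 3D position for the anchor.

import simd

// Placeholder helper: turn a pixel coordinate into a world-space ray.
func rayForPixel(_ pixel: SIMD2<Float>,
                 intrinsics: simd_float3x3,
                 cameraToWorld: simd_float4x4) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    // Unproject the pixel into a camera-space direction.
    let viewDir = normalize(intrinsics.inverse * SIMD3<Float>(pixel.x, pixel.y, 1))
    // Assumed camera convention: +x right, +y down, looking down -z, so flip y and z.
    let localDir = SIMD3<Float>(viewDir.x, -viewDir.y, -viewDir.z)
    // Move the origin and direction into world space.
    let origin = SIMD3<Float>(cameraToWorld.columns.3.x,
                              cameraToWorld.columns.3.y,
                              cameraToWorld.columns.3.z)
    let worldDir = cameraToWorld * SIMD4<Float>(localDir.x, localDir.y, localDir.z, 0)
    return (origin, normalize(SIMD3<Float>(worldDir.x, worldDir.y, worldDir.z)))
}

The resulting ray can then be intersected with the scene (for example via Scene.raycast(origin:direction:...) or the scene-reconstruction mesh), and the hit position used as the transform of a world anchor that displays the class name.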
Hi
Hopefully someone can share some ideas on how to accomplish this.
I know we can load models from realityKitContentBundle like
let model = try? await Entity(named: "testModel", in: realityKitContentBundle)

But this looks in the root of RealityKitContent.rkassets; if I have the models in a subfolder, then I have to pass the complete path, like

let model = try? await Entity(named: "/superModels/testModel", in: realityKitContentBundle)

What I want is to be able to search recursively in all folders for that file, as I have several subfolders with different models.
Any suggestions?
Thanks in advance.
Guillermo
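A hedged workaround (there is no recursive-search API that I know of, so this is just a sketch): keep a list of candidate subfolders and try each one until a load succeeds. This assumes import RealityKitContent for realityKitContentBundle; the folder names in the usage line are placeholders.

func loadEntity(named name: String, searching folders: [String]) async -> Entity? {
    // Try the bundle root first, then each known subfolder.
    for folder in [""] + folders {
        let path = folder.isEmpty ? name : "\(folder)/\(name)"
        if let entity = try? await Entity(named: path, in: realityKitContentBundle) {
            return entity
        }
    }
    return nil
}

// Usage:
// let model = await loadEntity(named: "testModel", searching: ["superModels", "props"])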
This issue has existed since visionOS 1, unless this is how it is supposed to work. As you can see in the screen capture, the shadows from the top box are shown on all three boxes below.
This is a screen capture in Reality Composer Pro, but the same thing happens on the Vision Pro.
Is there any way to stop this behavior and just have shadows on the first object below the object that is casting them?
When I try to open an immersive space, I get an error like the one below:
HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
Any idea how to solve it?
I have a visionOS app using immersive space with RealityView. The app adds RealityKit entities to the app's Scene instance, and uses raycast to find CollisionCastHits.
I want now to write a unit test to check if the app finds the right hits.
To do so, I have to access the Scene instance to add entities, and to check if they are hit by scene.raycast.
But how can I access the scene instance?
I can access it e.g. after creating the RealityView via its content parameter, or via @Environment(\.realityKitScene). But this seems not to be possible in a unit test.
I tried the following test function:
@MainActor @Test func test() async throws {
    var scene: RealityKit.Scene?
    await withCheckedContinuation { continuation in
        _ = RealityView(make: { content in
            print("make")
            let entity = Entity()
            content.add(entity)
            scene = entity.scene
            continuation.resume()
        })
    }
    #expect(scene != nil)
}
But this logs
◇ Test test() started.
SWIFT TASK CONTINUATION MISUSE: test() leaked its continuation!
The reason is apparently that the make closure of RealityView is only called when SwiftUI calls it within the body of a SwiftUI View.
So, is it possible at all to access the app's scene in a unit test?
Hello,
I have a usdz model created in Maya. It's supposed to have a splashing animation associated with it, and this can be viewed in Maya and Blender, but for some reason when I export it to usdz and then import it into my Reality Composer Pro project, the animation is missing. I expect an animation library to be created on the entity when I drag it in like any other usdz with animation data, but that is not the case.
Any help would be appreciated on this issue.
I'm building a SwiftUI+RealityKit app for visionOS, macOS and iOS. The main UI is a diorama-like 3D scene which is shown in orthographic projection on macOS and as a regular volume on visionOS, with some SwiftUI buttons, labels and controls above and below the RealityView.
Now I want to add UI that is positioned relative to some 3D elements in the RealityView, such as a billboarded name label over characters with a "show details" button and such.
However, it seems the whole RealityView Attachments API is visionOS only? The types don't even exist on macOS. Why is it visionOS only? And how would I overlay SwiftUI elements over a RealityView using SwiftUI code on macOS if not with attachments?
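A hedged sketch for macOS, assuming a plain 2D overlay is acceptable in place of attachments: a ZStack puts ordinary SwiftUI views on top of the RealityView, although it will not track 3D positions by itself (entity positions would still have to be projected to screen space manually).

import SwiftUI
import RealityKit

struct DioramaView: View {
    var body: some View {
        ZStack(alignment: .topLeading) {
            RealityView { content in
                // Add the diorama entities here.
            }
            // Plain SwiftUI overlay; position it manually or via projected coordinates.
            Text("Character name")
                .padding(6)
                .background(.thinMaterial, in: Capsule())
        }
    }
}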
Hi, I added a DockingRegion to my scene from Reality Composer Pro, and I am able to load the scene, but the DockingRegion is ignored and the scene renders with no change to the AVPlayerViewController window. As can be seen in the Reality Composer Pro screenshot below, I set the width of the player to 666 and moved it back by 300 cm, but the actual result does not reflect the position I set in Reality Composer Pro.
Is there anything else I should do other than loading up the Entity and adding to RealityView? Specifically, do I have to get the DockingRegion within the usda file and somehow enable it?
Reality Composer Pro question related to custom components:
My custom component defines some properties to edit in RCP. Simple ones work fine, but SIMD3 and SIMD2 do not. I'd expect to see their default values, but instead I get 0s. If I try to run it in that state, the scene doesn't load; once I enter some values and build and run again, it works fine.
More generally, does Apple have documentation on creating properties for components? The only examples I've seen show simple strings and floats. There are no details about vectors, conditional options, grouping properties, etc.
public struct EntitySpawnerComponent: Component, Codable {
    public enum SpawnShape: String, Codable {
        case domeUpper
        case domeLower
        case sphere
        case box
        case plane
        case circle
    }

    // These properties get their default values in RCP.
    /// The number of clones to create
    public var Copies: Int = 12
    /// The shape to spawn entities in
    public var SpawnShape: SpawnShape = .domeUpper
    /// Radius for spherical shapes (dome, sphere, circle)
    public var Radius: Float = 5.0

    // These properties DO NOT get their default values in RCP. They all show 0.
    /// Dimensions for box spawning (width, height, depth)
    public var BoxDimensions: SIMD3<Float> = SIMD3(2.0, 2.0, 2.0)
    /// Dimensions for plane spawning (width, depth)
    public var PlaneDimensions: SIMD2<Float> = SIMD2(2.0, 2.0)
    /// Track if we've already spawned copies
    public var HasSpawned: Bool = false

    public init() {
    }
}
I'm developing an app in which I need to render pictures and include some models in a RealityView. I want to set up a camera, capture the virtual content through that camera, and save it as an image.
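A hedged sketch, assuming the content can be hosted in an ARView on iOS (RealityView itself does not expose an equivalent call that I know of): ARView's snapshot API renders the composited virtual content to an image that can then be saved.

// arView is the ARView hosting the models (placeholder name).
arView.snapshot(saveToHDR: false) { image in
    guard let image else { return }
    // Persist however you need; saving to the photo library is just an example.
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}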