My goal is to pin an attachment view precisely at the point where I tap on an entity using SpatialTapGesture. However, my current code doesn't pin the attachment view accurately to the tapped point; it often appears floating in space rather than on the entity itself. I suspect an incorrect coordinate conversion.
My code:
struct ImmersiveView: View {
@State private var location: GlobeLocation?
var body: some View {
RealityView { content, attachments in
guard let rootEntity = try? await Entity(named: "Scene", in: realityKitContentBundle) else { return }
content.add(rootEntity)
} update: { content, attachments in
if let earth = content.entities.first?.findEntity(named: "Earth"), let desView = attachments.entity(for: "1") {
let pinTransform = computeTransform(for: location ?? GlobeLocation(latitude: 0, longitude: 0))
earth.addChild(desView)
desView.setPosition(pinTransform, relativeTo: earth)
}
}
attachments: {
Attachment(id: "1") {
DescriptionView(location: location)
}
}
.gesture(DragGesture().targetedToAnyEntity().onChanged({ value in
value.entity.position = value.convert(value.location3D, from: .local, to: .scene)
}))
.gesture(SpatialTapGesture().targetedToAnyEntity().onEnded({ value in
}))
}
func lookUpLocation(at value: CGPoint) -> GlobeLocation? {
return GlobeLocation(latitude: value.x, longitude: value.y)
}
func computeTransform(for location: GlobeLocation) -> SIMD3<Float> {
// Constants for Earth's radius. Adjust this to match the scale of your 3D model.
let earthRadius: Float = 1.0
// Convert latitude and longitude from degrees to radians
let latitude = Float(location.latitude) * .pi / 180
let longitude = Float(location.longitude) * .pi / 180
// Calculate the position in Cartesian coordinates
let x = earthRadius * cos(latitude) * cos(longitude)
let y = earthRadius * sin(latitude)
let z = earthRadius * cos(latitude) * sin(longitude)
return SIMD3<Float>(x, y, z)
}
}
struct GlobeLocation {
var latitude: Double
var longitude: Double
}
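A minimal sketch of the missing inverse step, assuming the same spherical convention as computeTransform(for:) above (y up, latitude/longitude in degrees): convert the tap into the tapped entity's local space, then map that point back to a GlobeLocation. The empty onEnded above could become:
.gesture(SpatialTapGesture().targetedToAnyEntity().onEnded({ value in
    // Convert the tap from SwiftUI's local space into the tapped entity's space.
    let p = value.convert(value.location3D, from: .local, to: value.entity)
    let r = simd_length(p)
    guard r > 0 else { return }
    // Invert x = R·cos(lat)·cos(lon), y = R·sin(lat), z = R·cos(lat)·sin(lon).
    location = GlobeLocation(latitude: Double(asin(p.y / r)) * 180 / .pi,
                             longitude: Double(atan2(p.z, p.x)) * 180 / .pi)
}))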
Background: This is a simple visionOS empty application. After the app launches, the user can enter an ImmersiveSpace by clicking a button. Another button loads a 33.9 MB USDZ model, and a final button exits the ImmersiveSpace.
Below is the memory usage scenario for this application:
After the app initializes, the memory usage is 56.8 MB.
After entering the empty ImmersiveSpace, the memory usage increases to 64.1 MB.
After loading a 33.9 MB USDZ model, the memory usage reaches 92.2 MB.
After exiting the ImmersiveSpace, the memory usage slightly decreases to 90.4 MB.
Question: While using a memory analysis tool, I noticed that the model's resources are not released after exiting the ImmersiveSpace. How should I address this issue?
struct EmptDemoApp: App {
@State private var appModel = AppModel()
var body: some Scene {
WindowGroup {
ContentView()
.environment(appModel)
}
ImmersiveSpace(id: appModel.immersiveSpaceID) {
ImmersiveView()
.environment(appModel)
.onAppear {
appModel.immersiveSpaceState = .open
}
.onDisappear {
appModel.immersiveSpaceState = .closed
}
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)
}
}
struct ContentView: View {
@Environment(AppModel.self) private var appVM
var body: some View {
HStack {
VStack {
ToggleImmersiveSpaceButton()
}
if appVM.immersiveSpaceState == .open {
Button {
Task {
if let url = Bundle.main.url(forResource: "Robot", withExtension: "usdz") {
if let model = try? await ModelEntity(contentsOf: url, withName: "Robot") {
model.setPosition(.init(x: .random(in: 0...1.0), y: .random(in: 1.0...1.6), z: -1), relativeTo: nil)
appVM.root?.add(model)
print("Robot: \(Unmanaged.passUnretained(model).toOpaque())")
}
}
}
} label: {
Text("Add A Robot")
}
}
}
.padding()
}
}
struct ImmersiveView: View {
@Environment(AppModel.self) private var appVM
var body: some View {
RealityView { content in
appVM.root = content
}
}
}
struct ToggleImmersiveSpaceButton: View {
@Environment(AppModel.self) private var appModel
@Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
@Environment(\.openImmersiveSpace) private var openImmersiveSpace
var body: some View {
Button {
Task { @MainActor in
switch appModel.immersiveSpaceState {
case .open:
appModel.immersiveSpaceState = .inTransition
appModel.root = nil
await dismissImmersiveSpace()
case .closed:
appModel.immersiveSpaceState = .inTransition
switch await openImmersiveSpace(id: appModel.immersiveSpaceID) {
case .opened:
break
case .userCancelled, .error:
fallthrough
@unknown default:
appModel.immersiveSpaceState = .closed
}
case .inTransition:
break
}
}
} label: {
Text(appModel.immersiveSpaceState == .open ? "Hide Immersive Space" : "Show Immersive Space")
}
.disabled(appModel.immersiveSpaceState == .inTransition)
.animation(.none, value: 0)
.fontWeight(.semibold)
}
}
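The code already nils out appModel.root before dismissing; a hedged additional step worth trying is to empty the content's entity collection first, so no scene-graph reference keeps the loaded ModelEntity alive (RealityKit may still hold internal caches, so some residual memory is expected):
case .open:
    appModel.immersiveSpaceState = .inTransition
    // Assumption: appModel.root holds the RealityViewContent captured in ImmersiveView.
    appModel.root?.entities.removeAll() // detach the loaded models from the scene
    appModel.root = nil                 // then drop the strong reference
    await dismissImmersiveSpace()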
I have three basic elements on this UI page: a view, an alert, and a toolbar. The toolbar and the alert are attached to the view; when I click a button on the toolbar, the alert window shows up. Below is a simplified version of my code:
@State private var showAlert = false
HStack {
// ...
}
.alert(Text("Quit the game?"), isPresented: $showAlert) {
MyAlertWindow()
} message: {
Text("Description text about this alert")
}
.toolbar {
ToolbarItem(placement: .bottomOrnament) {
MyToolBarButton(showAlert: $showAlert)
}
}
In MyToolBarButton I just toggle the bound showAlert variable to open/close the alert window.
When running on either the simulator or the device, the behavior is quite strange: after toggling MyToolBarButton, the alert window takes 2-3 seconds to show up, and all the elements on the alert window are grayed out, as if the whole window has lost focus. I have to drag the window's control bar below it to bring the window back into focus.
And this is not the only issue: MyToolBarButton also cannot be pressed to close the alert window (even though the button on the alert window itself does close it when clicked).
I don't know if this matters, but I open the window while my immersive view is open (though in my testing it doesn't affect anything here).
Any idea what's going on here?
Xcode 16.1 / visionOS 2 beta 6
Hi everyone,
I'm looking for a way to convert an FBX file to USDZ directly within my iOS app. I'm aware of Reality Converter and the Python USDZ converter tool, but I haven't been able to find any documentation on how to do this directly within the app (assuming the user can upload their own file). Any guidance on how to achieve this would be greatly appreciated.
I've heard about Model I/O and SceneKit, but I haven't found much information on using them for this purpose either.
Thanks!
In a RealityView I have two entities with tracking components and collision components, which are used to follow the hand and detect collisions. In the Behaviors component of one of the entities there is an OnCollision trigger that should execute an action. However, when I test, the action never executes after a collision. Why is this?
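For comparison, collisions can also be observed from code; a minimal hedged sketch (entity names are illustrative) that checks whether the collision shapes actually intersect:
RealityView { content in
    // ... add the hand-following entity and the target entity, each with a CollisionComponent ...
    // Note: store the returned EventSubscription (e.g. in @State); it is cancelled when released.
    _ = content.subscribe(to: CollisionEvents.Began.self, on: targetEntity) { event in
        print("collision began: \(event.entityA.name) <-> \(event.entityB.name)")
    }
}
If the print never fires either, the collision shapes likely never overlap (check the CollisionComponent shapes and the tracked entities' actual positions).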
Hi, I have two views and an immersive space. The 1st and 2nd views are displayed in a TabView. I open my ImmersiveSpace from a button in the 1st view of the tab. When I then go to the 2nd tab, I want to show an attachment in my immersive space. This attachment should be visible in the immersive space only as long as the user is on the 2nd view. This is what I have done so far:
struct Second: View {
@StateObject var sharedImageData = SharedImageData()
var body: some View {
VStack {
// other code
} .onAppear() {
Task {
sharedImageData.shouldCameraButtonShow = true
}
}
.onDisappear() {
Task {
sharedImageData.shouldCameraButtonShow = false
}
}
}
}
This is my Immersive space
struct ImmersiveView: View {
@EnvironmentObject var sharedImageData: SharedImageData
var body: some View {
RealityView { content, attachments in
// some code
} update: { content, attachments in
guard let controlCenterAttachmentEntity = attachments.entity(for: Attachments.controlCenter) else { return }
controlCenterEntity.addChild(controlCenterAttachmentEntity)
content.add(controlCenterEntity)
} attachments: {
if sharedImageData.shouldCameraButtonShow {
Attachment(id: Attachments.controlCenter) {
ControlCenter()
}
}
}
}
}
And this is my Observable class
class SharedImageData: ObservableObject {
@Published var takenImage: UIImage? = nil
@Published var shouldCameraButtonShow: Bool = false
}
My problem is that while I am on the Second view, my attachment never appears. The attachment does appear without the if condition. How can I achieve my goal?
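A hedged workaround sketch: rather than conditionally omitting the Attachment (which can leave attachments.entity(for:) with nothing to resolve), declare it unconditionally and toggle the resolved entity's isEnabled flag in the update closure:
} update: { content, attachments in
    guard let controlCenterAttachmentEntity = attachments.entity(for: Attachments.controlCenter) else { return }
    // Show or hide the attachment instead of adding/removing it in the builder.
    controlCenterAttachmentEntity.isEnabled = sharedImageData.shouldCameraButtonShow
    controlCenterEntity.addChild(controlCenterAttachmentEntity)
    content.add(controlCenterEntity)
} attachments: {
    Attachment(id: Attachments.controlCenter) {
        ControlCenter()
    }
}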
func createEnvironmentResource(image: UIImage) -> EnvironmentResource? {
do {
let cube = try TextureResource(
cubeFromEquirectangular: image.cgImage!,
quality: .normal,
options: TextureResource.CreateOptions(semantic: .hdrColor)
)
let environment = try EnvironmentResource(
cube: cube,
options: EnvironmentResource.CreateOptions(
samplingQuality: .normal,
specularCubeDimension: cube.width/2
// compression: .astc(blockSize: .block4x4, quality: .high)
)
)
return environment
} catch {
print("error: \(error)")
}
return nil
}
When I put this code in the project, it runs normally on the visionOS 2.0 simulator. When it runs on a real device, an error is reported at startup:
dyld[987]: Symbol not found: _$s10RealityKit19EnvironmentResourceC4cube7optionsAcA07TextureD0C_AC0A10FoundationE13CreateOptionsVtKcfC
Referenced from: <DEC8652C-109C-3B32-BE6B-FE634EC0D6D5> /private/var/containers/Bundle/Application/CD2FAAE0-415A-4534-9700-37D325DFA845/HomePreviewDEV.app/HomePreviewDEV.debug.dylib
Expected in: <403FB960-8688-34E4-824C-26E21A7F18BC> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
What is the reason, and how can I solve it?
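A hedged guess at the cause: EnvironmentResource.init(cube:options:) appears to be newer than the visionOS version on the device (the simulator runs visionOS 2.0), and dyld aborts at launch when a referenced symbol is missing. Wrapping the call in an availability check should confirm it:
// Assumption: the device runs a visionOS 1.x release that lacks this initializer.
if #available(visionOS 2.0, *) {
    let environment = try EnvironmentResource(
        cube: cube,
        options: EnvironmentResource.CreateOptions(
            samplingQuality: .normal,
            specularCubeDimension: cube.width / 2
        )
    )
    return environment
} else {
    return nil // fall back on older OS versions
}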
The sample code project RealityKit-Stereo-Rendering, found in the article "Rendering a windowed game in stereo," fails to compile with the following errors in Xcode 16 beta 6.
Compilation of the project for the WWDC 2024 session titled "Compose interactive 3D content in Reality Composer Pro" fails.
After applying the fix mentioned here (https://developer.apple.com/forums/thread/762030?login=true), the project still won't compile.
Using Xcode 16 beta 7, I get these errors:
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: AudioLibrary not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: BlendShapeWeights not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
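The errors point at the generated Reality Composer Pro package still declaring visionOS 1.0 as its minimum platform. A hedged sketch of the Package.swift change (names follow the default RCP template; adjust to the actual package):
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS("2.0") // raised from "1.0" so the newer components compile
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)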
I’m developing an app for Vision Pro and have encountered an issue related to the UI layout and model display. Here's a summary of the problem:
I created an anchor window to display text and models in the hand menu UI.
While testing on my Vision Pro, everything works as expected; the text and models do not overlap and appear correctly.
However, after pushing the changes to GitHub and having my client test it, the text and models are overlapping.
Details:
I’m using Reality Composer Pro to load models and place them in the hand menu UI.
All pins are attached to attachmentHandManu, which is set up to track the hand and show the elements in the hand menu.
In my local tests, attachmentHandManu tracks the hand properly and displays the UI components correctly.
Question:
What could be causing the text and models to overlap in the client’s environment but not in mine? Are there any specific settings or configurations I should verify to ensure consistent behavior across different environments? Additionally, what troubleshooting steps can I take to resolve this issue?
My app dynamically loads different immersive furniture-design scenes.
After each scene is loaded, I need to set an HDR image as the image-based light.
How can I load an EnvironmentResource dynamically, so that I can set the ImageBasedLightComponent dynamically?
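A minimal sketch of one approach, assuming a hypothetical EnvironmentResource named "env_name" in the bundle and with sceneRoot standing in for the scene's root entity (an EnvironmentResource can also be built at runtime from an equirectangular HDR via TextureResource(cubeFromEquirectangular:...), as another post on this page shows):
Task {
    // Load the environment asynchronously, then wire it up as an image-based light.
    guard let environment = try? await EnvironmentResource(named: "env_name") else { return }
    sceneRoot.components.set(ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0))
    sceneRoot.components.set(ImageBasedLightReceiverComponent(imageBasedLight: sceneRoot))
}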
Starting with Xcode beta 4+, any ModelEntity I load from a usdz that contains a skeletal pose has no pins. The pins used to be accessible from a ModelEntity so you could use alignment with other pins.
Per the documentation, any ModelEntity with a skeletal pose should have pins that are automatically generated and contained on the entity.pins object itself.
https://developer.apple.com/documentation/RealityKit/Entity/pins
Is this a bug with the later Xcode betas or is the documentation wrong?
Hello,
I'm not able to get any 3D object to appear in an ARView.
struct ARViewContainer: UIViewRepresentable {
var trackingState: ARCamera.TrackingState? = nil
func makeUIView(context: Context) -> ARView {
// Create the view.
let view = ARView(frame: .zero)
// Set the coordinator as the session delegate.
view.session.delegate = context.coordinator
let anchor = AnchorEntity(plane: .horizontal)
let box = ModelEntity(mesh: MeshResource.generateBox(size: 0.3), materials: [SimpleMaterial(color: .red, isMetallic: true)])
box.generateCollisionShapes(recursive: true)
anchor.addChild(box)
view.scene.addAnchor(anchor)
// Return the view.
return view
}
final class Coordinator: NSObject, ARSessionDelegate {
var parent: ARViewContainer
init(_ parent: ARViewContainer) {
self.parent = parent
}
func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
print("Camera tracking state: \(camera.trackingState)")
parent.trackingState = camera.trackingState
}
}
func makeCoordinator() -> Coordinator {
Coordinator(self)
}
func updateUIView(_ uiView: ARView, context: Context) { }
}
The view loads correctly, but nothing appears. I also tried creating the 3D object in:
func updateUIView(_ uiView: ARView, context: Context) {
let anchor = AnchorEntity(plane: .horizontal)
let box = ModelEntity(mesh: MeshResource.generateBox(size: 0.3), materials: [SimpleMaterial(color: .red, isMetallic: true)])
box.generateCollisionShapes(recursive: true)
anchor.addChild(box)
uiView.scene.addAnchor(anchor)
print("Added into the view")
}
The print statement runs, but there is still no object in the ARView. Is this a bug, or what am I missing?
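A hedged guess: AnchorEntity(plane: .horizontal) stays inactive (and keeps its children hidden) until ARKit actually detects a horizontal plane. Running a world-tracking configuration with plane detection explicitly enabled in makeUIView is worth trying:
// Explicitly enable horizontal plane detection rather than relying on
// ARView's automatic session configuration.
let config = ARWorldTrackingConfiguration()
config.planeDetection = [.horizontal]
view.session.run(config)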
Is there an action that can clone an entity in RealityView as many times as I want? If there is, please let me know. Thank you!
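For reference, entities can be duplicated from code with Entity.clone(recursive:); a minimal sketch, where original and content are illustrative names:
// Make five copies of an existing entity, offset along x so each is visible.
for i in 1...5 {
    let copy = original.clone(recursive: true) // true also clones all descendants
    copy.position.x += Float(i) * 0.2
    content.add(copy)
}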
I created a simple timeline animation in RCP with only a "Play Audio" action, plus a Behaviors component with an "OnTap" trigger that fires this timeline.
In my code, I simply call Entity.applyTapForBehaviors() when something happens. The audio plays normally on the simulator but cannot be played on the device.
Is there a potential bug that causes this behavior?
Env below:
Simulator Version: visionOS 2.0 (22N5286g)
Xcode Version: 16.0 beta 4 (16A5211f)
Device Version: visionOS 2.0 beta (latest)
Steps to Reproduce:
Create a SwiftUI view that initializes an ARKit session and a camera frame provider.
Attempt to run the ARKit session and retrieve camera frames.
Extract the intrinsics and extrinsics matrices from the camera frame’s sample data.
Attempt to project a 3D point from the world space onto the 2D screen using the retrieved camera parameters.
Encounter issues due to lack of detailed documentation on the correct usage and structure of the intrinsics and extrinsics matrices.
struct CodeLevelSupportView: View {
@State
private var vm = CodeLevelSupportViewModel()
var body: some View {
RealityView { realityViewContent in }
.onAppear {
vm.receiveCamera()
}
}
}
@MainActor
@Observable
class CodeLevelSupportViewModel {
let cameraSession = CameraFrameProvider()
let arSession = ARKitSession()
init() {
Task {
await arSession.requestAuthorization(for: [.cameraAccess])
}
}
func receiveCamera() {
Task {
do {
try await arSession.run([cameraSession])
guard let sequence = cameraSession.cameraFrameUpdates(for: .supportedVideoFormats(for: .main, cameraPositions: [.left])[0]) else {
print("failed to get cameraAccess authorization")
return
}
for try await frame in sequence {
guard let sample = frame.sample(for: .left) else {
print("failed to get camera sample")
return
}
let leftEyeScreenImage: CVPixelBuffer = sample.pixelBuffer
let leftEyeViewportWidth: Int = CVPixelBufferGetWidth(leftEyeScreenImage)
let leftEyeViewportHeight: Int = CVPixelBufferGetHeight(leftEyeScreenImage)
let intrinsics = sample.parameters.intrinsics
let extrinsics = sample.parameters.extrinsics
let oneMeterInFront: SIMD3<Float> = .init(x: 0, y: 0, z: -1)
projectWorldLocationToLeftEyeScreen(worldLocation: oneMeterInFront, intrinsics: intrinsics, extrinsics: extrinsics, viewportSize: (leftEyeViewportWidth, leftEyeViewportHeight))
}
} catch {
}
}
}
// Once implemented, this function should return a CGPoint? representing where worldLocation lands in the left-eye viewport, or nil if the worldLocation is not visible there (out of bounds).
func projectWorldLocationToLeftEyeScreen(worldLocation: SIMD3<Float>, intrinsics: simd_float3x3, extrinsics: simd_float4x4, viewportSize: (width: Int, height: Int)) {
// The API documentation does not describe the structure of intrinsics and extrinsics, which makes this function hard to implement.
}
}
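For what it's worth, here is a hedged sketch of a standard pinhole projection under assumed conventions: extrinsics treated as a camera-to-world transform (hence inverted), the camera looking down -Z, and intrinsics as the usual K matrix with fx/fy on the diagonal and cx/cy in the last column. Since the conventions aren't documented, signs and axes may need flipping:
func projectWorldLocationToLeftEyeScreen(worldLocation: SIMD3<Float>, intrinsics: simd_float3x3, extrinsics: simd_float4x4, viewportSize: (width: Int, height: Int)) -> CGPoint? {
    // Assumption: extrinsics is camera-to-world, so invert it to map world -> camera.
    let p = extrinsics.inverse * SIMD4<Float>(worldLocation, 1)
    // Assumption: the camera looks down -Z, so points with z >= 0 are behind it.
    guard p.z < 0 else { return nil }
    // Pinhole projection: u = fx * x / -z + cx, v = fy * y / -z + cy.
    let uvw = intrinsics * SIMD3<Float>(p.x, p.y, -p.z)
    let u = uvw.x / uvw.z
    let v = uvw.y / uvw.z
    guard u >= 0, u < Float(viewportSize.width), v >= 0, v < Float(viewportSize.height) else { return nil }
    return CGPoint(x: CGFloat(u), y: CGFloat(v))
}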
I'm trying to load up a virtual skybox, different from the built-in default, for a simple macOS rendering of RealityKit content.
I was following the detail at https://developer.apple.com/documentation/realitykit/environmentresource, and created a folder called "light.skybox" with a single file in it ("prairie.hdr"), and then I'm trying to load that and set it as the environment on the arView when it's created:
let ar = ARView(frame: .zero)
do {
let resource = try EnvironmentResource.load(named: "prairie")
ar.environment.lighting.resource = resource
} catch {
print("Unable to load resource: \(error)")
}
The loading always fails when I launch the sample app, reporting "Unable to load resource ...", and the resource included in the app bundle as Contents/Resources/light.realityenv is an entirely different size, appearing to be the default lighting.
I've tried explicitly adding the "light.skybox" folder to the app target's bundle resources, but I don't see my lighting get embedded either way, whether I toggle that setting or leave the default.
Is there anything I need to do to get Xcode to process and include the lighting I'm providing?
(This is inspired from https://stackoverflow.com/questions/77332150/realitykit-how-to-disable-default-lighting-in-nonar-arview, which shows an example for UIKit)
The entities in my RealityView contain tracking components that let them follow different parts of the hand. However, I found that apart from the index fingertip, the thumb fingertip, the palm, and the wrist, no other position (such as the middle fingertip) can be tracked normally. How can I solve this? (I think it may be a bug in the beta.)
In my volume there is a RealityView that includes lighting effects. However, when the user drags the window's position back and forth, the farther the volume is from the user, the brighter the lighting effect becomes. (I believe this may be a bug in the beta.)
Note: The volume's WindowGroup has the .defaultWorldScaling(.dynamic) property.
In a RealityView, I found that an entity cannot cast a shadow onto the real environment. What configuration do I need to add to achieve this?
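If this is on visionOS, a minimal sketch of one relevant configuration: GroundingShadowComponent opts an entity into casting a soft shadow onto detected real-world surfaces:
// Opt the model entity into casting a grounding shadow on real surfaces.
entity.components.set(GroundingShadowComponent(castsShadow: true))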