Why are RealityKit's high-level APIs only available on visionOS?
RealityView and Model3D, to name a couple.
On other platforms, the only way to deploy RealityKit and/or ARKit currently is through UIKit, or through UIKit's integration with SwiftUI (UIViewRepresentable).
Are these newer APIs coming to other platforms as well?
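For context, the UIKit integration route mentioned above looks roughly like this; a minimal sketch (iOS target assumed) wrapping RealityKit's ARView in a UIViewRepresentable:

import SwiftUI
import RealityKit

// RealityKit's ARView bridged into SwiftUI on platforms without RealityView.
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // Configure the AR session and add anchors/entities here.
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        // Push SwiftUI state changes into the ARView here.
    }
}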
So I have a RealityView with an Entity (from my bundle) being rendered in it like so:
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let entity = try? await Entity(named: "MyContent", in: realityKitContentBundle) {
                content.add(entity)
            }
        }
    }
}
Is it possible to programmatically transform the entity? Specifically, I want to (1) translate it horizontally in space, e.g. 1 m to the right, and (2) rotate it 90°. I've been looking through the docs and haven't found a way to do this, but I fear I'm not too comfortable with Apple's docs quite yet.
Thanks in advance!
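A minimal sketch of one way to do this, assuming the entity loaded in the snippet above (position and orientation live on the entity's transform):

// Translate 1 m to the right (+x in the parent's coordinate space).
entity.position.x += 1.0

// Rotate 90° about the vertical (y) axis.
entity.orientation = simd_quatf(angle: .pi / 2, axis: [0, 1, 0]) * entity.orientation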
I don't really know how this works, but hi, I am Philemon.
For a school assignment I need to program an app. I have two years for this, and it is for people who are interested in coding. I want to make an iOS app that can create 3D models from pictures (photogrammetry). I know there are already apps for this, but I want to code it myself. I have a little experience coding C# in Unity, but I really don't know where to start. Can someone help me? I know that Apple has RealityKit, but I want people without a LiDAR scanner to be able to use this too.
So where do I start, and which language do I need to learn?
Every comment is welcome!
Kind regards, Philemon
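For orientation, photogrammetry in Apple's frameworks is exposed through RealityKit's Object Capture API (PhotogrammetrySession), which reconstructs a model from ordinary photos and does not require a LiDAR scanner. A minimal sketch with placeholder URLs, meant to run from an async context:

import Foundation
import RealityKit

// Reconstruct a USDZ model from a folder of photos (paths are placeholders).
func createModel(from photosFolder: URL, outputTo modelURL: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: photosFolder,
                                            configuration: configuration)
    try session.process(requests: [.modelFile(url: modelURL, detail: .reduced)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Model written to \(modelURL.path)")
            return
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        default:
            break
        }
    }
}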
My application uses ARKit to capture faces in real time. There are two occasional crashes during use that I cannot reproduce. The crash stacks are below, and they are all system API calls. I have no clue; any suggestions on how to fix this? Thank you so much!
Additional information:
BUG IN CLIENT OF LIBPLATFORM: Trying to recursively lock an os_unfair_lock
the first kind:
EXC_BREAKPOINT 0x00000001f6d2d20c
0 libsystem_platform.dylib _os_unfair_lock_recursive_abort + 36
1 libsystem_platform.dylib _os_unfair_lock_lock_slow + 284
2 SceneKit C3DTransactionGetStack + 160
3 SceneKit _commitImplicitTransaction + 36
4 CoreFoundation CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION + 36
5 CoreFoundation __CFRunLoopDoObservers + 548
6 CoreFoundation __CFRunLoopRun + 1028
7 CoreFoundation CFRunLoopRunSpecific + 608
8 Foundation -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 212
9 Foundation -[NSRunLoop(NSRunLoop) run] + 64
10 UIKitCore __66-[UIViewInProcessAnimationManager startAdvancingAnimationManager:]_block_invoke_7 + 108
11 Foundation NSThread__start + 732
12 libsystem_pthread.dylib _pthread_start + 136
13 libsystem_pthread.dylib thread_start + 8
the second kind:
Crashed: com.apple.arkit.ardisplaylink.0x28083bd80
EXC_BREAKPOINT 0x00000001fe43920c
0 libsystem_platform.dylib _os_unfair_lock_recursive_abort + 36
1 libsystem_platform.dylib _os_unfair_lock_lock_slow + 284
2 SceneKit C3DTransactionGetStack + 160
3 SceneKit _commitImplicitTransaction + 36
4 CoreFoundation CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION + 36
5 CoreFoundation __CFRunLoopDoObservers + 548
6 CoreFoundation __CFRunLoopRun + 1028
7 CoreFoundation CFRunLoopRunSpecific + 608
8 CoreFoundation CFRunLoopRun + 64
9 ARKitCore -[ARRunLoop _startThread] + 616
10 Foundation NSThread__start + 732
11 libsystem_pthread.dylib _pthread_start + 136
12 libsystem_pthread.dylib thread_start + 8
I want to have real-time image anchor tracking together with RoomPlan.
But it's frustrating not to see anything that supports this.
It would be useful to have interactive things in the scanned room.
Ideally both should run at the same time, but if that's not possible, how do you align the two tracking spaces when running RoomPlan first and then ARKit image tracking? It sounds like a headache.
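One possible direction, heavily hedged: the WWDC23 RoomPlan enhancements describe driving RoomCaptureSession with your own ARSession, so that follow-up AR work can reuse the same world coordinate space. A sketch, assuming the RoomCaptureSession(arSession:) initializer is available on your OS version and using a placeholder "Markers" reference-image group:

import ARKit
import RoomPlan

// Assumption: RoomCaptureSession can be created with a custom ARSession
// (iOS 17 RoomPlan enhancements).
let arSession = ARSession()
let roomSession = RoomCaptureSession(arSession: arSession)
roomSession.run(configuration: RoomCaptureSession.Configuration())

// ... later, after stopping the scan, keep the same ARSession and add image detection,
// so detected image anchors share the coordinate space used during the RoomPlan capture.
let config = ARWorldTrackingConfiguration()
config.detectionImages =
    ARReferenceImage.referenceImages(inGroupNamed: "Markers", bundle: nil) ?? []
config.maximumNumberOfTrackedImages = 1
arSession.run(config)   // no .resetTracking option, so the world origin is preserved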
Hello,
I've noticed that when I have my ARSession run the sceneReconstruction provider and the world tracking provider at the same time, I receive no scene reconstruction mesh updates. My catch closure doesn't receive any errors; the async sequence simply never yields anything.
If I run just the scene reconstruction provider by itself, then I do get mesh updates.
Is this a bug? Is it expected that it's not possible to do this?
Thank you
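For reference, running both providers in a single ARKitSession is expected to work in a full immersive space; a minimal sketch of that setup:

import ARKit

let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()
let worldTracking = WorldTrackingProvider()

func runProviders() async {
    do {
        // Run both providers in the same session.
        try await session.run([sceneReconstruction, worldTracking])

        // Consume mesh updates from scene reconstruction.
        for await update in sceneReconstruction.anchorUpdates {
            print("Mesh anchor \(update.anchor.id): \(update.event)")
        }
    } catch {
        print("ARKitSession error: \(error)")
    }
}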
I am trying to create a simple custom shader with an image as the material's texture and a depth map as the bump-map input. I have followed the official procedure from "Explore materials in Reality Composer Pro", but the depth map is not processed.
What am I doing wrong?
(attached is a screenshot that shows the setup. I removed the image ref for clarity)
I want to set the camera exposure to a lower value to attenuate motion blur. How can I do that?
Background: I'm making an app to capture video for Gaussian splatting; less exposure means less blur.
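A minimal sketch of one approach, assuming an AVFoundation capture pipeline; the duration and ISO values are illustrative, and a shorter exposure trades motion blur for darker, noisier frames:

import AVFoundation

// Lock a custom (shorter) exposure duration on the capture device,
// clamped to what the active format supports.
func setShortExposure(on device: AVCaptureDevice) throws {
    guard device.isExposureModeSupported(.custom) else { return }
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    let wanted = CMTime(value: 1, timescale: 1000)   // 1/1000 s
    let duration = min(max(wanted, device.activeFormat.minExposureDuration),
                       device.activeFormat.maxExposureDuration)
    let iso = min(max(400, device.activeFormat.minISO), device.activeFormat.maxISO)
    device.setExposureModeCustom(duration: duration, iso: iso, completionHandler: nil)
}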
I'm trying to import the USDZ file of a model with multiple textures attached to each part of the model. When I preview the file by double-clicking on the USDZ, it displays fine.
However, when I import it into Reality Composer Pro, it only shows the pink striped model.
I also get the message - "Multiple root level objects exist for HU_EVO_SPY-8.usdc".
There are so many components in the model that binding each texture to its component manually would be very difficult.
How can I fix the file such that when I import to Reality Composer Pro, textures are attached to the model?
Hi,
I'm prototyping a visionOS app for which I'm trying to create the following behavior in mixed immersive space:
users pinch and drag to position a model entity in the real world starting from the ray-cast of the pinch, meaning that the initial position should be on a MeshAnchor from scene reconstruction (I got that working, even though it's less precise than I expected)
once the model entity is positioned, I want to anchor it to the world so that it always stays there no matter what; from what I understand, I need to create a WorldAnchor and add it to a WorldTrackingProvider for that
after positioning the model entity, users should be able to pinch and drag the entity to change its position and have that be persisted from then onwards
It's not clear to me what the relationship between AnchorEntity(world:) and WorldAnchor is (looks like AnchorEntity(anchor:) isn't available in visionOS). What is the recommended way to keep these together?
Afterwards, what is the recommended way to convert coordinate spaces between the repositioned scene coordinate space and the anchor entity hierarchy's coordinate space? I tried a DragGesture on the model entity and converting the translation to the scene; that works only when the scene origin hasn't changed. After it has changed, the translation uses the wrong coordinate space.
Thanks for the help!
Geert
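On the WorldAnchor part, a minimal sketch (the entity and provider names are placeholders): build a WorldAnchor from the entity's world transform, add it to the running WorldTrackingProvider, and keep the entity in sync with subsequent anchor updates.

import ARKit
import RealityKit

// Assumes `worldTracking` is a running WorldTrackingProvider and `modelEntity`
// has already been positioned by the drag gesture.
func anchorToWorld(_ modelEntity: Entity,
                   using worldTracking: WorldTrackingProvider) async throws {
    let anchor = WorldAnchor(originFromAnchorTransform: modelEntity.transformMatrix(relativeTo: nil))
    try await worldTracking.addAnchor(anchor)

    // Keep the entity aligned with the anchor as tracking refines or restores it.
    for await update in worldTracking.anchorUpdates where update.anchor.id == anchor.id {
        modelEntity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
    }
}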
When trying to run my app with .windowStyle(.volumetric) for visionOS, this error is returned: Fatal error: Your app was given a scene with session role UISceneSessionRole(_rawValue: UIWindowSceneSessionRoleApplication) but no scenes declared in your App body match this role.
Hi, please forgive me if I am asking a basic question, but after my R&D I didn't see how I can build a solution where a user scans a QR code hanging on a specific wall at a specific, fixed position, and when workers scan the QR code from their iOS device they can see all the wiring, pipelines, etc. It would be really helpful if someone could let me know whether this is possible with ARKit and how.
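This is broadly feasible with ARKit image detection by treating the printed QR poster as a reference image with a known physical size; a hedged sketch, where "WallMarkers" is a placeholder AR Resource Group in the asset catalog:

import ARKit

// Detect the printed marker and use its anchor as the fixed origin for the
// wiring/pipeline overlay.
final class MarkerTracker: NSObject, ARSessionDelegate {
    func run(on session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages =
            ARReferenceImage.referenceImages(inGroupNamed: "WallMarkers", bundle: nil) ?? []
        configuration.maximumNumberOfTrackedImages = 1
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors {
            // imageAnchor.transform is the poster's pose in world space;
            // position the wiring/pipeline content relative to it.
            print("Marker detected at \(imageAnchor.transform)")
        }
    }
}

A bare QR code can be a weak detection target, so a more detailed poster around the code (or a Vision barcode pass combined with ray-casting) is a common fallback.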
I have an iPad app that works on, and is available through, the visionOS App Store.
However, TestFlight releases are displaying that this is an iOS app only and "Incompatible on this Apple Vision Pro."
How do I enable my iPadOS app for TestFlight on visionOS?
P.S. Native visionOS apps can appear there; I don't have any approved or released builds yet for visionOS.
I also see the same issue with "app not compatible" in TestFlight, with no visionOS section present. The same app is available in the App Store under visionOS/iPad apps.
Hi,
What are the limitations and capabilities of visionOS? I cannot find answers to the questions I have.
Let's say you have some USDZ files stored in a cloud service, there are so many of them that the app would be huge if you put them in assets. You want to fetch the one you are interested in and show it while an app is running. Is it possible to load USDZ files at runtime from the network?
Is there a limit to how many objects can be visible at once? Let's say I am in an open space, with no walls. I want to place 100 3D objects somewhere in space. Is it possible? What if I placed 500, 1000?
Is there a way to save the anchor point of the object? I want to open the app again and have an object in the same place I left it. I would like to arrange my space and have objects always in the same spots.
How does the OS behave if objects are in different rooms? Is it possible to walk around, visit different rooms, and have objects anchored there? Would it behave like real objects?
Is it possible to color a plane? Let's say there is a wall and it's black. I want this wall to be orange. Is it possible?
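On the first question above (loading USDZ files at runtime from the network), a hedged sketch of the usual pattern: download the file to local storage, then create an Entity from the file URL. The remote URL is a placeholder.

import Foundation
import RealityKit

// Download a USDZ to a temporary file and load it as an Entity at runtime.
func loadRemoteModel(from remoteURL: URL) async throws -> Entity {
    let (downloadedURL, _) = try await URLSession.shared.download(from: remoteURL)

    // Give the temporary file a .usdz extension so the loader recognizes the format.
    let localURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("usdz")
    try FileManager.default.moveItem(at: downloadedURL, to: localURL)

    return try await Entity(contentsOf: localURL)
}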
How do I bind an MTLTexture to the Color input of a material?
I need something similar to VideoMaterial,
so I need to make a CustomMaterial.
But RealityKit's CustomMaterial is not available on visionOS; it has been replaced by ShaderGraphMaterial.
So how do I bind a Metal resource such as an MTLTexture to a ShaderGraphMaterial directly?
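One hedged direction, assuming TextureResource.DrawableQueue is available on your visionOS version and that the shader graph exposes a texture parameter named "dynamicTexture" (a placeholder name): back a TextureResource with a DrawableQueue, bind it to the material with setParameter, and copy your MTLTexture into the queue's drawables each frame.

import Metal
import RealityKit

// A sketch: feed Metal-generated pixels into a ShaderGraphMaterial via a drawable queue.
// `textureResource` can start as any placeholder texture; replace(withDrawables:) takes
// over its contents once drawables are presented.
func bindDrawableQueue(to material: inout ShaderGraphMaterial,
                       textureResource: TextureResource,
                       width: Int, height: Int) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.shaderRead, .renderTarget],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)

    textureResource.replace(withDrawables: queue)
    try material.setParameter(name: "dynamicTexture",
                              value: .textureResource(textureResource))
    return queue
}

// Per frame: request nextDrawable() from the queue, copy or render the MTLTexture
// into drawable.texture (e.g. with a blit encoder), then call drawable.present().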
When I call queryDeviceAnchor in my Billboard system, I get transform updates, but I'm unsure how to process them (similar to the Diorama sample app).
Is it a bug that I receive these updates? The documentation says that ARKit data is only provided in a full space, so I would expect this not to work at all.
But if this is the case, why am I getting deviceAnchor values in this situation?
Dear Apple Developer Forum Community,
I hope this message finds you well. I am writing to seek assistance regarding an error I encountered while attempting to create a "Hello World" application using Xcode.
Upon launching Xcode and starting a new project, I followed the standard procedure for creating a simple iOS application. However, during the process, I encountered an unexpected error that halted my progress. The error message I received was [insert error message here].
I have attempted to troubleshoot the issue (see the two attached images), but unfortunately I have been unsuccessful in resolving it.
I am reaching out to the community in the hope that someone might have encountered a similar issue or have expertise in troubleshooting Xcode errors. Any guidance, suggestions, or solutions would be greatly appreciated.
Thank you very much for your time and assistance.
Sincerely,
Zipzy Games
Hi, I tried to change the default size for a volumetric window, but it looks like this window has a maximum width value. Is that right?
WindowGroup(id: "id") {
    ItemToShow()
}
.windowStyle(.volumetric)
.defaultSize(width: 100, height: 0.8, depth: 0.3, in: .meters)
Here I set the width to 100 meters, but it still looks like it's about 2 meters wide.
I have a main app window that presents an Immersive style in Mixed Reality. I am trying to determine the anchor/position of this glass window in the 3D space and place a Sphere entity right next to it. The goal is to ensure that if the user moves the window, the Sphere entity remains attached to it. Does anyone have insights on how to achieve this?
The below code snippet provides the position of the device, and I have positioned it 0.5 meters away from the z-axis. However, my objective is to obtain the position of the glass window and anchor the sphere to it. Any guidance on achieving this would be appreciated.
import SwiftUI
import RealityKit
import RealityKitContent
import ARKit

struct ImmersiveView: View {
    let visionProPose = VisionProPose()

    var body: some View {
        RealityView { content in
            Task { await visionProPose.runArSession() }
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
        } update: { content in
            if let scene = content.entities.first,
               let sphere = scene.findEntity(named: "Sphere") as? ModelEntity {
                Task {
                    guard let transform = await visionProPose.getTransform() else { return }
                    sphere.position = [transform.columns.3.x,
                                       transform.columns.3.y,
                                       transform.columns.3.z - 1]
                }
            }
        }
    }
}

@Observable class VisionProPose {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func runArSession() async {
        Task {
            try? await session.run([worldTracking])
        }
    }

    func getTransform() async -> simd_float4x4? {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: 1) else {
            return nil
        }
        return deviceAnchor.originFromAnchorTransform
    }
}
Aloha,
I'm wondering where the documentation for the Vision Pro Developer Strap is located. I have the Vision Pro devices and the developer straps, but I'm not sure how to use the developer straps for visionOS development in Xcode and Unity.