I’m encountering an issue with recording in my Unity game through Reality Composer Pro. When I attempt to record video or take screenshots, it results in a black screen once my game launches. Screenshots and videos outside my game record fine, but within the game, the recordings are just black.
Additionally, when using my headset, the display is distorted and only my right eye shows anything, while the left eye remains black.
Here are some specifics:
My game is developed in Unity.
I’m using all the betas: Xcode 16 beta, the new macOS beta, and visionOS 2 beta.
In the attached screenshot, you can see an Apple UI overlay with a black screen behind it. However, when I’m in the headset, I actually see my game along with that UI overlay, so it seems like the game itself isn’t getting recorded.
Also, I noticed on the Apple webpage that they recommend using the Developer Capture feature in Reality Composer Pro for high-quality screenshots and app previews. However, I find that using Control Center for recording works pretty well despite the lower quality and foveated resolution. If I can’t get Reality Composer Pro to capture in 4K, is it still acceptable to use screenshots and record videos from the Control Center?
Has anyone encountered similar issues or have any insights on what might be causing this? And regarding the secondary question, I’d appreciate any guidance from Apple on the acceptability of using Control Center recordings as a fallback. Here's a video preview I made with Control Center recordings. Is this quality acceptable?
https://youtu.be/z4VIO7obNNg?si=2irqHEfeGjkNBUvb
I have a plane with a texture that was made in Blender & then exported using the Reality Converter from a USDC to a USDZ.
The translucency looks 100% translucent in RCP, but in a RealityKit scene on visionOS it looks a bit like glass, with some reflection.
Is there a material setting that I need to change?
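For reference, here is a rough experiment I am considering to isolate which material setting causes the glassy look. This is only a sketch; the entity name `plane` and the specific property values are placeholders of mine, not a known fix:

import RealityKit

// Sketch: override the imported material's reflection-related properties in code
// to see whether the glassy appearance comes from the USDZ material settings.
if var model = plane.components[ModelComponent.self] {
    model.materials = model.materials.map { material in
        guard var pbr = material as? PhysicallyBasedMaterial else { return material }
        pbr.roughness = .init(floatLiteral: 1.0)   // damp the mirror-like reflection
        pbr.specular = .init(floatLiteral: 0.0)    // remove the specular highlight
        return pbr
    }
    plane.components.set(model)
}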
visionOS 2 beta 5, Unity TextMesh shader errors
I wanted to report an issue I've encountered with the latest Beta 6 update concerning the immersive space feature. Before this update, when I was in immersive space and clicked a window button to play a video using AVPlayer, I had the option to keep other windows open and accessible within the environment; after updating to Beta 6, that no longer seems to be possible. Could you please investigate this issue? It would be helpful to know whether this is an intentional change or a bug affecting window management in immersive space.
Thank you for your attention to this matter. I look forward to your response.
Hi. I display buildings in a mixed immersive view. Right now the building appears relative to the person when the view is opened (a world anchor).
To position the building precisely, I want to use object tracking.
I set up a project following the WWDC object-tracking session, and that works well... sort of.
With an object anchor, the 3D object attached to the anchor disappears as soon as the tracked object is out of view, and with big objects you never get the chance to look around.
I figure I need to give my 3D object a world anchor, and only update that world anchor when a change in the object anchor is detected.
How do I do that?
Preferably using the tools in Reality Composer Pro (or very well explained, as I am new to code).
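To make the question concrete, here is a rough sketch of what I imagine the code side could look like. The reference-object file name, entity names, and overall structure are assumptions on my part, not working code from my project:

import ARKit
import RealityKit

// Sketch: keep the building on a world-fixed parent entity and only move that parent
// while the object anchor is actually tracked, so the building stays put when the
// tracked object leaves the field of view.
@MainActor
func trackBuilding(content: RealityViewContent, building: Entity) async throws {
    let session = ARKitSession()

    // "BuildingMarker" is a placeholder for my captured reference object.
    guard let url = Bundle.main.url(forResource: "BuildingMarker", withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)
    let objectTracking = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([objectTracking])

    // World-fixed parent for the building.
    let worldParent = Entity()
    worldParent.addChild(building)
    content.add(worldParent)

    for await update in objectTracking.anchorUpdates {
        // Only re-position while tracked; otherwise keep the last known world transform.
        guard update.anchor.isTracked else { continue }
        worldParent.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
    }
}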
Hi everyone,
I'm new to Swift and VisionOS development in general, so please go easy on me.
Currently, I'm looking at a sample project from a WWDC23 session that uses RealityKit and ARKit to add a cube entity to a scene via tap gesture. The link to the sample project is here.
Instead of adding a cube, I changed the code to add a USDZ model. Here is my code:
func add3DModel(tapLocation: SIMD3<Float>) {
    let placementLocation = tapLocation + SIMD3<Float>(0, 0.1, 0)
    guard let entity = try? Entity.load(named: "cake-usdz", in: realityKitContentBundle) else {
        logger.error("failed to load 3D model")
        return
    }

    // calculate the collision box (the boundaries)
    let entitySize = entity.visualBounds(relativeTo: nil)
    let width = entitySize.max.x - entitySize.min.x
    let height = entitySize.max.y - entitySize.min.y
    let depth = entitySize.max.z - entitySize.min.z
    // logger.debug("width: \(width), height: \(height), depth: \(depth)")

    // set collision shape
    let collisionShape = ShapeResource.generateBox(size: SIMD3<Float>(width, height, depth))
    entity.components.set(CollisionComponent(shapes: [collisionShape]))

    // set the position and input types to indirect
    entity.setPosition(placementLocation, relativeTo: nil)
    entity.components.set(InputTargetComponent(allowedInputTypes: .indirect))

    let material = PhysicsMaterialResource.generate(friction: 0.8, restitution: 0.0)
    entity.components.set(PhysicsBodyComponent(
        shapes: [collisionShape],
        mass: 1.0,
        material: material,
        mode: .dynamic
    ))

    contentEntity.addChild(entity)
}
This works fine so far.
But when I tried to add a Drag Gesture to drag the added entity around, there were weird glitches with the model: it jumped up and down, and sometimes even rotated around itself.
Below is my code for Drag Gesture. I placed it directly below the code for Spatial Tap Gesture in the sample project.
.gesture(DragGesture().targetedToAnyEntity().onChanged({ value in
    let targetedEntity = value.entity
    targetedEntity.position = value.convert(value.location3D, from: .local, to: .scene)
}))
At first, I thought my code was wrong. But after looking around and removing the PhysicsBodyComponent from the added model, the entity moved as intended while dragging.
I can't figure out a solution to this. Could anyone help me?
I'm currently on Xcode 16 beta 2, and visionOS 2.0. Because I'm on Beta, I'm unsure if this is a bug or if I just missed something.
Thank you.
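For reference, here is a rough sketch of a workaround I'm experimenting with: switching the physics body to kinematic while the drag is active, on the assumption that the dynamic simulation is fighting the direct position writes (I'm not sure this is the intended approach):

.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged({ value in
            let entity = value.entity
            // Temporarily take the body out of the dynamic simulation while dragging.
            if var body = entity.components[PhysicsBodyComponent.self], body.mode == .dynamic {
                body.mode = .kinematic
                entity.components.set(body)
            }
            entity.position = value.convert(value.location3D, from: .local, to: .scene)
        })
        .onEnded({ value in
            // Hand the entity back to the physics simulation when the drag ends.
            if var body = value.entity.components[PhysicsBodyComponent.self] {
                body.mode = .dynamic
                value.entity.components.set(body)
            }
        })
)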
I am attempting to execute actions after tapping an entity in a RealityView using the Behaviors component. I have added the Input Target component and the tap gesture as follows:
.gesture(
    TapGesture().targetedToAnyEntity()
        .onEnded({ value in
            _ = value.entity.applyTapForBehaviors()
        })
)
However, during testing, I have observed that the entity does not appear to recognize the tap gesture. Could you kindly provide any relevant documentation or guidance on this matter?
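In case it matters, this is roughly how the entity is configured. My (possibly wrong) understanding is that the tap gesture needs both an InputTargetComponent and a CollisionComponent on the entity; `tappableEntity` and the box size are placeholders:

// Assumed setup: the entity needs both an input target and a collision shape to be hit-testable.
tappableEntity.components.set(InputTargetComponent())
tappableEntity.components.set(CollisionComponent(
    shapes: [.generateBox(size: [0.2, 0.2, 0.2])]   // rough bounds; adjust to the model
))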
In the RealityView, I found that the entity could not cast a shadow onto the real-world surroundings (passthrough). What configuration should I add to achieve this?
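For context, the only shadow-related setting I have tried so far is the grounding shadow component; a minimal sketch, assuming `modelEntity` is the entity that should cast the shadow:

// Opt the model into casting a grounding shadow onto real-world surfaces.
modelEntity.components.set(GroundingShadowComponent(castsShadow: true))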
The entities in my RealityView have tracking (anchoring) components that let them follow different parts of the hand. However, I found that, apart from the index fingertip, the thumb tip, the palm, and the wrist, no other positions are tracked correctly (for example, the middle fingertip). How can I solve this? (I suspect it may be a bug in the beta.)
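In the meantime, here is a sketch of the fallback I am considering: reading the joint directly from ARKit hand tracking and driving the entity's transform myself. The entity name `markerEntity` is a placeholder, and I am assuming this runs inside an async task with hand-tracking authorization granted:

import ARKit
import RealityKit

let session = ARKitSession()
let handTracking = HandTrackingProvider()
try await session.run([handTracking])

for await update in handTracking.anchorUpdates {
    let handAnchor = update.anchor
    guard handAnchor.chirality == .right,
          let joint = handAnchor.handSkeleton?.joint(.middleFingerTip),
          joint.isTracked else { continue }
    // World transform of the joint = hand anchor transform * joint-in-hand transform.
    let worldTransform = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
    markerEntity.setTransformMatrix(worldTransform, relativeTo: nil)
}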
This is about using notifications to trigger actions from RCP's new Timeline system. After watching Compose interactive 3D content in Reality Composer Pro, I am starting to get confused about why we need to call Entity.applyTapForBehaviors() in code to trigger content in the Behaviors component, when the Behaviors component already has OnTap selected so that a "Tap Notification" triggers our action (on a selected target object).
By the same logic, I would guess that after selecting the OnCollision trigger I should write something like CollisionEvent.entityA.applyCollisionForBehaviors(), which doesn't exist. And of course the collision on my object won't trigger the action (because I only set things up in RCP, not in code).
Setting aside that this post has pointed out we could use the Behaviors component's OnNotification trigger for now.
I found that I could still use the OnTap trigger, but call Entity.applyTapForBehaviors() from my subscribed collision begin event (sketch below). That actually works better than OnCollision.
So what are the design principles here? And how can I trigger a collision notification so that my Behaviors component's OnCollision trigger actually works?
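Here is a sketch of the workaround mentioned above, subscribing to the collision begin event inside the RealityView content and re-firing the OnTap behavior from there (`myEntity` and `collisionSubscription` are names from my own project):

// Workaround sketch: when a collision begins, fire the entity's OnTap behaviors instead.
collisionSubscription = content.subscribe(to: CollisionEvents.Began.self, on: myEntity) { event in
    _ = event.entityA.applyTapForBehaviors()
}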
I can execute an action by having Xcode send a notification to Reality Composer Pro via NotificationCenter, or by sending notifications to Xcode through the Notification action in Reality Composer Pro. However, in my project, neither side is able to receive the other's notifications. To check whether there was an error in my code, I created a simple demo project, used the same code, and found that it works normally there. It is perplexing that I am unable to resolve this issue. Do I need to make additional modifications?
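For reference, this is the posting pattern I am using for the Behaviors component's notification trigger, following the WWDC session; the identifier "StartTimeline" and the `rcpRootEntity` name are from my own project:

// Post the notification the Behaviors component's notification trigger listens for.
// The identifier must match the one configured in Reality Composer Pro.
NotificationCenter.default.post(
    name: NSNotification.Name("RealityKit.NotificationTrigger"),
    object: nil,
    userInfo: [
        "RealityKit.NotificationTrigger.Scene": rcpRootEntity.scene as Any,
        "RealityKit.NotificationTrigger.Identifier": "StartTimeline"
    ]
)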
When using the object-tracking sample application with Apple Vision Pro, the left-eye display shows the virtual content at the same position as the tracked real object, but the right eye is misaligned. Is there any way to resolve this issue?
I created a simple Timeline animation in RCP with only a "Play Audio" action, plus a Behaviors component with an "OnTap" trigger that fires this Timeline animation.
In my code, I simply call Entity.applyTapForBehaviors() when something happens. The audio plays normally in the simulator but not on the device.
Could a bug be causing this behavior?
Env below:
Simulator Version: visionOS 2.0 (22N5286g)
Xcode Version: Version 16.0 beta 4 (16A5211f)
Device Version: visionOS 2.0 beta (latest)
Is there any action that can clone an entity in RealityView as many times as I want? If there is, please let me know. Thank you!
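If there is no built-in action, here is a sketch of what I would otherwise do in code, assuming Entity.clone(recursive:) is appropriate (`sourceEntity` and `count` are placeholders):

// Clone the entity `count` times and add each copy to the RealityView content,
// offsetting the copies slightly so they don't overlap.
for index in 0..<count {
    let copy = sourceEntity.clone(recursive: true)
    copy.position.x += Float(index) * 0.2
    content.add(copy)
}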
Hello Dev team,
For three weeks I have been looking for a way to export static Cinema 4D objects WITH TEXTURES to Reality Composer Pro!
I can export directly to USDA format, and the 3D model itself comes through fine in Reality Composer Pro, BUT I can't get the textures onto my model. My model is simply not colored!
Of course, I expect the textures to be applied in the right places, with the same appearance I have in Cinema 4D.
Could you give me a process to do that, please?
I'm using Cinema 4D R25 and the latest Xcode and Reality Composer Pro beta versions.
Big, big thanks to whoever can help me with this. It will unblock many things for me!
Cheers
Mathis
My app dynamically loads different immersive furniture-design scenes.
After each scene is loaded, I need to set an HDR image as the image-based light.
How can I load an EnvironmentResource dynamically, so that I can set the ImageBasedLightComponent dynamically?
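A sketch of what I am aiming for, assuming the HDR can ship in the app bundle as an environment resource (the name "LivingRoomHDR" and the `sceneRoot` entity are placeholders):

// Load the environment resource and use it as the image-based light for the loaded scene.
let environment = try await EnvironmentResource(named: "LivingRoomHDR")
sceneRoot.components.set(ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0))
sceneRoot.components.set(ImageBasedLightReceiverComponent(imageBasedLight: sceneRoot))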
Hello everyone,
I am a developer working on the Apple Vision Pro platform, currently developing an application that relies heavily on the Vision Pro LiDAR sensor. To ensure the accuracy and performance of my application, I would like to gather more detailed information about the technical specifications of the LiDAR sensor, particularly in the following areas:
1. Distance Accuracy: How accurate is the LiDAR sensor at different distances?
2. Spatial Resolution: What is the smallest object size that the sensor can detect?
3. Environmental Impact: How does the performance of the LiDAR sensor vary under different lighting conditions or environmental factors (e.g., reflective surfaces, fog)?
I would greatly appreciate any detailed information or technical documentation regarding these questions. If there are any developers or Apple staff members who have insights on this, your input would be highly valued.
Thank you in advance for your assistance!
Compilation of the project for the WWDC 2024 session titled Compose interactive 3D content in Reality Composer Pro fails.
After applying the fix mentioned here (https://developer.apple.com/forums/thread/762030?login=true), the project still won't compile.
Using Xcode 16 beta 7, I get these errors:
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: AudioLibrary not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: BlendShapeWeights not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
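If I read the errors correctly, they want the RealityKitContent package to declare a newer visionOS platform. This is the change I assume they mean, in that package's Package.swift (trimmed to the relevant parts):

// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v2)   // raised from .v1 so the newer components compile
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)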
func createEnvironmentResource(image: UIImage) -> EnvironmentResource? {
    do {
        let cube = try TextureResource(
            cubeFromEquirectangular: image.cgImage!,
            quality: .normal,
            options: TextureResource.CreateOptions(semantic: .hdrColor)
        )
        let environment = try EnvironmentResource(
            cube: cube,
            options: EnvironmentResource.CreateOptions(
                samplingQuality: .normal,
                specularCubeDimension: cube.width / 2
                // compression: .astc(blockSize: .block4x4, quality: .high)
            )
        )
        return environment
    } catch {
        print("error: \(error)")
    }
    return nil
}
When I put this code in the project, it runs normally on the visionOS 2.0 simulator, but when it runs on the real device, an error is reported at startup:
dyld[987]: Symbol not found: _$s10RealityKit19EnvironmentResourceC4cube7optionsAcA07TextureD0C_AC0A10FoundationE13CreateOptionsVtKcfC
Referenced from: <DEC8652C-109C-3B32-BE6B-FE634EC0D6D5> /private/var/containers/Bundle/Application/CD2FAAE0-415A-4534-9700-37D325DFA845/HomePreviewDEV.app/HomePreviewDEV.debug.dylib
Expected in: <403FB960-8688-34E4-824C-26E21A7F18BC> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
What is the reason, and how can I solve it?
Dear Apple Developer Forums,
I am just starting out developing in Swift, using RealityKit and Reality Composer Pro, as a project I'm working on is transitioning from Unity to native only. I am trying to attach a particle system to the user's right hand, emitting from a single point and showing a 'spatial trail' of sorts, basically acting as a visualizer of the hand's spatial history. However, in Reality Composer Pro, when I anchor my particle emitter's parent entity using an Anchor component, all of the spawned particles are also anchored to the specified anchor position, even though the "Particles Inherit Transform" option is unticked (false). The behavior I expected is that the emitter itself is anchored, but the spawned particles retain their spawn position in world space. Am I missing something, or does anchoring simply behave this way in relation to particle systems?
Thank you!
RCP 1.0, Xcode 15.4, visionOS 1.2