I am using RealityKit's ObjectCaptureSession API to capture objects, presenting the process with ObjectCaptureView. During the object capture session, there is default background audio that plays automatically.
I noticed the same audio behavior in Apple's official Reality Composer app, which seems to use the same API. I'd like to disable this audio in my app, but I have not been able to find any API or configuration option to do so, and the audio persists. Is there an official method or workaround to disable this default audio in the ObjectCaptureSession API?
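To frame the question, this is the kind of workaround I was hoping would exist: configuring the app's shared audio session so the capture feedback can be silenced. Whether ObjectCaptureSession's sounds respect the app's audio session category at all is an assumption I have not been able to confirm.

import AVFoundation

// Hypothetical workaround sketch: treat app audio as ambient so it can be
// silenced by the ring/silent switch and mixed with other audio. It is an
// assumption, not documented behavior, that the capture session's feedback
// sounds follow this category.
func configureQuietAudioSession() {
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.ambient, options: [.mixWithOthers])
        try session.setActive(true)
    } catch {
        print("Failed to configure audio session: \(error)")
    }
}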
Any guidance would be appreciated. Thank you!
Reality Composer Pro
Prototype and produce content for AR experiences using Reality Composer Pro.
Sample repo: https://github.com/ckse93/VideoDiffusionIssueSHowcase
The repo has a detailed step-by-step workflow, as well as screenshots, the Python script's computed results, and the parameters used.
After running computeDiffuseReflectionUVs.py and mapping the textures and diffuse reflection to the objects, I noticed that the diffuse reflection does not produce any color.
The expected result is shown below: the diffused light has color.
[xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
[xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
Tool exited with code 1
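The error message itself points at the fix: raising the minimum platform in the RealityKitContent package's Package.swift. A minimal sketch is below; the exact version that EnvironmentLightingConfiguration requires is an assumption (something newer than visionOS 1.0), so adjust it to whatever your SDK's documentation states.

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        // Raised from the implicit "xros 1.0"; the exact minimum needed by
        // EnvironmentLightingConfiguration is an assumption here.
        .visionOS("2.0")
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)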
Hi
I'm trying to create a water shader using the shader graph in Reality Composer Pro, but quite a few of the features you would need for realistic water rendering appear to be missing.
One big issue is the lack of a way to create refraction. We can easily control the transparency of the water by changing the opacity, but how can we distort what we see through the water? I can't find any obvious solution for that.
In Unity, they provide a node called HD Scene Color which is basically the scene rendered to an offscreen buffer which you can apply to the water and then distort to get a refraction effect. I guess the Background Blur node could be used for something like this if we could turn off the blur and distort it, but there's no control for the blur and no control for the texture coordinates.
Am I missing something? Any ideas are welcome :)
I’m currently using the RealityKit/ObjectCaptureSession API to develop my app, and I’ve noticed that Apple’s official Reality Composer app also uses the same API. However, both my app and the Reality Composer app crash if the device doesn’t have enough storage space (approximately 4 GB free). Here is the debug log I’m seeing:
Insufficient storage: required 4000000000
Switch to error state. Got error = insufficientStorage(requiredBytes: 4000000000)
fromState == toState so punting transition! from=disabled toState=disabled
Punting transition since states match: disabled
Got error starting session! insufficientStorage(requiredBytes: 4000000000)
I would like to request:
A fix for the crash in the official Reality Composer app.
Guidance on how to properly handle this crash or error when using the ObjectCaptureSession API in my own app.
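For the second point, here is a minimal sketch of the pre-flight check and error handling I have in mind; the 4 GB threshold is taken from the log above rather than from any documented requirement, and the directory handling is simplified.

import Foundation
import RealityKit

// Sketch: check free space before starting the capture session, and observe
// the session's state stream so an insufficientStorage failure can be
// surfaced to the user instead of crashing. The 4 GB threshold is an
// assumption based on the log above.
let requiredBytes: Int64 = 4_000_000_000

func hasEnoughFreeSpace(at url: URL) -> Bool {
    let values = try? url.resourceValues(forKeys: [.volumeAvailableCapacityForImportantUsageKey])
    guard let capacity = values?.volumeAvailableCapacityForImportantUsage else { return false }
    return capacity >= requiredBytes
}

func startCaptureIfPossible(session: ObjectCaptureSession, imagesDirectory: URL) {
    guard hasEnoughFreeSpace(at: imagesDirectory) else {
        // Show a "free up storage" alert instead of starting the session.
        return
    }
    session.start(imagesDirectory: imagesDirectory)

    Task {
        for await state in session.stateUpdates {
            if case .failed(let error) = state {
                // Handle insufficientStorage and other errors gracefully here.
                print("Capture failed: \(error)")
            }
        }
    }
}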
Thank you!
I want to render a 3D/stereoscopic video in an Apple Vision Pro window using RealityKit/RealityView. The video is left-right stereo. The straightforward approach would be to spawn a quad and give it a custom Shader Graph material with a CameraIndexSwitch node, which chooses between the right texture and the left texture.
https://i.sstatic.net/XawqjNcg.png
The issue I have here is that I have to extract the video frames from my AVSampleBufferVideoRenderer. This should work ok, but not if I'm playing FairPlay content.
So, my question is, how to render stereo FairPlay videos in a SwiftUI RealityView?
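For reference, this is roughly the setup I described for the non-FairPlay case; the material path, parameter names, and bundle are from my own project, so treat them as assumptions rather than a documented recipe.

import RealityKit
import RealityKitContent

// Sketch: quad + Shader Graph material with a Camera Index Switch that picks
// the left or right texture per eye. "StereoMaterial", "LeftImage", and
// "RightImage" are names from my own material, not standard ones. The
// textures here stand in for frames extracted from the video pipeline.
func makeStereoQuad() async throws -> ModelEntity {
    var material = try await ShaderGraphMaterial(
        named: "/Root/StereoMaterial",
        from: "Materials/Stereo.usda",
        in: realityKitContentBundle
    )

    let leftTexture = try TextureResource.load(named: "LeftEyeFrame")
    let rightTexture = try TextureResource.load(named: "RightEyeFrame")
    try material.setParameter(name: "LeftImage", value: .textureResource(leftTexture))
    try material.setParameter(name: "RightImage", value: .textureResource(rightTexture))

    return ModelEntity(
        mesh: .generatePlane(width: 1.6, height: 0.9),
        materials: [material]
    )
}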
Does anyone have any idea if Apple plans to add in UDIM support for its 3D development? It is a real bummer to not have this feature and makes an otherwise clean USD pipeline kinda suck.
"Although Xcode generates loading methods for all Reality Composer files in your Xcode project"
I do not find this to be true, sadly.
Does anyone have any luck or insight on how one can build just a simple macOS app that will import a scene from a .reality file?
The documentation suggests that the simple act of bringing a .reality file in (what about .realitycomposerpro?) will generate code, but that doesn't seem to happen.
The sample code (Spaceship) does not compile for macOS.
I'd really love just the most generic template of an Xcode project that compiles, with a button that pops open a scene, like the visionOS default immersive project.
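For what it's worth, the closest I've gotten to that generic template is something like the sketch below. It assumes macOS 15 (where RealityView is available) and a RealityKitContent package that actually contains an entity named "Scene"; no loading code is generated automatically.

import SwiftUI
import RealityKit
import RealityKitContent

// Minimal macOS sketch: a window with a button that loads a scene from the
// RealityKitContent package. Assumes macOS 15+ and an entity named "Scene"
// in the package.
struct ContentView: View {
    @State private var showScene = false

    var body: some View {
        VStack {
            Button("Open Scene") { showScene = true }
            if showScene {
                RealityView { content in
                    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                        content.add(scene)
                    }
                }
            }
        }
    }
}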
Hey there, I am working on an app that displays environmental data overlaid on a map, using PNG color channels to represent data ranges. The sampled values aren't what I'm expecting, though: for example, an RGB value of 0x7f0000 (R = 0.5, G = 0, B = 0) comes through as (0.21, 0, 0) in the shader. This basically makes it unusable if I'm trying to show scientific data. I'm half wondering if I am completely misunderstanding how sampling works in RealityKit / Reality Composer Pro. Does anybody have any idea why it works like this?
Actual result (chart labels added in Photoshop):
Expected:
Shader graph used (Red > 0.1):
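In case it's relevant, this is roughly how the data texture could be loaded programmatically. I'm wondering whether the discrepancy is just sRGB-to-linear conversion (0.5 in sRGB decodes to roughly 0.21 linear) and whether a non-color semantic like the one below is the right way to opt out of it; the file handling and names are my own, not a confirmed fix.

import Foundation
import ImageIO
import RealityKit

// Sketch: create the texture with a .raw semantic so RealityKit treats the
// channels as plain data rather than sRGB-encoded color. Whether this
// matches what Reality Composer Pro does with an assigned image is exactly
// my question.
func loadDataTexture(from url: URL) throws -> TextureResource {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
        throw CocoaError(.fileReadCorruptFile)
    }
    return try TextureResource.generate(from: image, options: .init(semantic: .raw))
}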
Reality Composer Pro question related to custom components
My custom component defines some properties to edit in RCP. Simple ones work fine, but SIMD3 and SIMD2 properties do not. I'd expect to see their default values, but instead they all show 0. If I try to run the scene in that state it doesn't load; once I enter some values, it loads, and building and running again works fine.
More generally, does Apple have documentation on creating properties for components? The only examples I've seen show simple strings and floats. There are no details about vectors, conditional options, grouping properties, etc.
public struct EntitySpawnerComponent: Component, Codable {
    public enum SpawnShape: String, Codable {
        case domeUpper
        case domeLower
        case sphere
        case box
        case plane
        case circle
    }

    // These properties get their default values in RCP.
    /// The number of clones to create
    public var Copies: Int = 12
    /// The shape to spawn entities in
    public var SpawnShape: SpawnShape = .domeUpper
    /// Radius for spherical shapes (dome, sphere, circle)
    public var Radius: Float = 5.0

    // These properties DO NOT get their default values in RCP. They all show 0.
    /// Dimensions for box spawning (width, height, depth)
    public var BoxDimensions: SIMD3<Float> = SIMD3(2.0, 2.0, 2.0)
    /// Dimensions for plane spawning (width, depth)
    public var PlaneDimensions: SIMD2<Float> = SIMD2(2.0, 2.0)
    /// Track if we've already spawned copies
    public var HasSpawned: Bool = false

    public init() {
    }
}
Hi everyone,
I’m working on a project in Reality Composer Pro, and I’ve encountered an issue with vertex animations. Here’s what I’ve done:
I created a model in Blender, which includes two animations:
One that scales the vertices of the model.
Another that moves the model's position.
I imported both animations into Reality Composer Pro, and the position animation works fine, but the vertex scaling animation does not seem to work.
What I’m Trying to Achieve:
I want the vertex scaling animation to play correctly in Reality Composer Pro alongside the movement animation.
Problem:
The position animation works as expected, but the vertex scaling animation does not work when applied in Reality Composer Pro.
I have checked the vertex scaling animation in other software, and it works fine there. The issue seems to be specific to Reality Composer Pro.
Is vertex animation scaling supported in Reality Composer Pro? If so, what might be causing this issue? Any advice or solutions would be greatly appreciated!
Hello!
https://forums.developer.apple.com/forums/thread/762763
I read this thread, and this is similar to what I'm trying to do.
I have two entities in the scene: "HandTrackingEntity" and "HandScanner".
"HandTrackingEntity" has an Anchoring component and a Collision component (trigger).
"HandScanner" has a Behaviors component (OnCollision) and a Collision component.
Here are pictures of how I set the components.
I also set the physicsSimulation property to .none.
I was expecting the timeline to play when I put my hand (with the HandTrackingEntity) on the "HandScanner" entity, but it didn't work.
Am I missing some steps? I'd also appreciate sample code showing how to apply the physicsSimulation property.
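For context, here is the sort of code I'd expect to write based on the thread above. The entity setup, chirality, and collision shape are placeholders from my scene, and whether this is the intended way to combine physicsSimulation = .none with a trigger collision is exactly what I'm unsure about.

import RealityKit

// Sketch (visionOS 2 APIs assumed): hand-anchored entity with physics
// simulation disabled on the anchor, plus a trigger collision shape.
func configureHandTracking(handTrackingEntity: Entity) {
    var anchoring = AnchoringComponent(
        .hand(.right, location: .palm),
        trackingMode: .predicted
    )
    anchoring.physicsSimulation = .none
    handTrackingEntity.components.set(anchoring)

    let collision = CollisionComponent(
        shapes: [.generateSphere(radius: 0.05)],
        mode: .trigger,
        filter: .default
    )
    handTrackingEntity.components.set(collision)
}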
Hi!
I want to ask whether it's possible to make a mirror material with Shader Graph in Reality Composer Pro. The mirror should reflect the entities in the Reality Composer Pro scene.
I found that this works with SceneKit, but I'm using RealityKitContent in my project. Is there a way to solve this?
Hi,
In the downloadable WWDC sample project "CreatingASpaceshipGame" there is an audio file named "WorkMusic.aiff", which is also mentioned in the video. The file's info says it's PCM, 4-channel quadraphonic.
Where can I find further information on how this file was authored? Was it simply exported from Logic Pro with quadraphonic surround settings, or did it have any other specific treatment?
Thanks,
Axel
Hi, everyone!
I'm trying to bind a timeline (animation + audio) and behaviors to an entity in Reality Composer Pro.
In Xcode, I need to clone this entity and use the behaviors, but I found that the behaviors are not cloned (the notification is sent but not received by the Reality Composer Pro content, and the timeline does not execute).
How can I solve this problem? Thanks!
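Here is roughly what I'm doing in Xcode. The notification name and userInfo keys are the ones I've seen used with RCP notification triggers, and "PlayTimeline" is my own trigger identifier, so please treat the details as assumptions.

import Foundation
import RealityKit

// Sketch: clone the RCP entity, add it to the scene, then fire the Behaviors
// "Notification" trigger by identifier. The clone receiving the trigger is
// exactly what does not seem to work for me.
func spawnCloneAndPlay(from original: Entity) {
    let clone = original.clone(recursive: true)
    original.parent?.addChild(clone)

    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": clone.scene as Any,
            "RealityKit.NotificationTrigger.Identifier": "PlayTimeline"
        ]
    )
}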
I am currently developing a game that runs on visionOS using RealityKit and Swift.
I have a question regarding particle emitters.
It seems that there is a sorting order (render queue) between particle emitters themselves, but there doesn’t appear to be a render queue between particle emitters and regular model entities.
If such a feature exists, could you please provide a simple example?
Thank you!
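For context, this is what I've tried so far on the model entities; whether ModelSortGroupComponent has any effect on particle emitters is exactly what I'm asking, and the entity names are placeholders.

import RealityKit

// Sketch: put two entities into an explicit sort group with per-entity order.
// It is an open question (for me) whether an entity with a
// ParticleEmitterComponent participates in this ordering at all.
func applySortOrder(particles: Entity, glassPanel: ModelEntity) {
    let group = ModelSortGroup(depthPass: nil)
    particles.components.set(ModelSortGroupComponent(group: group, order: 1))
    glassPanel.components.set(ModelSortGroupComponent(group: group, order: 2))
}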
Hello! Currently watching the "Envision the Future: Build great apps for visionOS" webinar, and lots of questions are coming up. Thx for offering this online!
For those of us with "VR legs", how can we go about setting up custom hand/finger gestures that would enable us to add the functionality for teleporting and navigating within our fully Immersive environments? Both smooth, and snap turn/teleport options would be great, thx! This is adjacent to my previous question on how to setup a PS5 controller to do something similar. Think Half-Life: ALYX as the gold standard for VR navigation.
I recently completed a freelance project where I was tasked with creating room-scale environments that could be used as AR elements. As a bonus, I suggested that these could be done to scale, and repurposed for eventual viewing in Vision Pro. To illustrate, I was able to quickly create a simple Immersive project in Xcode, add the USDZ file (authored in Maya, with baked lighting from Arnold) to Reality Composer Pro, and compile for quick sending to headset. I then would do screen recordings inside the immersive space, which the client loved to see. However, I am unable to walk around due to the boundary limitations.
My next obvious thought is: how can I set up the "player" camera so that I can control it with a PS5 controller inside AVP? In addition to Maya, I'm an Unreal Engine artist and have been waiting patiently to get any projects compiled for AVP. With the 5.5 release, I was able to get a VR Template test over to AVP, where I have rudimentary navigation control via the PS5 controller.
Ideally, I’d also love to learn how to set this up natively, so I can take simple USDZ scenes created in Maya, import to RCP, setup a simple camera controller, and then be able to use this to navigate my VR Immersive spaces on Vision Pro. How can we go about doing this?
Part two of this question/suggestion is, how would I go about controlling a rigged, animated character in AR/passthrough mode in a similar fashion? Thx!
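For the navigation part of this, the rough shape of what I'm imagining is below: the RCP scene gets parented under a "world root" entity, and the controller's left thumbstick translates that root each frame. Entity names, the speed constant, and the overall approach are my own assumptions, not an Apple-recommended pattern.

import GameController
import RealityKit

// Sketch: move the world root opposite to the thumbstick to simulate
// walking through a fully immersive scene. deltaTime would come from the
// app's update loop; smooth vs. snap turning is not handled here.
final class ControllerNavigation {
    let worldRoot = Entity()            // parent the RCP scene under this
    private let moveSpeed: Float = 1.5  // metres per second, arbitrary

    func update(deltaTime: Float) {
        guard let gamepad = GCController.controllers().first?.extendedGamepad else { return }
        let x = gamepad.leftThumbstick.xAxis.value
        let y = gamepad.leftThumbstick.yAxis.value

        var position = worldRoot.position
        position.x -= x * moveSpeed * deltaTime
        position.z += y * moveSpeed * deltaTime
        worldRoot.position = position
    }
}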
What is the recommended best practice for importing a Blender 3D file into RCP? I assume as a .usdz file? Is there a WWDC24 session or other Apple resource that best explains this? I want to make sure I provide the right format/file to RCP from Blender.