How do I set the scale unit of an Entity in Reality Composer Pro? For example, if the scale value is 1 meter, then when the Entity is placed in a RealityView, its displayed size should be 1 meter.
If the scale unit cannot be set in Reality Composer Pro, is there a way to specify it in code so that the Entity is displayed in meters when added to a RealityView?
Thank you
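For reference, RealityKit and Reality Composer Pro both treat one unit as one meter, so there is no separate unit setting to change. A minimal sketch of rescaling in code, assuming you want a target width in meters (the function name and the choice of the X axis are illustrative):

import RealityKit

// Rescale an entity so its visual bounds span `targetWidth` meters along X.
// RealityKit's world space is metric: 1 unit == 1 meter.
func setWidth(of entity: Entity, toMeters targetWidth: Float) {
    let bounds = entity.visualBounds(relativeTo: nil)
    let currentWidth = bounds.extents.x
    guard currentWidth > 0 else { return }
    entity.scale *= SIMD3<Float>(repeating: targetWidth / currentWidth)
}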
In visionOS mixed immersion mode, I place a virtual object on the floor with a real chair in front of it, but the chair does not occlude the virtual object, which makes the effect unrealistic. How can real objects such as chairs occlude virtual objects?
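A sketch of one common approach (not a confirmed recipe): run ARKit scene reconstruction on device and cover the resulting real-world mesh with OcclusionMaterial, so real geometry hides virtual content behind it. meshResource(from:) below is a hypothetical helper for converting MeshAnchor.Geometry buffers into a MeshResource, and scene reconstruction requires a physical device, not the simulator.

import ARKit
import RealityKit

// Hypothetical helper (assumption): builds a renderable MeshResource from
// the anchor's geometry buffers. Implementation omitted here.
func meshResource(from geometry: MeshAnchor.Geometry) throws -> MeshResource {
    fatalError("not shown")
}

let session = ARKitSession()
let provider = SceneReconstructionProvider()

// Mirror the reconstruction mesh as invisible occluders under `root`.
func runOcclusionMesh(into root: Entity) async throws {
    try await session.run([provider])
    var occluders: [UUID: ModelEntity] = [:]
    for await update in provider.anchorUpdates {
        switch update.event {
        case .added, .updated:
            guard let mesh = try? meshResource(from: update.anchor.geometry) else { continue }
            let model = occluders[update.anchor.id] ?? ModelEntity()
            // OcclusionMaterial renders nothing itself but hides virtual
            // content located behind it.
            model.model = ModelComponent(mesh: mesh, materials: [OcclusionMaterial()])
            model.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
            if occluders[update.anchor.id] == nil {
                occluders[update.anchor.id] = model
                root.addChild(model)
            }
        case .removed:
            occluders[update.anchor.id]?.removeFromParent()
            occluders[update.anchor.id] = nil
        }
    }
}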
WindowGroup {
    SolarDisplayView()
        .environment(model)
}
.windowStyle(.plain)
Why is the code above correct, while the code below reports an error? How should the following code be modified?
WindowGroup {
    SolarDisplayView()
        .environment(model)
}
.windowStyle(model.isShow ? .plain : .automatic)
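For what it's worth, the error is a type mismatch rather than anything window-specific: windowStyle(_:) is generic over a single concrete WindowStyle type, and the two branches of the ternary have different concrete types, so the compiler cannot infer one type for the expression:

// Each call compiles on its own, because the generic parameter resolves
// to one concrete type:
.windowStyle(.plain)      // S == PlainWindowStyle
.windowStyle(.automatic)  // S == DefaultWindowStyle

// The ternary fails because both branches must share a single type,
// but PlainWindowStyle and DefaultWindowStyle are unrelated concrete
// types, and SwiftUI provides no type-erased wrapper for WindowStyle.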
When developing a Vision Pro application, I need to first move and then rotate an Entity in RealityView.
How can these two animations be executed sequentially? (I tested this, and executing them simultaneously produces incorrect animation positions.)
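One way to sequence the two steps (a sketch; the offsets, durations, and the helper name moveThenRotate are illustrative) is to generate both transform animations up front and chain them with AnimationResource.sequence(with:), which plays its elements one after another:

import RealityKit

// Move the entity, then rotate it, as one sequenced animation.
func moveThenRotate(_ entity: Entity) throws {
    var moved = entity.transform
    moved.translation += [0.3, 0, 0]

    var rotated = moved
    rotated.rotation = simd_quatf(angle: .pi / 2, axis: [0, 1, 0]) * rotated.rotation

    let move = FromToByAnimation(
        from: entity.transform,
        to: moved,
        duration: 1.0,
        bindTarget: .transform
    )
    let rotate = FromToByAnimation(
        from: moved,
        to: rotated,
        duration: 1.0,
        bindTarget: .transform
    )

    // sequence(with:) plays the move first, then the rotation.
    let sequence = try AnimationResource.sequence(with: [
        try AnimationResource.generate(with: move),
        try AnimationResource.generate(with: rotate)
    ])
    entity.playAnimation(sequence)
}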
I used other software to export a usdz file, hoping to further adjust the PBR and other material parameters of the model in Reality Composer Pro. Because the usdz is a single unit, I cannot select a specific sub-model inside it with the mouse in the viewport; I have to find each model I want to modify in the list on the left.
This workflow is too inefficient. Is there a better way?
Alternatively, is there a way to disassemble the usdz file into its many sub-models and texture/material files, so that I can select a part with the mouse in the Reality Composer Pro viewport and then modify its PBR parameters directly? That would be much more efficient.
I loaded a usdz room model. After putting it into a RealityView, the entire model surrounded me; even when a SwiftUI view was in front of me, I couldn't interact with it with my fingers. How do I set things up so that SwiftUI responds to my finger tap gestures first?
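A sketch of one possible cause and fix, assuming the room model carries collision and input-target components that swallow spatial taps before they reach SwiftUI: strip those components from the model hierarchy so it no longer participates in hit testing (disableDirectInput(on:) is an illustrative name):

import RealityKit

// Remove hit-testing components recursively so taps pass through the model.
func disableDirectInput(on root: Entity) {
    root.components.remove(InputTargetComponent.self)
    root.components.remove(CollisionComponent.self)
    for child in root.children {
        disableDirectInput(on: child)
    }
}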
I want to use 3ds Max to generate two panoramic renderings, one for the left eye and one for the right, to get a realistic sense of depth.
At the implementation level, are there APIs that can show different content to the left and right eyes?
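One avenue worth noting: Reality Composer Pro's Shader Graph has a Camera Index Switch node that outputs a different value per eye, so a material can sample a different texture for each eye. A sketch of driving such a material from code, assuming a material named "StereoPanorama" with texture parameters "LeftImage" and "RightImage" (all three names are assumptions), applied to an inward-facing sphere:

import RealityKit
import RealityKitContent

// Load the Shader Graph material and feed it per-eye equirectangular images.
func makeStereoPanoramaMaterial() async throws -> ShaderGraphMaterial {
    var material = try await ShaderGraphMaterial(
        named: "/Root/StereoPanorama",
        from: "Immersive.usda",
        in: realityKitContentBundle
    )
    let left = try await TextureResource(named: "PanoLeft")
    let right = try await TextureResource(named: "PanoRight")
    try material.setParameter(name: "LeftImage", value: .textureResource(left))
    try material.setParameter(name: "RightImage", value: .textureResource(right))
    return material
}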
struct GameSystem: System {
    static let rootQuery = EntityQuery(where: .has(GameMoveComponent.self))

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        let root = context.scene.performQuery(Self.rootQuery)
        for entity in root {
            // The query guarantees the component is present.
            let game = entity.components[GameMoveComponent.self]!
            if let xMove = game.game.gc?.extendedGamepad?.dpad.xAxis.value,
               let yMove = game.game.gc?.extendedGamepad?.dpad.yAxis.value {
                print("x:\(xMove), y:\(yMove)")
                let x = entity.transform.translation.x + xMove * 0.01
                let z = entity.transform.translation.z - yMove * 0.01
                entity.transform.translation = [x, entity.transform.translation.y, z]
            }
        }
    }
}
I want to use the game controller's D-pad to continuously move an Entity in visionOS. When I added a query that handles button presses in an ECS System, I found that the update method was not called at a steady per-frame rate; instead, it executes once when I press or release a key.
What is the reason for this?
I want the Entity to keep moving while I hold down the controller button. Is there a better solution? I'd like the movement to be smooth, without stuttering.
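If the system does run every frame while the scene is rendering, one smoother pattern (a sketch; GameMoveSystem, the speed constant, and the axis mapping are illustrative) is to poll the pad state inside update() and scale the movement by context.deltaTime, instead of moving a fixed step per input event:

import GameController
import RealityKit

struct GameMoveSystem: System {
    static let query = EntityQuery(where: .has(GameMoveComponent.self))

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        guard let pad = GCController.current?.extendedGamepad else { return }
        let dx = pad.dpad.xAxis.value
        let dy = pad.dpad.yAxis.value
        let speed: Float = 0.5 // meters per second (assumption)
        // Scaling by deltaTime keeps velocity constant regardless of
        // how often update() is actually called.
        let dt = Float(context.deltaTime)
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            entity.transform.translation += [dx * speed * dt, 0, -dy * speed * dt]
        }
    }
}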
WindowGroup(id: "Volumetic") {
    GeometryReader3D { geometry in
        VolumeView()
            .environment(appState)
            .volumeBaseplateVisibility(.visible) // whether to show the baseplate; defaults to .visible
            .scaleEffect(geometry.size.width / initialVolumeSize.width)
    }
}
.windowStyle(.volumetric)
.windowResizability(.contentSize)
.defaultSize(initialVolumeSize)
I can move the volume with the system drag bar and change its size by dragging the edge of the baseplate. I want to achieve the same effect in code. How can I do that?
My app dynamically loads different immersive furniture-design scenes.
After each scene is loaded, I need to set an HDR image as the image-based light.
How can I load an EnvironmentResource dynamically, so that I can set the ImageBasedLightComponent dynamically?
func createEnvironmentResource(image: UIImage) -> EnvironmentResource? {
    do {
        let cube = try TextureResource(
            cubeFromEquirectangular: image.cgImage!,
            quality: .normal,
            options: TextureResource.CreateOptions(semantic: .hdrColor)
        )
        let environment = try EnvironmentResource(
            cube: cube,
            options: EnvironmentResource.CreateOptions(
                samplingQuality: .normal,
                specularCubeDimension: cube.width / 2
                // compression: .astc(blockSize: .block4x4, quality: .high)
            )
        )
        return environment
    } catch {
        print("error: \(error)")
    }
    return nil
}
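For completeness, a minimal sketch of consuming the result, assuming sceneRoot is the loaded scene's root entity and hdrImage is the dynamically loaded HDR UIImage (both names are placeholders):

if let environment = createEnvironmentResource(image: hdrImage) {
    let ibl = ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0)
    sceneRoot.components.set(ibl)
    // Entities opt in to the light via a receiver component.
    sceneRoot.components.set(ImageBasedLightReceiverComponent(imageBasedLight: sceneRoot))
}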
When I put this code in my project, it runs normally on the visionOS 2.0 simulator. When it is run on a physical device, an error is reported at startup:
dyld[987]: Symbol not found: _$s10RealityKit19EnvironmentResourceC4cube7optionsAcA07TextureD0C_AC0A10FoundationE13CreateOptionsVtKcfC
Referenced from: <DEC8652C-109C-3B32-BE6B-FE634EC0D6D5> /private/var/containers/Bundle/Application/CD2FAAE0-415A-4534-9700-37D325DFA845/HomePreviewDEV.app/HomePreviewDEV.debug.dylib
Expected in: <403FB960-8688-34E4-824C-26E21A7F18BC> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
What is the reason, and how can I solve it?
When I load certain usdz files, the app crashes 100% of the time. Why?
It crashes in the simulator, but not on Vision Pro:
-[MTLDebugDevice newBufferWithBytes:length:options:]:723: failed assertion `Buffer Validation
newBufferWith*:length 0x100fff80 must not exceed 256 MB.
The 3D furniture models I built use some smooth, specular reflective materials. I want them to reflect only the HDR image of the ImageBasedLight component I set myself, without reflecting the light sources of the real AR environment. How can this be achieved in the following scenarios?
How do I prevent PBR materials from being affected by the light sources of the real AR environment?
When using Shader Graph, how can EnvironmentRadiance be kept unaffected by the light sources of the real AR environment?
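One possibly relevant knob, assuming visionOS 2.0 or later: EnvironmentLightingConfigurationComponent lets an entity scale down how much the real surroundings contribute to its lighting, so with a weight of zero only the custom IBL should remain (a sketch, with furnitureEntity as a placeholder):

import RealityKit

// Reduce the real-world environment's lighting contribution to zero
// so the entity is lit and reflected only by the assigned ImageBasedLight.
var lighting = EnvironmentLightingConfigurationComponent()
lighting.environmentLightingWeight = 0
furnitureEntity.components.set(lighting)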
In visionOS, I am building a virtual watch. In practice, the passthrough rendering of the real hand occludes the virtual watch, making it impossible to see.
How can I give the watch display priority over the hand?
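One blunt workaround (a sketch; note it hides the passthrough hands entirely rather than prioritizing a single object) is the upperLimbVisibility scene modifier on the immersive space; the space id and view name are placeholders:

ImmersiveSpace(id: "WatchSpace") {
    WatchImmersiveView()
}
.upperLimbVisibility(.hidden) // passthrough hands no longer occlude virtual content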
In RealityKit, I know that an HDR image is pre-computed, and that through the settings of the ImageBasedLight component a specified specular object can reflect the content of the HDR image.
But if a mirror-like object is very large, such as a long continuous run of glass doors, then after assigning an IBL image to those doors, the reflected image visibly deforms as the viewer moves through the space, because an IBL is a picture of the surroundings captured at a single point, while the glass doors form an extended surface.
Does RealityKit offer a truly real-time specular-reflection setup that can reflect the model on the opposite side of the glass doors?