I'm trying to control the LOD of textures in an app for Vision Pro. With the default image node in Reality Composer Pro the UVs are correct, but the LOD is not what I want, and I would like to have control over it. I see there is a node called "RealityKitTexture2DLOD", but as soon as I try to use that one the UVs are all messed up. Am I missing something? Is there something specific we need to do to use this node?
I tried the "Place 2D" and "UsdTransform2d" nodes but could not get the texture to align.
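In case it helps frame the question, this is the direction I've been exploring: promoting the LOD value to a material input in the shader graph and driving it from Swift. This is just a sketch; the input name "lodBias", the material path, and the scene file name are all hypothetical, not my actual setup.
import RealityKit
import RealityKitContent

// A sketch, assuming the graph exposes a promoted input named "lodBias"
// on a material at "/Root/MyMaterial" in "Scene.usda" (all names are hypothetical).
func applyLODBias(to entity: ModelEntity) async throws {
    var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    try material.setParameter(name: "lodBias", value: .float(2.0))
    entity.model?.materials = [material]
}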
Any help appreciated
Reality Composer Pro
Leverage the all new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.
Context: https://developer.apple.com/forums/thread/751036
I found some sample code here that does the process I described in my other post for ModelEntity: https://www.youtube.com/watch?v=TqZ72kVle8A&ab_channel=ZackZack
At runtime I'm loading:
1. An immersive scene in a RealityView from Reality Composer Pro, with the robot model baked into the file (not remote, an asset in the project)
2. A Model3D view that pulls in the robot model from the web URL
3. A RemoteObjectView (RealityView) which downloads the model to a temp file, creates a ModelEntity, and adds it to the content of the RealityView (a sketch of this view follows the code below)
Method 1 above is fine, but Methods 2 and 3 load the model with a pure black texture for some reason.
Ideally, Methods 2 and 3 would look like the Method 1 result (see screenshot).
Am I doing something wrong? For example, should I not use multiple RealityViews at once?
Screenshot
Code
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                // https://developer.apple.com/
            }
        }

        Model3D(url: URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!)

        SkyboxView()

        // RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")
        RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")
    }
}
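For reference, here is a minimal sketch of the RemoteObjectView mentioned in Method 3. The name, the temp-file handling, and the download approach are assumptions about my implementation, not the exact code.
import SwiftUI
import RealityKit

// A sketch of a view that downloads a remote USDZ to a local file and loads it.
struct RemoteObjectView: View {
    let remoteURL: String

    var body: some View {
        RealityView { content in
            guard let url = URL(string: remoteURL) else { return }
            do {
                // Download the USDZ; Entity(contentsOf:) needs a local file URL.
                let (tempURL, _) = try await URLSession.shared.download(from: url)
                let localURL = FileManager.default.temporaryDirectory
                    .appendingPathComponent(url.lastPathComponent)
                try? FileManager.default.removeItem(at: localURL)
                try FileManager.default.moveItem(at: tempURL, to: localURL)

                let entity = try await Entity(contentsOf: localURL)
                content.add(entity)
            } catch {
                print("Failed to load remote model: \(error)")
            }
        }
    }
}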
How do I set the scale unit of an Entity in Reality Composer Pro? For example, if the scale value is 1 meter, then when this Entity is placed in a RealityView, the displayed size should be 1 meter.
If the unit of scale cannot be set in Reality Composer Pro, is there a way to specify it in code so that the Entity is displayed in meters when added to a RealityView?
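In case a code-side answer is the way to go, this is the kind of thing I mean. It's only a sketch, assuming RealityKit world units are meters and using the entity's visual bounds to normalize it to a target height; the helper name is hypothetical.
import RealityKit

// A sketch: scale an entity so its visual height is `targetHeightMeters`,
// assuming 1 RealityKit unit == 1 meter.
func scale(_ entity: Entity, toHeightMeters targetHeightMeters: Float) {
    let bounds = entity.visualBounds(relativeTo: nil)
    let currentHeight = bounds.extents.y
    guard currentHeight > 0 else { return }
    entity.scale *= SIMD3<Float>(repeating: targetHeightMeters / currentHeight)
}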
Thank you
Hello, I am loading a model from the bundle and it loads successfully. Now I am scaling the model using the GestureExtension from Apple's demo code (https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures?changes=_8).
@State private var selectedEntityName: String = ""
@State private var modelEntity: ModelEntity?

var body: some View {
    contentView
        .task {
            do {
                modelEntity = try await ModelEntity.loadArcadeMachine()
            } catch {
                fatalError(error.localizedDescription)
            }
        }
}

@ViewBuilder
private var contentView: some View {
    if let modelEntity {
        RealityView { content, attachments in
            modelEntity.position = SIMD3<Float>(x: 0, y: -0.3, z: -5)
            print(modelEntity.transform.scale)
            modelEntity.transform.scale = [0.006, 0.006, 0.006]
            content.add(modelEntity)
            if let percentTextAttachment = attachments.entity(for: "percentage") {
                percentTextAttachment.position = [0, 50, 0]
                modelEntity.addChild(percentTextAttachment)
            }
        } update: { content, attachments in
            // I want to get the updated scaling value here and show it in the RealityView attachment text.
        } attachments: {
            Attachment(id: "percentage") {
                Text("\(modelEntity.name) \(modelEntity.scale * 100) %")
                    .font(.system(size: 5000))
                    .background(.red)
            }
        }
        // This modifier is used for gesture support.
        .installGestures()
    } else {
        ProgressView()
    }
}
}
Below is the code from GestureExtension:
    let state = EntityGestureState.shared
    guard canScale, !state.isDragging else { return }
    let entity = value.entity
    if !state.isScaling {
        state.isScaling = true
        state.startScale = entity.scale
    }
    let magnification = Float(value.magnification)
    entity.scale = state.startScale * magnification
    state.magnifyValue = magnification
    magnifyScale = Double(magnification)
    print("Entity Name ::::::: \(entity.name)")
    print("Scale ::::::: \(entity.scale)")
    print("Magnification ::::::: \(magnification)")
    print("StartScale ::::::: \(state.startScale)")
}
I need to use this "magnification" value in the RealityView. How can I do it? Could you please guide me?
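One possible direction, just a sketch and not the demo's own API: have the gesture write the value into an observable object that the SwiftUI view also reads, so the attachment text updates automatically. The ScaleModel type below is hypothetical.
import SwiftUI
import Observation

// Hypothetical shared model that bridges the gesture value back into SwiftUI.
@Observable
final class ScaleModel {
    static let shared = ScaleModel()
    var magnification: Double = 1.0
}

// In the gesture handler, after computing `magnification`:
//     ScaleModel.shared.magnification = Double(magnification)

// In the attachment, read the shared value so SwiftUI re-renders the text:
//     Text("\(modelEntity.name) \(Int(ScaleModel.shared.magnification * 100)) %")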
extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            })
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }
}
The problem: the failure case is hit,
    case .failure(let error): assertionFailure("\(error)")
with:
Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
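As a point of comparison, here is a minimal sketch of loading the same texture with async/await instead of the Combine publisher. It assumes the image is in the app bundle or asset catalog and that the async TextureResource(named:) initializer is available on the deployment target.
import RealityKit

// A sketch using async/await instead of loadAsync + Combine.
func makePanoramicSphere() async throws -> ModelEntity {
    let texture = try await TextureResource(named: "image_20240425_201630")
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))
    let sphere = ModelEntity(mesh: .generateSphere(radius: 1E3), materials: [material])
    sphere.scale *= .init(x: -1, y: 1, z: 1)   // flip so the texture faces inward
    return sphere
}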
Hello, I would like to change the appearance (scale, texture, color) of a 3D element (ModelEntity) when I hover over it with my eyes. What should I do if I want to file a request for this feature? And how would I know whether it will ever be considered, or when it will appear?
I'm trying to build a project with a moderately complex Reality Composer Pro project, but am unable to because my Mac mini (2023, 8GB RAM) keeps running out of memory.
I'm wondering if there are any known memory leaks in realitytool, because the tool is taking up 20-30 GB (!) of memory during builds.
I have a Mac Pro for content creation, which is why I didn't go for more RAM on the mini – it was supposed to just be a build machine for Apple Silicon compatibility, as my Pro is Intel.
But, I'm kinda stuck here.
I have a scene that builds fine, but any time I add a USD with lots of instances or a lot of geometry (in this case a tree asset), I run into the memory issue. I've tried greatly simplifying the model, but even a 2 MB USD still results in the crash. I'm failing to see how adding a 2 MB asset would cause the memory of realitytool to balloon so much during builds.
If someone from Apple is willing to look, I can provide the scene – but it's proprietary so I can't just post it publicly here.
It seems that visionOS doesn't yet support blend shapes, so I've created a character with body animations and skeletal animations for the mouth vowels; all animations are in USDZ format in Reality Composer Pro.
I would like to know if there is any way to play a body animation of the character and simultaneously play multiple mouth animations one after the other without stopping the body animation.
Even when I use blendLayerOffset 1 on the mouth animations, triggering them pauses the body animation, and the body animation only resumes when the mouth animations finish. However, this is not what I want: I would like the body animation to continue while the mouth animations play simultaneously.
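For context, this is roughly the structure I'm trying to get working. It's only a sketch with hypothetical animation clips taken from availableAnimations; chaining the mouth clips with a sequence is just one way to play them one after the other.
import RealityKit

// A sketch: body animation on blend layer 0, mouth animations chained on layer 1.
func playCharacterAnimations(on character: Entity,
                             bodyClip: AnimationResource,
                             mouthClips: [AnimationResource]) throws {
    // Body animation, looping, on the default layer.
    character.playAnimation(bodyClip.repeat(), blendLayerOffset: 0)

    // Chain the mouth animations one after another on a higher blend layer.
    let mouthSequence = try AnimationResource.sequence(with: mouthClips)
    character.playAnimation(mouthSequence, blendLayerOffset: 1)
}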
Thank you!
I wanted to create a particle effect using particle images I copied from a Unity project. These images are PNGs with an alpha channel. In Unity they look gorgeous, but on visionOS they look rather weird, since the alpha channel is not respected: every pixel that is not pitch black renders as full white. Is there a way to change this behavior?
I set up an entity with a collision component on it, but it was hard to target the object with a tap gesture until I increased the radius quite a bit. Now I am unsure whether it is too large. Is there a way to visualize these components somehow, maybe even in a running scene?
Also, I find it pretty confusing that the size is given in cm. This made me wonder whether the cm setting is affected by the entity's scale at all. In Unity, it's just (local) "units".
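In the absence of a built-in visualization I could find, this is the workaround I've been considering: attaching a translucent child sphere that matches the collision radius so it can be eyeballed in a running scene. Just a sketch, assuming the collision shape is a sphere with a known radius in meters; the function name is hypothetical.
import RealityKit

// A sketch: add a translucent sphere with the same radius as the
// CollisionComponent's sphere shape, for visual debugging only.
func addCollisionDebugSphere(to entity: Entity, radius: Float) {
    var material = UnlitMaterial(color: .red)
    material.blending = .transparent(opacity: .init(floatLiteral: 0.3))
    let debugSphere = ModelEntity(mesh: .generateSphere(radius: radius),
                                  materials: [material])
    debugSphere.name = "collisionDebug"
    entity.addChild(debugSphere)
}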
Transparency in RealityKit is not rendered properly when viewed from specific ordinal axes. It seems to be a depth-sorting issue where some transparent surfaces are rejected when they should not be. Some view directions relative to specific ordinal axes are fine; I have not narrowed down which specific axis is the problem. This happens across particle systems and meshes, and it is very easy to replicate using multiple transparent meshes or particle systems.
In the above GIF you can see the problem in multiple instances: the fire and snow particles are sorted behind the terrain, which has transparency since it is a procedural blend of grass, rock, and ice, but they are correctly sorted in front of the opaque materials such as the rocks and wood.
In the above GIF there are two back-to-back grid meshes (since double-sided rendering is not supported) with a custom surface shader that animates the mesh in a wave and also applies transparency. In the distance the transparency seems to be rendered/overlapped correctly, but as the overlap approaches the screen (and crosses an ordinal axis), the transparent portion of the surface renders black, when the green of the mesh behind it should be rendered.
This is a blocking problem for the development of this demo.
I want to display a collection in a curved view with fixed paging on Vision Pro. How can I do that?
I want to show the progress of a certain part of the game using an entity that looks like a "pie chart", basically a cylinder with a cut-out, so that as progress changes (0-100) the entity becomes fuller.
Is there a way to create this kind of model entity? I know there are ways to animate entities and warp them between meshes, but I was wondering if somebody knows how to achieve this in the simplest way possible.
Maybe some kind of custom shader that just changes how the material is rendered? I do not need a physics body, just to display it.
I know how to do it in UIKit and the classic 2D Apple UI frameworks, but working with model entities it gets a bit tricky for me.
Here is an example of how it would look; the examples are in 2D, but you can imagine them as 3D cylinders with a cut-out.
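One direction I've been considering, just a sketch under the assumptions that a flat pie is enough and that regenerating the mesh on each progress change is acceptable, is building the slice as a triangle fan with MeshDescriptor:
import Foundation
import RealityKit

// A sketch: generate a flat pie-slice mesh (triangle fan around the center)
// for a given progress in 0...1. Regenerate and swap the mesh when progress changes.
func makePieMesh(progress: Float, radius: Float = 0.1, segments: Int = 64) throws -> MeshResource {
    let sliceAngle = progress * 2 * .pi
    var positions: [SIMD3<Float>] = [.zero]   // fan center
    var indices: [UInt32] = []

    let steps = max(1, Int(Float(segments) * progress))
    for i in 0...steps {
        let angle = sliceAngle * Float(i) / Float(steps)
        positions.append(SIMD3<Float>(radius * cos(angle), radius * sin(angle), 0))
        if i > 0 {
            indices.append(contentsOf: [0, UInt32(i), UInt32(i + 1)])
        }
    }

    var descriptor = MeshDescriptor(name: "pie")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}
Swapping the generated mesh into the entity's ModelComponent as progress updates would then make it "fill up".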
Thank you!
Through testing, I have been able to get 5.1 and 7.1 Dolby Atmos files created in Logic Pro to work in Reality Composer Pro and then in Vision Pro.
However, 5.1.4 and 7.1.4 files crash when added. Can someone confirm that these are not supported?
I am trying to implement a way to rotate a 3D model around its y axis, but this doesn't seem to work. What am I missing?
The scene only contains one model entity.
@State private var rotateBy: Double = 0.0

RealityView { content in
    do {
        let entity = try await Entity.init(named: "VinylScene", in: realityKitContentBundle)
        entity.scale = SIMD3<Float>(repeating: 0.6)
        content.add(entity)
    } catch {
        ProgressView()
    }
}
.gesture(
    DragGesture(minimumDistance: 0.0)
        .targetedToAnyEntity()
        .onChanged { value in
            let location3d = value.convert(value.location3D, from: .local, to: .scene)
            let startLocation = value.convert(value.startLocation3D, from: .local, to: .scene)
            let delta = location3d - startLocation
            rotateBy = Double(atan(delta.x * 200))
        }
)
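For what it's worth, this is the kind of thing I expected to need on top of the gesture: applying rotateBy to the entity's orientation rather than only storing it. Just a sketch; `vinylEntity` is a hypothetical @State reference to the entity added in the RealityView closure above, and the y-axis is the assumed rotation axis.
// A sketch: keep a reference to the loaded entity and apply the angle to it,
// e.g. at the end of the DragGesture's onChanged handler.
if let vinylEntity {
    vinylEntity.transform.rotation = simd_quatf(angle: Float(rotateBy), axis: [0, 1, 0])
}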
Hello. I have a model of a CD record and box, and I would like to change its artwork via an external image URL. My 3D knowledge is limited, but what I can say is that the RealityView contains the USDZ of the record, which in turn contains multiple materials: ArtBack, ArtFront, PlasticBox, CD.
How do I target an artwork material and change it to another image? Here is the code so far.
RealityView { content in
    do {
        let entity = try await Entity.init(named: "VinylScene", in: realityKitContentBundle)
        entity.scale = SIMD3<Float>(repeating: 0.6)
        content.add(entity)
    } catch {
        ProgressView()
    }
}
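If it helps, here is a rough sketch of the direction I imagine, assuming the part that shows the front artwork is a ModelEntity that can be found by name and that the downloaded file can be loaded as a texture. The entity name "ArtFront" and the helper name are assumptions about how the USDZ is structured.
import RealityKit

// A sketch: download the artwork, make a texture, and swap it into the
// material slot of the model part that shows the art.
func replaceArtwork(in root: Entity, with imageURL: URL) async throws {
    // Download the image to a local file (TextureResource wants a file URL).
    // Note: you may need to rename the temp file to include the image
    // extension (e.g. .png) before loading it.
    let (tempURL, _) = try await URLSession.shared.download(from: imageURL)
    let texture = try TextureResource.load(contentsOf: tempURL)

    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))

    // Find the entity that renders the front artwork and replace its materials.
    if let artEntity = root.findEntity(named: "ArtFront") as? ModelEntity {
        artEntity.model?.materials = [material]
    }
}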
I am trying to make a shader for a disco ball lighting effect for my app. I want the light to reflect on the scene mesh.
I was curious whether anyone has pointers on how to do this with Shader Graph in Reality Composer Pro, or by writing a surface shader.
The effect rotates the dots as the ball spins.
This is the effect in Apple Clips that applies the light spots to the scene mesh.
I have a plane that is stereoscopic, so it presents to the user depth that extends beyond the plane.
I would like the option either to render the depth buffer for those pixels or to not write any depth information for the plane at all.
I cannot see any option in Shader Graph Material to affect the depth buffer during render. I also cannot see any way in RealityKit to not render to the depth buffer for an entity.
I'm open to any suggestions.
Hello everyone, I have just started learning visionOS app development. I have a scene called Scene, and inside it is an object called Sphere. I want to add a drag gesture to this Sphere alone, and I am using the code below to achieve it, but the Sphere cannot actually be dragged in the simulator. What is the reason?
struct ContentView: View {
    @State var enlarge = false
    @State var offset: Point3D = .zero
    @State var sphereEntity: Entity?

    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
                sphereEntity = content.entities.first?.findEntity(named: "Sphere")
                sphereEntity?.components.set(InputTargetComponent(allowedInputTypes: .all))
            }
        }
        .gesture(DragGesture().targetedToEntity(sphereEntity ?? Entity()).onChanged({ value in
            print(value.location3D)
            sphereEntity?.position = value.convert(value.location3D, from: .local, to: sphereEntity?.parent! ?? Entity())
        }))
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded({ _ in
            print("Ssssssss")
        }))
        .onAppear() {
        }
    }
}
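In case it matters, this is the setup I believe a draggable entity needs, based on my understanding: both an InputTargetComponent and a CollisionComponent, and a gesture target that does not depend on state captured before the entity is loaded. Just a sketch; the sphere radius is an assumption about the model's size, and the snippet is meant for the RealityView closure above.
// A sketch of the components I understand a draggable entity needs.
if let sphere = scene.findEntity(named: "Sphere") {
    sphere.components.set(InputTargetComponent())
    sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
}

// Targeting any entity avoids building the gesture around an optional
// that is still nil when the view is first constructed.
// .gesture(
//     DragGesture()
//         .targetedToAnyEntity()
//         .onChanged { value in
//             value.entity.position = value.convert(value.location3D,
//                                                   from: .local,
//                                                   to: value.entity.parent!)
//         }
// )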
It slows down the device and interferes with user interaction, which makes the already ridiculous one-minute capture time exponentially worse.