I stumbled upon the same limitations. I decided I'll just make two versions of my app and leave the choice up to the user:
have them choose a volume: they'll need to place it somewhere themselves, but they can still multi-task
have them choose a full space: my app will auto-anchor to their table, but they can no longer multi-task
I don't think there's much more we can do with the current limitations. Maybe next year things will improve and some of these limitations will be lifted.
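In case it helps anyone, here is roughly how I set that up: a minimal sketch with both scenes declared side by side (the scene IDs and the placeholder sphere are mine, nothing official):

import SwiftUI
import RealityKit

@main
struct DualModeApp: App {
    @State private var immersionStyle: ImmersionStyle = .mixed

    var body: some Scene {
        // Option 1: a volume. The user places it themselves and keeps multi-tasking,
        // but the app cannot anchor it to a surface.
        WindowGroup(id: "Volume") {
            RealityView { content in
                content.add(ModelEntity(mesh: .generateSphere(radius: 0.1),
                                        materials: [SimpleMaterial()]))
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)

        // Option 2: a full space. The app can auto-anchor its content to the table,
        // but every other app is hidden while it is open.
        ImmersiveSpace(id: "AnchoredSpace") {
            RealityView { content in
                let anchor = AnchorEntity(.plane(.horizontal,
                                                 classification: .table,
                                                 minimumBounds: [0.3, 0.3]))
                anchor.addChild(ModelEntity(mesh: .generateSphere(radius: 0.1),
                                            materials: [SimpleMaterial()]))
                content.add(anchor)
            }
        }
        .immersionStyle(selection: $immersionStyle, in: .mixed, .full)
    }
}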
In the simulator, it seems that when I drag the volume around to try to place it on a surface, geometry inside of a RealityView can clip through “real” objects. Is this the expected behavior on a real device too?
As far as my understanding goes, I think yes, this is the expected behaviour on a real device too.
If so, could using ARKit in a Full Space to position the volume, then switching back to a Shared Space, be an option?
As far as my understanding goes, I think no, because I don't think we can anchor volumetric windows; that's the main limitation. You'd have to work with plain entities that you anchor, but those entities disappear as soon as you switch back to a volume.
Also, if the app is closed, and reopened, will the volume maintain its position relative to the user’s real-world environment?
I think the volume will stay where the user put it, yes, but this is at Apple's discretion in how they choose to implement the behaviour of volumes in the Shared Space. It might change at any time, and developers won't have a say in it. Think of it like a window on a MacBook screen: we don't know exactly where it sits or how big it is. The same goes for a volumetric window.
As far as I understand the current implementation of RealityKit/ARKit, you cannot auto-anchor WindowGroups / windows / volumes to detected surfaces.
You can only anchor entities inside an immersive space (you can use mixed or full immersion, but not the Shared Space).
This means any auto-anchoring by the app eliminates the user's ability to multi-task.
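To make the distinction concrete, a minimal sketch of what is possible: anchoring an entity (not the window itself) to a detected surface, which only works when the view is presented inside an ImmersiveSpace. The view name and the placeholder box are assumptions:

import SwiftUI
import RealityKit

struct TableAnchoredView: View {
    var body: some View {
        RealityView { content in
            // AnchorEntity can target a detected horizontal, table-classified plane.
            let anchor = AnchorEntity(.plane(.horizontal,
                                             classification: .table,
                                             minimumBounds: [0.3, 0.3]))
            anchor.addChild(ModelEntity(mesh: .generateBox(size: 0.1),
                                        materials: [SimpleMaterial(color: .blue, isMetallic: false)]))
            content.add(anchor)
        }
    }
}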
I believe you can achieve this by making a separate usda file of your entity: right-click the entity and choose "New Scene From Selection".
Make sure you have one root transform; this root's origin will be at the center of the scene. Now drag your model entity (a child of this root) to the spot you want.
Back in the original file, your entity has become a reference to that new scene, and rotating it should now pivot around the origin you set up there.
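A tiny sketch of the result (the entity name is a placeholder): rotating the reference spins the model around the origin you placed in the new scene, not around the model's own center:

import RealityKit

// "donutReference" is the entity in the original scene that now points at the new .usda.
func spin(_ donutReference: Entity) {
    donutReference.transform.rotation = simd_quatf(angle: .pi / 4, axis: [0, 1, 0])
}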
@Gong thanks for the message. But I'm still not sure when we should apply the grounding shadow, and to which entities.
Did you check out the Happy Beam sample project? You can pretty much reference that to do what you wanna do.
@gchiste What if we have a transform with 3 ModelEntities inside in Reality Composer Pro, something like this:
Since the three ModelEntities represent a single object, can I call applyImpulse on the "transform" layer named Body, or do I have to call it on each individual ModelEntity (the three nested ones)?
Because I believe my transform layer called "Body" is technically an Entity, not a ModelEntity, right?
@kevyk yes, but the issue was that applyLinearImpulse doesn't exist on the parent entity. I was able to get around it by doing this:
if let parentEntityHasPhysics = parentEntity as? HasPhysicsBody {
    parentEntityHasPhysics.applyLinearImpulse(...) // now we can call applyLinearImpulse without getting a compile-time error
}
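For a fuller picture, this is a minimal sketch of the same idea with a fallback in case the cast fails at runtime because the parent is a plain Entity (the function name and the impulse parameter are placeholders):

import RealityKit

func applyImpulse(_ impulse: SIMD3<Float>, to body: Entity) {
    if let physicsParent = body as? HasPhysicsBody {
        // The parent exposes the physics APIs, so the three children move as one rigid body.
        physicsParent.applyLinearImpulse(impulse, relativeTo: nil)
    } else {
        // Otherwise push each child ModelEntity that actually has a physics body.
        for case let model as ModelEntity in body.children where model.physicsBody != nil {
            model.applyLinearImpulse(impulse, relativeTo: nil)
        }
    }
}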
How about these:
https://developer.apple.com/documentation/realitykit/modifying-realitykit-rendering-using-custom-materials
and
https://developer.apple.com/metal/Metal-RealityKit-APIs.pdf
I hope it helps
@gchiste
Do you have other examples of IBL that I can reference to try and understand how to better add lighting to my scenes for the full immersion mode?
I followed the video and checked the example code.
I also found this example code on Stack Overflow: https://stackoverflow.com/questions/76755793/spotlightcomponent-is-unavailable-in-visionos/76761509#76761509
I have a scene with a bunch of lights in Blender, but none of them automatically translate to an IBL, and I'm not sure where to start learning how to convert my Blender lights to an IBL for visionOS.
I can't find any other examples that would help me understand how to create an IBL for, say, a lantern here or the sun there, the way I see it in my Blender scene. Any official direction from Apple would be greatly appreciated!
@mzoob Can you please share the image you used for "shiwai_a"? I'm having difficulties creating the correct .exr/.hdr/.png System IBL Texture to achieve the same lighting as I see it in Blender in my room & garden. 😅
GroundingShadowComponent(castsShadow: true) needs to be applied to the first layer that actually has the mesh, which in Apple terminology is usually the ModelEntity.
Probably bigDonut has an extra /Root layer, so you need to dig into the children, find the ModelEntity, and apply the shadow there.
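Something like this (a minimal sketch; the function name is mine):

import RealityKit

// Walk the hierarchy and attach the grounding shadow to every entity that carries a mesh.
func enableGroundingShadow(on entity: Entity) {
    if entity is ModelEntity {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    }
    for child in entity.children {
        enableGroundingShadow(on: child)
    }
}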
@Wittmason with "shadow baking" I meant "light baking". What I really mean is that in the Blender render (the screenshots at the bottom) you can see beautiful shadows, but they are not generated when running on the Vision Pro.
An IBL will "tint" our assets' colour, but it will not add shadows. I think the only way to get shadows is to replace the textures with textures that have those shadows baked in. I believe this is called "light baking" in Unity/Unreal, but Blender doesn't have a good workflow for it.
My current understanding:
an asset texture is sometimes a smaller JPG image used as a repeated pattern across the asset
other times an asset texture is just colour/texture data, not using any JPG
Given the above, the limitations of light baking as I understand them:
1. When baking light onto a JPG texture (so the shadow becomes part of that texture), you basically need one big JPG per asset and cannot use any "repeated pattern" any more.
2. When baking light onto a texture that doesn't use a JPG but only colour/texture data, a new JPG will need to be generated for it.
Points 1 and 2 mean the usdz file size will grow tremendously because of the new, large texture JPGs...
I believe Unity/Unreal therefore don't "bake light onto a texture" but instead create a "lightmap", an extra layer added on top of an asset with data on which parts need to be lightened or darkened. I'm guessing this is a more economical approach, but I have no idea whether such lightmaps are supported by RealityKit / Vision Pro.
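For the baked-texture route, this is roughly how I'd expect to use one big baked JPG per asset in RealityKit (a minimal sketch; the file, function, and entity names are placeholders):

import RealityKit

// The baked shadows live in the base colour texture; a PhysicallyBasedMaterial
// still gets tinted by the IBL on top of that.
func applyBakedTexture(to model: ModelEntity) throws {
    let texture = try TextureResource.load(named: "donut_baked")
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(texture: .init(texture))
    model.model?.materials = [material]
}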
To answer my own post with my current understanding of things:
1- Does IBL need to be BW or will colour work?
It can be in colour. As an example, you can use Reality Converter to test multiple different IBLs, some b/w, some coloured:
2- What is the best file format for IBL? Any pros/cons? Or should we just test each format and check visually? From my tests, PNG, OpenEXR (.exr), and Radiance HDR (.hdr) all work. But which format is recommended?
It "depends". Just test and see until you have the ambience you are looking for.
3- Will IBL on visionOS create shadows for us? In Blender an HDRI gives shadows.
No. There is currently no clear information on the best way to add shadows to a scene in RealityKit.
4- Looking at a scene in Blender that uses an HDRI for global lighting, how can we best "prep" the IBL image so it gives light closest to Blender's Cycles rendering engine?
Just test and see until you have the ambience you are looking for.
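For completeness, this is roughly how the IBL gets applied on the RealityKit side once you have the image (a minimal sketch; the file name "Sunlight" is a placeholder):

import RealityKit

func applyImageBasedLight(to root: Entity) async throws {
    // Load the .exr/.hdr/.png from the app bundle as an environment resource.
    let environment = try await EnvironmentResource(named: "Sunlight")
    root.components.set(ImageBasedLightComponent(source: .single(environment),
                                                 intensityExponent: 1.0))
    // The receiver tells entities to be lit by that IBL. I set it on the root here;
    // depending on your hierarchy you may need to add it to each entity that should receive it.
    root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: root))
}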
@Lucky7 Did you figure it out? I was wondering the same thing.
I have the same issue. Does this mean that visionOS apps can't use Xcode Cloud at all?