Reply to Sample Project for WWDC24 10092 Metal with Passthrough?
Unfortunately, the example doesn't show how to integrate scene understanding for realistic lighting; it only seems to show how to add passthrough in the background. Is there a more advanced example showing how to do occlusion, use the environment texture, do lighting with the reconstructed scene mesh, etc.? If not, that's sorely needed. It's not at all straightforward.
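To make the ask concrete, the piece the sample skips is getting the reconstructed mesh somewhere a Metal renderer can use it. Here's a minimal sketch, assuming visionOS ARKit's SceneReconstructionProvider; authorization, error handling, and the actual Metal upload are omitted:

```swift
import ARKit  // visionOS ARKit

// Minimal sketch: stream the reconstructed scene mesh so a custom Metal
// renderer could rasterize it depth-only for occlusion, or use it as
// geometry for lighting. Authorization and error handling omitted.
func streamSceneMesh() async throws {
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()
    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        let geometry = update.anchor.geometry
        // vertices and faces are MTLBuffer-backed, so they can be bound
        // directly in a Metal render pass.
        _ = geometry.vertices
        _ = geometry.faces
    }
}
```

Even with that, how to composite the result correctly against passthrough is exactly the part that needs an official example.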
Jun ’24
Reply to Researcher in Spatial Computing / HCI Looking to Use Enterprise APIs on Vision Pro for HCI Research-Only.
@DTS Engineer I see this: "Your account can’t access this page. There may be certain requirements to view this content. If you’re a member of a developer program, make sure your Account Holder has agreed to the latest license agreement." Is this link actually live, or is it planned to work after WWDC? This is why I thought I needed to be a business — I don't see a way to gain access. I am just an individual who wants to do purely internal research in collaboration with my university. I 100% understand that these APIs need to be used with care, and I don't intend to sell or distribute the part of the research that would potentially use these APIs.
Jun ’24
Reply to Researcher in Spatial Computing / HCI Looking to Use Enterprise APIs on Vision Pro for HCI Research-Only.
@sanercakir I clicked the developer-only request button and it says I am not allowed to view the page. But I am not the account holder there. I had assumed I needed to be an enterprise with 100+ employees and so on. By the way, I am an individual account holder. Might I need to be a “business”? In any case, please let me know how to resolve this, or whether I need to contact a specific department.
Jun ’24
Reply to Eye tracking permission
@Matt Cox I have similar qualms about the limitations on custom rendering. I think a lot of this could be partially solved by, as you suggest, allowing mesh streaming rather than just texture streaming. A better solution would be permitting custom Metal rendering outside of fully immersive mode. I can imagine Compositor Services + Metal gaining special visionOS CPU-side Metal calls that let the programmer specify where to render the camera data and what to occlude (see the hypothetical sketch below). For custom shaders (which we really will need at some point, since surface shaders are pretty limiting), there would need to be proper sandboxing so the camera's color/depth couldn't leak back to the CPU. Some kind of Metal-builtin read/function-pointer support? I think you ought to file a feature request, for what it's worth. We're not the only ones who've raised this point, and pointing to specific examples probably helps.
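To illustrate what I mean, here's a purely hypothetical sketch: none of these types or methods exist in any SDK today, they're just the rough shape a sandboxed occlusion/camera API in Compositor Services might take:

```swift
import CompositorServices

// Purely hypothetical -- nothing below is a real visionOS API today.
// The idea: the app flags which draws occlude passthrough, and camera
// color/depth are bound as opaque, GPU-only handles so shaders can
// sample them but nothing can be read back to the CPU.
protocol HypotheticalPassthroughEncoder {
    // Mark subsequent draw calls as occluders against the camera feed.
    func setOccludesPassthrough(_ occludes: Bool)

    // Bind sandboxed camera textures at the given argument-table indices.
    // The handles would be sample-only: no blit, no CPU readback.
    func useSandboxedCameraTextures(colorIndex: Int, depthIndex: Int)
}
```

Something in this spirit would keep the privacy guarantees while still letting us do proper occlusion and camera-aware shading.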
Mar ’24
Reply to Non-convex collision?
The solution, I think, is to decompose the concave mesh into convex meshes. If the mesh is static, you're in luck: you can do the decomposition offline, where optimal performance doesn't matter much as long as you get a result in a reasonable amount of time, and re-save it as a collection of convex meshes for reloading later (see the sketch below). If it's a dynamic mesh, you're kind of stuck doing the decomposition at runtime. This is a very normal thing to do; concave collision detection is much more expensive than convex. Note: I think it would be useful to have a built-in decomposition algorithm in both the Metal Performance Shaders and RealityKit APIs. (Maybe file a feature request?)
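Once the convex pieces exist (say, from an offline decomposition pass such as V-HACD), wiring them up as one compound collider is straightforward. A minimal RealityKit sketch; `pieceMeshes` is a hypothetical stand-in for your re-saved decomposition output:

```swift
import RealityKit

// Build one compound collider out of pre-decomposed convex pieces.
// `pieceMeshes` stands in for the convex meshes you re-saved earlier.
func applyCompoundCollision(to entity: ModelEntity,
                            pieceMeshes: [MeshResource]) async throws {
    var shapes: [ShapeResource] = []
    for mesh in pieceMeshes {
        // Each piece is already convex, so its convex hull matches it exactly.
        shapes.append(try await ShapeResource.generateConvex(from: mesh))
    }
    // CollisionComponent accepts multiple shapes, which together act as one
    // compound (and effectively concave) collider.
    entity.components.set(CollisionComponent(shapes: shapes))
}
```

The nice part is that RealityKit treats the shape array as a single collider, so the rest of your physics setup doesn't change.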
Mar ’24