Posts

Post not yet marked as solved
3 Replies
@Matt Cox I have similar qualms about the limitations on custom rendering. I think a lot of this could be partially solved by, as you suggest, allowing for mesh streaming rather than just texture streaming. A better solution would be permitting custom Metal rendering outside of fully immersive mode. I can imagine Compositor Services + Metal gaining visionOS-specific CPU-side Metal calls that let the programmer specify where to render the camera data and what to occlude. Custom shaders (which we really will need at some point, since surface shaders are pretty limiting) would need proper sandboxing so the camera's color/depth values couldn't leak back to the CPU. Some kind of Metal built-in read/function-pointer support? I think you ought to file a feature request, for what it's worth. We're not the only ones who've raised this point, and pointing to specific use cases probably helps.
Post not yet marked as solved
4 Replies
The solution, I think, is to decompose the concave mesh into convex meshes. If it's a static mesh, you're in luck: you can do the decomposition offline, where optimal performance doesn't matter much as long as you get a result in a reasonable amount of time, and resave it as a collection of convex meshes to reload later. If it's a dynamic mesh, you're stuck doing the decomposition at runtime. This is a very normal thing to do; concave collision detection is far more expensive than convex. A sketch of reassembling the pieces is below. Note: I think it would be useful to have a built-in decomposition algorithm in both the Metal Performance Shaders and RealityKit APIs. (Maybe file a feature request?)
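Here's roughly what the reassembly step could look like in RealityKit, assuming the decomposition itself happens elsewhere (offline, or via something like V-HACD), and assuming ShapeResource.generateConvex(from:) and CollisionComponent(shapes:) behave as in recent RealityKit releases; treat it as a sketch, not a confirmed recipe:

```swift
import RealityKit

// Sketch: combine pre-decomposed convex pieces into one collision component.
// The exact signature of generateConvex(from:) may differ across OS versions.
func applyConvexCollision(to entity: ModelEntity, pieces: [MeshResource]) async throws {
    var shapes: [ShapeResource] = []
    for piece in pieces {
        // Each piece is already convex, so its convex hull is an exact fit.
        let shape = try await ShapeResource.generateConvex(from: piece)
        shapes.append(shape)
    }
    // One component holding every convex piece of the original concave mesh.
    entity.components.set(CollisionComponent(shapes: shapes))
}
```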
Post not yet marked as solved
4 Replies
Are you able to access the mesh vertices, indices, etc. with the API? Worst case, you could build the convex meshes from the concave mesh yourself.
Post not yet marked as solved
4 Replies
Custom Metal rendering pipelines in single-app passthrough mode, with occlusion/depth, custom shaders, and composition with the real world. (Note: I'm not asking for camera access; this could be handled by the OS backend however is feasible. For example, sandbox pixel reads in the shaders so the camera data can't be read back insecurely on the CPU.)
Post not yet marked as solved
3 Replies
I think it would be great for a future OS version to find a way to extend custom Metal renderers to passthrough mode. Specifically the single-application mode, not the shared space, for simplicity and for sandboxing from other apps. This would allow many shader effects that are impossible with surface shaders alone. To be clear, this isn't a request for camera access; just some way to take advantage of occlusion and compositing with the real world automatically. I imagine you'd need to disable pixel readback or insert the privacy-restricted textures opaquely.
Post not yet marked as solved
4 Replies
@eskimo BrowserKit would be perfect, and is in fact overkill, but I gather it's only available in the EU due to the new regulations. Too bad. I really just wanted to use JS as a stand-in for a scripting layer like Lua. It's unclear, however: does BrowserKit even exist beyond iOS (only iOS is listed), and does it fail to work even if I'm not uploading to the App Store? For example, there could be utility in having a web-based scripting layer just for local development.
Post not yet marked as solved
4 Replies
I want the ability to hook into native code for functionality that doesn't exist in the Safari browser, so WKWebView doesn't work. For example, WebXR with AR passthrough does not exist, even behind flags. Passing information between native code and WKWebView incurs too much latency for interactive frame rates (this has been shown). So I think the only option is JSCore. Really I just want to script with optimized JS and don't need the browser part; a sketch of what I mean is below.
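For what it's worth, this is the kind of thing I mean, using only standard JavaScriptCore APIs; `spawnEffect` and `tick` are just hypothetical examples of a native hook and a script entry point:

```swift
import JavaScriptCore

// A minimal sketch of JavaScriptCore as a scripting layer with native hooks,
// independent of any browser.
let context = JSContext()!

// Expose a native function to scripts.
let spawnEffect: @convention(block) (Double, Double, Double) -> Void = { x, y, z in
    print("native spawnEffect at (\(x), \(y), \(z))")
}
context.setObject(spawnEffect, forKeyedSubscript: "spawnEffect" as NSString)

// Surface script errors instead of failing silently.
context.exceptionHandler = { _, exception in
    print("JS error: \(exception?.toString() ?? "unknown")")
}

// Load a script that calls back into native code.
context.evaluateScript("""
function tick(t) {
    spawnEffect(Math.sin(t), 0.0, Math.cos(t));
}
""")

// Drive the script from native code, e.g. once per frame.
context.objectForKeyedSubscript("tick")?.call(withArguments: [0.16])
```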
Post not yet marked as solved
4 Replies
Wouldn’t it make sense to save windows’ positions relative to each other under a root hierarchy, rather than having them overlap upon re-launching the app? You could have the windows appear relative to the user’s gaze, with the local hierarchy preserved. In other words, in the visionOS internals, save the cluster of windows under a root transform that saves the positions. When the user returns to the app, restore the windows relative to the user’s gaze, but use the saved hierarchy of windows to position them as they were before, just repositioned with respect to the user’s new initial viewpoint.
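To make the idea concrete, here's a conceptual sketch of that bookkeeping (not an existing visionOS API), using plain simd transforms:

```swift
import simd

// Store each window's transform relative to a shared root, then re-anchor the
// root at the user's new viewpoint on relaunch so the cluster keeps its layout.
struct WindowCluster {
    var relativeWindows: [simd_float4x4]    // each window, expressed in root space

    // `root` would be the cluster's reference transform at save time.
    static func save(windows: [simd_float4x4], root: simd_float4x4) -> WindowCluster {
        let rootInverse = root.inverse
        return WindowCluster(relativeWindows: windows.map { rootInverse * $0 })
    }

    // `newRoot` would be derived from the user's gaze/head pose at relaunch.
    func restore(at newRoot: simd_float4x4) -> [simd_float4x4] {
        relativeWindows.map { newRoot * $0 }
    }
}
```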
Post not yet marked as solved
5 Replies
I need a solution that uses textures I've created with a regular Metal renderer, not textures that are drawables; i.e. I need arbitrarily sized textures (potentially many of them) that can be applied in a RealityKit scene. If DrawableQueue is somehow usable for this case (arbitrary-resolution textures, many of them, updated per frame), would someone have an example? The docs don't show anything specific. Thanks!
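For reference, this is roughly the DrawableQueue pattern I have in mind, pieced together from the API surface; the descriptor options and the renderMyScene call are placeholders, so treat it as a sketch rather than a confirmed recipe:

```swift
import RealityKit
import Metal

// Attach a drawable queue of the required size to an existing TextureResource,
// so per-frame Metal output can replace its contents.
func attachDrawableQueue(to texture: TextureResource,
                         width: Int, height: Int) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)
    texture.replace(withDrawables: queue)
    return queue
}

// Per frame: render into the drawable's Metal texture, then present it.
func updateTexture(queue: TextureResource.DrawableQueue, commandBuffer: MTLCommandBuffer) {
    guard let drawable = try? queue.nextDrawable() else { return }      // queue may be full this frame
    renderMyScene(into: drawable.texture, commandBuffer: commandBuffer) // placeholder: your own Metal renderer
    drawable.present()
}
```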
Post marked as solved
2 Replies
Currently it's available only in single-app, fully immersive (no passthrough) mode. Look for CompositorServices. If you want Metal with passthrough rather than going through RealityKit, that's unavailable (unfortunately, in my opinion). Please file feedback requests with specific use cases if you want that. A minimal sketch of the current entry point is below.
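For anyone searching, this is roughly what the entry point looks like: a fully immersive space driving Metal through CompositorServices. The render-loop body is elided, and initializer defaults may differ between SDK versions, so take it as a sketch:

```swift
import SwiftUI
import CompositorServices

@main
struct MetalImmersiveApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "metal") {
            CompositorLayer { layerRenderer in
                // Spawn a render thread here and drive frames from layerRenderer
                // (query frames, encode with Metal, present). Fully immersive only.
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}
```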
Post not yet marked as solved
3 Replies
To rephrase: in AR mode you can walk around, is that correct? In VR (fully immersive) mode, you are limited to the 1.5 m boundary for now. So in AR mode with the passthrough video on, where that limit is not imposed, how far from the origin are you allowed to walk? Is there a limit, is it essentially boundless with tracking that holds for arbitrarily large spaces, or is it capped at some maximum boundary based on tracking limits?
Post not yet marked as solved
6 Replies
First, did you profile why the vertices are expensive to compute before settling on this solution? Also, it's unclear how you're computing the vertices, since you haven't provided code or an algorithm for that part, so it's hard to tell whether the compute step itself is optimal. Getting value out of compute relies heavily on exploiting parallelism, so make sure a compute kernel actually fits the problem. Roughly, I can imagine allocating one gigantic buffer (no need for multiple), conceptually split into fixed-size sections of X vertices, each handled by one thread; you can tune that section size. A rough sketch of that layout is below. Beyond that, it's tricky to help, but with more specific info it'll be easier.
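To illustrate the single-buffer layout, here's a rough sketch; the per-vertex computation is a placeholder and verticesPerThread is the section size you'd tune:

```swift
import Metal

// Every thread owns a fixed-size slice of one large vertex buffer.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;

struct Vertex { float3 position; };

kernel void fillVertices(device Vertex *vertices            [[buffer(0)]],
                         constant uint &verticesPerThread   [[buffer(1)]],
                         constant uint &totalVertices       [[buffer(2)]],
                         uint tid                            [[thread_position_in_grid]])
{
    uint start = tid * verticesPerThread;
    uint end = min(start + verticesPerThread, totalVertices);
    for (uint i = start; i < end; ++i) {
        vertices[i].position = float3(i, 0.0, 0.0);   // placeholder computation
    }
}
"""

func dispatchVertexFill(device: MTLDevice, queue: MTLCommandQueue,
                        totalVertices: Int, verticesPerThread: Int) throws {
    let library = try device.makeLibrary(source: kernelSource, options: nil)
    let pipeline = try device.makeComputePipelineState(
        function: library.makeFunction(name: "fillVertices")!)

    // One big buffer for all vertices; no need to split it into many allocations.
    let buffer = device.makeBuffer(length: totalVertices * MemoryLayout<SIMD3<Float>>.stride)!

    var perThread = UInt32(verticesPerThread)
    var total = UInt32(totalVertices)
    let threadCount = (totalVertices + verticesPerThread - 1) / verticesPerThread

    let commandBuffer = queue.makeCommandBuffer()!
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.setBuffer(buffer, offset: 0, index: 0)
    encoder.setBytes(&perThread, length: MemoryLayout<UInt32>.size, index: 1)
    encoder.setBytes(&total, length: MemoryLayout<UInt32>.size, index: 2)
    encoder.dispatchThreads(
        MTLSize(width: threadCount, height: 1, depth: 1),
        threadsPerThreadgroup: MTLSize(width: min(threadCount, pipeline.threadExecutionWidth),
                                       height: 1, depth: 1))
    encoder.endEncoding()
    commandBuffer.commit()
}
```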