Reply to Metal API on visionOS?
I think it would be great for a future OS version to find a way to extend custom Metal renderers to passthrough mode, specifically the single-application (not shared space) mode, for simplicity and for sandboxing from other apps. This would allow many shader effects that are impossible with surface shaders alone. Note that I'm not asking for access to the camera, just some way to take advantage of occlusion and of seeing the real world automatically. I imagine you'd need to disable pixel readback or opaquely insert the privacy-restricted textures.
Feb ’24
Reply to JavaScript Core Optimization on Mobile?
@eskimo BrowserKit would be perfect, and is in fact overkill, but I gather it's EU-only due to the new regulations. Too bad. I really just wanted to use JS as a stand-in for a scripting layer, like Lua. It's unclear, however: does BrowserKit even exist beyond iOS (only iOS is listed), and does it fail to work even if I'm not uploading to the App Store? For example, there could be utility in having a web-based scripting layer just for local development.
Feb ’24
Reply to JavaScript Core Optimization on Mobile?
I want the ability to hook into native code for functionality that doesn't exist in the Safari browser, so WKWebView doesn't work. For example, WebXR with AR passthrough does not exist, even behind flags. Passing data from native code to WKWebView incurs too much latency for interactive-rate work (this has been shown). So I think the only option is JSCore. Really I just want to be able to script with optimized JS and don't need the browser part.
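Roughly, the kind of native hook I have in mind with JavaScriptCore looks like this (a minimal sketch; the exposed nativeLog function and the script are just placeholders):

```swift
import JavaScriptCore

// Sketch: use JSContext as a scripting layer and expose a native hook to JS.
let context = JSContext()!

// Surface JS exceptions instead of failing silently.
context.exceptionHandler = { _, exception in
    print("JS exception:", exception?.toString() ?? "unknown")
}

// Expose a Swift closure to scripts as a global function.
let nativeLog: @convention(block) (String) -> Void = { message in
    print("native:", message)
}
context.setObject(nativeLog, forKeyedSubscript: "nativeLog" as NSString)

// Run a placeholder script that calls back into native code.
context.evaluateScript("nativeLog('hello from JS');")
```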
Feb ’24
Reply to Restore window positions on visionOS
Wouldn’t it make sense to save the windows’ positions relative to each other under a root hierarchy, rather than having them overlap upon relaunching the app? In other words, in the visionOS internals, save the cluster of windows under a root transform that records their relative positions. When the user returns to the app, place that root relative to the user’s gaze and restore the windows from the saved hierarchy, so the layout is exactly as it was before, just repositioned with respect to the user’s new initial viewpoint.
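Conceptually, the restore step is just composing each window’s saved root-relative transform with a new root placed at the user’s gaze; a rough sketch of the math (the types and names here are made up for illustration):

```swift
import simd

// Saved at quit time: each window's transform expressed relative to a root
// anchored on the original window cluster (rootWorldInverse * windowWorld).
struct SavedWindow {
    let id: String
    let relativeToRoot: simd_float4x4
}

// On relaunch, place the root in front of the user's current gaze, then
// restore every window by composing its saved relative transform.
func restore(windows: [SavedWindow],
             newRootWorld: simd_float4x4) -> [String: simd_float4x4] {
    var restored: [String: simd_float4x4] = [:]
    for window in windows {
        restored[window.id] = newRootWorld * window.relativeToRoot
    }
    return restored
}
```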
Aug ’23
Reply to How to Convert a MTLTexture into a TextureResource?
I need a solution that uses textures I’ve created with a regular Metal renderer, not textures that are drawables. That is, I need arbitrarily sized textures (possibly many of them) that can be applied in a RealityKit scene. If DrawableQueue is usable somehow for this case (arbitrary-resolution textures, many of them, updated per frame), would someone have an example? The docs do not show anything specific. Thanks!
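For reference, the per-texture setup I’m imagining looks something like this, blitting my already-rendered MTLTexture into each drawable (an untested sketch; the queue would still need to be attached to a placeholder TextureResource via replace(withDrawables:) and that resource assigned to a material):

```swift
import Metal
import RealityKit

// Sketch: push an existing MTLTexture (from my own renderer) into a
// RealityKit texture every frame via TextureResource.DrawableQueue.
func makeDrawableQueue(matching sourceTexture: MTLTexture) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: sourceTexture.pixelFormat,
        width: sourceTexture.width,
        height: sourceTexture.height,
        usage: [.shaderRead, .shaderWrite],
        mipmapsMode: .none)
    return try TextureResource.DrawableQueue(descriptor)
}

// Called once per frame, after my own Metal pass has written sourceTexture.
func blit(sourceTexture: MTLTexture,
          into queue: TextureResource.DrawableQueue,
          commandBuffer: MTLCommandBuffer) throws {
    let drawable = try queue.nextDrawable()
    if let encoder = commandBuffer.makeBlitCommandEncoder() {
        encoder.copy(from: sourceTexture, to: drawable.texture)
        encoder.endEncoding()
    }
    drawable.present()
}
```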
Aug ’23
Reply to Metal and VisionOS
Currently it’s available only in single-app, fully immersive (no passthrough) mode. Look for CompositorServices. If you want Metal with passthrough, and not just through RealityKit, that’s unavailable (unfortunately, in my opinion). Please send feedback requests with specific use cases if you want that.
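The entry point looks roughly like this (a minimal sketch based on the CompositorServices pattern; the render-loop stub is a placeholder for your own Metal frame loop):

```swift
import SwiftUI
import CompositorServices

// Placeholder: your own Metal frame loop driven by the LayerRenderer
// (query frames, get drawables, encode command buffers, present).
func runRenderLoop(_ layerRenderer: LayerRenderer) {
    // ...
}

@main
struct MetalImmersiveApp: App {
    var body: some Scene {
        // A fully immersive space whose content is drawn by your own renderer.
        ImmersiveSpace(id: "ImmersiveSpace") {
            CompositorLayer { layerRenderer in
                // Hand the LayerRenderer to a dedicated render thread.
                let renderThread = Thread {
                    runRenderLoop(layerRenderer)
                }
                renderThread.name = "Render Thread"
                renderThread.start()
            }
        }
    }
}
```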
Jul ’23
Reply to On vision pro, what is the max walkable distance in unbounded passthrough mode?
To rephrase: in AR mode you can walk around, is that correct? In VR mode you are limited to the 1.5 m boundary for now. So in AR mode, with the passthrough video on and that limit not imposed, how far from the origin are you allowed to walk? Is there a limit, is it essentially boundless with tracking that holds for arbitrarily large spaces, or is it capped at some maximum boundary based on tracking limits?
Jul ’23
Reply to Generating vertex data in compute shader
Firstly, did you profile why the vertices are expensive to compute before going for this solution? Also, it's unclear how you're computing the vertices, since you haven't provided code or an algorithm for that part, so it's hard to tell whether you're doing the compute step optimally. Successfully using compute relies heavily on taking advantage of parallelism, so make sure a compute kernel makes sense here. Roughly, I imagine you can allocate one gigantic buffer (no need for multiple), conceptually split it into fixed-size sections of X vertices each, and have each section handled by some specified number of threads; you can tune the section size. Beyond that it's tricky to help, but with more specific info it'll be easier.
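As a rough sketch of the Swift dispatch side (the kernel name generateVertices, the vertex layout, and the one-thread-per-section mapping are placeholders for whatever your generation step actually does):

```swift
import Metal

// Sketch: one large vertex buffer, filled by a compute kernel in which each
// thread writes one fixed-size section of vertices.
struct VertexGenerator {
    let pipeline: MTLComputePipelineState
    let vertexBuffer: MTLBuffer
    let verticesPerSection: Int   // tunable chunk size each thread fills
    let sectionCount: Int

    init(device: MTLDevice, verticesPerSection: Int, sectionCount: Int) throws {
        let library = device.makeDefaultLibrary()!
        // "generateVertices" is a placeholder kernel that writes
        // verticesPerSection vertices starting at its section's offset.
        let function = library.makeFunction(name: "generateVertices")!
        pipeline = try device.makeComputePipelineState(function: function)

        self.verticesPerSection = verticesPerSection
        self.sectionCount = sectionCount
        let vertexStride = MemoryLayout<SIMD4<Float>>.stride   // placeholder layout
        vertexBuffer = device.makeBuffer(
            length: vertexStride * verticesPerSection * sectionCount,
            options: .storageModePrivate)!
    }

    func encodeGeneration(into commandBuffer: MTLCommandBuffer) {
        let encoder = commandBuffer.makeComputeCommandEncoder()!
        encoder.setComputePipelineState(pipeline)
        encoder.setBuffer(vertexBuffer, offset: 0, index: 0)
        var perSection = UInt32(verticesPerSection)
        encoder.setBytes(&perSection, length: MemoryLayout<UInt32>.size, index: 1)

        // One thread per section; a 1D grid over the sections.
        let width = min(sectionCount, pipeline.maxTotalThreadsPerThreadgroup)
        encoder.dispatchThreads(MTLSize(width: sectionCount, height: 1, depth: 1),
                                threadsPerThreadgroup: MTLSize(width: width, height: 1, depth: 1))
        encoder.endEncoding()
    }
}
```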
Jul ’23