I think it would be great for a future OS version to extend custom Metal renderers to passthrough mode. Specifically, the mode with a single application, not the shared space, for simplicity and for sandboxing from other apps. This would allow many shader effects that are impossible with surface shaders alone.
Note: this should work without asking for camera access. I just want some way to take advantage of occlusion and of seeing the real world automatically. I imagine you'd need to disable pixel readback or opaquely insert the privacy-restricted textures.
@eskimo BrowserKit would be perfect, and is in fact overkill, but I gather it's only available in the EU due to the new regulations. Too bad. I really just wanted to use JS as a stand-in for a scripting layer like Lua.
It's unclear, however: does BrowserKit even exist beyond iOS (only iOS is listed), and does it fail to work even if I'm not uploading to the App Store? For example, there could be utility in a web-based scripting layer just for local development.
I want the ability to hook into native code for functionality that doesn't exist in the Safari browser, so WKWebView doesn't work.
For example, WebXR with AR passthrough doesn't exist, even with flags enabled.
Passing data from native code to a WKWebView incurs too much latency for interactive frame rates (this has been shown).
So I think the only option is JavaScriptCore. Really, I just want to script with optimized JS; I don't need the browser part.
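For context, roughly what I have in mind (a sketch; the spawnEntity hook is a made-up example of native functionality exposed to scripts):

```swift
import JavaScriptCore

// Sketch: JavaScriptCore as a pure scripting layer, no browser involved.
let context = JSContext()!

// Surface script errors instead of failing silently.
context.exceptionHandler = { _, exception in
    print("JS error: \(exception?.toString() ?? "unknown")")
}

// Expose a native hook to scripts. `spawnEntity` is a hypothetical example.
let spawnEntity: @convention(block) (String, Double, Double, Double) -> Void = { name, x, y, z in
    print("native: spawn \(name) at (\(x), \(y), \(z))")
}
context.setObject(spawnEntity, forKeyedSubscript: "spawnEntity" as NSString)

// Run user-provided script code.
context.evaluateScript("spawnEntity('cube', 0.0, 1.5, -2.0);")
```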
Chiming in to say that this would be an excellent reason to support custom Metal shaders in the future to allow for easier porting of applications like this.
Wouldn’t it make sense to save windows’ positions relative to each other under a root hierarchy, rather than having them overlap upon re-launching the app? You could have the windows appear relative to the user’s gaze, with the local hierarchy preserved.
In other words, in the visionOS internals, save the cluster of windows under a root transform that records their relative positions. When the user returns to the app, restore the windows relative to the user's gaze, but use the saved hierarchy to place the windows as they were before, just re-anchored to the user's new initial viewpoint.
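A conceptual sketch of what I mean (the types here are hypothetical; visionOS doesn't expose such an API today):

```swift
import simd

// Hypothetical sketch: persist each window's pose relative to a shared root
// transform, then re-anchor that root at the user's new viewpoint on relaunch.
struct WindowCluster {
    // window identifier -> pose of the window in root space
    var rootFromWindow: [String: simd_float4x4]

    // Recompute world-space poses given a new root pose (e.g. derived from
    // the user's gaze at launch). The relative layout is preserved.
    func restoredPoses(worldFromRoot: simd_float4x4) -> [String: simd_float4x4] {
        rootFromWindow.mapValues { worldFromRoot * $0 }
    }
}
```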
I’m not familiar enough to help with the problem, I suppose. I agree, it’s weird.
@AndrewKaster Wait, are we sure Apple Clang supports C++ modules? If I recall correctly, there is only partial support, so I wouldn't expect it to work well right now. I suspect the solution is not to use modules.
Have you changed the compiler flags in the project settings/targets? If -std=c++20 doesn't work, try -std=c++2b.
Also, check the feature support pages.
char8_t requires Xcode 15.
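If it helps, the language mode can also be set via an xcconfig file (a sketch; the same setting appears in Build Settings as "C++ Language Dialect"):

```
// Sketch: raise the C++ language mode for the target.
CLANG_CXX_LANGUAGE_STANDARD = c++2b
```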
I need a solution that uses textures I've created with a regular Metal renderer, not textures that are drawables. That is, I need arbitrarily sized textures (possibly many of them) that can be applied in a RealityKit scene. If DrawableQueue is somehow usable for this case (arbitrary-resolution textures, many of them, updated per frame), does someone have an example? The docs don't show anything specific. Thanks!
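To make the question concrete, this is the kind of setup I'm after (an untested sketch; the size and pixel format are placeholders):

```swift
import Metal
import RealityKit

// Untested sketch: one DrawableQueue per texture, attached to an existing
// TextureResource that a RealityKit material samples, with the Metal
// texture re-rendered every frame.
func attachQueue(to textureResource: TextureResource) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 512,            // arbitrary size, not tied to a screen drawable
        height: 512,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none
    )
    let queue = try TextureResource.DrawableQueue(descriptor)
    textureResource.replace(withDrawables: queue)
    return queue
}

// Per frame, for each queue:
func renderFrame(into queue: TextureResource.DrawableQueue) {
    guard let drawable = try? queue.nextDrawable() else { return }
    // ... encode Metal render/compute work targeting drawable.texture ...
    drawable.present()
}
```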
Might an engineer comment? I wonder if this is a reasonable feature request or if this is a hard limitation.
Currently it's available only in the single-app, VR (no-passthrough) mode. Look for CompositorServices.
If you want Metal with passthrough, and not just through RealityKit, that's unavailable (unfortunately, in my opinion). Please send feedback requests with specific use cases if you want that.
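For reference, the entry point looks roughly like this (a minimal sketch, assuming the visionOS 1.x SwiftUI APIs):

```swift
import SwiftUI
import CompositorServices

// Minimal sketch: CompositorServices hands you a LayerRenderer for custom
// Metal rendering, but only in a fully immersive (no-passthrough) space.
@main
struct MetalImmersiveApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "metal") {
            CompositorLayer { layerRenderer in
                // Start your Metal render loop with layerRenderer here.
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}
```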
To rephrase: in AR mode, you can walk around, is that correct? In VR mode, you are limited to the 1.5 m radius for now. So in AR mode, with the passthrough video on and that limit not imposed, how far from the origin are you allowed to walk? Is there a limit, is it essentially boundless with tracking that holds for arbitrary spaces, or is it capped at some maximum boundary based on tracking limits?
Firstly, did you profile why the vertices are expensive to compute before going for this solution?
Also, it's unclear how you're computing the vertices since you haven't provided code or an algorithm for that part, so it's hard to tell if you're doing the compute step optimally. Successfully using compute relies heavily on taking advantage of parallelism, so make sure it makes sense to use a compute kernel.
Roughly, I can imagine you can allocate one gigantic buffer; no need for multiple. Conceptually split the buffer into fixed-size sections (X vertices each) that are handled by a specified number of threads. You can tune this size; see the sketch below.
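Something like the following, on the CPU side (a sketch; the kernel name computeVertices and the sizes are placeholders):

```swift
import Metal

// Sketch: one large vertex buffer, one compute thread per vertex.
// "computeVertices" is a placeholder kernel name.
let device = MTLCreateSystemDefaultDevice()!
let commandQueue = device.makeCommandQueue()!
let library = device.makeDefaultLibrary()!
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "computeVertices")!)

let vertexCount = 1_000_000
let vertexBuffer = device.makeBuffer(
    length: vertexCount * MemoryLayout<SIMD4<Float>>.stride,
    options: .storageModePrivate)!

let commandBuffer = commandQueue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(vertexBuffer, offset: 0, index: 0)

// Threadgroup width is tunable; dispatchThreads handles ragged edges on
// Apple GPUs (non-uniform threadgroup sizes).
let tg = MTLSize(width: pipeline.threadExecutionWidth, height: 1, depth: 1)
encoder.dispatchThreads(MTLSize(width: vertexCount, height: 1, depth: 1),
                        threadgroupSize: tg)
encoder.endEncoding()
commandBuffer.commit()
```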
Beyond that, it's tricky to help, but maybe with more specific info, it'll be easier.
Anyone? Is there a limit, or is it boundless? This is a pretty important thing to know.
@MobileTen I need to print what I need to print. It's user-generated content, and I don't intend to create a GUI viewer at the moment. The question is: is the output meant to have the same limit as stdio, or still less? I'm aware it's smaller than it should be, given that there's a known bug.
Thanks!