Yep, this is an absolutely necessary thing for what I’d like to do on the web. Is it enough just to say I want it?
Hmm. I'm okay with the user manually opting in to eye tracking without the system needing to know in advance. However, is there a UIControl type that only works with eye tracking? I wouldn't want the user to be able to trigger it with touch or Pencil, only with eye dwell. It would be useful if the trigger event had an enum case for this; see the sketch below for roughly what I mean.
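Purely as an illustration of the shape I'm asking for (none of these names exist in UIKit today; `gazeDwell` and `DwellOnlyButton` are made up):

```swift
import UIKit

// Hypothetical sketch only: UIKit has no gaze/dwell control event today.
// A bit in the application-reserved range is used just to make the idea concrete.
extension UIControl.Event {
    static let gazeDwell = UIControl.Event(rawValue: 1 << 24)
}

final class DwellOnlyButton: UIButton {
    func configureForDwell() {
        // Imagined usage: the action fires only when the user dwells with
        // their eyes, never for touch or Pencil input.
        addTarget(self, action: #selector(activatedByDwell), for: .gazeDwell)
    }

    @objc private func activatedByDwell() {
        print("Triggered by eye dwell")
    }
}
```

The point being that the control simply couldn't be activated by touch at all, rather than me having to filter input after the fact.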
If it turns out that I need a business account, then I would ask that Apple consider alternative ways for academics or individual developers to do personal-only testing for the sake of contributing to spatial computing/XR research. Camera access and the like are super important for figuring out how to advance computer perception, and it doesn't make sense to me to limit that access to businesses only. Worst case, it'd be great to work something out on a case-by-case basis.
Somehow I missed that. Thanks!
I guess a follow-up question is … what are some of the limitations? Can I use a custom lighting algorithm that realistically illuminates virtual objects based on interaction with the environment's objects? E.g., would I combine the environment texture per frame with the reconstructed mesh info? I suppose ray tracing should be possible within reason, or some kind of approximation.
Looking forward to the Xcode beta that supports this! I assume that if the Pencil and the new iPad are available next week, we won't need to wait for WWDC?
@endecotp If it's a static mesh, isn't NP-hard fine, as long as the preprocessing happens offline in some reasonable amount of time?
The solution, I think, is to decompose the concave mesh into convex meshes. If the mesh is static, you're in luck: optimal performance doesn't matter much when you just want a result in a reasonable amount of time. Re-save it as a collection of convex meshes for reloading in the future. If the mesh is dynamic, you're kind of stuck doing the decomposition at runtime. I think this is a very normal thing to do; concave collision detection is more expensive than convex. Here's a rough sketch of the offline step I mean.
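(The `Mesh` type and `approximateConvexDecomposition` below are placeholders for whatever decomposition routine you actually use, e.g. something V-HACD-like; this just shows the one-time offline structure.)

```swift
import Foundation

// Minimal stand-in for your mesh representation.
struct Mesh: Codable {
    var vertices: [SIMD3<Float>]
    var indices: [UInt32]
}

// Placeholder for a real approximate convex decomposition (e.g. V-HACD).
// It can be slow; for a static mesh you only pay that cost once, offline.
func approximateConvexDecomposition(of mesh: Mesh, maxPieces: Int) -> [Mesh] {
    // ... expensive work here ...
    return [mesh]
}

// One-time preprocessing: decompose the concave mesh and save the convex
// pieces so the app can load ready-made colliders at runtime.
func preprocess(meshURL: URL, outputURL: URL) throws {
    let data = try Data(contentsOf: meshURL)
    let concave = try JSONDecoder().decode(Mesh.self, from: data)

    let convexPieces = approximateConvexDecomposition(of: concave, maxPieces: 64)

    let encoded = try JSONEncoder().encode(convexPieces)
    try encoded.write(to: outputURL)
}
```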
@Matt Cox Do you get anything in passthrough mode as a single app? Couldn't you just put virtual content in the scene and still get immersion?
So I would just feed a stream of the video frames into some other decoder?
@gchiste That works. Thanks!
@eskimo I hope that bridging interop with C/C++ continues to exist, however, especially for the sake of Metal and integration with other codebases. But for the UI platform backend, Swift does make more and more sense.
I was asking about external peripherals.
@DTS Engineer I only just saw your reply (whoops). I added to the feedback report.
@gchiste I have a similar request. Would you have advice on how best to file a report like this? I had a “conversation” in a different thread here, and it sounded like Metal + passthrough might be tricky. In a feedback request, would you like to see some suggested solutions?
@DTS Engineer Thanks for that. The vertex amplification is a really nice touch! I think this is great. At the moment, the only thing I'd consider is whether a C API / Objective-C API template that hooks into the necessary SwiftUI code would be possible (like the WWDC sample). The Swift function names and types don't always map intuitively to their C variants.
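For what it's worth, here's roughly the shape I'm imagining (the `my_renderer_*` functions and `MyRenderer.h` are made up, standing in for a thin C layer that the SwiftUI boilerplate forwards into):

```swift
// MyRenderer.h: a hypothetical C header exposed to Swift via a bridging
// header or module map:
//
//   void my_renderer_init(void);
//   void my_renderer_draw_frame(double predicted_display_time);
//
// The SwiftUI side stays as thin as possible and just calls through to the
// C layer, so the engine code never has to care about the Swift spellings.

import SwiftUI

@main
struct EngineHostApp: App {
    init() {
        // Hypothetical call into the C engine's one-time setup.
        my_renderer_init()
    }

    var body: some Scene {
        WindowGroup {
            Text("Rendering is driven by the C engine")
        }
        // Wherever the template drives frames, it would similarly call
        // my_renderer_draw_frame(...) instead of Swift-only render code.
    }
}
```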