
Reply to Can visionOS app be used to scan QRCode?
As a suggestion to the developers: maybe add a way to request things from the system, like "give me the QR codes in the scene and their AR anchors," that could be handled without giving camera data to the client application. Essentially, the system could expose a set of system-level algorithms the developer can query, similar to the existing data providers.
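To make the suggestion concrete, here's a purely hypothetical sketch of what such a query could look like, modeled on the shape of the real ARKitSession/DataProvider APIs. QRCodeDetectionProvider, DetectedQRCode, and codeUpdates do not exist; they're imagined names for illustration only:

import ARKit
import simd

// Imagined result type: the system detects the code and hands back only the decoded
// payload and its pose, never the camera frames themselves.
struct DetectedQRCode {
    var payload: String
    var originFromAnchorTransform: simd_float4x4
}

// Imagined provider, following the pattern of the real data providers.
final class QRCodeDetectionProvider {
    var codeUpdates: AsyncStream<DetectedQRCode> { AsyncStream { _ in } }
}

func watchForQRCodes() async {
    let session = ARKitSession()               // real visionOS API
    let qrProvider = QRCodeDetectionProvider() // imagined provider
    // In this sketch the system would run the detector itself, e.g.
    // try await session.run([qrProvider]), and stream back only the results:
    for await code in qrProvider.codeUpdates {
        print("Found \(code.payload) at \(code.originFromAnchorTransform)")
    }
}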
Jul ’23
Reply to Using digital crown on Vision Pro Simulator
The release notes for Xcode 15 beta 2 (https://developer.apple.com/documentation/xcode-release-notes/xcode-15-release-notes) say that a GUI-based emulation of the crown is a missing feature, and that the workaround is to emulate it yourself using an in-software function call:

"There is no UI for simulating Apple Vision Pro's immersion crown. (109429267) Workaround: Use XCTest's XCUIDevice.rotateDigitalCrown(delta:) method."

/*!
 * Rotate the digital crown by a specified amount.
 *
 * @param rotationalDelta
 *   The amount by which to rotate the digital crown. A value of 1.0 represents one full rotation.
 *   The value's sign indicates the rotation's direction, but the sign is adjusted based on the crown's orientation.
 *   Positive values always indicate an upward scrolling gesture, while negative numbers indicate a downward scrolling gesture.
 */
- (void)rotateDigitalCrownByDelta:(CGFloat)rotationalDelta;

PROBLEM: I can't figure out how to use the function. Xcode reports that the library does not exist when I run the application within the simulator. Maybe it's not currently possible.
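For what it's worth: since rotateDigitalCrown(delta:) is an XCTest API, it presumably can only be called from a UI-test target that links XCTest, not from the app process itself, which might explain the "library does not exist" error. A minimal sketch of how it would be invoked from a UI test, assuming the method behaves as the release note describes (test names are placeholders):

import XCTest

final class CrownSimulationUITests: XCTestCase {
    func testScrollWithSimulatedCrown() {
        let app = XCUIApplication()
        app.launch()

        // Rotate the simulated crown by half a full turn (positive = upward scroll).
        XCUIDevice.shared.rotateDigitalCrown(delta: 0.5)

        // ...then assert on whatever the rotation should have changed in the app.
    }
}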
Jun ’23
Reply to Immersion space becoming inactive on stepping outside of the "system-defined boundary"
They limit mobility to 1.5 m from the starting point. However, as I write in this thread, there are tons of use cases for larger tracking areas in VR, actually most of the ones I'm interested in, and several that are being actively researched: https://developer.apple.com/forums/thread/731449 I hope that eventually it'll be possible for the user to define a larger safety boundary for controlled environments where it's fine to walk around. I guess filing feedback reports en masse with our use cases is the best way to push things forward.
Jun ’23
Reply to `MTKView` on visionOS
Not an engineer, but my guess is that Metal is set up quite differently at the moment and doesn't go through a straight UIView/NSView-style application. Instead, you call into Compositor Services, as in this talk: https://developer.apple.com/videos/play/wwdc2023/10089/
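Roughly, the session shows an ImmersiveSpace hosting a CompositorLayer, whose LayerRenderer you then drive from your own Metal render loop on a dedicated thread. A minimal sketch of that shape, assuming the API as presented in the talk (the render-loop function here is just a placeholder):

import SwiftUI
import CompositorServices

@main
struct FullyImmersiveMetalApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "ImmersiveSpace") {
            // A custom CompositorLayerConfiguration can also be passed via configuration:.
            CompositorLayer { layerRenderer in
                // Hand the LayerRenderer to your own engine on a dedicated render thread.
                let renderThread = Thread {
                    runRenderLoop(layerRenderer)   // placeholder for your Metal render loop
                }
                renderThread.name = "Render Thread"
                renderThread.start()
            }
        }
    }
}

// Placeholder: pull frames from the LayerRenderer and encode Metal work here.
func runRenderLoop(_ layerRenderer: LayerRenderer) {
    // e.g. loop: query the next frame, predict timing, encode and submit drawables.
}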
Jun ’23
Reply to Custom renderpipeline & shader confusion
Currently, Metal custom rendering (and custom rendering in general) is not allowed on visionOS, except in fully immersive VR mode :(. For fully immersive mode, you have to use Compositor Services. There's an example doc and WWDC talk ( https://developer.apple.com/videos/play/wwdc2023/10089/ ) but no sample project (yet?). For passthrough mode, there is reason to believe that the full-AR passthrough mode for standalone apps could be updated in the future to support custom rendering too, but it's not a given. Check out the discussion I had here, starting at this post, to understand the current limitations: https://developer.apple.com/forums/thread/731506?answerId=755464022#755464022 I'd suggest filing feature requests for custom rendering support, because I also think it's super important not to be limited to the default RealityKit renderer in passthrough mode. They want to see use cases. Personally, I think custom rendering is a must.
Jun ’23
Reply to Ground Shadows for in-Program-Generated Meshes in RealityKit
Fair enough. I'm glad you're all thinking about this issue and how to improve it based on the feedback. I do think that to scale up and support those nicer advanced graphics, the sort of freer, more open system might be necessary. I understand that it's a challenging problem: balancing security and flexibility. It makes sense in the short term to get something out there, and I agree, I can try the simpler solutions in the meantime. One more idea: if you also wanted to support multiple apps open in a shared space, but within a safe space that won't clobber other apps or be visually inconsistent, you could use something like app groups. A developer could specify a group of apps that look visually appealing and consistent together, and that work well together under their control, without the risk of interacting with something external and malicious. But I see this platform evolving in steps, of course! Once the tag for visionOS comes out, if it's alright, I'll officially file all of this feedback in a shorter form. Anyway, thanks again for discussing the problem. I appreciate being able to understand it at a deeper level.
Jun ’23
Reply to Ground Shadows for in-Program-Generated Meshes in RealityKit
Throwing out another idea: in full AR mode, in which just the one application is running, couldn't sandboxing and Compositor Services be used, as in full VR mode, to isolate the process and make custom GPU code safe? In shared mode, I agree it makes sense to have a unified renderer. To do custom lighting and enable occlusion, the GPU code could be sandboxed as well, so reading the pixel data for lighting and depth occlusion happens in, say, a fixed-function or MTLFunction-pointer callback. Or maybe protections could be put on the drawable and privacy-sensitive textures so they can only be read on the GPU, but never written or copied/blitted in a shader or on the CPU. I'd be pretty happy with some sort of fixed-function path to enable these things, but yes, I think the only way for custom code to work in the near future would be to enable it in fully immersive AR mode with some sandboxing and protections. I'm not sure how lighting, transparency, and occlusion would work without letting the custom code see the textures, though. I get that that's a problem. Again, maybe sandboxing, copy protections, and user permissions are required. It's tricky. A lot of simple but beautiful effects seem impossible without some form of access, like SDFs and raymarching with environment lighting: https://github.com/quantumOrange/ARRay It'd be great if eventually we could get to a place where we could do this freely.
Jun ’23
Reply to Ground Shadows for in-Program-Generated Meshes in RealityKit
Right, it's good that DrawableQueue exists. I'd also like to be able to generate meshes dynamically to simulate vertex animations, but I've not seen a DrawableQueue equivalent for same-frame generation of buffer data to update a RealityKit mesh. Is it possible? I'm not sure the bridge really exists for that; if it did, that would be a real improvement to RealityKit. Actually, it would be really helpful to see an example of how to do this sort of synchronization. The reason you need to update meshes, and not just textures, is that I believe it's the only way to still compose with the passthrough video and occlusion; otherwise it's just going to render flat images. A direct link between, say, an MTLBuffer region for vertices and indices and a RealityKit mesh would make things pretty okay. It makes me think a much simpler bare-bones API could be made for streaming custom Metal results into a secure fixed-function pipeline of sorts. Either way, I don't think this is a long-term solution, but it's something. I'll still file feedback when the visionOS tag appears, as you suggested.

EDIT: Couldn't I just use completion handlers on Metal and RealityKit? I'm not sure RealityKit lets you get a completion handler or block it, the way you'd normally block Metal using semaphores. If I could just tell RealityKit to render on command, that would achieve what I'm looking for, though the extra copying from buffers wouldn't be as good as a direct link between Metal buffers and RealityKit meshes.
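In the meantime, the closest CPU-side workaround I can see is regenerating the mesh from a MeshDescriptor whenever the vertex data changes, e.g. from a per-frame SceneEvents.Update subscription. That's exactly the extra copying mentioned above, and I don't know how far it scales, but as a rough sketch (the entity and buffer names are placeholders):

import RealityKit
import simd

// Rebuild a RealityKit mesh from CPU-side vertex data (a copy, not a direct MTLBuffer link).
func updateMesh(of entity: ModelEntity, positions: [SIMD3<Float>], indices: [UInt32]) {
    var descriptor = MeshDescriptor(name: "dynamicMesh")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)

    do {
        // Regenerate the resource and swap it onto the entity's ModelComponent.
        let newMesh = try MeshResource.generate(from: [descriptor])
        entity.model?.mesh = newMesh
    } catch {
        print("Mesh regeneration failed: \(error)")
    }
}

Driving that from an update subscription gives a per-frame hook, though it's still not the same-frame Metal synchronization I'm after.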
Jun ’23