Posts

Post not yet marked as solved
1 Reply
May I have clarification on the fastest way to convert an MTLTexture into a TextureResource to plug into the RealityKit APIs? This MTLTexture would be updated per frame.
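To make the question concrete, here is a rough sketch of the per-frame path I have in mind, assuming TextureResource.DrawableQueue is the intended bridge (the helper function names are mine, not an official API):

```swift
import Metal
import RealityKit

// Rough sketch (not verified end-to-end): stream a per-frame MTLTexture into a
// RealityKit material via TextureResource.DrawableQueue.
func attachDrawableQueue(to texture: TextureResource,
                         width: Int, height: Int) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.shaderRead, .renderTarget],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)
    // `texture` can be any placeholder TextureResource already assigned to a material;
    // after this call, whatever is presented to the queue shows up in that material.
    texture.replace(withDrawables: queue)
    return queue
}

// Call once per frame with the freshly rendered source texture.
func publish(_ source: MTLTexture,
             to queue: TextureResource.DrawableQueue,
             using commandQueue: MTLCommandQueue) {
    guard let drawable = try? queue.nextDrawable(),          // may fail if no drawable is free yet
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.copy(from: source, to: drawable.texture)            // sizes and pixel formats must match
    blit.endEncoding()
    commandBuffer.commit()
    drawable.present()                                       // hands the frame over to RealityKit
}
```

Whether that blit-per-frame approach is actually the fastest path is exactly what I'd like clarified.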
Post not yet marked as solved
2 Replies
As a suggestion for the developers: maybe add a way to request things from the system, like "give me the QR codes in the scene and their AR anchors," that could be handled without giving camera data to the client application? Essentially, the system could expose a set of system-level algorithms the developer could query, similar to the data providers.
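To illustrate the pattern I mean, here is a minimal sketch of today's data-provider style using ImageTrackingProvider, which only handles pre-registered reference images (hence the request); a hypothetical QR-code provider could work the same way, with the app only ever seeing anchors:

```swift
import ARKit

// Minimal sketch of the existing data-provider pattern on visionOS, assuming
// `referenceImages` has been loaded elsewhere. The app receives anchor updates
// (poses plus which image matched) but never the underlying camera pixels.
func trackImages(referenceImages: [ReferenceImage]) async throws {
    let session = ARKitSession()
    let imageTracking = ImageTrackingProvider(referenceImages: referenceImages)
    try await session.run([imageTracking])

    for await update in imageTracking.anchorUpdates {
        print(update.anchor.originFromAnchorTransform)
    }
}
```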
Post marked as solved
5 Replies
Thank you. Wow. "flags" seems intentionally hidden, given how far off-screen it is. Does the WebXR Device API do anything like enabling passthrough? As far as I know, WebAR isn't really complete.
Post marked as Apple Recommended
The release notes for Xcode 15 beta 2 (https://developer.apple.com/documentation/xcode-release-notes/xcode-15-release-notes) say that a GUI-based emulation of the crown is a missing feature, and that the workaround is to emulate it yourself with in-software function calls:

"There is no UI for simulating Apple Vision Pro's immersion crown. (109429267) Workaround: Use XCTest's XCUIDevice.rotateDigitalCrown(delta:) method."

```objc
/*!
 * Rotate the digital crown by a specified amount.
 *
 * @param rotationalDelta
 *    The amount by which to rotate the digital crown. A value of 1.0 represents one full rotation.
 *    The value's sign indicates the rotation's direction, but the sign is adjusted based on the crown's orientation.
 *    Positive values always indicate an upward scrolling gesture, while negative numbers indicate a downward scrolling gesture.
 */
- (void)rotateDigitalCrownByDelta:(CGFloat)rotationalDelta;
```

PROBLEM: I can't figure out how to use the function. Xcode reports that the library does not exist when I run the application within the simulator. Maybe it's not currently possible.
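Here is a rough sketch of how I understand the workaround is meant to be used, assuming the method has to be driven from a UI test target (which would explain the "library does not exist" error when calling it from the app process); the test and identifier names are made up:

```swift
import XCTest

// Sketch of the documented workaround: rotateDigitalCrown(delta:) is part of XCTest,
// so it runs from a UI test target against the visionOS simulator, not from inside
// the app itself.
final class ImmersionCrownUITests: XCTestCase {
    func testIncreaseImmersion() {
        let app = XCUIApplication()
        app.launch()

        // One full forward rotation of the simulated crown.
        XCUIDevice.shared.rotateDigitalCrown(delta: 1.0)
    }
}
```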
Post not yet marked as solved
4 Replies
I submitted feedback: FB12419476 (Sample Code Needed for visionOS with Metal)
Post marked as solved
16 Replies
By the way, here is a good use case that requires benign use of the camera feed to enable portals: https://www.linkedin.com/posts/wevolver_portals-added-to-a-liquid-physics-ar-puzzle-ugcPost-7077629304439726080-3y_N?utm_source=share&utm_medium=member_ios
Post not yet marked as solved
4 Replies
Not an engineer, but my guess is that using Metal at the moment is set up quite differently and doesn't involve a straight UIView/NSView-style application. Rather, you call into CompositorServices, as in this talk: https://developer.apple.com/videos/play/wwdc2023/10089/
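Very roughly, and going from that talk rather than anything I've verified myself, the entry point looks something like this (the configuration values and names are just example choices):

```swift
import SwiftUI
import CompositorServices

// Rough sketch based on the WWDC session above: the fully immersive scene hands
// you a LayerRenderer instead of a view hierarchy, and you drive Metal yourself.
struct ExampleConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        configuration.colorFormat = .bgra8Unorm_srgb
        configuration.depthFormat = .depth32Float
    }
}

@main
struct MetalImmersiveApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "MetalSpace") {
            CompositorLayer(configuration: ExampleConfiguration()) { layerRenderer in
                // Spin up a dedicated render thread here and drive the frame loop
                // with the LayerRenderer (query frames and drawables, then encode
                // Metal work), as shown in the session.
                _ = layerRenderer
            }
        }
    }
}
```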
Post not yet marked as solved
2 Replies
They limit mobility to 1.5 m from the starting position. However, as I write in this topic, there are tons of use cases for larger tracking areas in VR (actually most of the ones I'm interested in, and several being researched): https://developer.apple.com/forums/thread/731449 I hope that eventually it'll be possible for the user to define a larger safety boundary for controlled environments in which it's fine to walk around. I guess filing feedback reports en masse with our use cases is the best way to push things forward.
Post not yet marked as solved
1 Reply
Currently, custom Metal rendering (and custom rendering in general) is not allowed on visionOS, except in fully immersive VR mode :(. For fully immersive mode, you have to use CompositorServices. There's an example doc and WWDC talk (https://developer.apple.com/videos/play/wwdc2023/10089/) but no sample project (yet?). For passthrough mode, there is reason to believe that the standalone full-AR passthrough mode could be updated in the future to support custom rendering too, but it's not a given. Check out the discussion I had here, starting at this post, to understand the current limitations: https://developer.apple.com/forums/thread/731506?answerId=755464022#755464022 I'd suggest filing feature requests for custom rendering support, because I also think it's super important not to be limited to the default RealityKit renderer for passthrough mode. They want to see use cases. Personally, I think the need for custom rendering is a given.
Post not yet marked as solved
1 Reply
The visionOS documentation here (https://developer.apple.com/documentation/visionos/creating-fully-immersive-experiences) states: "When you start a fully immersive experience, visionOS defines a system boundary that extends 1.5 meters from the initial position of the person's head. If their head moves outside of that zone, the system automatically stops the immersive experience and turns on the external video again. This feature is an assistant to help prevent someone from colliding with objects." For the reasons above, it would be great to enable larger user-defined tracking areas for controlled environments in which it's known to be safe to walk around, e.g. an open classroom, lab, hallway, museum space, medical space, or larger work or play spaces. The user could start in passthrough mode and use gestures to define a rectangular boundary on the floor around them, larger than the 1.5 meters.
Post marked as solved
10 Replies
@eskimo I've submitted another bug report regarding the missing line-jumping behavior outside of Swift: FB12340549
Post marked as solved
16 Replies
Fair enough. I'm glad you're all thinking about this issue and how to improve it based on the feedback. I do think that to scale up and support those nicer, more advanced graphics, that sort of freer, more open system might be necessary. I understand that it's a challenging problem: balancing security and flexibility. It makes sense in the short term to get something out there, and I agree, I can try the simpler solutions in the meantime.

One more idea: if you also wanted to support multiple apps open in a shared space, but within a safe space that won't clobber other apps or be visually inconsistent, you could use something like app groups. A developer could specify a group of apps that look visually appealing and consistent together, and which work well together, under their control, without risk of interacting with something external that is malicious. But I see this platform evolving in steps, of course!

Once the tag for visionOS comes out, if it's alright, I'll officially file all of this feedback in shorter form. Anyway, thanks again for discussing the problem. I appreciate being able to understand it at a deeper level.
Post marked as solved
16 Replies
Throwing out another idea: in full AR mode, in which just the one application is running, couldn't sandboxing and CompositorServices be used, as in full VR mode, to isolate the process and make custom GPU code safe again? In shared mode, I agree it makes sense to have a unified renderer.

To do custom lighting and enable occlusion, the GPU code could be sandboxed as well, so that reading the pixel data for lighting and depth occlusion happens in, say, a fixed-function or MTLFunction-pointer callback. Or maybe protections could be put on the drawable and privacy-sensitive textures so they can only be read on the GPU, but never written or copied/blitted in a shader or on the CPU. I'd be pretty happy with some sort of fixed function to enable these things, but yes, I think the only way for custom code to work in the near future would be to enable it in full immersive AR mode with some sandboxing and protections.

I'm not sure how lighting, transparency, and occlusion would work without letting the custom code see the textures, though. I get that it's a problem. Again, maybe sandboxing, copy protections, and user permissions are required. It's tricky. A lot of simple but beautiful effects seem impossible without some form of access, like SDFs and raymarching with environment lighting: https://github.com/quantumOrange/ARRay It'd be great if eventually we could get to a place where we could do this freely.
Post marked as solved
10 Replies
Ah, I guess the iPadOS version needs to be 17. For the C/C++, I assumed the issue had something to do with a dependency on the Objective-C runtime or something, which would explain why click-to-jump wasn't working. Or maybe the compiler is confused by all of my cross-language craziness. :) I'll double-check.
Post marked as solved
16 Replies
Right, it's good that DrawableQueue exists. I'd also like to be able to generate meshes dynamically to simulate vertex animations, but I've not seen a DrawableQueue equivalent for same-frame generation of buffer data to update a RealityKit mesh. Is it possible? I'm not sure the bridge really exists for that; if it did, that would be an improvement to RealityKit for sure. Actually, it would be really helpful to see an example of how to do this sort of synchronization.

The reason you need to update meshes, and not just textures, is that I believe this is the only way you could still compose with the passthrough video and occlusion. Otherwise it's just going to render flat images. A direct link between, say, an MTLBuffer region for vertices and indices and a RealityKit mesh would make things pretty okay. It makes me think a much simpler bare-bones API could be made for streaming custom Metal code results into a secure fixed-function pipeline of sorts. Either way, I don't think this is a long-term solution, but it's something. I'll still file feedback when the visionOS tag appears, as you suggested.

EDIT: Couldn't I just use completion handlers on Metal and RealityKit? I'm not sure if RealityKit lets you get a completion handler or block it the way you would normally block Metal using semaphores. If I could just tell RealityKit to render on command, that would achieve what I'm looking for, but the extra copying from buffers wouldn't be as good as a direct link between Metal buffers and RealityKit meshes.
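For completeness, here is a rough sketch of the copy-based fallback I mean: rebuilding the MeshResource from CPU-side arrays each update and swapping it onto an existing ModelEntity (the names are mine, and this is exactly the extra copying a direct MTLBuffer link would avoid):

```swift
import RealityKit
import simd

// Rough sketch of the copy-heavy fallback discussed above: regenerate a MeshResource
// from CPU-side vertex data and assign it to an existing ModelEntity's model component.
func updateDynamicMesh(on entity: ModelEntity,
                       positions: [SIMD3<Float>],
                       indices: [UInt32]) throws {
    var descriptor = MeshDescriptor(name: "dynamicMesh")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)

    // Regenerating the resource copies the data every time; per-frame, this is the
    // cost a DrawableQueue-style bridge for meshes would remove.
    let mesh = try MeshResource.generate(from: [descriptor])
    entity.model?.mesh = mesh
}
```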