Reply to Ground Shadows for in-Program-Generated Meshes in RealityKit
Come to think of it: even if Metal cannot currently be used to do the final render in passthrough mode, am I still allowed to use Metal in passthrough mode for other things at all? One thing I could conceivably do is use Metal to generate textures and meshes with vertex, fragment, and compute kernels in my own command buffer, and then hand the results to RealityKit as buffers and textures. That would be a temporary and clunky solution, but at least it would be workable and let me write some custom shaders. If the system blocks Metal outright in that mode, though, that's a really strong limitation. Please let me know; it's all research to figure out what can and cannot be done right now. If this route is possible, however, how would a Metal command buffer be synced with RealityKit so I could feed textures and buffer data generated in Metal to RealityKit in the same frame? Thanks!
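To make that concrete, here is a rough sketch of the kind of round trip I'm imagining: a compute kernel fills a buffer with positions, the CPU reads the results back, and a MeshResource is rebuilt from them. The kernel name (fillVertices) is a placeholder, the readback is deliberately naive, and I'm not claiming this is a supported or recommended pattern; it's only meant to show where the synchronization question comes from.

```swift
import Metal
import RealityKit
import simd

// Sketch only: "fillVertices" is a hypothetical compute kernel that writes
// float3 positions (16-byte stride) into a buffer. vertexCount is assumed to
// be a multiple of 3 so the positions form a triangle list.
func makeProceduralMesh(device: MTLDevice, vertexCount: Int) throws -> MeshResource {
    let library = device.makeDefaultLibrary()!
    let function = library.makeFunction(name: "fillVertices")!   // hypothetical kernel
    let pipeline = try device.makeComputePipelineState(function: function)

    let buffer = device.makeBuffer(length: vertexCount * MemoryLayout<SIMD3<Float>>.stride,
                                   options: .storageModeShared)!

    let queue = device.makeCommandQueue()!
    let commandBuffer = queue.makeCommandBuffer()!
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.setBuffer(buffer, offset: 0, index: 0)
    encoder.dispatchThreads(MTLSize(width: vertexCount, height: 1, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
    encoder.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()   // CPU-side stall; this is the friction point

    // Read the GPU-generated positions back and rebuild a MeshResource on the CPU.
    let positions = Array(UnsafeBufferPointer(
        start: buffer.contents().bindMemory(to: SIMD3<Float>.self, capacity: vertexCount),
        count: vertexCount))
    var descriptor = MeshDescriptor(name: "procedural")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(Array(0..<UInt32(vertexCount)))
    return try MeshResource.generate(from: [descriptor])
}
```

Even in this sketch, the waitUntilCompleted call is exactly the kind of per-frame stall I'd hope a proper sync mechanism between Metal and RealityKit would avoid.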
Jun ’23
Reply to Ground Shadows for in-Program-Generated Meshes in RealityKit
I think I understand. Are you referring to the case in which several different applications are running in the same environment in the "shared space"? Correct me if I am wrong, but isn't there also an AR passthrough mode in which just one application runs at a time, just as there's a full immersive mode that uses CompositorServices? If so, that might be a starting place. If Metal could run with passthrough enabled, with the restriction that the application is the only thing running, that would make sense as a restriction: you'd have more control over the visual style when you're not potentially conflicting with the visual styles of other applications, and it also makes sense from a security standpoint when your application is isolated from others. The challenge would then be less about security and more about how to support a custom raytracing / lighting function and occlusion with the camera passthrough enabled. I think this is an easier problem to solve because it's just more composition and perhaps extensions to Metal (speculating).

Yes, I understand about the need for more specific use cases. I felt I needed more context behind why the current system behaves the way it does. For the sake of putting it in writing here for others who might read this too, while it's fresh in mind (before the visionOS feedback tag appears), a lot of the use cases I have are more general and philosophical.

Higher-level general needs:

- Many people have existing renderers in Metal that they want to reuse.
- Metal allows for the use of, and experimentation with, new and different graphics algorithms that can help optimize for specific needs.
- There is a large community of graphics programmers who prefer to "do the work themselves," not only for optimization, but also for control over 1) the visual style and expression of the rendering, and 2) the data structures for the renderer and the program that feeds data to the renderer. There isn't a one-size-fits-all. For example, I often need to create a lot of procedurally generated meshes per frame. RealityKit seems to prefer static meshes; the suggestion I've received is to create a new MeshResource per frame, which does not scale well.
- Apple probably should not be burdened with implementing new algorithms every time something new comes out. It doesn't scale well to put everything on Apple's renderer. One should be able to choose the RealityKit renderer or make their own if RealityKit doesn't fit their needs.

My use cases:

- Procedural generation of meshes, textures, and assets; use of bindless rendering; use of render-to-texture. All of these are cumbersome in RealityKit at the moment despite the existence of things like DrawableQueue. Extra work needs to be done to synchronize and generate assets, and there seems to be a lot of copying around. Overall, there's a lot of friction involved.
- I want to be able to do vertex transformations in the shader in arbitrary ways, which is currently possible only in Metal or with CustomMaterial (a small sketch of what I mean follows at the end of this reply).
- I want to use traditional render-to-texture, masking, scissoring, etc., but RealityKit makes this hard and unintuitive.
- RealityKit has its own entity-component / object system. My projects already have their own entity systems. Mapping to RealityKit's way of storing and managing data is a lot of redundant work and potentially non-optimal for performance.
- There's a lot of cognitive overhead in forcing my project to conform to this specific renderer when my own renderer and engine are optimized for what I want to make. (That is, it would be useful to decouple a lot of the higher-level functionality in RealityKit.)
- For spatial computing / VR, I want the ability to move around the environment, which in Vision Pro's case is only possible in AR mode. This is a complementary issue: if VR/immersive mode were eventually to permit walking around in some future version, that would be great.
- As in the general cases above, I'm interested in creating stylistically nuanced graphics that don't necessarily map onto what RealityKit's lighting models are doing. I want creative freedom.

Overall, I do like the idea of having something like RealityKit available, but if Metal, or a slightly modified version of Metal with extensions or restrictions, were made available, that would make it easier to create unique and optimized applications on the platform. Anyway, thanks again for your time.
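To make the vertex-transformation point above concrete, here is a minimal sketch of the CustomMaterial path as it exists in RealityKit 2 on iOS/macOS; whether any of it applies to visionOS is part of what I'm asking. The shader function names ("waveGeometryModifier", "simpleSurfaceShader") are placeholders I made up, not functions that ship anywhere.

```swift
import Metal
import RealityKit

// Sketch, assuming a Metal library in the app bundle contains a RealityKit
// geometry modifier named "waveGeometryModifier" and a surface shader named
// "simpleSurfaceShader" (both names are hypothetical).
func makeDisplacedMaterial(device: MTLDevice) throws -> CustomMaterial {
    guard let library = device.makeDefaultLibrary() else {
        fatalError("Missing default Metal library")
    }
    let geometryModifier = CustomMaterial.GeometryModifier(named: "waveGeometryModifier",
                                                           in: library)
    let surfaceShader = CustomMaterial.SurfaceShader(named: "simpleSurfaceShader",
                                                     in: library)
    // The per-vertex offsets happen in the geometry modifier; lighting stays
    // within RealityKit's built-in lit model.
    return try CustomMaterial(surfaceShader: surfaceShader,
                              geometryModifier: geometryModifier,
                              lightingModel: .lit)
}
```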
Jun ’23
Reply to Ground Shadows for in-Program-Generated Meshes in RealityKit
Thank you for clarifying the core issue. I really appreciate it. It does seem like a hard-to-solve problem, and I'm not about to request a rewrite of the system architecture. My follow-up question would be: how can Metal be used with CompositorServices for full VR mode, but not for passthrough mode? I wonder what the difference is. Is it because in passthrough mode you have the additional need to compose with the camera feed and do occlusion, and that's where the issue comes up? It sounds like, in order to solve the problem, user-side rendering would first need to be sandboxed away from the main simulation, and then the results (not code, but the textures and resources, etc.) could be passed to the shared simulation to be composed with other things. (I am not an OS person, so forgive me if I'm showing a fundamental misunderstanding of how this works.) Well, I hope the restrictions could somehow be lessened, possibly through additional sandboxing or by moving the user-side rendering to some secure space. Is it a problem that can even be solved with time? I'll reference this in a feedback report if it helps. Thanks again for the explanation and your patience.
Jun ’23
Reply to Xcode 15 - Structured logging in console hides interpolation?
I had the same question and ended up discussing it here ( https://developer.apple.com/forums/thread/731121 ) and filing a report. Check that discussion if you'd like to resolve the issue without the public modifier. In short, interpolated string variables are private by default unless you add a plist entry for a specific logger subsystem and category, or use other means. It's a bug, however, that the printout for private arguments in the Xcode console is blank; it should print <private> in place of the argument.
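For anyone landing here later, this is the per-argument form in Swift (the subsystem and category strings are just placeholders):

```swift
import os

// Subsystem and category are placeholder strings for illustration.
let logger = Logger(subsystem: "com.example.myapp", category: "general")

func report(path: String) {
    // Interpolated strings are redacted outside the debugger by default.
    logger.info("Opening \(path)")

    // Marking the argument public makes the value visible in the console.
    logger.info("Opening \(path, privacy: .public)")
}
```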
Jun ’23
Reply to Ground Shadows for in-Program-Generated Meshes in RealityKit
Ah, that is really quite a shame and limiting. I mentioned some of the use cases above (stylistic choice, custom geometry offsets and shading, custom algorithms). Yes, I can file feedback. The high-level feedback, while on the subject, would be to enable lower-level control via Metal shaders and/or custom materials. However, can you/Apple say anything more about the reasons these limitations are currently in place? That is, what are some of the problems that make it hard to arrive at a solution with more rendering control? For example, is it related to privacy, or to something in the way the OS requires rendering to be done? Is Apple also thinking about how to improve the situation? (Again, fully understood this is a V1 product.) That way, I could think more about my feedback and try to arrive at some reasonable ideas and suggestions, rather than simply requesting a feature with nothing else to contribute. I'm pretty invested in this platform already and would like to see it grow to support more creative rendering and interactive experiences. I'm fairly sure the current limitations will be too much down the line.
Jun ’23
Reply to Ground Shadows for in-Program-Generated Meshes in RealityKit
Thanks! I’m glad RealityKit will have more updates to come. On the subject of RealityKit and visionOS, I did want to raise the following (pardon the walls of text):

My note about procedural generation relates to generating geometry programmatically. I am admittedly concerned about RealityKit being "the" way to render on visionOS when passthrough AR is enabled. There are a lot of things that are actually more straightforward in a plain Metal application, like render-to-texture or just updating buffers per frame. During WWDC, one suggestion was simply to recreate a MeshResource per frame, but that seems like a lot of work for something that would be very simple in plain Metal.

I do have concerns about the choice to limit the renderer. Mainly, I don't think I should have to worry about whether it has these sorts of standard rendering features in order to create the content I want in AR mode. Some lower-level integration, like CompositorServices for full VR mode, would be very useful to have for passthrough AR mode. Would Apple take feedback on, or consider possibilities for, lower-level Metal integration separate from RealityKit (via Feedback Assistant or discussion here)?

I understand why the choice was probably made to use RealityKit: you want consistent visuals mixed with the real-world camera view. That said, I think a lot of people, myself included, will still want custom renderer support in passthrough mode, even though it's nice to have an all-in-one engine like RealityKit. The trouble with built-in renderers is that they limit the creative choices each developer / artist can make, and content tends to start looking the same between apps. Also, it's probably going to be hard for Apple to keep updating RealityKit consistently as more and more advances in graphics come along. Usually we'd have the choice to write our own renderers if the built-in libraries didn't suffice, so I'm not sure RealityKit-only for passthrough mode is scalable.

I'd be interested in sharing more use cases for custom rendering and ideas for how to deal with the privacy / visual-consistency issues, though I realize the teams at Apple have probably thought all of this through. A little tangential to my own question, but are custom materials available on visionOS? That might make things a little easier.

Overall, I think that as visionOS develops, it would be great to see pure Metal support in passthrough mode, perhaps with some visionOS extensions, rather than just RealityKit. Something in between the high-level and the low-level functionality, so graphics people can have a little more autonomy.
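For what it's worth, this is roughly what the render-to-texture workaround looks like today with DrawableQueue, as far as I understand it. The sizes, pixel format, and material choice are assumptions on my part, and the actual render-pass encoding is elided.

```swift
import Metal
import RealityKit

// Sketch: stream a Metal-rendered texture into a RealityKit material via
// TextureResource.DrawableQueue. Size, format, and usage flags are assumptions.
func attachRenderTarget(to material: inout UnlitMaterial,
                        texture: TextureResource) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 1024,
        height: 1024,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)

    // Point an existing texture resource at the queue, then use it in the material.
    texture.replace(withDrawables: queue)
    material.color = .init(texture: .init(texture))
    return queue
}

// Per frame: grab a drawable, encode your own Metal work into its texture,
// then present it so RealityKit picks up the new contents.
func renderFrame(into queue: TextureResource.DrawableQueue,
                 commandQueue: MTLCommandQueue) {
    guard let drawable = try? queue.nextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // ... encode render passes targeting drawable.texture here ...

    commandBuffer.commit()
    drawable.present()
}
```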
Jun ’23
Reply to People Occlusion + Scene Understanding on VisionOS
I’m concerned especially that:

1. Passthrough mode requires RealityKit, which is very limiting. I want to do the work to create things using Metal.
2. VR mode doesn't let you move around.

Number 2 destroys most ideas for interesting VR use cases and could be solved by introducing a user-defined safe boundary. (Not to compare with the competition, but this is the standard solution.)

Number 1 is a problem because it limits the creative style choices in the design of content and prevents the programmer from using their own engine in mixed-reality mode. RealityKit is great for getting started, but when you want more control, it can get in the way. I know Metal can be used to generate resources for RealityKit, but this is also very limited and a ton of friction for the programmer. RealityKit limits the flexibility in how data are represented, and maybe people don't want the style of RealityKit visuals in mixed reality.

To solve 1, you enable Metal with passthrough mode. However, without depth and lighting data, I understand you can't achieve realistic lighting mixed with the real world. Either you add a permission system so people know the risks of granting camera data, or an easier in-between solution would be the following.

I've thought about it a bit: rather than granting pixel data, have a special visionOS Metal command that says to draw triangles with the passthrough composed on top in screen space, with occlusion, all handled behind the scenes by the system without the code ever touching the pixels, plus Apple-approved extension shaders for lighting. For example, there's research showing it's useful to have portals in VR that act as windows into the real world. Being able to specify a surface for the camera feed would enable this and many other use cases. The API would look like the old OpenGL 1.0 fixed-function pipeline: the program provides parameters, and the backend and compositor handle things without giving you the data directly. Like glFog:

[commandBuffer enableXRPassthroughPipeline withOptions:OCCLUSION];
[renderEncoder drawWithXRPassthroughComposition: triangle buffer… withRaytracing:YES];

These would let the programmer specify which geometry should be composed with the passthrough background, enabling either full passthrough backgrounds or portals into the real world, with occlusion and lighting handled automatically. Occlusion tests would be disabled when this is enabled; the compositor would take steps to ensure the passthrough pixels stay separate from your render, and it could apply fixed-function raytracing.

I, for one, would prefer just implementing a robust permission system, but this is the best I can think of otherwise. Overall, I think RealityKit is too high-level, and a Metal-based solution allowing for passthrough, mobility, and a bit more control via some fixed-function commands would work.
Jun ’23
Reply to structured logging is great, but logging fails with variables ?
Ah yes, that makes a lot more sense. I did finally discover the privacy tag in the documentation, but you're right, it's pretty misleading not to see that marker. I'll file a report as soon as I can.

By the way, is there no way to set the default to public? I just wanted to replace hundreds of printfs with os_log to take advantage of the new Xcode 15 console features. I was going to do a hack in which I preprocessed the format strings and replaced %s with %{public}s, something like

#define MY_LOG(fmt_, ...) \
    os_log_info(OS_LOG_DEFAULT, gen_public_fmt(fmt_), __VA_ARGS__)

but os_log asserts on non-constant format strings, so that doesn't work.
Jun ’23