How to achieve bindless textures and dynamic indexing to pick and sample textures in shaders?

Hello all. I'm developing a user-generated content 2D drawing application for iOS (iPad) that requires a lot of dynamic drawing, texture swaps, and all-around changing data, meaning that I can't allocate most of my resources up-front.

The general idea of what I'm trying to do:


One thing I'd like to do is support lots of texture loading and swapping at runtime, using images fetched on-the-fly. I would prefer not to set textures over and over using setFragmentTexture to avoid all of the extra validation.

Here is what I've tried and thought of doing so far, for some context:


A way of doing what I want, I think, might involve creating a large array of texture2d entries representing an active set of resident textures. Each object in my world would have an associated set of indices into that array (so I could support sprite animation, for example). Then, in the shader, I could use such an index to select the correct texture and sample from it.
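To make the idea concrete, here's a rough MSL sketch of what I mean. The `TextureTable` layout, the 128-slot capacity, and the `materialIndex` field are all placeholder names of my own, not anything from the docs:

```metal
#include <metal_stdlib>
using namespace metal;

// Argument buffer holding the active set of resident textures.
// 128 is an arbitrary capacity picked for illustration.
struct TextureTable {
    array<texture2d<float>, 128> textures [[id(0)]];
};

// Per-object data; materialIndex selects a slot in the table.
struct ObjectUniforms {
    uint materialIndex;
};

struct VSOut {
    float4 position [[position]];
    float2 uv;
};

fragment float4 frag_main(VSOut in                        [[stage_in]],
                          constant TextureTable &table    [[buffer(0)]],
                          constant ObjectUniforms &object [[buffer(1)]],
                          sampler samp                    [[sampler(0)]])
{
    // The dynamic index in question: is this legal when materialIndex
    // differs per draw call (or even per fragment)?
    texture2d<float> tex = table.textures[object.materialIndex];
    return tex.sample(samp, in.uv);
}
```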

Is dynamic indexing to sample a texture even allowed?


There's a problem, however. Everything I've learned about other graphics APIs and Metal suggests that this sort of dynamic indexing might be illegal unless the index is the same for all invocations of the shader within a draw call. In Vulkan this is called being "dynamically uniform," though there's an extension that apparently loosens this constraint. I think what I'm trying to achieve is called "bindless textures" or a form of "descriptor indexing."

Potentially use Argument Buffers?


So I looked into Argument Buffers, which seem to support more efficient texture binding that avoids much of the validation overhead by packing everything together. From my understanding, they might also relax some constraints. I'm looking at this example: using_argument_buffers_with_resource_heaps
The example uses arrays of textures as well as heaps, but the heaps can be toggled off (so let's ignore them for now).
Assuming I'm targeting Tier 2 iOS devices (e.g. iPad mini 2019, iPad Pro 2020), there's a lot I can do with these.
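For reference, this is roughly how I understand the host-side encoding would look; the function name and buffer index are my own assumptions, and I haven't verified this is the recommended pattern:

```swift
import Metal

// Sketch: build an argument buffer holding an array of textures.
// Assumes a fragment function whose [[buffer(0)]] parameter is an
// argument buffer containing an array of texture slots.
func makeTextureTable(device: MTLDevice,
                      fragmentFunction: MTLFunction,
                      textures: [MTLTexture]) -> MTLBuffer? {
    let encoder = fragmentFunction.makeArgumentEncoder(bufferIndex: 0)
    guard let buffer = device.makeBuffer(length: encoder.encodedLength,
                                         options: .storageModeShared) else {
        return nil
    }
    encoder.setArgumentBuffer(buffer, offset: 0)
    for (slot, texture) in textures.enumerated() {
        // Writes a reference to `texture` into slot `slot` of the table.
        encoder.setTexture(texture, index: slot)
    }
    return buffer
}
```

At draw time I'd then bind the buffer once with `setFragmentBuffer(_:offset:index:)` and, as I understand it, call `useResource(_:usage:)` on each texture so the driver keeps it resident.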

If I used Argument Buffers, how would I support dynamically-added, removed, and swapped resources?


Still, there's another problem: none of the examples show how to modify which textures are set in an argument buffer. Understandably, those buffers are optimized for set-once, use-every-frame scenarios, but I want to be able to change the contents during my draw loop in case I run out of texture slots, or to delete a texture outright, as this is a dynamic program after all.
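What I imagine doing is something like the sketch below (names are mine), but I don't know whether it's valid, or what it costs:

```swift
import Metal

// Sketch: replace a single slot in an already-encoded argument buffer,
// reusing the same MTLArgumentEncoder that originally encoded it.
// Presumably this is only safe when the GPU isn't currently reading the
// buffer, i.e. it needs the usual per-frame buffering/fencing.
func swapTexture(table: MTLBuffer,
                 encoder: MTLArgumentEncoder,
                 slot: Int,
                 newTexture: MTLTexture) {
    encoder.setArgumentBuffer(table, offset: 0)
    encoder.setTexture(newTexture, index: slot)  // overwrite just this slot
}
```

Is that all there is to it, or does changing one slot force a full re-encode?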

On to the concrete questions:

Question 1:

How do I best support this sort of bindless texturing that I'm looking to do?

Should I still use argument buffers? If so, how do I use them efficiently given that I may have to change their contents, and how *do* I change their contents? Would I just create a new argument buffer encoder and set one of the texture slots to something else, for example, or would I need to re-encode the entire thing? What are the costs?
May I have clarification and perhaps some basic example code? The official documentation, as I said, does not cover my use case.

Question 2:

Is it even legal to access textures from a texture array using a dynamic index provided in an object-specific uniform buffer? Basically, it would be the equivalent of a "material index."

Last Question:

I think I also have the option of accessing the texture in the vertex shader and passing it to the fragment shader as an output. I hadn't considered doing that before, but is there a reason this may be useful? For example, might there be some restrictions lifted if I did things this way? I can imagine that because there are far fewer vertices than fragments, this would reduce the number of accesses into my "large buffer" of resident textures.

Thanks for your time. I'm looking forward to discussing and getting to the bottom of this!
Have you considered creating one or more texture atlases of the maximum texture size, then using blit commands to replace regions as needed? On newer devices you might also benefit from sparse textures.
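Something along these lines, assuming a pre-allocated atlas texture (names are illustrative, not from any sample):

```swift
import Metal

// Sketch: overwrite one region of an atlas with the contents of a
// freshly loaded texture, via a blit encoder.
func replaceAtlasRegion(commandBuffer: MTLCommandBuffer,
                        atlas: MTLTexture,
                        source: MTLTexture,
                        destinationOrigin: MTLOrigin) {
    guard let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.copy(from: source,
              sourceSlice: 0, sourceLevel: 0,
              sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
              sourceSize: MTLSize(width: source.width,
                                  height: source.height,
                                  depth: 1),
              to: atlas,
              destinationSlice: 0, destinationLevel: 0,
              destinationOrigin: destinationOrigin)
    blit.endEncoding()
}
```

You'd encode the blit before the render pass that samples the updated region; within a command buffer, encoders execute in the order they were encoded.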
Yes, actually, but I feel that's really complex to do right now, and it looks like there's no code example for sparse textures at all. For example, how would I chunk textures the way the example figure shows, and how do sparse textures actually work? Are textures partly evicted when not in use? Also, I am using an iPad Pro 2020, which sadly still has an A12Z chip instead of an A13 or A14 (I wonder if they'll do yet another refresh soon...)

Atlases are more doable, but packing them efficiently is non-trivial. When you say "replace regions," do you mean I'd load separate textures and blit them into a main atlas? Don't I need a render pass per replaced region in case I need to use that memory again for something else? In short, when am I allowed to replace regions of a texture so as not to clobber texture data required by a previous draw call?

I actually tried the argument buffer idea and it *seems* to work, but I think that you're correct that texture streaming is closer to what I'd actually want to do.