Bindless/GPU-Driven approach with dynamic scenes?

I have been experimenting with different rendering approaches in Metal and am hitting a wall when it comes to reconciling "bindless" or GPU-driven approaches* with a dynamic scene where meshes can be added, removed, and changed. All the examples I have found of such approaches use fixed scenes, where all the data is packed before the first draw call into something like a MeshBuffer that holds all scene geometry as Mesh objects (for instance).

I assume that recreating a MeshBuffer from scratch each frame would be possible but completely undesirable, and that there may be some clever tricks with pointers to update a MeshBuffer in place as needed. What I would like to know is whether there is an established or optimal solution to this problem, or whether these approaches are simply incompatible with dynamic geometry. Any example projects I may have missed that do what I am asking would be appreciated, too.

* I know these are not the same, but they seem to share some common characteristics, namely providing your entire geometry to the GPU at once. Looping over an array of meshes and calling drawIndexedPrimitives from the CPU does not pose any such obstacles, but it also precludes some of the benefits of offloading work to the GPU, or of having access to all geometry on the GPU for things like path tracing.

I don't understand: if you are doing ray tracing, then there are BLAS and TLAS BVH structures that wrap rigid models and allow a ray to quickly hit triangles and then resolve attributes. The only limitation there is that all positions must be in a single vertex buffer.

Yes, I understand that. The part I am stuck on is whether anything like this could be done with procedural geometry, e.g. chunks of terrain that may have a totally different number of vertices than they did in previous frames. This would result in a vertex array/buffer containing either stale vertices or bad memory, or else constantly moving array elements around to avoid gaps and recreating the buffer, which sounds too expensive.
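One established way around exactly this concern is to stop treating the shared vertex buffer as a packed array and instead manage it like a heap: suballocate variable-sized vertex ranges out of one big buffer and recycle them when a chunk is re-meshed, so nothing else ever has to move. Below is a minimal CPU-side sketch of a first-fit free-list suballocator. All names here (`VertexAllocator`, `Allocation`) are made up for illustration, and the real thing would hand out ranges of an MTLBuffer rather than plain indices:

```swift
// Sketch: a free-list suballocator for a large shared vertex buffer.
// Chunks allocate a vertex range; freeing returns the range to the free
// list, and adjacent free ranges are merged to limit fragmentation.

struct Allocation {
    let offset: Int   // first vertex index of this range in the shared buffer
    let count: Int    // number of vertices reserved
}

final class VertexAllocator {
    // Free ranges, kept sorted by offset.
    private var freeList: [(offset: Int, count: Int)]

    init(capacity: Int) {
        freeList = [(0, capacity)]
    }

    // First-fit allocation; returns nil when no free range is large enough.
    func allocate(_ count: Int) -> Allocation? {
        for (i, range) in freeList.enumerated() where range.count >= count {
            let alloc = Allocation(offset: range.offset, count: count)
            if range.count == count {
                freeList.remove(at: i)
            } else {
                freeList[i] = (range.offset + count, range.count - count)
            }
            return alloc
        }
        return nil
    }

    // Return a range to the free list, merging with contiguous neighbours.
    func free(_ alloc: Allocation) {
        let idx = freeList.firstIndex { $0.offset > alloc.offset } ?? freeList.count
        freeList.insert((alloc.offset, alloc.count), at: idx)
        // Merge with the next range if contiguous.
        if idx + 1 < freeList.count,
           freeList[idx].offset + freeList[idx].count == freeList[idx + 1].offset {
            freeList[idx].count += freeList[idx + 1].count
            freeList.remove(at: idx + 1)
        }
        // Merge with the previous range if contiguous.
        if idx > 0,
           freeList[idx - 1].offset + freeList[idx - 1].count == freeList[idx].offset {
            freeList[idx - 1].count += freeList[idx].count
            freeList.remove(at: idx)
        }
    }
}
```

A re-meshed chunk with a different vertex count simply frees its old range and allocates a new one; stale data in freed regions is harmless because nothing references it anymore.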

I can solve the problem with a models buffer instead of a vertex buffer, because I can designate the first n elements as representing chunks of terrain and update them as needed. But as far as I can tell, I can do no such thing with a shared vertex buffer.
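The same indirection trick used for a models buffer can actually be applied to vertices: keep a small per-chunk descriptor table that records where each chunk's vertices currently live in the shared buffer. The vertex data then never needs to stay contiguous or in a fixed order; re-meshing a chunk writes the new vertices into any free region and just repoints the descriptor. A minimal sketch, with plain Swift arrays standing in for MTLBuffer contents and all names (`ChunkDescriptor`, `remesh`) purely illustrative:

```swift
// A descriptor per chunk, indexed by chunk slot (the "first n elements"
// idea from above) rather than by position in the vertex buffer.
struct ChunkDescriptor {
    var vertexOffset: UInt32  // first vertex of this chunk in the shared buffer
    var vertexCount: UInt32   // may change whenever the chunk is re-meshed
}

// CPU-side stand-ins for the shared vertex buffer and descriptor table.
var sharedVertices = [Float](repeating: 0, count: 1024)
var descriptors: [ChunkDescriptor] = [
    ChunkDescriptor(vertexOffset: 0, vertexCount: 100),   // chunk 0
    ChunkDescriptor(vertexOffset: 100, vertexCount: 80),  // chunk 1
]

// Re-mesh a chunk with a different vertex count: write the new data into a
// free region of the shared buffer and repoint the descriptor. No other
// chunk's vertices move.
func remesh(chunk: Int, newVertices: [Float], at offset: Int) {
    for (i, v) in newVertices.enumerated() {
        sharedVertices[offset + i] = v
    }
    descriptors[chunk] = ChunkDescriptor(vertexOffset: UInt32(offset),
                                         vertexCount: UInt32(newVertices.count))
}
```

On the GPU side the descriptor table is just another buffer the shaders read, so GPU-driven culling or indirect draws can look up each chunk's current range without the CPU touching anything but the one updated entry.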

Thanks for your response and I would appreciate any other thoughts. I am indeed a beginner but using Metal has been a treat.

I am not quite sure exactly what problem you are trying to solve, so it is difficult to give recommendations. Generally, if you have truly dynamic geometry (which is kind of difficult for me to imagine - why would your geometry change that radically every frame?), you can either compute new vertex data on the CPU (I think Apple recommends triple buffering) and issue the appropriate draw call, or use mesh shaders and generate the geometry directly on the GPU.
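For reference, the triple-buffering pattern mentioned above looks roughly like this: three per-frame buffers in a ring, guarded by a semaphore so the CPU never overwrites a buffer the GPU may still be reading. This is a CPU-side sketch only; the arrays stand in for MTLBuffers, and the function names are illustrative:

```swift
import Dispatch

// Three frames may be in flight at once; the semaphore counts free buffers.
let maxFramesInFlight = 3
let frameSemaphore = DispatchSemaphore(value: maxFramesInFlight)
var frameBuffers = [[Float]](repeating: [Float](repeating: 0, count: 256),
                             count: maxFramesInFlight)
var frameIndex = 0

// Called at the start of each frame: blocks if all three buffers are still
// owned by the GPU, then fills the next buffer in the ring with this
// frame's vertex data and returns its index for encoding the draw.
func beginFrame(writeVertices: ([Float]) -> [Float]) -> Int {
    frameSemaphore.wait()
    frameIndex = (frameIndex + 1) % maxFramesInFlight
    frameBuffers[frameIndex] = writeVertices(frameBuffers[frameIndex])
    return frameIndex
}

// In a real renderer this runs in the command buffer's completion handler,
// signalling that the GPU has finished reading that frame's buffer.
func frameCompleted() {
    frameSemaphore.signal()
}
```

The point is that dynamic per-frame vertex data never races with the GPU, and the cost is bounded at three copies of the per-frame data rather than a full rebuild of all scene geometry.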
