I developed a small drawing application for the iPad in Swift using Metal. It supports drawing lines with the Apple Pencil, erasing them, selecting them, and so on.
Everything is drawn onto an offscreen texture so I don't need to redraw all the lines every frame. The app also supports zooming and panning. If you zoom, everything has to be redrawn, and that's where the problem begins. At the moment, when I pan the canvas I simply draw the offscreen texture at zoom level 1.0 and pan it around, so it looks very pixelated, because I zoom into the offscreen texture without redrawing the texture itself. The problem is that I can't redraw the whole offscreen texture every frame while panning, for performance reasons.
My idea was to implement a tile-based progressive rendering technique. I thought I could split the screen into tiles of, for example, 256x256 pixels; when I pan, I can move those tiles and only render the newly visible ones. But I don't really know where to start, and I don't know how to render the lines onto those tiles. At the moment every line has a single vertex buffer storing its triangles. I thought maybe I could use multiple color attachments on the render encoder, so that every colorAttachments[n].texture would be a tile, maybe with the help of viewport culling? I don't have much experience in this area, so I have no idea.
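To make the idea concrete, this is roughly what I picture: one small render pass per tile, with a transform that maps that tile's region of the canvas to clip space. This is only a sketch of the idea, not code from my app; `pipelineState`, the per-line vertex buffers, and a vertex shader that reads a `float4x4` from buffer index 1 are all assumptions:

```swift
import Metal
import simd

// Sketch: render one tile of the canvas into its own texture.
func renderTile(tileTexture: MTLTexture,
                tileOriginInWorld: SIMD2<Float>,   // top-left of the tile in canvas coordinates
                tileSize: Float,                   // e.g. 256 pixels
                zoom: Float,
                commandBuffer: MTLCommandBuffer,
                pipelineState: MTLRenderPipelineState,
                lineVertexBuffers: [(buffer: MTLBuffer, vertexCount: Int)]) {
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = tileTexture
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].storeAction = .store
    pass.colorAttachments[0].clearColor = MTLClearColor(red: 1, green: 1, blue: 1, alpha: 1)

    guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass) else { return }
    encoder.setRenderPipelineState(pipelineState)

    // Orthographic transform mapping this tile's world-space rectangle
    // [origin, origin + tileSize/zoom] to Metal clip space [-1, 1].
    let worldExtent = tileSize / zoom              // canvas units covered by this tile
    let s = 2 / worldExtent
    var transform = float4x4(columns: (
        SIMD4<Float>(s, 0, 0, 0),
        SIMD4<Float>(0, -s, 0, 0),                 // flip y: canvas y-down -> clip y-up
        SIMD4<Float>(0, 0, 1, 0),
        SIMD4<Float>(-tileOriginInWorld.x * s - 1,
                      tileOriginInWorld.y * s + 1, 0, 1)
    ))
    encoder.setVertexBytes(&transform, length: MemoryLayout<float4x4>.stride, index: 1)

    // Ideally, cull lines whose bounding box misses this tile before this loop.
    for line in lineVertexBuffers {
        encoder.setVertexBuffer(line.buffer, offset: 0, index: 0)
        encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: line.vertexCount)
    }
    encoder.endEncoding()
}
```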
I also found the sparse texture feature, which looked like it would solve my problem, but I need to support iPads that don't support it. Has someone else built something similar, or does anyone have example code for this? Or a completely different idea that could help?
Thank you very much for your help!
> There are about 16651560 Vertices on the screen.
In that case, I suggest that you do some pre-processing on the CPU:
- When you're zoomed out and the entire image is visible, use Douglas-Peucker line simplification, or similar, to reduce the number of vertices (see the first sketch after this list).
- When you're zoomed in, use some sort of spatial index to send only the vertices that are on the screen. (I.e. "tile" your input vertices, not your output; see the second sketch below.)
You'll end up with some sort of multi-resolution data structure.
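Here's a minimal sketch of the first bullet, Douglas-Peucker over a stroke's points. It's the textbook recursive form; `tolerance` is a placeholder you'd tie to the current zoom level:

```swift
import CoreGraphics

// Douglas-Peucker simplification for one polyline stroke.
// `tolerance` is in canvas units; larger values drop more points.
func simplify(_ points: [CGPoint], tolerance: CGFloat) -> [CGPoint] {
    guard points.count > 2 else { return points }

    // Perpendicular distance from `p` to the line through a and b.
    func distance(_ p: CGPoint, _ a: CGPoint, _ b: CGPoint) -> CGFloat {
        let dx = b.x - a.x, dy = b.y - a.y
        let lengthSquared = dx * dx + dy * dy
        if lengthSquared == 0 { return hypot(p.x - a.x, p.y - a.y) }
        return abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / lengthSquared.squareRoot()
    }

    // Find the point farthest from the chord between the endpoints.
    var maxDistance: CGFloat = 0
    var index = 0
    for i in 1..<(points.count - 1) {
        let d = distance(points[i], points[0], points[points.count - 1])
        if d > maxDistance { maxDistance = d; index = i }
    }

    // If the farthest point is within tolerance, the chord is good enough.
    guard maxDistance > tolerance else { return [points[0], points[points.count - 1]] }

    // Otherwise, split at the farthest point and recurse on both halves.
    let left = simplify(Array(points[...index]), tolerance: tolerance)
    let right = simplify(Array(points[index...]), tolerance: tolerance)
    return Array(left.dropLast()) + right
}
```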
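For the second bullet, a crude uniform grid over stroke bounding boxes is usually enough. A sketch, where `Stroke` and its `boundingBox` are hypothetical stand-ins for whatever the app actually stores per line:

```swift
import CoreGraphics

// A crude uniform-grid spatial index: strokes are bucketed by the grid
// cells their bounding box overlaps.
struct Stroke { let id: Int; let boundingBox: CGRect }

struct GridIndex {
    let cellSize: CGFloat                               // e.g. 256 canvas units
    private var cells: [SIMD2<Int32>: [Int]] = [:]      // cell coordinate -> stroke ids
    private var strokes: [Int: Stroke] = [:]

    init(cellSize: CGFloat) { self.cellSize = cellSize }

    private func cellRange(for rect: CGRect) -> (x: ClosedRange<Int32>, y: ClosedRange<Int32>) {
        let minX = Int32((rect.minX / cellSize).rounded(.down))
        let maxX = Int32((rect.maxX / cellSize).rounded(.down))
        let minY = Int32((rect.minY / cellSize).rounded(.down))
        let maxY = Int32((rect.maxY / cellSize).rounded(.down))
        return (minX...maxX, minY...maxY)
    }

    mutating func insert(_ stroke: Stroke) {
        strokes[stroke.id] = stroke
        let range = cellRange(for: stroke.boundingBox)
        for x in range.x {
            for y in range.y {
                cells[SIMD2(x, y), default: []].append(stroke.id)
            }
        }
    }

    // All strokes whose bounding box intersects the visible rect.
    func query(visible: CGRect) -> [Stroke] {
        var ids = Set<Int>()
        let range = cellRange(for: visible)
        for x in range.x {
            for y in range.y {
                ids.formUnion(cells[SIMD2(x, y)] ?? [])
            }
        }
        return ids.compactMap { strokes[$0] }.filter { $0.boundingBox.intersects(visible) }
    }
}
```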
Note it's easy to over-engineer this (I've done that). The GPU is fast. You only need to do some crude CPU pre-processing to get the number of vertices down to something tractable.
Also: is the very large number of vertices because you want smooth curves made from very short straight segments? If so, consider storing fewer vertices and using the GPU to draw the curves; search for "GPU quadratic bezier".
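The gist of that approach: each curve segment is stored as three control points, and the GPU evaluates B(t) = (1−t)²·p0 + 2(1−t)t·c + t²·p1 per vertex, so a long smooth stroke costs a handful of control points instead of thousands of segment vertices. A Swift sketch of the evaluation (on the GPU the same few lines would sit in your vertex shader, with t derived from the vertex index):

```swift
import simd

// Quadratic bezier: three stored control points replace a whole run
// of short straight segments.
struct QuadCurve {
    var p0: SIMD2<Float>   // start point
    var c:  SIMD2<Float>   // control point
    var p1: SIMD2<Float>   // end point

    // B(t) = (1-t)^2 * p0 + 2(1-t)t * c + t^2 * p1, for t in [0, 1].
    func point(at t: Float) -> SIMD2<Float> {
        let u = 1 - t
        return u * u * p0 + 2 * u * t * c + t * t * p1
    }
}

// CPU-side check: 32 evaluated points from just 3 stored ones.
let curve = QuadCurve(p0: [0, 0], c: [50, 100], p1: [100, 0])
let samples = (0...31).map { curve.point(at: Float($0) / 31) }
```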