Using the same texture for both input & output of a fragment shader

Hello,

This exact question was already asked in this forum (8 years ago) but I can't find a definitive answer:

Does Metal allow using the same color texture as both an input and output (color attachment) of a fragment shader? Is the behavior defined somewhere?

I believe this results in undefined behavior under both DirectX and OpenGL, so I'd assume the same for Metal. But then why doesn't Metal warn me about this, as it does about so many other "misconfigurations"? It also seems to work correctly in my case, as I found out by accident.

Would love to get a clarification!

Thanks ahead!

Answered by endecotp in 811761022


The accepted answer didn't suffice?

Are you also working with a depth peeling algorithm or is this a more general concern?

What are you trying to do?

Depends on the platform. GL allows reading/writing the same texture. ES on iOS (and some Android devices) can also read from a buffer that's bound. Metal has the same functionality, but AFAIK not on macOS with Intel GPUs, only on iOS and Apple Silicon.

You can always ping-pong, but that may be costly. It's not that hard for a TBDR GPU to read the pixels in its 32x32 tile, but there are a lot of ops being processed, and it can really only provide the color outside of the render pass. There are, though, ways to read the subpixels and pixels of a tile.

Thank you both for your answers, however, I'm afraid they're assuming knowledge I'm lacking. This is my first foray into graphics programming so I suffer from the not knowing what I don't know syndrome. Let me make it clear what I'm trying to do:

The setup: I'm working on a 2D graphics app targeting only iOS devices, using the latest Metal. Probably should have mentioned that earlier!

What I'm trying to do (simplifying a bit) is blend multiple color layers using my own blending functions into a "master" texture. The implementation is a bunch of single-quad render passes, one per layer, where I use a fragment shader to blend the master texture (initially cleared) with the current layer. I output the result to the same master texture. Think reduce(blend, layers).

So what we have is a fragment shader taking in a bunch of inputs, among them the master texture for reading, and outputting to that same master texture as an output color attachment. It only ever reads and writes to the same pixel location (so definitely same tile, IIUC).

I realize I could avoid this by ping-ponging between 2 master textures, but that's exactly my question: am I supposed to? Why? Would it be more efficient for some reason? I'm trying to learn here :)
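For what it's worth, the ping-pong alternative would look roughly like this on the host side in Swift. This is only a sketch: makeMasterTexture, layers, commandBuffer, and blendPipeline are hypothetical names standing in for whatever the app already has.

```swift
// Two "master" textures; each pass reads one and writes the other.
var readTex  = makeMasterTexture()   // hypothetical helper
var writeTex = makeMasterTexture()

for layer in layers {
    let desc = MTLRenderPassDescriptor()
    desc.colorAttachments[0].texture = writeTex
    desc.colorAttachments[0].loadAction = .dontCare   // fully overwritten each pass
    desc.colorAttachments[0].storeAction = .store

    let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: desc)!
    encoder.setRenderPipelineState(blendPipeline)
    encoder.setFragmentTexture(readTex, index: 0)      // result so far
    encoder.setFragmentTexture(layer.texture, index: 1) // current layer
    encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
    encoder.endEncoding()

    swap(&readTex, &writeTex)   // ping-pong for the next pass
}
// After the loop, readTex holds the final blended result.
```

The cost is an extra texture's worth of memory and, on a TBDR GPU, a full load/store of the previous result each pass, which is what the framebuffer-fetch approach avoids.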

Thanks ahead!

Accepted Answer

What I'm trying to do (simplifying a bit) is blend multiple color layers using my own blending functions into a "master" texture.

So what we have is a fragment shader taking in a bunch of inputs, among them the master texture for reading, and outputting to that same master texture as an output color attachment.

You can do your own blending in a fragment shader without this complexity. A fragment shader can read the current value of the framebuffer ("color attachment") pixel without having to make it an input texture (on iOS). In OpenGL ES, this is defined by GL_EXT_shader_framebuffer_fetch (https://registry.khronos.org/OpenGL/extensions/EXT/EXT_shader_framebuffer_fetch.txt). In Metal, I believe that you need to add [[color(0)]] to a fragment shader input - see table 5.5 of the spec at https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf - but I've never tried this myself.
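For the curious, a minimal sketch of what that might look like in MSL. This is untested and the names are illustrative; a plain "over" blend stands in for the app's own blend function.

```metal
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    float2 uv;
};

// dst is the current value of color attachment 0, read directly
// (framebuffer fetch) instead of binding the master texture as input.
fragment float4 blendLayer(VertexOut in           [[stage_in]],
                           float4 dst             [[color(0)]],
                           texture2d<float> layer [[texture(0)]],
                           sampler s              [[sampler(0)]])
{
    float4 src = layer.sample(s, in.uv);
    // Stand-in for a custom blend: premultiplied "over".
    return src + dst * (1.0 - src.a);
}
```

This requires a GPU family that supports reading the color attachment in the fragment stage (iOS / Apple silicon), which matches the setup described above.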

Thank you @endecotp! That is great to know, and seems to work on my iPad, though it fails on the iOS simulator (crashes when creating the pipeline with "reading from a rendertarget is not supported").

I will definitely be using this technique :)

Still curious about my original question though, just for learning purposes, and if perhaps I'd like to read adjacent pixels...

it fails on the iOS simulator

Is that on an Intel or an ARM Mac ?

Is that on an Intel or an ARM Mac ?

ARM. M1 MacBook Air. I found other people who ran into this iOS Simulator limitation here.

