Using MPSTemporaryImage across render and compute command encoders — are fences needed?

Hi

A question regarding resource tracking.


Consider the case where you have a single command buffer in which an MPSUnaryImageKernel writes to an MPSTemporaryImage. Further down the same command buffer, an MTLRenderCommandEncoder binds that MPSTemporaryImage's .texture as a texture in a fragment shader.
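For concreteness, here is a sketch of the encoding sequence I mean. I've arbitrarily picked MPSImageGaussianBlur as the unary kernel, and I'm assuming the device, queue, render pipeline, and render pass descriptor are set up elsewhere — this is just to illustrate the ordering within one command buffer, not a complete implementation:

```swift
import Metal
import MetalPerformanceShaders

// One command buffer: MPS kernel writes a temporary image,
// then a render pass samples it. Fence needed in between?
func encodeFrame(device: MTLDevice,
                 commandQueue: MTLCommandQueue,
                 renderPassDescriptor: MTLRenderPassDescriptor,
                 renderPipelineState: MTLRenderPipelineState,
                 sourceTexture: MTLTexture) {
    guard let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // Temporary image backing the intermediate result.
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: sourceTexture.pixelFormat,
        width: sourceTexture.width,
        height: sourceTexture.height,
        mipmapped: false)
    desc.usage = [.shaderRead, .shaderWrite]
    let tmp = MPSTemporaryImage(commandBuffer: commandBuffer,
                                textureDescriptor: desc)

    // 1. The MPSUnaryImageKernel writes into the temporary image.
    let blur = MPSImageGaussianBlur(device: device, sigma: 4.0)
    blur.encode(commandBuffer: commandBuffer,
                sourceTexture: sourceTexture,
                destinationTexture: tmp.texture)

    // 2. Later in the same command buffer, a render pass reads tmp.texture.
    guard let renderEncoder = commandBuffer.makeRenderCommandEncoder(
        descriptor: renderPassDescriptor) else { return }
    renderEncoder.setRenderPipelineState(renderPipelineState)
    renderEncoder.setFragmentTexture(tmp.texture, index: 0)
    renderEncoder.drawPrimitives(type: .triangle,
                                 vertexStart: 0, vertexCount: 3)
    renderEncoder.endEncoding()

    // The render pass does not decrement readCount automatically,
    // so release the temporary image's backing store by hand.
    tmp.readCount = 0
    commandBuffer.commit()
}
```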


Do you need to set a fence between the encoding of the MPS kernel and the render pass?


What about using the MPSTemporaryImage in a compute kernel — do you need fences there? And what about chained MPSUnaryImageKernels that use each other's output as input?
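By "chained" I mean something like the following sketch — two MPS kernels back to back, the second consuming the first one's temporary image. The specific kernels (Gaussian blur feeding a Sobel filter) are just placeholders, and I'm relying on the default readCount of 1 covering the single MPS read of the intermediate:

```swift
import Metal
import MetalPerformanceShaders

// Chained MPSUnaryImageKernels: kernel 2 consumes kernel 1's output.
// Assumes `device`, `commandBuffer`, and the `src`/`dst` textures
// are created elsewhere.
func encodeChain(device: MTLDevice,
                 commandBuffer: MTLCommandBuffer,
                 src: MTLTexture,
                 dst: MTLTexture) {
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: src.pixelFormat,
        width: src.width, height: src.height,
        mipmapped: false)
    desc.usage = [.shaderRead, .shaderWrite]

    // Default readCount of 1 covers the single MPS read below.
    let tmp = MPSTemporaryImage(commandBuffer: commandBuffer,
                                textureDescriptor: desc)

    let blur = MPSImageGaussianBlur(device: device, sigma: 2.0)
    let sobel = MPSImageSobel(device: device)

    // Kernel 1 writes the temporary image...
    blur.encode(commandBuffer: commandBuffer,
                sourceTexture: src, destinationTexture: tmp.texture)
    // ...and kernel 2 immediately reads it. Fence needed in between,
    // or does MPS handle the hazard internally?
    sobel.encode(commandBuffer: commandBuffer,
                 sourceTexture: tmp.texture, destinationTexture: dst)
}
```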