We are implementing a `CIImageProcessorKernel` that uses an `MTLRenderCommandEncoder` to perform mesh-based rendering into the output's `metalTexture`. This works on iOS but crashes on macOS, because there the texture's usage does not specify `renderTarget`. This is not consistent, though: sometimes the output's texture can be used as a `renderTarget`, but sometimes not. It seems there are both kinds of textures in Core Image's internal texture cache, and which one is used depends on the order in which the filters are executed.
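For reference, here is roughly what our `process` implementation looks like, reduced to a minimal sketch (the class name `MeshRenderKernel`, the clear color, and the commented-out pipeline setup and draw calls are placeholders, not our actual code):

```swift
import CoreImage
import Metal

class MeshRenderKernel: CIImageProcessorKernel {
    override class func process(with inputs: [CIImageProcessorInput]?,
                                arguments: [String: Any]?,
                                output: CIImageProcessorOutput) throws {
        guard let commandBuffer = output.metalCommandBuffer,
              let destination = output.metalTexture else { return }

        // On macOS this sometimes fails: the texture Core Image hands us
        // was created without .renderTarget in its usage, and Metal's
        // validation layer asserts as soon as we bind it as an attachment.
        assert(destination.usage.contains(.renderTarget))

        let descriptor = MTLRenderPassDescriptor()
        descriptor.colorAttachments[0].texture = destination
        descriptor.colorAttachments[0].loadAction = .clear
        descriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
        descriptor.colorAttachments[0].storeAction = .store

        guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor) else { return }
        // encoder.setRenderPipelineState(...)  // mesh pipeline omitted
        // encoder.drawPrimitives(...)          // mesh draw calls omitted
        encoder.endEncoding()
    }
}
```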
So far we have only observed this on macOS (on different Macs, including M1 machines and the macOS 12 beta), but not on iOS (also not on an M1 iPad).
We would expect to always be able to use the output's texture as a render target so that we can bind it as a color attachment of the render pass.
Is there some way to configure a `CIImageProcessorKernel` to always get output textures with `renderTarget` usage? Or do we really need to render into a temporary texture and blit the result into the output texture? This would be a huge waste of memory and time…
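The workaround we would like to avoid would look roughly like this (a sketch reusing `destination` and `commandBuffer` from the snippet above; it adds an extra texture allocation and a copy per invocation):

```swift
// Allocate a private texture with explicit .renderTarget usage.
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: destination.pixelFormat,
                                                    width: destination.width,
                                                    height: destination.height,
                                                    mipmapped: false)
desc.usage = [.renderTarget, .shaderRead]
desc.storageMode = .private
guard let temp = commandBuffer.device.makeTexture(descriptor: desc) else { return }

// ... encode the mesh render pass into `temp` instead of `destination` ...

// Copy the result into the output texture.
if let blit = commandBuffer.makeBlitCommandEncoder() {
    // copy(from:to:) requires matching size and pixel format and is
    // available from macOS 10.15 / iOS 13; older targets need the
    // explicit-origin copy(from:sourceSlice:...) variant.
    blit.copy(from: temp, to: destination)
    blit.endEncoding()
}
```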