Same thing here. It seems to be about read_write support; plain write is supported. Maybe there was previously no warning and read_write was silently treated as just write where that was possible. But I'm fairly sure I used a texture in read_write mode, and actually both read from and wrote to it in a compute kernel, on my iPad Pro (1st Gen.) on iOS 12. So I am confused about what is going on here.
BTW: Hardware is an iPad Pro (1st Gen.) on iPadOS 13.1.2, using Xcode 11.2 Beta.
I am seeing the same issue on my MBP with a single internal GPU (Intel(R) Iris(TM) Plus Graphics 640). Read-write access with the RGBA32Float pixel format was working on macOS Mojave with Xcode 10.3, but since upgrading to Catalina and Xcode 11 I am getting the following error: "Shader uses texture(output) as read-write, but hardware does not support read-write texture of this pixel format." Looks like a regression to me.
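For anyone hitting this, it may help to query the device's read-write texture tier at runtime before choosing a code path. This is a minimal sketch (not from the posts above): `readWriteTextureSupport` is a real `MTLDevice` property, and as I understand the Metal feature tables, tier 1 only covers the R32 formats while tier 2 is needed for formats like rgba32Float. The function name is my own:

```swift
import Metal

// Returns true if the device can use access::read_write with
// rgba32Float-class formats (requires tier 2); tier 1 devices
// only support read_write on r32Float, r32Uint, and r32Sint.
func supportsRGBA32FloatReadWrite(_ device: MTLDevice) -> Bool {
    switch device.readWriteTextureSupport {
    case .tier2:
        return true
    default:
        return false
    }
}

if let device = MTLCreateSystemDefaultDevice() {
    print("read-write tier 2:", supportsRGBA32FloatReadWrite(device))
}
```

If this returns false, a fallback such as the read/write texture-pair workaround below (or ping-ponging between two textures) is probably the safer route.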
I'm experimenting with a possible workaround: pass the texture in twice, once as 'read' and once as 'write'. My shader function looks like this now:
kernel void merge_layer(texture2d<half, access::read> inTexture [[texture(0)]],
                        texture2d<half, access::read> outTexture [[texture(1)]],
                        texture2d<half, access::write> outTexture2 [[texture(2)]],
                        uint2 gid [[thread_position_in_grid]])
The Swift code passes the same texture at indices 1 and 2:
let cenc = commandBuffer.makeComputeCommandEncoder()!
cenc.setTexture(layerTexture, index: 1)
cenc.setTexture(layerTexture, index: 2)
It seems to work so far, but I haven't tested it much, and it makes me nervous. I don't know whether that kind of memory aliasing will cause problems later.