iOS 13, iPad Pro now says hardware does not support read-write texture?

I have some code that used to run on my iPad Pro. Today I compiled it for iOS 13, with Xcode 11, and I get errors like this:


validateComputeFunctionArguments:834: failed assertion `Compute Function(merge_layer): Shader uses texture(outTexture[1]) as read-write, but hardware does not support read-write texture of this pixel format.'


The pixel format is showing as `MTLPixelFormatBGRA8Unorm`. That's what I expected.


The debugger says the device has no support for writeable textures.


(lldb) p device.readWriteTextureSupport

(MTLReadWriteTextureTier) $R25 = tierNone
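
The same check done at runtime rather than in lldb looks roughly like this in Swift (readWriteTextureSupport is available from iOS 11; `device` here is the MTLDevice in question):

    if #available(iOS 11.0, *) {
        switch device.readWriteTextureSupport {
        case .tierNone:   print("no read-write texture support")
        case .tier1:      print("tier 1")
        case .tier2:      print("tier 2")
        @unknown default: print("unknown tier")
        }
    }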


Did some devices lose support for texture writing in iOS 13?


Rob


Replies

Same thing here, but it has to do with read_write support; write is supported. Maybe there was no warning before, and read_write was interpreted as just write in cases where that was possible. But I think I used a texture in read_write mode and actually read from and wrote to it in a compute kernel on my iPad Pro (1st gen.) on iOS 12, so I am confused about what is going on here.


BTW: hardware is an iPad Pro (1st gen.) on iPadOS 13.1.2, using Xcode 11.2 beta.

I am seeing the same issue on my MBP with a single internal GPU (Intel(R) Iris(TM) Plus Graphics 640). Read-write with the RGBAFloat32 pixel format was working on macOS Mojave and Xcode 10.3, but since upgrading to Catalina and Xcode 11 I am getting the following error: "Shader uses texture(output[0]) as read-write, but hardware does not support read-write texture of this pixel format." Looks like a regression to me.

I'm experimenting with a possible workaround: pass the texture in twice, once as 'read' and once as 'write'. My shader function now looks like this:


kernel void
merge_layer(texture2d<half, access::read> inTexture [[texture(0)]],
            texture2d<half, access::read> outTexture [[texture(1)]],
            texture2d<half, access::write> outTexture2 [[texture(2)]],
            uint2 gid [[thread_position_in_grid]])


The Swift code passes the same texture for indices 1 and 2 ...


let cenc = commandBuffer.makeComputeCommandEncoder()!
...
cenc.setTexture(layerTexture, index: 1)
cenc.setTexture(layerTexture, index: 2)


It seems to work so far, but I haven't tested it much, and it makes me nervous. I don't know if that kind of memory aliasing will cause problems later.

From https://developer.apple.com/library/archive/documentation/Miscellaneous/Conceptual/MetalProgrammingGuide/WhatsNewiniOS10tvOS10andOSX1012/WhatsNewiniOS10tvOS10andOSX1012.html :

"Note: It is invalid to declare two separate texture arguments (one read, one write) in a function signature and then set the same texture for both."


You may run into issues, especially around synchronization of reads and writes between different kernel threads. I'm surprised that the Metal API validation isn't complaining.

Unless you can make sure that all the hardware and OS versions you want to support have read-write texture support, you need to provide an implementation that doesn't need it.
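
A minimal sketch of that kind of branching at pipeline-creation time, where "merge_layer_rw" and "merge_layer_split" are hypothetical names for a read-write kernel and a fallback that takes separate read and write textures:

    import Metal

    // Sketch only: picks whichever compute kernel variant the device can actually run.
    func makeMergePipeline(device: MTLDevice, library: MTLLibrary) throws -> MTLComputePipelineState {
        let hasRWTextures = device.readWriteTextureSupport != .tierNone
        let name = hasRWTextures ? "merge_layer_rw" : "merge_layer_split"
        guard let fn = library.makeFunction(name: name) else {
            fatalError("kernel \(name) not found in the Metal library")
        }
        return try device.makeComputePipelineState(function: fn)
    }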

Thanks, I hadn't seen that. I guess I'll pay for a tech support request and see if they can explain what's going on. I will post back here if I get an answer.

According to developer tech support, the current behavior is correct. Some of these devices, like 2nd generation iPad Pro, don't support read-write textures. I was a little confused that they wouldn't admit that it was behaving differently before, but I don't have time to do all the extra work to prove that to them.


You can see the "function texture read-write" feature in the Metal feature table here: https://developer.apple.com/metal/Metal-Feature-Set-Tables.pdf


You can detect the function texture read-write support like this:


if #available(iOS 13.0, *) {
    if self.device.supportsFamily(.common3) || self.device.supportsFamily(.apple4) {
        print("supported")
    } else {
        print("not supported")
    }
} else {
    print("Needs iOS 13 or higher (supportsFamily is an iOS 13 API)")
}
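
The feature table also breaks read-write support down by pixel format per tier. Here's a rough Swift sketch of that mapping as I read the table (the format lists are my own reading of the PDF, so verify them there before relying on this):

    func deviceSupportsReadWrite(of format: MTLPixelFormat, on device: MTLDevice) -> Bool {
        // Formats the feature table lists for read-write access, per tier (my reading
        // of the PDF - double-check). Note that BGRA8Unorm is not listed even at
        // tier 2, which matches the original assertion failure.
        let tier1: Set<MTLPixelFormat> = [.r32Float, .r32Uint, .r32Sint]
        let tier2Extra: Set<MTLPixelFormat> = [
            .rgba32Float, .rgba32Uint, .rgba32Sint,
            .rgba16Float, .rgba16Uint, .rgba16Sint,
            .rgba8Unorm, .rgba8Uint, .rgba8Sint,
            .r16Float, .r16Uint, .r16Sint,
            .r8Unorm, .r8Uint, .r8Sint
        ]
        switch device.readWriteTextureSupport {
        case .tier1: return tier1.contains(format)
        case .tier2: return tier1.contains(format) || tier2Extra.contains(format)
        default:     return false
        }
    }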

Hi rnikander,


We're not sure why you're confused, as there is nothing here to admit. You've claimed to have used the feature on a device that doesn't support it, yet have never provided any actual evidence of that. That said, the burden of proof remains unambiguously yours.


Otherwise, thank you for sharing this information.


Here is a general article on the topic that all Metal developers should find helpful.

"Detecting GPU Features and Metal Software Versions" - https://developer.apple.com/documentation/metal/mtldevice/detecting_gpu_features_and_metal_software_versions

Hi 4k4,


Here’s why I was confused. I gave Apple sample code that produces the error on iOS 13. Apple could run it on iOS 11 (maybe compiled with Xcode 9), on the device in question (iPad Pro gen 2), and get the proof (for or against), but they didn’t do that. Or at least, they didn’t tell me they did.


I imagined it was relatively easy for Apple to do such a thing. For me as a solo developer it’s certainly not easy; I’d have to wipe my only device or buy another, and download old Xcodes. Maybe I assumed incorrectly that Apple would be set up for running such a test, and that three people here saying they have a similar problem would be motive enough to do it. But if that's not the case, then I totally understand why neither I nor Apple wants to spend time testing this!


It's not a big deal, since we see how it works now and can move forward. I'm just curious what happened.


take care,

R

Let me add that we have run into exactly the same situation with a number of compute shaders that use read/write textures: they seem to work just fine on hardware where this is in fact not officially supported (both Mac and iOS devices). I can only guess that whether it actually works depends on the exact code.

This is happening in Apple's own terrain sample code on macOS, on the latest 16-inch Intel MBP with an AMD 5500M. The Metal texture loader loads an L16 PNG into an RG16Unorm texture, since it provides no control over the MTLPixelFormat. Then, when you click to modify the terrain with the mouse, the app crashes in the validation layer.

The textures say they support All, function texture read-write is true, and readWriteTexture support is Tier2. If Apple can't write a correct example that works, then we probably can't either.

MTLTextureDescriptor *texDesc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRG16Unorm // MTLPixelFormatR16Unorm <- should be this
                                                       width:heightMapWidth
                                                      height:heightMapHeight
                                                   mipmapped:NO];

2021-08-18 22:43:41.385581-0700 DynamicTerrainWithArgumentBuffers[46069:3553941] sample running on: AMD Radeon Pro 5500M

validateComputeFunctionArguments:854: failed assertion `Compute Function(TerrainKnl_UpdateHeightmap): Shader uses texture(heightMap[0]) as read-write, but hardware does not support read-write texture of this pixel format.'


This seems to be a bad bug in the Metal validation layer: it flags this RG16Unorm texture as unsupported, but it is supported. Turning off Metal validation fixes that sample app for me.

We have posted a fix to the sample which corrects the behavior on AMD GPUs and works with the validation layer. It handles RG16 and R16 textures correctly as well. The sample now uses R32, which is a supported Tier 1 format. Technically, R16Uint is supported but not R16Unorm. Although the sample worked with the validation layer turned off, it was relying on undefined behavior, so we corrected it to align with our documentation.
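
Roughly, the host-side texture creation then becomes something like this (a Swift sketch of what the R32 change implies, not the actual sample code; heightMapWidth/heightMapHeight as in the Objective-C snippet above):

    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .r32Float, // one of the Tier 1 read-write formats
                                                        width: heightMapWidth,
                                                        height: heightMapHeight,
                                                        mipmapped: false)
    desc.usage = [.shaderRead, .shaderWrite] // needed for access::read_write in the kernel
    let heightMap = device.makeTexture(descriptor: desc)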