How did you measure that? nextDrawable() may block if previously requested drawables are still in use, which means you're asking too much of the GPU and it can't hand drawables back quickly enough.
You should first profile with Metal System Trace in Instruments: https://developer.apple.com/documentation/metal/performance_tuning/using_metal_system_trace_in_instruments_to_profile_your_app
This will show when nextDrawable is blocked.
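If you want a quick CPU-side confirmation before reaching for Instruments, a minimal sketch (assuming a `metalLayer: CAMetalLayer` you already own) is to time the call:

```swift
import QuartzCore

func acquireDrawable(from metalLayer: CAMetalLayer) -> CAMetalDrawable? {
    // Rough check only: time how long nextDrawable() blocks on the CPU.
    let start = CACurrentMediaTime()
    let drawable = metalLayer.nextDrawable()
    let blocked = CACurrentMediaTime() - start
    print("nextDrawable() blocked for \(blocked * 1000) ms")
    return drawable
}
```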
Do your compute commands depend on each other?
What if you set MTLHazardTrackingMode.untracked on the resources bound to your compute commands? It'll probably produce incorrect output, but it would at least show whether the commands execute in parallel.
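If you want to try that, a minimal sketch (assuming a `device: MTLDevice`) would be to pass the untracked option when allocating the resources:

```swift
// Untracked resources: Metal no longer serializes commands that access them,
// so two compute commands touching `buffer` may now overlap on the GPU.
let options: MTLResourceOptions = [.storageModeShared, .hazardTrackingModeUntracked]
let buffer = device.makeBuffer(length: 4096, options: options)
```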
CAMetalLayer provides MTLTexture objects through -[CAMetalLayer nextDrawable], as you noticed. By default these MTLTexture objects are created with MTLTextureUsageRenderTarget as their only usage flag, so you can only write to them through a render pipeline. This might be what you need anyway, as it makes it easy to apply some transform to your input texture while rendering it into the drawable texture.
In case you can guarantee that your RGBA data's width and height will always match the size of the drawable texture, and you don't need to apply any transformation, things can be a bit simpler: you can set CAMetalLayer.framebufferOnly to false so that the provided textures also become writable. You can then copy from your own MTLTexture (with shared or managed MTLStorageMode) to the drawable texture with a simple blit command instead of a render command, which removes the need for a custom shader. The pipeline would look as follows (see the sketch after the list):
At init:
create 3 staging textures with shared/managed storage mode
At each frame:
pick current staging texture and fill it with the contents rendered by Skia
ask for the drawable texture
schedule blit from your staging texture to your drawable texture
commit
increase staging texture index
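Here is a minimal sketch of that pipeline; `metalLayer`, `commandQueue`, a `stagingTextures` array of 3 shared-storage textures, and the `frameIndex` counter are illustrative names, not a drop-in implementation:

```swift
import Metal
import QuartzCore

metalLayer.framebufferOnly = false // make drawable textures blit-writable
var frameIndex = 0

func drawFrame() {
    let staging = stagingTextures[frameIndex % stagingTextures.count]
    // ... fill `staging` with the contents rendered by Skia ...

    guard let drawable = metalLayer.nextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }

    // Copy the staging texture into the drawable texture, no shader needed.
    blit.copy(from: staging, sourceSlice: 0, sourceLevel: 0,
              sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
              sourceSize: MTLSize(width: staging.width, height: staging.height, depth: 1),
              to: drawable.texture, destinationSlice: 0, destinationLevel: 0,
              destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blit.endEncoding()

    commandBuffer.present(drawable)
    commandBuffer.commit()
    frameIndex += 1
}
```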
One thing I don't know is the MTLStorageMode of the drawable texture: if it's already shared/managed, you don't need the staging texture and can fill the drawable texture directly instead of going through a staging texture fill + blit command.
Bonus: as an additional optimization, in case the drawable texture isn't shared/managed, if you detect that your MTLDevice is an Apple GPU you don't need to render with Skia to some buffer before filling the MTLTexture: you can render with Skia directly into the MTLTexture's backing memory. To do that, create MTLBuffer objects of the appropriate size with shared storage mode, then use -[MTLBuffer newTextureWithDescriptor:offset:bytesPerRow:] to create each staging texture from them. Now if you render with Skia into MTLBuffer.contents, the result is directly available when you use the MTLTexture in your render or blit command. (If you use managed rather than shared storage, don't forget -[MTLBuffer didModifyRange:] after writing to MTLBuffer.contents; shared buffers don't need it.)
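A sketch of that buffer-backed texture setup, with illustrative `width`/`height` values; double-check the alignment handling against your device:

```swift
// Create a staging texture whose storage is a shared MTLBuffer, so Skia can
// render straight into buffer.contents() with no extra copy (Apple GPUs).
let width = 1024, height = 768

let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .rgba8Unorm, width: width, height: height, mipmapped: false)
descriptor.storageMode = .shared

// Round bytesPerRow up to the device's linear-texture alignment.
let alignment = device.minimumLinearTextureAlignment(for: descriptor.pixelFormat)
let bytesPerRow = ((width * 4 + alignment - 1) / alignment) * alignment

let buffer = device.makeBuffer(length: bytesPerRow * height,
                               options: .storageModeShared)!
let stagingTexture = buffer.makeTexture(descriptor: descriptor,
                                        offset: 0, bytesPerRow: bytesPerRow)!
// Skia now renders into buffer.contents(); stagingTexture sees the result.
```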
Just from the above sample code I can't tell, because the only render pass shown uses drawable.texture, and the drawable would already configure that texture properly. 0x01 is MTLTextureUsageShaderRead, so you might have another render pipeline somewhere trying to render into a texture with that usage. When you get this error, I assume the debugger shows the call stack for the submission of the command buffer that has this issue.
I suppose you mean "when is the GPU done using the texture that was filled with data from Skia"? In that case the answer is: when the command buffer that references that texture has completed (which you can detect through -[MTLCommandBuffer addCompletedHandler:]). Before that, it's not safe to fill the texture with new data. That's why the pipeline above uses 3 staging textures: this way you can keep filling textures while the GPU is using the previously filled ones. I picked 3 because that's the maximum for CAMetalLayer.maximumDrawableCount, so you can't need more than 3 staging textures at a time.
By the way, this means that your current usage of _textureMutex isn't useful as is: it doesn't prevent the GPU from reading the texture while you write to it when scheduling the next frame.
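The usual replacement for that mutex is a semaphore tied to addCompletedHandler. A sketch, reusing the hypothetical names from the pipeline above:

```swift
import Dispatch

// Allow at most 3 frames in flight, matching the 3 staging textures.
let inflightSemaphore = DispatchSemaphore(value: 3)

func drawFrame() {
    // Blocks only if the GPU still uses the texture we're about to refill.
    inflightSemaphore.wait()

    let staging = stagingTextures[frameIndex % stagingTextures.count]
    // ... safe to overwrite `staging` with new Skia output here ...

    let commandBuffer = commandQueue.makeCommandBuffer()!
    // ... encode the blit to the drawable as shown earlier ...
    commandBuffer.addCompletedHandler { _ in
        inflightSemaphore.signal() // GPU is done with this frame's texture
    }
    commandBuffer.commit()
    frameIndex += 1
}
```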
Look for "Maximum 2D texture width and height" in https://developer.apple.com/metal/Metal-Feature-Set-Tables.pdf
You can check the GPU family through the MTLDevice API.
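For example (a sketch; the exact limits come from the feature set tables linked above):

```swift
// Pick the 2D texture size limit for the current GPU family.
let maxTextureSize: Int
if device.supportsFamily(.apple3) || device.supportsFamily(.mac2) {
    maxTextureSize = 16384
} else {
    maxTextureSize = 8192
}
```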
As for how to work around this limitation, I guess you only have two choices:
use a smaller texture size
use several textures and display them next to each other like tiles
How did you create the MTLTexture object? It says that it has a null depth. A 2D texture is expected to have a depth of 1.
See https://developer.apple.com/documentation/metal/mtltexturedescriptor/1516298-depth
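For reference, a descriptor built through the 2D convenience initializer already has the expected depth of 1:

```swift
// texture2DDescriptor sets textureType = .type2D and depth = 1 for you.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .bgra8Unorm, width: 1024, height: 1024, mipmapped: false)
assert(descriptor.depth == 1)
let texture = device.makeTexture(descriptor: descriptor)
```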
However, I would expect this kind of error to be caught by Metal API Validation; did you disable it?
https://developer.apple.com/documentation/metal/diagnosing_metal_programming_issues_early
I'm not sure everything here will help, and I don't know about WebKit's offscreen rendering, but here are at least three points I can mention:
Don't take the iPhone simulator as a reference for your benchmark; use a real device.
Being on iPhone, you can take advantage of the unified memory architecture and create textures without doing any copy, provided the source data is properly allocated and aligned. In particular, see https://developer.apple.com/documentation/metal/mtldevice/1433382-makebuffer and https://developer.apple.com/documentation/metal/mtlbuffer/1613852-maketexture. This means that the CGImage buffers you render into must have been allocated by you, following the above constraints, and that the CGImage must only wrap your pointers, not copy your data into its own buffers (I'm not sure CGImage can guarantee that, so you might need to render into something other than a CGImage). See the sketch after this list.
If the size of the texture doesn't change, you can reuse the texture, but make sure it's not in use by Metal while you write to it: either wait for the MTLCommandBuffer to complete, or create several buffers/textures that you rotate through over time to account for the triple buffering of your rendering.
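Here's a sketch of the zero-copy path from point 2, with illustrative sizes; the pointer must be page-aligned and the length a multiple of the page size:

```swift
import Foundation
import Metal

let width = 1024, height = 768
let bytesPerRow = width * 4 // check this meets the device's alignment rules

// Page-aligned allocation that both your renderer and Metal can share.
let pageSize = Int(getpagesize())
let length = ((bytesPerRow * height) + pageSize - 1) / pageSize * pageSize
var memory: UnsafeMutableRawPointer?
posix_memalign(&memory, pageSize, length)

// Wrap the memory without copying it.
let buffer = device.makeBuffer(bytesNoCopy: memory!, length: length,
                               options: .storageModeShared,
                               deallocator: { pointer, _ in free(pointer) })!

// Render into `memory`, then view the same bytes as a texture.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
descriptor.storageMode = .shared
let texture = buffer.makeTexture(descriptor: descriptor,
                                 offset: 0, bytesPerRow: bytesPerRow)!
```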
For profiling of your GPU pipeline, you have Metal System Trace in Instruments: https://developer.apple.com/documentation/metal/performance_tuning/using_metal_system_trace_in_instruments_to_profile_your_app
For profiling of the shaders themselves, along with metrics about what is limiting their speed, you'll want to use GPU frame capture in Xcode: https://developer.apple.com/documentation/metal/debugging_tools
Note that GPU frame capture can be triggered manually from Xcode when you have frames displayed, but in your case you can also use MTLCaptureManager in code to start and stop the capture around your compute workload. So there's no need for a graphics pipeline to use these tools.
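A sketch of that programmatic capture, assuming `device` and your existing compute encoding:

```swift
let captureManager = MTLCaptureManager.shared()
let captureDescriptor = MTLCaptureDescriptor()
captureDescriptor.captureObject = device

do {
    try captureManager.startCapture(with: captureDescriptor)
} catch {
    print("Failed to start capture: \(error)")
}

// ... encode and commit the compute command buffers you want to inspect ...

captureManager.stopCapture()
```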
You can use the Metal System Trace template in Instruments to check whether there's any GPU activity from your process or any other application. Then you can confirm whether the lost GPU power comes from your compute kernels or not.
I ran into the same issue. At least for the purpose of getting the view size, I'm relying on a parent GeometryReader instead; see https://github.com/Ceylo/FurAffinityApp/blob/main/FurAffinity/Helper%20Views/TextView.swift#L37
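In short, the workaround looks like this (sketch; `TextView` stands for the UIViewRepresentable wrapper in the linked file):

```swift
import SwiftUI

struct ContentView: View {
    var body: some View {
        GeometryReader { geometry in
            TextView() // UIViewRepresentable wrapper, see the linked file
                .frame(width: geometry.size.width, height: geometry.size.height)
        }
    }
}
```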
https://developer.apple.com/documentation/metal/mtlbuffer/1515373-length
https://developer.apple.com/documentation/metal/mtlresource/2915287-allocatedsize
I'd say what matters is the allocated size, and I suppose it could be bigger than the requested length to satisfy alignment constraints.
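A quick way to see the difference (sketch, assuming a `device`):

```swift
// allocatedSize may be rounded up from the requested length.
let buffer = device.makeBuffer(length: 1000, options: .storageModeShared)!
print(buffer.length)        // 1000: what you asked for
print(buffer.allocatedSize) // possibly more, padded for alignment
```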
How did you dispatch the work in host code?
Especially regarding the threads per threadgroup. You may want to check https://developer.apple.com/documentation/metal/calculating_threadgroup_and_grid_sizes
This can make a big difference in efficiency.
Apart from that, as Etresoft already mentioned, you should check the performance data provided by GPU Frame Capture.
Did you check https://developer.apple.com/documentation/metal/calculating_threadgroup_and_grid_sizes ?
Especially this part:
« You calculate the number of threads per threadgroup based on two MTLComputePipelineState properties. One property is maxTotalThreadsPerThreadgroup (the maximum number of threads that can be in a single threadgroup). The other is threadExecutionWidth (the number of threads scheduled to execute in parallel on the GPU). »
Looks like these properties would help.
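Concretely, the pattern from that article looks like this (sketch, assuming a 2D workload over `texture` and an existing `computeEncoder`/`pipelineState`):

```swift
// Derive the threadgroup size from the pipeline state...
let w = pipelineState.threadExecutionWidth
let h = pipelineState.maxTotalThreadsPerThreadgroup / w
let threadsPerThreadgroup = MTLSize(width: w, height: h, depth: 1)

// ...then dispatch a grid covering the whole texture. dispatchThreads
// handles non-uniform threadgroup sizes on GPUs that support them.
let threadsPerGrid = MTLSize(width: texture.width, height: texture.height, depth: 1)
computeEncoder.dispatchThreads(threadsPerGrid,
                               threadsPerThreadgroup: threadsPerThreadgroup)
```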
We only know about announced OS versions. So you know you're OK at least until iOS 15/macOS 12 are released in Fall 2021, and you'll learn about those next June.
Did you check https://developer.apple.com/documentation/metal/synchronization/synchronizing_events_between_a_gpu_and_the_cpu ?
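One of the patterns from that article, as a sketch (assuming `device` and `commandQueue`): the GPU signals a shared event and the CPU gets notified.

```swift
let event = device.makeSharedEvent()!
let listener = MTLSharedEventListener(dispatchQueue: .main)
event.notify(listener, atValue: 1) { _, value in
    print("GPU signaled value \(value)") // CPU-side callback
}

let commandBuffer = commandQueue.makeCommandBuffer()!
// ... encode GPU work ...
commandBuffer.encodeSignalEvent(event, value: 1) // GPU-side signal
commandBuffer.commit()
```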
In your simplified example, did adding an autorelease pool fix it? Just to know if it's the culprit.
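For reference, the usual shape of that fix (sketch, assuming a `metalLayer`):

```swift
func renderFrame() {
    // Without a pool, drawables returned by nextDrawable() can accumulate
    // until the enclosing autorelease pool drains.
    autoreleasepool {
        guard let drawable = metalLayer.nextDrawable() else { return }
        // ... encode, present(drawable), commit ...
    }
}
```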