Reply to CAMetalLayer.nextDrawable() is very time-consuming, take 5ms-12ms
How did you measure that? nextDrawable() may block if previously requested drawables are still being used, which means that you're asking too much from the GPU and it's not able to give the drawables back quickly enough. You should first profile with Metal System Trace in Instruments: https://developer.apple.com/documentation/metal/performance_tuning/using_metal_system_trace_in_instruments_to_profile_your_app
This will show when nextDrawable is blocked.
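In case it helps, here is a minimal sketch (the subsystem string is just a placeholder) of how you could bracket the call with os_signpost so the blocking time is visible in Instruments alongside the Metal System Trace data:

```swift
import QuartzCore
import os.signpost

// Sketch: wrap nextDrawable() in an os_signpost interval so the time spent blocked
// shows up next to the GPU tracks in Instruments. Subsystem name is a placeholder.
let signpostLog = OSLog(subsystem: "com.example.renderer", category: .pointsOfInterest)

func acquireDrawable(from layer: CAMetalLayer) -> CAMetalDrawable? {
    let signpostID = OSSignpostID(log: signpostLog)
    os_signpost(.begin, log: signpostLog, name: "nextDrawable", signpostID: signpostID)
    let drawable = layer.nextDrawable()   // may block until a previously requested drawable is released
    os_signpost(.end, log: signpostLog, name: "nextDrawable", signpostID: signpostID)
    return drawable
}
```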
Dec ’22
Reply to Draw `MTLTexture` to `CAMetalLayer`
1. CAMetalLayer provides MTLTexture objects through -[CAMetalLayer nextDrawable], as you noticed. By default these MTLTexture objects are created with MTLTextureUsageRenderTarget as their only usage flag, so you can only write to them through a render pipeline. This might be what you need anyway, as it makes it easy to apply a transform to your input texture while rendering it into the drawable texture. In case you can guarantee that your RGBA data's width and height will always match the size of the drawable texture, and you don't need to apply any transformation, things can be a bit simpler: you can set CAMetalLayer.framebufferOnly to false so that the provided textures also become writable. You can then copy from your own MTLTexture (with shared or managed MTLStorageMode) to the drawable texture with a simple blit command instead of a render command, which removes the need for a custom shader. The pipeline would look as follows (a rough sketch is included at the end of this reply):

At init:
- create 3 staging textures with shared/managed storage mode

At each frame:
- pick the current staging texture and fill it with the contents rendered by Skia
- ask for the drawable texture
- schedule a blit from your staging texture to the drawable texture
- commit
- increase the staging texture index

One thing I don't know is the MTLStorageMode of the drawable texture: in case it's already shared/managed, you don't need the staging texture and can fill the drawable texture directly instead of going through a staging texture fill + blit command.

Bonus: as an additional optimization, and in case the drawable texture isn't shared/managed, if you detect that your MTLDevice is an Apple one, you don't need to render with Skia into some buffer before filling the MTLTexture: you can render with Skia directly into the MTLTexture. To do that, create MTLBuffer objects of the appropriate size with shared storage mode, then use -[MTLBuffer newTextureWithDescriptor:offset:bytesPerRow:] to create each staging texture from them. Now if you render with Skia into MTLBuffer.contents, the data is directly available when you use the MTLTexture in your render or blit command. Just don't forget to use -[MTLBuffer didModifyRange:] after writing to MTLBuffer.contents.

2. Just from the sample code above I can't tell, because the only render pass shown uses drawable.texture, and the drawable already configures that texture properly. 0x01 is MTLTextureUsageShaderRead, so you might have another render pipeline somewhere trying to render into a texture with that usage. When you get this error, I assume the debugger shows the call stack of the command buffer being submitted that has this issue.

3. I suppose you mean "when is the GPU done using the texture that was filled with data from Skia"? In that case the answer is: when the command buffer that references that texture has completed (you can know this through -[MTLCommandBuffer addCompletedHandler:]). Before that it's not safe to fill the texture with new data again. That's why the pipeline above uses 3 staging textures: this way you can keep filling textures while the GPU is using the previously filled ones. I picked 3 because that's the maximum for CAMetalLayer.maximumDrawableCount, so you can't need more than 3 staging textures at a time. By the way, this means that your current usage of _textureMutex is not useful as is, and it also doesn't prevent the GPU from reading the texture while you write to it when scheduling the next frame.
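To make the pipeline from point 1 more concrete, here is a minimal Swift sketch of the staging-texture + blit approach. It assumes the drawable size is fixed and matches the Skia output, uses a hypothetical renderWithSkia closure for the CPU-side fill, and leaves out the completion tracking discussed in point 3.

```swift
import Metal
import QuartzCore

// Sketch only: fixed drawable size, no resize handling, no completion tracking.
final class BlitPresenter {
    let device: MTLDevice
    let queue: MTLCommandQueue
    let layer: CAMetalLayer
    private var stagingTextures: [MTLTexture] = []
    private var stagingIndex = 0

    init?(layer: CAMetalLayer) {
        guard let device = MTLCreateSystemDefaultDevice(),
              let queue = device.makeCommandQueue() else { return nil }
        self.device = device
        self.queue = queue
        self.layer = layer
        layer.device = device
        layer.framebufferOnly = false   // drawable textures become valid blit destinations

        let descriptor = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: layer.pixelFormat,
            width: Int(layer.drawableSize.width),
            height: Int(layer.drawableSize.height),
            mipmapped: false)
        descriptor.storageMode = .shared   // CPU-writable staging textures
        for _ in 0..<3 {                   // 3 matches CAMetalLayer.maximumDrawableCount
            guard let texture = device.makeTexture(descriptor: descriptor) else { return nil }
            stagingTextures.append(texture)
        }
    }

    func drawFrame(renderWithSkia: (MTLTexture) -> Void) {
        let staging = stagingTextures[stagingIndex]
        stagingIndex = (stagingIndex + 1) % stagingTextures.count

        renderWithSkia(staging)   // hypothetical closure: fill the staging texture on the CPU

        guard let drawable = layer.nextDrawable(),
              let commandBuffer = queue.makeCommandBuffer(),
              let blit = commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: staging, to: drawable.texture)   // plain blit, no custom shader
        blit.endEncoding()
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}
```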
Dec ’22
Reply to DRHT > error > MTLTextureDescriptor
Look for "Maximum 2D texture width and height" in https://developer.apple.com/metal/Metal-Feature-Set-Tables.pdf. You can check the GPU family through the MTLDevice API. As for how to work around this limitation, I guess you only have two choices:
- use a smaller texture size
- use several textures and display them next to each other like tiles
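As a rough illustration (double-check the values against the PDF for the families you actually target), you could pick the limit from the GPU family like this:

```swift
import Metal

// Sketch: derive the maximum 2D texture dimension from the GPU family, using values
// taken from Metal-Feature-Set-Tables.pdf. Verify against the PDF before relying on it.
func maxTextureDimension(for device: MTLDevice) -> Int {
    if device.supportsFamily(.apple3) || device.supportsFamily(.mac2) {
        return 16384
    }
    return 8192   // Apple1 / Apple2 (A7, A8) per the feature set tables
}
```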
Dec ’22
Reply to Using Texture in Vertex Shader Error
How did you create the MTLTexture object? It says that it has a null depth, and a 2D texture is expected to have a depth of 1, see https://developer.apple.com/documentation/metal/mtltexturedescriptor/1516298-depth. However, I would expect this kind of error to be caught by Metal API Validation, did you disable it? https://developer.apple.com/documentation/metal/diagnosing_metal_programming_issues_early
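For reference, a texture created through the usual 2D convenience descriptor keeps the default depth of 1; a quick sketch:

```swift
import Metal

// Sketch: texture2DDescriptor leaves depth at its default value of 1,
// which is what a 2D texture is expected to have.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .rgba8Unorm,
    width: 256,
    height: 256,
    mipmapped: false)
assert(descriptor.depth == 1)
let texture = MTLCreateSystemDefaultDevice()?.makeTexture(descriptor: descriptor)
```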
Oct ’22
Reply to WKWebView offscreen rendering
I'm not sure everything will help, and I don't know about WebKit offscreen rendering specifically, but here are at least three points I can mention:
- Don't take the iPhone simulator as a reference for your benchmark; use a real device.
- Being on iPhone, you can take advantage of the unified memory architecture and create textures without doing any copy, if the source data is properly allocated and aligned. In particular see https://developer.apple.com/documentation/metal/mtldevice/1433382-makebuffer and https://developer.apple.com/documentation/metal/mtlbuffer/1613852-maketexture. This means that the CGImage buffers you render into must have been allocated by you, following the above constraints, and that the CGImage must only wrap your pointers, not copy your data into its own buffers (I'm not sure whether CGImage can do that, so you might need to render into something other than a CGImage). See the sketch after this list.
- If the size of the texture doesn't change, you can reuse the texture, but make sure it's not used by Metal while you write to it: either wait for the MTLCommandBuffer to complete, or create several buffers/textures that you reuse over time to account for triple buffering of your rendering.
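Here is a rough sketch of that no-copy path; the pixel format, sizes and alignment handling are illustrative only, so check the alignment requirements reported by your device:

```swift
import Foundation
import Metal

// Sketch: allocate page-aligned memory yourself, wrap it in an MTLBuffer with
// makeBuffer(bytesNoCopy:), then expose the same memory as a texture without any copy.
func makeSharedTexture(device: MTLDevice, width: Int, height: Int) -> (MTLTexture, UnsafeMutableRawPointer)? {
    let pixelFormat = MTLPixelFormat.bgra8Unorm
    let bytesPerPixel = 4
    let alignment = device.minimumLinearTextureAlignment(for: pixelFormat)
    let bytesPerRow = ((width * bytesPerPixel + alignment - 1) / alignment) * alignment

    // bytesNoCopy requires a page-aligned pointer and a page-multiple length.
    let pageSize = Int(getpagesize())
    let length = ((bytesPerRow * height + pageSize - 1) / pageSize) * pageSize
    var memory: UnsafeMutableRawPointer?
    guard posix_memalign(&memory, pageSize, length) == 0, let pointer = memory else { return nil }

    guard let buffer = device.makeBuffer(bytesNoCopy: pointer,
                                         length: length,
                                         options: .storageModeShared,
                                         deallocator: { ptr, _ in free(ptr) }) else { return nil }

    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: pixelFormat,
                                                              width: width,
                                                              height: height,
                                                              mipmapped: false)
    descriptor.storageMode = .shared
    descriptor.usage = .shaderRead
    guard let texture = buffer.makeTexture(descriptor: descriptor, offset: 0, bytesPerRow: bytesPerRow) else { return nil }

    // Render your CPU content into `pointer` using bytesPerRow as the stride;
    // the texture sees the same memory, so no copy is needed.
    return (texture, pointer)
}
```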
Jul ’22
Reply to Execution time profiling of Metal compute kernels.
For profiling of your GPU pipeline, you have Metal System Trace in Instruments: https://developer.apple.com/documentation/metal/performance_tuning/using_metal_system_trace_in_instruments_to_profile_your_app
For profiling of the shaders themselves, along with metrics about what is limiting their speed, you'll want to use GPU frame capture in Xcode: https://developer.apple.com/documentation/metal/debugging_tools
Note that GPU frame capture can be triggered manually from Xcode when you have frames displayed, but in your case you can also use MTLCaptureManager in your code to start and stop the capture around your compute workload, so there is no need for a graphics pipeline to use these tools.
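A minimal sketch of such a programmatic capture, assuming you already have a device and some runComputeWorkload() function of your own, could look like this:

```swift
import Metal

// Sketch: programmatic GPU capture around a compute workload. GPU Frame Capture must be
// enabled for the scheme (or via MTL_CAPTURE_ENABLED=1) for startCapture to succeed.
func captureComputeWorkload(device: MTLDevice, runComputeWorkload: () -> Void) throws {
    let captureManager = MTLCaptureManager.shared()
    let captureDescriptor = MTLCaptureDescriptor()
    captureDescriptor.captureObject = device          // capture all work on this device
    try captureManager.startCapture(with: captureDescriptor)

    runComputeWorkload()                              // encode and commit your compute command buffers here

    captureManager.stopCapture()                      // the capture then opens in Xcode for inspection
}
```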
Jun ’22
Reply to Metal performance compared to OpenCL
How did you dispatch the work in host code, especially regarding the threads per threadgroup? You may want to check https://developer.apple.com/documentation/metal/calculating_threadgroup_and_grid_sizes as this can make a big difference in efficiency. Apart from that, as Etresoft already mentioned, you should check the performance data provided by GPU Frame Capture.
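For illustration, a typical 2D dispatch that covers the whole grid could look like the sketch below; the 16 x 16 threadgroup size and the function parameters are only examples, not values tuned for your kernel:

```swift
import Metal

// Sketch of a 2D dispatch following the "Calculating Threadgroup and Grid Sizes" article.
func dispatchGrid(encoder: MTLComputeCommandEncoder, gridWidth: Int, gridHeight: Int) {
    let threadsPerThreadgroup = MTLSize(width: 16, height: 16, depth: 1)
    // Round up so the whole grid is covered even when it isn't a multiple of the
    // threadgroup size; the kernel then has to bounds-check its thread position.
    let threadgroupsPerGrid = MTLSize(
        width: (gridWidth + threadsPerThreadgroup.width - 1) / threadsPerThreadgroup.width,
        height: (gridHeight + threadsPerThreadgroup.height - 1) / threadsPerThreadgroup.height,
        depth: 1)
    encoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
}
```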
Nov ’20
Reply to Number of simultaneous Metal threads
Did you check https://developer.apple.com/documentation/metal/calculating_threadgroup_and_grid_sizes ? Especially the part with « You calculate the number of threads per threadgroup based on two MTLComputePipelineState properties. One property is maxTotalThreadsPerThreadgroup (the maximum number of threads that can be in a single threadgroup). The other is threadExecutionWidth (the number of threads scheduled to execute in parallel on the GPU). » Looks like these properties would help.
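A minimal sketch of that calculation, assuming pipelineState is your compiled compute pipeline state:

```swift
import Metal

// Sketch of the calculation quoted above, using the two MTLComputePipelineState properties.
func threadsPerThreadgroup(for pipelineState: MTLComputePipelineState) -> MTLSize {
    let width = pipelineState.threadExecutionWidth
    let height = pipelineState.maxTotalThreadsPerThreadgroup / width
    // e.g. threadExecutionWidth = 32 and maxTotalThreadsPerThreadgroup = 1024 gives a 32 x 32 threadgroup
    return MTLSize(width: width, height: height, depth: 1)
}
```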
Nov ’20