Reply to How to choose a Mac for CoreML work?
It depends on the complexity of the model you want to run or train. If you're only doing light work, any Mac will do. If your model is heavy or you're processing a lot of data, then AMD GPUs (or, soon, the Neural Engine) are significantly more efficient, so in that case avoid Macs with only Intel integrated graphics.
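As a side note, when you benchmark a model on a given Mac, you can steer Core ML toward particular hardware to see how much the GPU or Neural Engine actually buys you. A minimal sketch, assuming a compiled .mlmodelc on disk (the path is a placeholder):

```swift
import Foundation
import CoreML

// Placeholder path: substitute the compiled .mlmodelc of your own model.
let modelURL = URL(fileURLWithPath: "/path/to/MyModel.mlmodelc")

let config = MLModelConfiguration()
// .all lets Core ML pick CPU, GPU or Neural Engine as it sees fit;
// switch to .cpuAndGPU or .cpuOnly to compare how much the accelerators help.
config.computeUnits = .all

do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print("Model loaded with compute units: \(config.computeUnits.rawValue)")
} catch {
    print("Failed to load model: \(error)")
}
```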
Oct ’20
Reply to Number of simultaneous Metal threads
Did you check https://developer.apple.com/documentation/metal/calculating_threadgroup_and_grid_sizes ? Especially the part with « You calculate the number of threads per threadgroup based on two MTLComputePipelineState properties. One property is maxTotalThreadsPerThreadgroup (the maximum number of threads that can be in a single threadgroup). The other is threadExecutionWidth (the number of threads scheduled to execute in parallel on the GPU). » Looks like these properties would help.
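As a rough sketch of how those two properties combine (adapted from the linked article, assuming a 2D grid and a pipeline state you have already created):

```swift
import Metal

// pipelineState is the MTLComputePipelineState built from your kernel.
func threadsPerThreadgroup(for pipelineState: MTLComputePipelineState) -> MTLSize {
    // threadExecutionWidth threads run in lockstep, so use it as the width,
    // and fill the rest of the threadgroup with additional rows.
    let w = pipelineState.threadExecutionWidth
    let h = pipelineState.maxTotalThreadsPerThreadgroup / w
    return MTLSize(width: w, height: h, depth: 1)
}
```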
Nov ’20
Reply to Metal performance compared to OpenCL
How did you dispatch the work in host code, especially regarding the threads per threadgroup? You may want to check https://developer.apple.com/documentation/metal/calculating_threadgroup_and_grid_sizes since this can make a big difference in efficiency. Apart from that, as Etresoft already mentioned, you should check the performance data provided by GPU Frame Capture.
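For reference, a sketch of what the dispatch side could look like, assuming `encoder`, `pipelineState`, `dataWidth` and `dataHeight` already exist in your host code (those names are placeholders):

```swift
import Metal

// Size the threadgroup from the pipeline state, as the linked article describes.
let w = pipelineState.threadExecutionWidth
let h = pipelineState.maxTotalThreadsPerThreadgroup / w
let threadsPerGroup = MTLSize(width: w, height: h, depth: 1)

// One thread per data element; dispatchThreads lets Metal handle the
// non-uniform threadgroups at the grid edges on hardware that supports it.
let grid = MTLSize(width: dataWidth, height: dataHeight, depth: 1)
encoder.setComputePipelineState(pipelineState)
encoder.dispatchThreads(grid, threadsPerThreadgroup: threadsPerGroup)
```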
Nov ’20
Reply to Execution time profiling of Metal compute kernels.
For profiling your GPU pipeline as a whole, you have Metal System Trace in Instruments: https://developer.apple.com/documentation/metal/performance_tuning/using_metal_system_trace_in_instruments_to_profile_your_app

For profiling the shaders themselves, along with metrics about what is limiting their speed, you'll want GPU frame capture in Xcode: https://developer.apple.com/documentation/metal/debugging_tools

Note that GPU frame capture can be triggered manually from Xcode when you have frames displayed, but in your case you can also use MTLCaptureManager in your code to start and stop the capture around your compute workload. So there is no need to have a graphics pipeline to use these tools.
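To illustrate the programmatic route, a minimal sketch using MTLCaptureManager around a compute workload, assuming `commandQueue` is the MTLCommandQueue you already use:

```swift
import Metal

// Ask the shared capture manager to record work submitted to this queue.
let captureManager = MTLCaptureManager.shared()
let captureDescriptor = MTLCaptureDescriptor()
captureDescriptor.captureObject = commandQueue

do {
    try captureManager.startCapture(with: captureDescriptor)
} catch {
    print("Failed to start GPU capture: \(error)")
}

// ... encode, commit and wait for your compute command buffer here ...

captureManager.stopCapture()
```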
Jun ’22
Reply to WKWebView offscreen rendering
I'm not sure everything here will help, and I don't know about WebKit offscreen rendering specifically, but here are at least three points I can mention:

- Don't take the iPhone simulator as a reference for your benchmark; use a real device.

- Being on iPhone, you can take advantage of the unified memory architecture and create textures without doing any copy, if the source data is properly allocated and aligned. In particular see https://developer.apple.com/documentation/metal/mtldevice/1433382-makebuffer and https://developer.apple.com/documentation/metal/mtlbuffer/1613852-maketexture (a sketch follows below). This means that the CGImage buffers you render into must have been allocated by you, following the above constraints, and that the CGImage must only wrap your pointers, not copy your data into its own buffers (I'm not sure whether CGImage can do that, so you might need to render into something other than a CGImage).

- If the size of the texture doesn't change, you can reuse the texture, but make sure it's not in use by Metal while you write to it: either wait for the MTLCommandBuffer to complete, or create several buffers/textures that you rotate through to account for triple buffering of your rendering.
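To illustrate the no-copy path from the second point, here is a rough sketch, assuming `device`, `width` and `height` already exist; the alignment handling is a conservative guess, so check the values your device actually reports:

```swift
import Foundation
import Metal

// Align bytesPerRow to the device's linear-texture alignment (conservative choice).
let pixelFormat = MTLPixelFormat.bgra8Unorm
let alignment = device.minimumLinearTextureAlignment(for: pixelFormat)
let bytesPerRow = ((width * 4 + alignment - 1) / alignment) * alignment

// Page-aligned allocation, rounded up to a whole number of pages,
// so Metal can wrap the memory without copying it.
let pageSize = Int(getpagesize())
let length = ((bytesPerRow * height + pageSize - 1) / pageSize) * pageSize
var memory: UnsafeMutableRawPointer?
posix_memalign(&memory, pageSize, length)

let buffer = device.makeBuffer(bytesNoCopy: memory!,
                               length: length,
                               options: .storageModeShared,
                               deallocator: { pointer, _ in free(pointer) })!

let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: pixelFormat,
                                                          width: width,
                                                          height: height,
                                                          mipmapped: false)
descriptor.storageMode = .shared
descriptor.usage = .shaderRead

// The texture aliases the buffer's memory: whatever you draw into that memory
// (for example through a CGContext wrapping the same pointer) becomes visible
// to Metal without any copy.
let texture = buffer.makeTexture(descriptor: descriptor,
                                 offset: 0,
                                 bytesPerRow: bytesPerRow)!
```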
Jul ’22