Reply to Attached USB device (LCD) frame buffer transfer and GUI HOWTO?
Hi Thierry! Technically you have unlimited options for rendering your content... CoreGraphics seems like a good one to start with. You'd need double- or triple-buffered drawables anyway, so you can pipeline submissions and "freeze" a buffer on the CPU while the send-over-USB is happening. Also, make sure the USB bandwidth you have is enough for your resolution and frame rate. Really, start with whichever framework gives you a CPU-visible, linear pixel representation in the most straightforward way. I can't recommend a particular one, since it depends on what you are used to. Regards, Eugene.
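To make the double-buffering idea concrete, here is a minimal sketch in Swift. It assumes a hypothetical `sendOverUSB(_:_:)` transfer function and example panel dimensions; everything else is plain CoreGraphics and Dispatch. Each `CGContext` bitmap gives you exactly the CPU-visible, linear pixel layout mentioned above.

```swift
import CoreGraphics
import Dispatch

// Sketch: double-buffered CGContext rendering. sendOverUSB(_:_:) is a
// hypothetical stand-in for your actual USB transfer routine, and the
// 480x320 resolution is just an example value.
let width = 480, height = 320
let bytesPerRow = width * 4

// Two bitmap contexts: render into one while the other is being sent.
// Each context owns a CPU-visible, linearly laid-out BGRA8 pixel buffer.
let contexts: [CGContext] = (0..<2).map { _ in
    CGContext(data: nil,
              width: width, height: height,
              bitsPerComponent: 8, bytesPerRow: bytesPerRow,
              space: CGColorSpaceCreateDeviceRGB(),
              bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                        | CGBitmapInfo.byteOrder32Little.rawValue)!
}

// Gate buffer reuse: a buffer stays "frozen" until its send completes.
let inFlight = DispatchSemaphore(value: 2)
var frame = 0

func renderAndSubmit() {
    inFlight.wait()                          // wait for a free buffer
    let ctx = contexts[frame % 2]
    ctx.setFillColor(CGColor(red: 0, green: 0, blue: 1, alpha: 1))
    ctx.fill(CGRect(x: 0, y: 0, width: width, height: height))
    // ... draw your GUI here ...

    let pixels = ctx.data!                   // raw linear pixels, CPU-visible
    DispatchQueue.global().async {
        sendOverUSB(pixels, bytesPerRow * height)   // hypothetical transfer
        inFlight.signal()                    // buffer may be reused now
    }
    frame += 1                               // next frame uses the other buffer
}
```

The semaphore is what makes the pipelining safe: rendering of frame N+1 can proceed while frame N is still on the wire, but frame N+2 blocks until its buffer has been fully sent.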
Jul ’21
Reply to iOS Simulator running metal hangs entire OS when using discrete GPU
Hi Aaargh! I am glad you have a workaround, at least. Can you confirm which OS version you are running? The AMD drivers and compiler are part of the macOS stack, so there is a chance the issue is already fixed in the macOS 12 beta. A simple reproducer would help us localize the issue and provide a fix; otherwise we can't do anything about it. I hope you can provide a reproducer and an FB ticket number. Thanks!
Jun ’21
Reply to Using threadgroup memory for image convolution
Hi jcookie! When you use threadgroup memory in your compute kernel, you are using the same "local" memory that ImageBlock uses. That is exactly what Harsh mentioned: you can explicitly use TileMemory by declaring a threadgroup memory allocation. In other APIs this type of memory is called "shared" or "local", for your reference. Here is a basic compute example (with no threadgroup usage, but you can get the idea): https://developer.apple.com/documentation/metal/processing_a_texture_in_a_compute_function?language=objc I can't immediately find a threadgroup example on the official Apple Developer website. The idea is that you first bring texture/buffer data into threadgroup memory in your shader, then do all the calculations ONLY on this local threadgroup memory. Since this memory is much faster (though bank conflicts should be kept in mind), you keep the ALUs busier and spend less time waiting on memory. At the end of the computation, you write (flush) the threadgroup memory back to device memory. That is what Harsh called the "flush", and you do it in the compute kernel yourself.
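The load-barrier-compute-flush pattern described above can be sketched as a Metal compute kernel. This is a simplified illustration, not a tuned implementation: the 16x16 threadgroup size, the 3x3 kernel, the `w` weights buffer, and the clamp-to-edge halo handling are all illustrative choices.

```metal
#include <metal_stdlib>
using namespace metal;

#define TG   16          // threadgroup is TG x TG threads
#define TILE (TG + 2)    // +1 texel halo on each side for a 3x3 kernel

kernel void convolve3x3(texture2d<float, access::read>  src [[texture(0)]],
                        texture2d<float, access::write> dst [[texture(1)]],
                        constant float *w [[buffer(0)]],   // 9 weights
                        uint2 gid  [[thread_position_in_grid]],
                        uint2 lid  [[thread_position_in_threadgroup]],
                        uint2 tgid [[threadgroup_position_in_grid]])
{
    // Staging area in threadgroup (tile) memory.
    threadgroup float4 tile[TILE][TILE];

    // 1) Cooperatively load the tile plus its halo from device memory.
    for (uint i = lid.y * TG + lid.x; i < TILE * TILE; i += TG * TG) {
        int2 p = int2(tgid) * TG + int2(i % TILE, i / TILE) - 1;
        p = clamp(p, int2(0), int2(src.get_width() - 1, src.get_height() - 1));
        tile[i / TILE][i % TILE] = src.read(uint2(p));
    }
    threadgroup_barrier(mem_flags::mem_threadgroup);

    // 2) Convolve, reading ONLY from the fast threadgroup tile.
    float4 acc = 0.0;
    for (int dy = 0; dy < 3; ++dy)
        for (int dx = 0; dx < 3; ++dx)
            acc += w[dy * 3 + dx] * tile[lid.y + dy][lid.x + dx];

    // 3) Flush the result to device memory.
    if (gid.x < dst.get_width() && gid.y < dst.get_height())
        dst.write(acc, gid);
}
```

Note the barrier between the load and the compute phases: every thread reads tile entries that other threads loaded, so the whole threadgroup must finish staging before any thread starts convolving.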
Jun ’21