Hi pgelvinaz!
Do you observe that on the iPadOS 15 beta? Do you think you could share an application binary and file a Feedback Assistant ticket so we can have a look?
Eugene.
Hi Thierry!
Technically, you have many options for rendering your content; CoreGraphics seems like a good one to start with.
You'd need double- or triple-buffered drawables anyway, so you can pipeline submissions and "freeze" a buffer on the CPU while the send over USB is happening. Also, make sure the USB bandwidth you have is enough for your use case.
Really, start with a framework that gives you a CPU-visible, linear pixel representation in the most straightforward way. I can't recommend a specific one, since it depends on what you are used to.
Regards,
Eugene.
Hi hawkiyc!
Thank you so much for reporting this issue. The team is aware of it, has reproduced it, and is working on a fix. There is no known workaround at this time. The fix will be provided in the upcoming seeds.
Please file a Feedback Assistant ticket and post its number here, so we can update you on progress.
Have a great day!
Hi dandiep!
Could you please provide the network or a small repro case so we can investigate?
Thanks.
Hi dbl001!
You are using tensorflow-macos and not tensorflow-metal. You should use the latter.
Could you please refer to this page: https://developer.apple.com/metal/tensorflow-plugin/
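For reference, the linked page walks through the installation; a sketch of the steps (check the page itself for the currently recommended package names and versions) looks like this:

```shell
# Create and activate a virtual environment (path is illustrative)
python3 -m venv ~/tf-metal
source ~/tf-metal/bin/activate

# Install the macOS TensorFlow build, then the Metal plugin on top of it
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal

# Sanity check: the GPU should show up as a physical device
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```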
Thanks and please let us know.
Hi Aaargh! I am glad you have a workaround, at least.
Can you confirm which OS you are running? AMD drivers and the compiler are part of the macOS stack, so there is a chance the issue is already fixed in the macOS 12 beta.
A simple reproducer would help us localize the issue and provide a fix; otherwise we can't do anything about it. I hope you can provide a reproducer and an FB ticket number. Thanks!
Hi Aaargh!
Could you please confirm the following:
You are running a recent macOS 12 beta with a recent Xcode beta (to make sure your GPU compiler stack is the most recent)
You are able to attach a reproducer
Once that is confirmed, could you please file a Feedback Assistant issue with the repro attached and come back here with the FB number? Thanks!
Hi jcookie!
When using threadgroup memory in your compute kernel, you use essentially the same "local" memory that an ImageBlock uses. That is exactly what Harsh mentioned: you can explicitly use TileMemory by declaring a threadgroup memory allocation. For reference, in other APIs this type of memory is called "shared" or "local".
Here is a basic compute example (with no threadgroup usage, but you can get the idea): https://developer.apple.com/documentation/metal/processing_a_texture_in_a_compute_function?language=objc
I can't immediately find a threadgroup example on the official Apple Developer website. The idea is that you first bring texture/buffer data into threadgroup memory in your shader, then do all the calculations ONLY on this local threadgroup memory. Since this memory is much faster (though bank conflicts should be kept in mind), you can keep more ALUs busy and do more work while waiting less on memory. At the end of the computation, you write (flush) the threadgroup memory back to device memory. That is what Harsh called "flush", and you do it in the compute kernel yourself.
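To make the stage/compute/flush pattern concrete, here is a rough sketch in the Metal Shading Language. It is illustrative only: the kernel name, texture bindings, and 16x16 tile size are assumptions, and the "computation" step is a trivial placeholder for real filtering work.

```metal
#include <metal_stdlib>
using namespace metal;

// Hypothetical 16x16 tile; dispatch with a matching threadgroup size.
constexpr constant int TILE = 16;

kernel void process_with_threadgroup(
    texture2d<float, access::read>  src [[texture(0)]],
    texture2d<float, access::write> dst [[texture(1)]],
    uint2 gid [[thread_position_in_grid]],
    uint2 tid [[thread_position_in_threadgroup]])
{
    // 1. Stage: each thread loads one pixel into fast threadgroup memory.
    threadgroup float tile[TILE][TILE];
    tile[tid.y][tid.x] = src.read(gid).r;

    // Wait until the whole tile is populated before anyone reads it.
    threadgroup_barrier(mem_flags::mem_threadgroup);

    // 2. Compute: operate ONLY on threadgroup memory. A real kernel
    //    would read neighboring tile entries here (e.g. for a blur).
    float result = tile[tid.y][tid.x];

    // 3. Flush: write the result back out to device memory.
    dst.write(float4(result, result, result, 1.0), gid);
}
```

Note the barrier between the staging and compute phases: without it, threads could read tile entries their neighbors have not written yet.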