Using multiple GPUs in Tensorflow-mac for M1

It's clearly stated at https://developer.apple.com/metal/tensorflow-plugin/ that multiple GPUs are not yet supported by Tensorflow-mac, but I've been toying around with some examples, benchmarking its performance against Colab and Intel/NVIDIA setups, and got these results testing an NLP classification task (time per epoch while training, smaller is better):

My question is: once multi-GPU support is available for the M1, can we expect an increase in performance, maybe close to 8x if all 8 GPU cores become available? And would the GPU cores be seen as a single GPU, as with NVIDIA cards, or would we need to use a distribution strategy to use them in parallel?
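For context, this is roughly what the two scenarios would look like in code. It's only a sketch, assuming tensorflow-macos and the tensorflow-metal plugin are installed; the Dense model is a hypothetical placeholder, not the NLP classifier from the benchmark.

```python
import tensorflow as tf

# Sketch only: how the two scenarios from the question would look in Keras.
gpus = tf.config.list_physical_devices("GPU")

if len(gpus) > 1:
    # Multiple visible GPU devices would need an explicit distribution strategy.
    strategy = tf.distribute.MirroredStrategy()
else:
    # A single visible device uses the default (single-device) strategy.
    strategy = tf.distribute.get_strategy()

with strategy.scope():
    # Hypothetical placeholder model, not the benchmark's classifier.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(128,)),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```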

Thanks!


Accepted Answer (Frameworks Engineer)

Hi @eduardofv, all the GPU cores in the M1 are seen as a single GPU and won't need a distribution strategy.
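A quick way to confirm this on an M1 (a sketch, assuming tensorflow-macos with the tensorflow-metal plugin is installed):

```python
import tensorflow as tf

# Expected to list exactly one GPU device on the M1, regardless of the
# number of GPU cores, e.g.:
# [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
print(tf.config.list_physical_devices("GPU"))
```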

Great, thanks! Performance looks better when the batch size is properly adjusted. A final question: do the GPU cores in the M1 have access to the full 16 GB of RAM? With NVIDIA cards, VRAM can be a huge limitation when training bigger models; I would rather have an older card with more VRAM than a newer card with less (like the 4 GB in the Quadro T2000)... would this be an advantage of the M1?
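For reference, something like this could be used to check how much device memory a training run actually touches. It's just a sketch; whether tf.config.experimental.get_memory_info is implemented by the Metal plugin is an assumption here.

```python
import tensorflow as tf

# Unified memory on the M1 means the GPU shares the machine's 16 GB with the
# CPU, unlike a discrete card with its own fixed VRAM pool.
# Whether the Metal plugin implements get_memory_info is an assumption;
# fall back gracefully if it is not available for this device.
try:
    info = tf.config.experimental.get_memory_info("GPU:0")
    print(f"current: {info['current'] / 1e6:.1f} MB, peak: {info['peak'] / 1e6:.1f} MB")
except Exception as err:
    print("Memory introspection not available on this device:", err)
```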
