
Reply to 🤔 GitHub tensorflow macOS alpha had better performance on M1?
I had a bit of a look into how this was performing on my system (13" M1 MacBook Air). Using the tensorflow-metal PluggableDevice, the total training and testing time was 62.52s; training on the CPU only, it was 9.41s. I never managed to install the original Apple TF alpha successfully, so I can't test it directly, but my guess is that it ran this training and testing on the CPU.

I have done a bunch of other testing (as have others) showing that for small models and small image dimensions the CPU is faster than the GPU. Once the model, batch size and image size become a bit larger, the GPU pulls ahead. For example, running EfficientNetB0 against CIFAR-100: at 32x32 the CPU is consistently faster, at 64x64 the two are pretty even, and at 128x128 the GPU is generally faster. A similar pattern emerges against Google Colab: for small models, batch sizes and image sizes the M1 compares well, but as the model and the data grow the Colab GPU powers ahead. A rough sketch of the kind of comparison I ran is below.

This has captured my interest because of the rumours of the M1X with double the high-performance CPU cores and quadruple the GPU cores. If that turns out to be true, the Apple machines could become genuinely capable AI development systems (at a very competitive price). Fingers crossed :).
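For anyone who wants to reproduce this kind of timing, here is a minimal sketch (not my exact script): it times one training epoch of EfficientNetB0 on synthetic CIFAR-100-shaped data, either on the Metal GPU or with the GPU hidden so TensorFlow falls back to the CPU. The image size, batch size and synthetic data are assumptions for illustration; swap in the real CIFAR-100 loader and your own sizes to see the crossover point.

```python
"""Run twice to compare:
    python bench.py          # tensorflow-metal GPU
    python bench.py --cpu    # CPU only
"""
import sys
import time
import tensorflow as tf

if "--cpu" in sys.argv:
    # Hide the GPU before any op runs so everything falls back to the CPU.
    tf.config.set_visible_devices([], "GPU")

IMG_SIZE = 64        # try 32 / 64 / 128 to see where the GPU pulls ahead
BATCH_SIZE = 64
NUM_BATCHES = 50

# Synthetic CIFAR-100-like data (100 classes) at IMG_SIZE resolution.
x = tf.random.uniform((BATCH_SIZE * NUM_BATCHES, IMG_SIZE, IMG_SIZE, 3))
y = tf.random.uniform((BATCH_SIZE * NUM_BATCHES,), maxval=100, dtype=tf.int32)
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(BATCH_SIZE)

model = tf.keras.applications.EfficientNetB0(
    weights=None, classes=100, input_shape=(IMG_SIZE, IMG_SIZE, 3))
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

start = time.perf_counter()
model.fit(ds, epochs=1, verbose=0)
print(f"1 epoch, {IMG_SIZE}x{IMG_SIZE}, batch {BATCH_SIZE}: "
      f"{time.perf_counter() - start:.2f}s")
```

Hiding the GPU with tf.config.set_visible_devices has to happen before any GPU work starts, which is why the comparison is two separate runs rather than one script timing both devices.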
Sep ’21