On my iMac 5K Retina 2017 with a Radeon Pro 580, I've been able to reduce this to a simpler test case (a single hidden layer with MNIST).
Moreover, I noticed that the same random values are printed with more decimal digits on the GPU than on the CPU.
So with this code

```python
import tensorflow as tf  # TensorFlow registers PluggableDevices here.

with tf.device("/GPU:0"):
    tf.random.set_seed(1972)
    agpu = tf.random.normal(shape=[5], dtype=tf.float32)
    print("GPU - Random", agpu)

with tf.device("/CPU:0"):
    tf.random.set_seed(1972)
    acpu = tf.random.normal(shape=[5], dtype=tf.float32)
    print("CPU - Random", acpu)

print("equal ", tf.equal(agpu, acpu))
```
I get

```
GPU - Random tf.Tensor([-0.88528407  0.33968228 -2.0363083   1.1200726  -1.0055897 ], shape=(5,), dtype=float32)
CPU - Random tf.Tensor([-0.8852841  0.3396823 -2.036308   1.1200724 -1.00559  ], shape=(5,), dtype=float32)
equal  tf.Tensor([False False False False False], shape=(5,), dtype=bool)
```
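The mismatches above all sit in the last one or two decimal digits, i.e. around float32 machine epsilon (~1.19e-7). As a sanity check, here is a small sketch (using the values copied by hand from the two outputs above) showing that the GPU and CPU tensors agree to within a few float32 ulps, so an exact `tf.equal` is probably too strict a test for cross-device comparison:

```python
import math

# Values transcribed from the GPU and CPU outputs above.
gpu = [-0.88528407, 0.33968228, -2.0363083, 1.1200726, -1.0055897]
cpu = [-0.8852841, 0.3396823, -2.036308, 1.1200724, -1.00559]

# float32 machine epsilon is ~1.19e-7, so a relative tolerance of a few
# ulps is a more meaningful comparison than exact bitwise equality.
close = [math.isclose(g, c, rel_tol=1e-6) for g, c in zip(gpu, cpu)]
print(close)  # → [True, True, True, True, True]
```

So the two devices appear to produce the same numbers up to float32 rounding; the extra printed digits on the GPU side reflect tiny last-bit differences, not a different random stream.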
If I do the same in Colab (Google) I get

```
GPU - Random tf.Tensor([-0.8852841  0.3396823 -2.036308   1.1200724 -1.00559  ], shape=(5,), dtype=float32)
CPU - Random tf.Tensor([-0.8852841  0.3396823 -2.036308   1.1200724 -1.00559  ], shape=(5,), dtype=float32)
equal  tf.Tensor([ True  True  True  True  True], shape=(5,), dtype=bool)
```