This is a simple program I just downloaded to test. Each epoch takes about 6s on the M1 MBA, but only about 1s on the Intel MBP. And it is not just this one: all my programs run slowly. Yes, the examples I have been running are fairly small.
import tensorflow as tf

# Load MNIST and normalize the images to the [0, 1] range.
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier; the last layer outputs logits.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])

# Sanity-check the untrained model on a single example.
predictions = model(x_train[:1]).numpy()
tf.nn.softmax(predictions).numpy()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_fn(y_train[:1], predictions).numpy()

model.compile(optimizer='sgd', loss=loss_fn)
model.fit(x_train, y_train, epochs=10)
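In case anyone wants to reproduce the timing comparison, here is a minimal sketch of how one could measure each epoch explicitly with a standard Keras callback, rather than reading the progress bar (the EpochTimer class is my own name, not part of the tutorial):

import time
import tensorflow as tf

# Hypothetical helper: prints wall-clock time per training epoch.
class EpochTimer(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        self._start = time.perf_counter()

    def on_epoch_end(self, epoch, logs=None):
        print(f"epoch {epoch}: {time.perf_counter() - self._start:.2f}s")

# Usage: model.fit(x_train, y_train, epochs=10, callbacks=[EpochTimer()])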
Now I tried a tutorial example from Google:
https://www.tensorflow.org/tutorials/quickstart/advanced
That one runs about twice as fast on my M1 MBA as on my Intel MBP. Perhaps the example I put in the previous post is not well-suited to the GPU? One would then hope that the Metal framework could choose to run it on the CPU instead (in my experience the M1 is about twice as fast as the Intel at scientific computations on the CPU).
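Until the plugin does that kind of placement automatically, you can force TensorFlow onto the CPU yourself to check the comparison. A minimal sketch using the standard tf.config / tf.device APIs (nothing tensorflow-metal-specific assumed):

import tensorflow as tf

# Option 1: hide the GPU entirely (must run before any ops are created);
# everything then falls back to the CPU.
tf.config.set_visible_devices([], 'GPU')

# Option 2: pin individual computations to the CPU explicitly.
with tf.device('/CPU:0'):
    x = tf.random.uniform((1000, 1000))
    y = tf.matmul(x, x)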
Anyway, I think I will upgrade my 16" Intel MBP to a 16" M1 MBP, hoping that the TensorFlow Metal plugin continues to be developed.