Performance issue on MacBook Pro M1
System information (reproduction script can be found below):

- MacBook Pro M1 (macOS Big Sur 11.5.1)
- TensorFlow installed from: source
- TensorFlow version: 2.5, with Metal support
- Python version: 3.9
- GPU model and memory: MacBook Pro M1, 16 GB
- Steps used to install TensorFlow with Metal support: https://developer.apple.com/metal/tensorflow-plugin/

I am trying to train a model on a MacBook Pro M1, but the performance is very poor and training does not work properly: even a single epoch takes a ridiculously long time.

Code needed to reproduce this behavior:

```python
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.layers import Embedding, Dense, LSTM
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Model configuration
additional_metrics = ['accuracy']
batch_size = 128
embedding_output_dims = 15
loss_function = BinaryCrossentropy()
max_sequence_length = 300
num_distinct_words = 5000
number_of_epochs = 5
optimizer = Adam()
validation_split = 0.20
verbosity_mode = 1

# Disable eager execution
tf.compat.v1.disable_eager_execution()

# Load dataset
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=num_distinct_words)
print(x_train.shape)
print(x_test.shape)

# Pad all sequences (value 0.0 because it corresponds with <PAD>)
padded_inputs = pad_sequences(x_train, maxlen=max_sequence_length, value=0.0)
padded_inputs_test = pad_sequences(x_test, maxlen=max_sequence_length, value=0.0)

# Define the Keras model
model = Sequential()
model.add(Embedding(num_distinct_words, embedding_output_dims,
                    input_length=max_sequence_length))
model.add(LSTM(10))
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(optimizer=optimizer, loss=loss_function, metrics=additional_metrics)

# Give a summary
model.summary()

# Train the model
history = model.fit(padded_inputs, y_train, batch_size=batch_size,
                    epochs=number_of_epochs, verbose=verbosity_mode,
                    validation_split=validation_split)

# Test the model after training
test_results = model.evaluate(padded_inputs_test, y_test, verbose=False)
print(f'Test results - Loss: {test_results[0]} - Accuracy: {100*test_results[1]}%')
```

I have noticed this same problem specifically with LSTM layers. The issue has also been reported against Keras, where they have not been able to debug it: https://github.com/keras-team/keras/issues/15003
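For anyone reproducing this, a quick sanity check (my addition, using only standard `tf.config` APIs) is to confirm the Metal plugin actually registered a GPU device and to log where each op is placed:

```python
import tensorflow as tf

# With the tensorflow-metal plugin installed, this should list one
# PluggableDevice of type "GPU"; an empty list means the Metal plugin
# is not active and everything is running on the CPU.
print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))

# Optional: log the device every op is placed on, to see whether the
# LSTM actually runs on the GPU or silently falls back to the CPU.
tf.debugging.set_log_device_placement(True)
```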
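To isolate whether the slowdown comes from the Metal GPU path, one comparison worth running (a minimal sketch, not part of the original script) is hiding the GPU so TensorFlow falls back to the CPU, then timing the same training run again:

```python
import tensorflow as tf

# Hide all GPUs from TensorFlow; every op is then placed on the CPU.
# This must be called before any tensors, ops, or models are created.
tf.config.set_visible_devices([], 'GPU')

# If an epoch is dramatically faster here than with the Metal GPU
# visible, the regression is in the GPU code path.
```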
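To put a concrete number on the per-epoch time, a small Keras callback can log wall-clock seconds per epoch (`EpochTimer` is a hypothetical helper written for this post, not part of the original script):

```python
import time
import tensorflow as tf

class EpochTimer(tf.keras.callbacks.Callback):
    """Prints wall-clock seconds per epoch (hypothetical helper)."""

    def on_epoch_begin(self, epoch, logs=None):
        # Record the start time of the epoch.
        self._start = time.time()

    def on_epoch_end(self, epoch, logs=None):
        # Report how long the epoch took.
        print(f'Epoch {epoch + 1}: {time.time() - self._start:.1f}s')
```

It plugs into the existing training call as `model.fit(..., callbacks=[EpochTimer()])`, making the CPU-vs-GPU comparison above directly measurable.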