I am training a simple neural network on my M1 Max with the following TensorFlow code:

import tensorflow as tf


def get_and_pad_imdb_dataset(num_words=10000, maxlen=None, index_from=2):
    from tensorflow.keras.datasets import imdb

    # Load the reviews
    (x_train, y_train), (x_test, y_test) = imdb.load_data(path='imdb.npz',
                                                          num_words=num_words,
                                                          skip_top=0,
                                                          maxlen=maxlen,
                                                          start_char=1,
                                                          oov_char=2,
                                                          index_from=index_from)

    x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train,
                                                            maxlen=None,
                                                            padding='pre',
                                                            truncating='pre',
                                                            value=0)

    x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test,
                                                           maxlen=None,
                                                           padding='pre',
                                                           truncating='pre',
                                                           value=0)

    return (x_train, y_train), (x_test, y_test)


def get_imdb_word_index(num_words=10000, index_from=2):
    imdb_word_index = tf.keras.datasets.imdb.get_word_index(path='imdb_word_index.json')
    imdb_word_index = {key: value + index_from for key, value in imdb_word_index.items()
                       if value <= num_words - index_from}
    return imdb_word_index


(x_train, y_train), (x_test, y_test) = get_and_pad_imdb_dataset(maxlen=25)
imdb_word_index = get_imdb_word_index()
max_index_value = max(imdb_word_index.values())

embedding_dim = 16

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=max_index_value + 1, output_dim=embedding_dim, mask_zero=True),
    tf.keras.layers.LSTM(units=16),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])

model.compile(loss='binary_crossentropy', metrics=['accuracy'], optimizer='adam')

history = model.fit(x_train, y_train, epochs=3, batch_size=32)

I ran this code on Google Colab and it works perfectly fine without any problem at all. However, on my M1 Max it just gets stuck at the very first epoch and does not progress at all (even after a couple of hours). This is all I get from the output after calling the .fit method:

2022-02-15 23:44:20.097795: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
Epoch 1/3
2022-02-15 23:44:22.461438: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.

I installed TensorFlow on my machine following this guide: https://developer.apple.com/metal/tensorflow-plugin/

I am using a Conda environment with miniforge, and the TensorFlow-related packages (obtained with conda list) are:

tensorboard               2.6.0    pyhd8ed1ab_1    conda-forge
tensorboard-data-server   0.6.0    py39hfb8cd70_1  conda-forge
tensorboard-plugin-wit    1.8.0    pyh44b312d_0    conda-forge
tensorflow-deps           2.7.0    0               apple
tensorflow-estimator      2.7.0    pypi_0          pypi
tensorflow-macos          2.7.0    pypi_0          pypi
tensorflow-metal          0.3.0    pypi_0          pypi

My Python version is 3.9.0.
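One way to check whether the Metal GPU plugin is involved (just a sketch of the isolation step, not something I have verified on this machine yet) is to hide the GPU from TensorFlow before building the model, so the exact same fit call runs purely on the CPU. If training then progresses normally, the hang would point at tensorflow-metal rather than the model code:

import tensorflow as tf

# Hide the GPU so tensorflow-metal is not used for this run.
# This must be called before any ops or models are created.
tf.config.set_visible_devices([], 'GPU')

# Should now list only the CPU device.
print(tf.config.get_visible_devices())

# ...then build and fit the same model as above; if it trains normally
# here, the stall is specific to the Metal GPU backend.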