Same error messages while running the code from Deep Learning with Python, listing 6.23. It dies at the model.fit() line in both Jupyter Lab and command-line Python.
M1 iMac, 16 GB, tensorflow-macos 2.7, metal plugin, Python 3.9.5.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense

model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(input_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
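For context, max_features, input_train, and y_train come from the listing just before this one in the book; roughly (my reconstruction, not the exact listing):

from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence

max_features = 10000   # vocabulary size used in the book
maxlen = 500           # truncate/pad each review to this length

# Load the IMDB reviews as integer word indices and pad to a fixed length
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)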
Same issue. This code is from Francois Chollet's book "Deep Learning with Python," so I expect many people are hitting it.
The suggested solution does not work, either when I wrap the whole Jupyter cell in a with tf.device block or when I wrap just the line "augmented_images = data_augmentation(images)" in it (see the sketch below).
This code segment is a prerequisite for further exercises in a later chapter, which amplifies the desire for a solution.
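For reference, the narrower attempt looked roughly like this; data_augmentation and images come from the book's earlier augmentation listing, so treat this as a sketch rather than a complete program:

import tensorflow as tf

# Attempted workaround: pin just the augmentation call to the CPU
with tf.device('/device:CPU:0'):
    augmented_images = data_augmentation(images)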
Thanks,
Brett
(Edit: Apparently you can't do code blocks in comments. Sorry.) Moving on to the next section of the book, we have the same issue with GRU layers: setting recurrent_dropout causes a forced kernel restart.
The error given is: WARNING:tensorflow:Layer gru will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Clearly, the fallback is not graceful.
from tensorflow import keras
from tensorflow.keras import layers

# sequence_length and raw_data are defined earlier in the book's listing
inputs = keras.Input(shape=(sequence_length, raw_data.shape[-1]))
x = layers.GRU(32, recurrent_dropout=0.5, return_sequences=True)(inputs)
x = layers.GRU(32, recurrent_dropout=0.5)(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
Hi, and thanks for the response.
I can confirm that wrapping the code in with tf.device('/device:CPU:0'): does allow it to run without fault, though obviously with a performance penalty (sketch below).
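Concretely, the working version looks roughly like this; the compile/fit arguments follow the book's listing, and train_dataset/val_dataset come from earlier code in that chapter, so this is a sketch rather than the exact listing:

import tensorflow as tf

# Workaround: force the whole training step onto the CPU
with tf.device('/device:CPU:0'):
    model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])
    history = model.fit(train_dataset,
                        epochs=50,
                        validation_data=val_dataset)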
However, I did run the code (without the tf.device statement) from the command line and got the fatal error shown below.
To test further, I also set up an account on Paperspace. The code runs correctly (with recurrent_dropout and without tf.device) inside their Jupyter notebooks (which they call Gradient notebooks).
Summarizing:
- The code works in Google Colab and Paperspace in their Jupyter-based notebooks.
- It crashes on my M1 Mac, both in Jupyter (IPython) and at the command line (straight Python 3.9.5).
Fatal error at the command line:
WARNING:tensorflow:Layer lstm will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/50
2022-01-31 21:29:47.106964: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
2022-01-31 21:29:47.365931: F tensorflow/core/framework/tensor.cc:681] Check failed: IsAligned() ptr = 0x17adba1f0
zsh: abort python test.py