I don't know why this happens, but for me it only occurs when I'm computing gradients against a loss tensor that isn't flat. Oddly, even with this error, my model still trained.
Note that with tensorflow-metal 0.3.0 the behavior is even more bizarre: instead of the tail of the list being filled with -0s, it is now filled with random values.
I can't seem to edit my question now, but obviously this issue is with TensorFlow 2.6, not 3.6, which doesn't exist yet.
An alternative to uninstalling tensorflow-metal is to disable GPU usage. This is a copy-paste from my other post...
To disable the GPU completely on the M1 use tf.config.experimental.set_visible_devices([], 'GPU'). To disable the GPU for certain operations, use:
with tf.device('/cpu:0'):
# tf calls here
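Putting the two approaches together, a minimal sketch might look like the following (assumes TensorFlow 2.x is installed; the visible-devices call must run before any GPU op executes):

```python
import tensorflow as tf

# Hide the GPU from TensorFlow for the whole process.
# Must be called before any operation initializes the GPU.
tf.config.experimental.set_visible_devices([], 'GPU')
assert tf.config.get_visible_devices('GPU') == []

# Alternatively, pin only specific operations to the CPU
# with a device scope, leaving the GPU available elsewhere.
with tf.device('/cpu:0'):
    x = tf.ones((2, 2))
    y = tf.matmul(x, x)  # runs on the CPU
```

The first form is the simplest workaround for metal-related bugs; the device scope is useful when only certain ops (e.g. the ones producing wrong gradients) need to be kept off the GPU.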
Someone else will have to answer the questions on RNNs and mlcompute.