That fixed the `import tensorflow` issue. I installed on a brand-new Mac with a fresh user account, so perhaps include that in the setup instructions?
The next error comes when trying to train a model:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/<user>/miniconda/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/v0/w_k546h500q00yr1lhwd78640000gn/T/__autograph_generated_filenv9ppeuc.py", line 15, in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
  File "/Users/<user>/miniconda/lib/python3.10/site-packages/transformers/modeling_tf_utils.py", line 1557, in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
  File "/Users/<user>/miniconda/lib/python3.10/site-packages/transformers/optimization_tf.py", line 246, in apply_gradients
    return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name, **kwargs)
TypeError: in user code:

    File "/Users/<user>/miniconda/lib/python3.10/site-packages/keras/engine/training.py", line 1249, in train_function *
        return step_function(self, iterator)
    File "/Users/<user>/miniconda/lib/python3.10/site-packages/keras/engine/training.py", line 1233, in step_function **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/Users/<user>/miniconda/lib/python3.10/site-packages/keras/engine/training.py", line 1222, in run_step **
        outputs = model.train_step(data)
    File "/Users/<user>/miniconda/lib/python3.10/site-packages/transformers/modeling_tf_utils.py", line 1557, in train_step
        self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
    File "/Users/<user>/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 527, in minimize
        self.apply_gradients(grads_and_vars)
    File "/Users/<user>/miniconda/lib/python3.10/site-packages/transformers/optimization_tf.py", line 246, in apply_gradients
        return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name, **kwargs)
    File "/Users/<user>/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1140, in apply_gradients
        return super().apply_gradients(grads_and_vars, name=name)
    File "/Users/<user>/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 632, in apply_gradients
        self._apply_weight_decay(trainable_variables)
    File "/Users/<user>/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1159, in _apply_weight_decay
        tf.__internal__.distribute.interim.maybe_merge_call(
    File "/Users/<user>/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1155, in distributed_apply_weight_decay
        distribution.extended.update(
    File "/Users/<user>/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1149, in weight_decay_fn **
        if self._use_weight_decay(variable):
    File "/Users/<user>/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 587, in _use_weight_decay
        for exclude_id in exclude_from_weight_decay:

    TypeError: 'NoneType' object is not iterable
```
For now I still have to revert to earlier versions (tensorflow-metal 0.5.0 and tensorflow-macos 2.9) to continue my work, but that combination is also problematic: training stops randomly. Not sure there is any stable configuration.
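For context, the traceback ends in `_use_weight_decay` iterating `exclude_from_weight_decay` when it is `None`. A minimal self-contained sketch of that check with a `None` guard (hypothetical function, not the actual Keras source):

```python
def use_weight_decay(variable_name, exclude_from_weight_decay):
    """Return True if weight decay should apply to this variable.

    Iterating over exclude_from_weight_decay when it is None raises
    the TypeError seen above; guarding with `or []` avoids it.
    """
    for pattern in (exclude_from_weight_decay or []):
        if pattern in variable_name:
            return False
    return True
```

On the user side, a possible workaround (untested on this setup, just an assumption based on the traceback) is to construct `AdamWeightDecay` with an explicit `exclude_from_weight_decay=[...]` list so the attribute is never `None`.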