InvalidArgumentError: Cannot assign a device for operation model/bert_block/encoder_0/multiheadattention/query/einsum/Einsum/ReadVariableOp

Hi, I cannot train my model on an Apple M1 Pro. I get an error:

Traceback (most recent call last):
  File "transformer_pe.py", line 605, in <module>
    bgru_main()
  File "transformer_pe.py", line 400, in bgru_main
    main(traindataSetPath, weightPath, batchSize, maxLen, vectorDim, layers, dropout)
  File "transformer_pe.py", line 358, in main
    model.fit(x_train, y_train, batch_size=batchSize, epochs=10)
  File "/Users/icey/miniforge3/envs/py38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1183, in fit
    tmp_logs = self.train_function(iterator)
  File "/Users/icey/miniforge3/envs/py38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 889, in __call__
    result = self._call(*args, **kwds)
  File "/Users/icey/miniforge3/envs/py38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 950, in _call
    return self._stateless_fn(*args, **kwds)
  File "/Users/icey/miniforge3/envs/py38/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3023, in __call__
    return graph_function._call_flat(
  File "/Users/icey/miniforge3/envs/py38/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1960, in _call_flat
    return self._build_call_outputs(self._inference_function.call(
  File "/Users/icey/miniforge3/envs/py38/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 591, in call
    outputs = execute.execute(
  File "/Users/icey/miniforge3/envs/py38/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation model/bert_block/encoder_0/multiheadattention/query/einsum/Einsum/ReadVariableOp: Could not satisfy explicit device specification '' because the node {{colocation_node model/bert_block/encoder_0/multiheadattention/query/einsum/Einsum/ReadVariableOp}} was colocated with a group of nodes that required incompatible device '/job:localhost/replica:0/task:0/device:GPU:0'. All available devices [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:GPU:0].

Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=2 requested_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' assigned_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' resource_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
ResourceApplyAdaMax: CPU
ReadVariableOp: GPU CPU
_Arg: GPU CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  model_bert_block_encoder_0_multiheadattention_query_einsum_einsum_readvariableop_resource (_Arg)  framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0
  adamax_adamax_update_2_resourceapplyadamax_m (_Arg)  framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0
  adamax_adamax_update_2_resourceapplyadamax_v (_Arg)  framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0
  model/bert_block/encoder_0/multiheadattention/query/einsum/Einsum/ReadVariableOp (ReadVariableOp)
  Adamax/Adamax/update_2/ResourceApplyAdaMax (ResourceApplyAdaMax) /job:localhost/replica:0/task:0/device:GPU:0

 [[{{node model/bert_block/encoder_0/multiheadattention/query/einsum/Einsum/ReadVariableOp}}]] [Op:__inference_train_function_3353]

(py38) icey@IceydeMacBook-Pro 20220204code % python hello.py
Metal device set to: Apple M1 Pro
systemMemory: 32.00 GB
maxCacheSize: 10.67 GB

Hi @icey34,

Thanks for reporting the issue! The error message means that the GPU kernel registration for the ResourceApplyAdaMax op used here is missing. We are tracking this op for implementation and will update this thread when it is available. In the meantime, any portion of the code that uses this op needs to run explicitly on the CPU, for example inside a with tf.device('/CPU:0'): block.
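Until the op gets a Metal GPU kernel, the workaround can be sketched like this (the model, shapes, and data below are placeholders for illustration, not taken from the original script):

```python
import numpy as np
import tensorflow as tf

# Pin both model construction and training to the CPU so that the
# CPU-only ResourceApplyAdaMax update op can be placed without a
# colocation conflict against GPU-placed variables.
with tf.device('/CPU:0'):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adamax', loss='mse')

    x = np.random.rand(32, 4).astype('float32')
    y = np.random.rand(32, 1).astype('float32')
    history = model.fit(x, y, epochs=1, verbose=0)
```

This trades GPU speed for correctness; only the portions touching the unregistered op strictly need the CPU scope, but wrapping build and fit together is the simplest safe option.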

Any news here? I'm having a similar issue.

Hello there!

I am running into the same error with the ReadVariableOp when using the Adamax optimizer. Switching to Adam solved the issue. Do we have an update on the above?
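For reference, the optimizer swap is a one-line change at compile time (a minimal sketch; the model and learning rate are placeholders):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Adamax relies on ResourceApplyAdaMax, which lacks a Metal GPU kernel;
# Adam's update op is GPU-registered, so compiling with it avoids the
# colocation error without forcing the whole model onto the CPU.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss='mse')
```

Note that Adam and Adamax are not numerically identical (Adamax uses an infinity-norm second moment), so training dynamics may differ slightly.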

Hi there! I've been getting the same error while using RectifiedAdam and Lookahead on my M1 Max.

Traceback (most recent call last):
  File "/Users/netanel/dev/models/transformer_regressor.py", line 105, in fit
    model.fit(train_X, train_y, epochs=200, batch_size=128,
  File "/opt/homebrew/Caskroom/miniconda/base/envs/dl/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/opt/homebrew/Caskroom/miniconda/base/envs/dl/lib/python3.10/site-packages/tensorflow/python/eager/execute.py", line 53, in quick_execute
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation model/multi_head_attention/query/einsum/Einsum/ReadVariableOp: Could not satisfy explicit device specification '' because the node {{colocation_node model/multi_head_attention/query/einsum/Einsum/ReadVariableOp}} was colocated with a group of nodes that required incompatible device '/job:localhost/replica:0/task:0/device:GPU:0'. All available devices [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:GPU:0].

Do we have any news?
