Reply to M1 Mac mini GPU acting strange during tensorflow-metal tests
```
>>> import tensorflow as tf
>>> tf.config.list_physical_devices()
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```

As soon as I try to run a Keras-based model it dies with:

```
2021-11-08 19:11:56.350233: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-11-08 19:11:56.350804: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-11-08 19:11:56.351033: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
2021-11-08 19:11:56.512351: I tensorflow/core/profiler/lib/profiler_session.cc:131] Profiler session initializing.
2021-11-08 19:11:56.512369: I tensorflow/core/profiler/lib/profiler_session.cc:146] Profiler session started.
2021-11-08 19:11:56.512818: I tensorflow/core/profiler/lib/profiler_session.cc:164] Profiler session tear down.
2021-11-08 19:11:57.362096: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
```
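In case someone wants to reproduce the point of failure, a toy model along these lines is enough to reach those messages on my setup. This is an illustrative stand-in, not my actual script:

```python
# Illustrative stand-in, not my actual script: a tiny Keras model
# to check whether the Metal plugin gets past device creation.
import numpy as np
import tensorflow as tf

print(tf.config.list_physical_devices())

# Random data so the example is self-contained.
x = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# The log messages above appear as soon as fit() starts.
model.fit(x, y, epochs=1, batch_size=32, verbose=2)
```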
Nov ’21
Reply to Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
```
Metal device set to: AMD Radeon Pro 5600M

2022-01-13 17:02:36.447465: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-01-13 17:02:36.448221: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-01-13 17:02:36.448581: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
```

Prior to running my model:

```
>>> print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
Num GPUs Available: 1
```

I was excited to see a tensorflow-macos version 2.7, but it STILL does not work.
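For comparison across setups, here is a small diagnostic sketch (standard TensorFlow calls, not something from Apple's docs) showing what the Metal plugin reports before fit() is ever called:

```python
# Hypothetical diagnostic snippet using standard TensorFlow APIs:
# confirm the Metal PluggableDevice is registered before training.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)

gpus = tf.config.list_physical_devices('GPU')
print("Num GPUs Available:", len(gpus))

for gpu in gpus:
    # get_device_details() returns whatever the plugin exposes
    # (it may be a very sparse dict for PluggableDevices).
    print(gpu, tf.config.experimental.get_device_details(gpu))
```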
Jan ’22
Reply to I can't install TensorFlow-macos and TensorFlow-metal
I'll add that, as others have said, Python 3.8.x is required. Then run:

```
SYSTEM_VERSION_COMPAT=0 python -m pip install tensorflow-macos
```

(this will give you tensorflow-macos version 2.7 - don't get excited yet)

```
SYSTEM_VERSION_COMPAT=0 python -m pip install tensorflow-metal
```

(this will also succeed)

Install your other stuff as normal with conda (pandas, scikit-learn, jupyterlab, etc.), then run your code. You can run:

```
>>> print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
Num GPUs Available: 1
```

So far so good, right?!?! Looking good! Now run your model... The first line:

```
Metal device set to: AMD Radeon Pro 5600M
```

keeps looking better and better!!! And then the kernel dies:

```
2022-01-13 17:02:36.447465: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-01-13 17:02:36.448221: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-01-13 17:02:36.448581: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
```

Will tensorflow-macos ever work for Intel?
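For anyone following along, this is the kind of post-install sanity check I mean. Just a sketch; the package names are the pip distribution names:

```python
# A sketch for checking what the SYSTEM_VERSION_COMPAT=0 installs gave you.
# importlib.metadata is in the standard library on Python 3.8.
from importlib.metadata import version

import tensorflow as tf

for pkg in ("tensorflow-macos", "tensorflow-metal"):
    print(pkg, version(pkg))

# The device check passes; it's the first model run that kills the kernel.
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
```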
Jan ’22
Reply to Tensorflow-macos and tensorflow-metal still cause kernal panic
I tried running my script outside of Jupyter Lab, and the error is as follows:

```
2022-03-29 13:26:37.306914: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-03-29 13:26:37.307420: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)

Model: "sequential"
 Layer (type)                Output Shape              Param #
 masking (Masking)           (None, 1, 28)             0
 layer1 (Bidirectional)      (None, 1, 128)            47616
 dropout (Dropout)           (None, 1, 128)            0
 layer2 (Bidirectional)      (None, 1, 128)            98816
 dropout_1 (Dropout)         (None, 1, 128)            0
 layer3 (Bidirectional)      (None, 128)               98816
 Output (Dense)              (None, 1)                 129
=================================================================
Total params: 245,377
Trainable params: 245,377
Non-trainable params: 0

Epoch 1/3000
2022-03-29 13:26:54.863499: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-03-29 13:26:57.201 python[66880:3338950] -[MPSGraph adamUpdateWithLearningRateTensor:beta1Tensor:beta2Tensor:epsilonTensor:beta1PowerTensor:beta2PowerTensor:valuesTensor:momentumTensor:velocityTensor:gradientTensor:name:]: unrecognized selector sent to instance 0x600041707c60
zsh: segmentation fault  python test.py
```

I was able to run another TensorFlow-based script on the command line, so something specific to this script is triggering the error.
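For context, the model in that summary is built roughly like this. This is a reconstruction from the summary alone: the unit counts are inferred from the parameter counts, and the dropout rate and loss are guesses, so treat it as approximate:

```python
# Approximate reconstruction of the model from the summary above
# (LSTM units inferred from the parameter counts; not the exact script).
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Masking(mask_value=0.0, input_shape=(1, 28)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True), name="layer1"),
    layers.Dropout(0.2),  # rate is a guess
    layers.Bidirectional(layers.LSTM(64, return_sequences=True), name="layer2"),
    layers.Dropout(0.2),  # rate is a guess
    layers.Bidirectional(layers.LSTM(64), name="layer3"),
    layers.Dense(1, name="Output"),
])

# Compiling with Adam is what leads to the MPSGraph adamUpdate... crash above.
model.compile(optimizer="adam", loss="mse")
model.summary()  # reproduces the 245,377-parameter summary shown above
```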
Mar ’22
Reply to [MPSGraph adamUpdateWithLearningRateTensor:beta1Tensor:beta2Tensor:epsilonTensor:beta1PowerTensor:beta2PowerTensor:valuesTensor:momentumTensor:velocityTensor:gradientTensor:name:]: unrecognized selector sent to instance 0x600000eede10
I have a similar issue with the kernel crashing with Adam:

```
Model: "sequential"
 Layer (type)                Output Shape              Param #
 masking (Masking)           (None, 1, 28)             0
 layer1 (Bidirectional)      (None, 1, 128)            47616
 dropout (Dropout)           (None, 1, 128)            0
 layer2 (Bidirectional)      (None, 1, 128)            98816
 dropout_1 (Dropout)         (None, 1, 128)            0
 layer3 (Bidirectional)      (None, 128)               98816
 Output (Dense)              (None, 1)                 129
=================================================================
Total params: 245,377
Trainable params: 245,377
Non-trainable params: 0

Epoch 1/3000
2022-03-29 13:26:54.863499: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-03-29 13:26:57.201 python[66880:3338950] -[MPSGraph adamUpdateWithLearningRateTensor:beta1Tensor:beta2Tensor:epsilonTensor:beta1PowerTensor:beta2PowerTensor:valuesTensor:momentumTensor:velocityTensor:gradientTensor:name:]: unrecognized selector sent to instance 0x600041707c60
zsh: segmentation fault  python test.py
```
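One way to check whether the crash really is specific to the Metal plugin's Adam path is to hide the GPU and rerun. This is only a diagnostic sketch with dummy data, not a fix; if the CPU run completes, that points at the GPU plugin rather than the model itself:

```python
# Sketch of a CPU-only run to isolate the Metal plugin
# (tf.config.set_visible_devices is the standard API for hiding GPUs).
import numpy as np
import tensorflow as tf

# Hide the Metal GPU so everything runs on the CPU for this test.
tf.config.set_visible_devices([], 'GPU')

# Dummy data shaped like the model's input: (batch, timesteps=1, features=28).
x = np.random.rand(64, 1, 28).astype("float32")
y = np.random.rand(64, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Masking(input_shape=(1, 28)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# If this epoch completes, the adamUpdate... crash is specific to the GPU path.
model.fit(x, y, epochs=1, verbose=2)
```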
Mar ’22
Reply to [MPSGraph adamUpdateWithLearningRateTensor:beta1Tensor:beta2Tensor:epsilonTensor:beta1PowerTensor:beta2PowerTensor:valuesTensor:momentumTensor:velocityTensor:gradientTensor:name:]: unrecognized selector sent to instance 0x600000eede10
I tried swapping Adam for RMSprop. I still get an error, but now it's a floating point exception.

```
2022-03-29 13:35:35.219059: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-03-29 13:35:35.219398: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)

Model: "sequential"
 Layer (type)                Output Shape              Param #
 masking (Masking)           (None, 1, 28)             0
 layer1 (Bidirectional)      (None, 1, 128)            47616
 dropout (Dropout)           (None, 1, 128)            0
 layer2 (Bidirectional)      (None, 1, 128)            98816
 dropout_1 (Dropout)         (None, 1, 128)            0
 layer3 (Bidirectional)      (None, 128)               98816
 Output (Dense)              (None, 1)                 129
=================================================================
Total params: 245,377
Trainable params: 245,377
Non-trainable params: 0

Epoch 1/3000
2022-03-29 13:35:51.773828: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-03-29 13:35:55.049448: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-03-29 13:35:55.409149: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-03-29 13:35:58.358459: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-03-29 13:35:58.544457: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-03-29 13:35:58.809214: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-03-29 13:35:58.989445: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-03-29 13:36:01.117681: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-03-29 13:36:01.338093: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
zsh: floating point exception  python test.py
```
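To be concrete about the swap, here is a cut-down sketch with dummy data (not my full script), with RMSprop in place of Adam and the rest of the model unchanged:

```python
# Cut-down sketch of the optimizer swap: RMSprop instead of Adam.
import numpy as np
import tensorflow as tf

# Dummy data shaped like the model's input: (batch, timesteps=1, features=28).
x = np.random.rand(64, 1, 28).astype("float32")
y = np.random.rand(64, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Masking(input_shape=(1, 28)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64), name="layer1"),
    tf.keras.layers.Dense(1, name="Output"),
])

# Swapping Adam -> RMSprop changes the failure from a segfault
# to a floating point exception on this setup.
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3), loss="mse")
model.fit(x, y, epochs=1, verbose=2)
```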
Mar ’22
Reply to [MPSGraph adamUpdateWithLearningRateTensor:beta1Tensor:beta2Tensor:epsilonTensor:beta1PowerTensor:beta2PowerTensor:valuesTensor:momentumTensor:velocityTensor:gradientTensor:name:]: unrecognized selector sent to instance 0x600000eede10
I have that exact same problem:

```
[MPSGraph adamUpdateWithLearningRateTensor:beta1Tensor:beta2Tensor:epsilonTensor:beta1PowerTensor:beta2PowerTensor:valuesTensor:momentumTensor:velocityTensor:gradientTensor:name:]: unrecognized selector sent to instance 0x60002836b9c0
```

I tried switching Adam to RMSprop, and then I get a floating point exception.
Mar ’22
Reply to Will tensorflow-metal ever work with AMD chip?
Relevant packages in my environment:

```
tensorboard                  2.11.2    pypi_0    pypi
tensorboard-data-server      0.6.1     pypi_0    pypi
tensorboard-plugin-wit       1.8.1     pypi_0    pypi
tensorflow-estimator         2.11.0    pypi_0    pypi
tensorflow-io-gcs-filesystem 0.29.0    pypi_0    pypi
tensorflow-macos             2.11.0    pypi_0    pypi
tensorflow-metal             0.7.0     pypi_0    pypi
```

Running the same Python test script from the Apple Metal page:

```
2023-01-20 12:52:34.536215: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz
169001437/169001437 [==============================] - 7s 0us/step
2023-01-20 12:53:02.967585: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Metal device set to: AMD Radeon Pro 5600M

systemMemory: 64.00 GB
maxCacheSize: 3.99 GB

2023-01-20 12:53:02.968211: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2023-01-20 12:53:02.968256: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Epoch 1/5
/opt/anaconda3/envs/applemetal/lib/python3.10/site-packages/keras/backend.py:5585: UserWarning: "sparse_categorical_crossentropy received from_logits=True, but the output argument was produced by a Softmax activation and thus does not represent logits. Was this intended?
  output, from_logits = _get_logits(
2023-01-20 12:53:16.475463: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
2023-01-20 12:53:25.908578: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:418 : NOT_FOUND: could not find registered platform with id: 0x7f9194090a60
2023-01-20 12:53:25.908651: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:418 : NOT_FOUND: could not find registered platform with id: 0x7f9194090a60
....

Traceback (most recent call last):
  File "/Users/ray/test.py", line 13, in <module>
    model.fit(x_train, y_train, epochs=5, batch_size=64)
  File "/opt/anaconda3/envs/applemetal/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/opt/anaconda3/envs/applemetal/lib/python3.10/site-packages/tensorflow/python/eager/execute.py", line 52, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.NotFoundError: Graph execution error:

Detected at node 'StatefulPartitionedCall_212' defined at (most recent call last):
  File "/Users/ray/test.py", line 13, in <module>
    model.fit(x_train, y_train, epochs=5, batch_size=64)
  File "/opt/anaconda3/envs/applemetal/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
    return fn(*args, **kwargs)
  File "/opt/anaconda3/envs/applemetal/lib/python3.10/site-packages/keras/engine/training.py", line 1650, in fit
    tmp_logs = self.train_function(iterator)
  File "/opt/anaconda3/envs/applemetal/lib/python3.10/site-packages/keras/engine/training.py", line 1249, in train_function
....
....
```
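For reference, the test script is essentially the CIFAR-100 / ResNet50 sample from Apple's tensorflow-metal page; my copy looks roughly like this (reconstructed, so treat the details as approximate):

```python
# Approximate copy of test.py (based on Apple's tensorflow-metal sample:
# CIFAR-100 + ResNet50; reconstructed, so details may differ slightly).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar100.load_data()

model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100,
)

# from_logits=True is what produces the UserWarning in the log above,
# since ResNet50's classifier head ends in a softmax.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])

# This is the fit() call that appears at test.py line 13 in the traceback.
model.fit(x_train, y_train, epochs=5, batch_size=64)
```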
Jan ’23