I was working on my project and the kernel crashed when I tried to train a model. I restarted the kernel and tried the same thing, but it kept crashing. I then read a thread about the same issue in which Apple support recommended installing tensorflow-macos and tensorflow-metal and following the guide at this site:
https://developer.apple.com/metal/tensorflow-plugin/
I did all of that and tried every single thing, but when I ran the test code provided on that page I got the same error. Here are the code and the output.
Code:
import tensorflow as tf
cifar = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar.load_data()
model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100,
)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)
and here's the output:
Epoch 1/5
The Kernel crashed while executing code in the current cell or a previous cell.
Please review the code in the cell(s) to identify a possible cause of the failure.
Click here for more info.
View Jupyter log for further details.
And here is part of the log file (it was not captured in full):
metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M1
2024-10-06 23:30:49.894405: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 8.00 GB
2024-10-06 23:30:49.894420: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 2.67 GB
2024-10-06 23:30:49.894444: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-10-06 23:30:49.894460: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
2024-10-06 23:30:56.701461: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:117] Plugin optimizer for device_type GPU is enabled.
[libprotobuf FATAL google/protobuf/message_lite.cc:353] CHECK failed: target + size == res:
libc++abi: terminating due to uncaught exception of type google::protobuf::FatalException: CHECK failed: target + size == res:
Please respond to this post as soon as possible, as I am working on my project right now and keep getting this error again and again.
Device: Apple MacBook Air M1.
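A minimal debugging sketch (my own, assuming the same environment as above, not something from the guide): hide the GPU before building the model and rerun the same test on CPU only; if the CPU-only run finishes, the crash is specific to the Metal plugin path.
import tensorflow as tf

# Sketch only: hide the Metal GPU before any op runs, then rerun the ResNet50 test above.
tf.config.set_visible_devices([], "GPU")
print(tf.config.get_visible_devices())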
WWDC 2024 mentioned that the OCR feature from the Vision framework has support for "Korean, Swedish, and Chinese", but the Swedish support does not seem to be available...
Running either
print(try? VNRecognizeTextRequest().supportedRecognitionLanguages())
or
var ocrRequest = RecognizeTextRequest(.revision3)
print(ocrRequest.supportedRecognitionLanguages)
did not print Swedish as one of the supported languages, but Korean and Chinese are listed.
Tested on early versions of iOS 18 developer beta, and the latest version of iOS 18.1 (22B5054e).
Hey everyone,
I've been updating my code to take advantage of the new Vision API for text recognition in macOS 15. I'm noticing some very odd behavior, though: in general, the new Vision API seems to consistently produce worse results than the old API. For reference, here is how I'm setting up my request.
var request = RecognizeTextRequest()
request.recognitionLevel = getOCRMode() // generally accurate
request.usesLanguageCorrection = !disableLanguageCorrection // generally true
request.recognitionLanguages = language.split(separator: ",").map { Locale.Language(identifier: String($0)) } // generally 'en'
let observations = try? await request.perform(on: image) as [RecognizedTextObservation]
Then I process the results and just take the top candidate, which, as mentioned above, is typically of worse quality than the same request formed with the old API.
Am I doing something wrong here?
Almost all the functions in Accelerate are for single precision (Float) and double precision (Double) operations. However, I stumbled upon three integer arithmetic functions which operate on Int32 values. Are there any more functions in Accelerate that operate on integer values? If not, then why aren't there more functions that work with integers?
I am using Apple’s Vision framework with DetectHorizonRequest to detect the horizon in an image. Here is my code:
func processHorizonImage(_ ciImage: CIImage) async {
    let request = DetectHorizonRequest()
    do {
        let result = try await request.perform(on: ciImage)
        print(result)
    } catch {
        print(error)
    }
}
After calling the perform method, the result I get is nil. To ensure the request's correctness, I have verified the following:
The input CIImage is valid and contains a visible horizon.
No errors are being thrown.
The relevant frameworks are properly imported.
Given that my image contains a clear horizon, why am I still not getting any results? I would appreciate any help or suggestions to resolve this issue.
Thank you for your support!
This is the image
Hi All,
Is it possible to record a video using Object Capture instead of taking a series of pictures?
Is it possible to get the bounding box coordinates of the object we capture ?
Hello.
I can't find anything about the SSML that is used in Apple's speech synthesis.
SSML from Google, Amazon and W3C either don't work or work incorrectly.
Where is Apple's documentation for their implementation of SSML?
Hi,
I am currently running an LSTM on TensorFlow. However, when I switched from Keras 2 to Keras 3, the code's running time increased about 10 times; it seems there is no GPU acceleration.
Here is my setup:
batch size = 256
optimiser = adam
activation = tanh
_________________________________________________________________
 Layer (type)                      Output Shape         Param #
=================================================================
 input_1 (InputLayer)              [(None, 7, 16)]      0
 bidirectional (Bidirectional)     (None, 7, 320)       226560
 bidirectional_1 (Bidirectional)   (None, 7, 512)       1181696
 bidirectional_2 (Bidirectional)   (None, 256)          656384
 dense (Dense)                     (None, 1)            257
=================================================================
Total params: 2064897 (7.88 MB)
Trainable params: 2064897 (7.88 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
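For reference, a rough reconstruction of the model from the summary above (my own sketch, not the original code; the layer widths are back-calculated from the parameter counts and default LSTM settings, an MSE loss, and the Adam optimiser are assumed):
import keras
from keras import layers

# Reconstruction sketch: 160/256/128 units per direction reproduce the
# 226560 / 1181696 / 656384 parameter counts shown in the summary above.
inputs = keras.Input(shape=(7, 16))
x = layers.Bidirectional(layers.LSTM(160, return_sequences=True))(inputs)
x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(128))(x)
outputs = layers.Dense(1)(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mape", "mse"])
model.summary()  # should report 2,064,897 trainable params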
This is keras 3.6.0 + tensorflow 2.17.0 + tensorflow-metal 1.1.0 training status:
Training------------
Epoch 1/200
28/681 ━━━━━━━━━━━━━━━━━━━━ 8:13 756ms/step - loss: 0.5901 - mape: 338.6876 - mse: 0.8591
This is keras 2.14.0 + tensorflow 2.14.0 + tensorflow-metal 1.1.0 training status:
Training------------
Epoch 1/200
681/681 [==============================] - 37s 49ms/step - loss: 3.6345 - mape: 499038.7500 - mse: 34.4148 - val_loss: 3.5452 - val_mape: 41.7964 - val_mse: 32.0133 - lr: 0.0010
Is that because Keras 3 has no GPU support on macOS?
Apart from that, if I change the LSTM activation from tanh to sigmoid in Keras 2, it loses GPU acceleration as well.
My system is macOS 15.0.1 and the code was running on Python 3.11.
I am not sure why these happen.
Thanks
I'm using the iOS 18.2 beta on my iPhone 15 Pro Max, but can't find Apple Intelligence, and the Settings app still shows the Old Siri logo.
Is it just me, or is early access to Image Playground not available? I've been waiting a little over 24 hours and still have no access. (No rush for the team if something is wrong; they might be busy rolling out the first few Apple Intelligence features in the iOS 18.1 public release.)
What am I not understanding here?
In short, the view loads text from the JSON descriptions and should then filter out certain word classes and return and display a list of the most-used words. Debugging shows the words being identified by the code, but they are not filtered out.
private func loadWordCounts() {
    DispatchQueue.global(qos: .background).async {
        let fileManager = FileManager.default
        guard let documentsDirectory = try? fileManager.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false) else { return }
        let descriptions = loadDescriptions(fileManager: fileManager, documentsDirectory: documentsDirectory)
        var counts = countWords(in: descriptions)
        let tagsToRemove: Set<NLTag> = [
            .verb,
            .pronoun,
            .determiner,
            .particle,
            .preposition,
            .conjunction,
            .interjection,
            .classifier
        ]
        for (word, _) in counts {
            let tagger = NLTagger(tagSchemes: [.lexicalClass])
            tagger.string = word
            let (tag, _) = tagger.tag(at: word.startIndex, unit: .word, scheme: .lexicalClass)
            if let unwrappedTag = tag, tagsToRemove.contains(unwrappedTag) {
                counts[word] = 0
            }
        }
        DispatchQueue.main.async {
            self.wordCounts = counts
        }
    }
}
When I import statsmodels in a Jupyter notebook, I get the following error:
ImportError: dlopen(/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: <5ACBAA79-2387-3BEF-9F8E-6B7584B0F5AD> /opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so
Reason: tried: '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache).
What should I do?
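A small diagnostic sketch (my own, assuming the Anaconda install at /opt/anaconda3 shown in the error): confirm which interpreter the notebook kernel is using and whether liblapack.3.dylib exists in that environment's lib directory.
import sys
import pathlib

# Sketch only: the dyld error above searches the environment's lib directory,
# so check the kernel's prefix directly.
print(sys.executable)
print(pathlib.Path(sys.prefix, "lib", "liblapack.3.dylib").exists())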
What are the major differences in the review process for AI-based apps vis-à-vis normal apps on the App Store?
Hello,
I’m attempting to convert a TensorFlow model to Core ML using the coremltools package, but I’m encountering an error during the conversion process. The error traceback points to an issue within the Cast operation in MIL (the Model Intermediate Language) when it tries to perform type inference:
AttributeError: 'float' object has no attribute 'astype'
Here is the relevant part of the error traceback:
File ~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/coremltools/converters/mil/mil/ops/defs/iOS15/elementwise_unary.py", line 896, in get_cast_value
return input_var.val.astype(dtype=type_map[dtype_val])
I’ve tried converting a model from the yamnet-tensorflow2 repository, and this error occurs when CoreML tries to cast a float type during the conversion of certain operations. I’m currently using Python 3.10 and coremltools version 6.0.1, with TensorFlow 2.x.
Has anyone encountered a similar issue or can offer suggestions on how to resolve this?
I’ve also considered that this might be related to mismatches in the model’s data types, but I’m not sure how to proceed.
Platform and package versions:
coremltools 6.1
tensorflow 2.10.0
tensorflow-estimator 2.10.0
tensorflow-hub 0.16.1
tensorflow-io-gcs-filesystem 0.37.1
Python 3.10.12
pip 24.3.1 from ~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/pip (python 3.10)
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64
Any help or pointers would be greatly appreciated!
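For context, here is a minimal sketch of the kind of conversion call involved (my own illustration; the SavedModel path and the input shape are assumptions, not taken from the actual project):
import numpy as np
import coremltools as ct

# Hypothetical path/shape for illustration only.
saved_model_dir = "yamnet_saved_model"

mlmodel = ct.convert(
    saved_model_dir,          # TF2 SavedModel directory
    source="tensorflow",
    convert_to="mlprogram",
    # Declaring the input explicitly can help the MIL type-inference passes.
    inputs=[ct.TensorType(shape=(15600,), dtype=np.float32)],
)
mlmodel.save("yamnet.mlpackage")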
Where can I find an example of using this MPSGraph function? I'm trying to use it to paste an image into a larger canvas at certain coordinates.
func sliceUpdateDataTensor(
    _ dataTensor: MPSGraphTensor,
    update updateTensor: MPSGraphTensor,
    starts: [NSNumber],
    ends: [NSNumber],
    strides: [NSNumber],
    startMask: UInt32,
    endMask: UInt32,
    squeezeMask: UInt32,
    name: String?
) -> MPSGraphTensor
Hi all,
When executing an HLO program using the JAX metal PJRT plugin, the program fails due to an unsupported data type returned by the rng_bit_generator operation.
The generated HLO includes:
%output_state, %output = "mhlo.rng_bit_generator"(%1) <{rng_algorithm = #mhlo.rng_algorithm<PHILOX>}> : (tensor<3xi64>) -> (tensor<3xi64>, tensor<3xui32>)
The error message indicates that:
Metal only supports MPSDataTypeFloat16, MPSDataTypeBFloat16, MPSDataTypeFloat32, MPSDataTypeInt32, and MPSDataTypeInt64.
The use of ui32 seems to be incompatible with Metal’s allowed types.
I’m trying to understand whether the ui32 output is the problem, or whether the use of rng_bit_generator itself is wrong.
Could you clarify if there is a workaround or planned support for ui32 output in this context? Alternatively, guidance on configuring rng_bit_generator for compatibility with Metal’s supported types would be greatly appreciated.
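For illustration, a minimal sketch (my assumption of a reproducer, not the original program): JAX's "rbg" PRNG implementation lowers to rng_bit_generator, and its raw bits come back as a uint32 tensor, which is the type the error message says Metal does not support.
import jax
import jax.numpy as jnp

# Sketch only: switch to the "rbg" PRNG so sampling lowers to rng_bit_generator.
jax.config.update("jax_default_prng_impl", "rbg")

key = jax.random.PRNGKey(0)
x = jax.random.uniform(key, (8,), dtype=jnp.float32)  # expected to hit the ui32 restriction on the Metal plugin
print(x)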
Hi, the most recent version of tensorflow-metal is only available for macOS 12.0 and for Python up to version 3.11. Is there any chance it could be updated with wheels for macOS 15 and Python 3.12 (which is the default version supported for TensorFlow 2.17+)? I'd note that even downgrading to Python 3.11 would not be sufficient, as the wheels only work for macOS 12.
Thanks.
Hi Everyone,
I'm currently facing an issue where TensorFlow is unable to detect the GPU on my M1 Mac for model training. When I run the following code to check for available GPUs:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
Num GPUs Available: 0
I have already applied the steps mentioned in the Apple developer document:
https://developer.apple.com/metal/tensorflow-plugin/
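For reference, a quick environment check (my own sketch): the Metal plugin only registers a GPU when tensorflow-metal is installed in the same interpreter/venv that runs the code, so printing the installed package versions next to the device list can narrow this down.
import importlib.metadata as md
import tensorflow as tf

# Sketch only: report which TensorFlow-related packages this interpreter can see.
for pkg in ("tensorflow", "tensorflow-macos", "tensorflow-metal"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed in this environment")

print("GPUs:", tf.config.list_physical_devices("GPU"))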
System Information:
Device: M1 Mac Pro Max
Python Version: 3.12.2
TensorFlow Version: 2.17.0
OS: macOS Sequoia (15.1)
Questions:
Is there any additional configuration required to enable GPU support on M1 Macs?
Are there specific TensorFlow versions that I should be using for better compatibility?
Has anyone else faced this issue, and how did you resolve it?