Explore the power of machine learning within apps. Discuss integrating machine learning features, share best practices, and explore the possibilities for your app.


Unsupported type in JAX metal PJRT plugin with rng_bit_generator
Hi all,

When executing an HLO program with the JAX Metal PJRT plugin, the program fails because of an unsupported data type returned by the rng_bit_generator operation. The generated HLO includes:

%output_state, %output = "mhlo.rng_bit_generator"(%1) <{rng_algorithm = #mhlo.rng_algorithm<PHILOX>}> : (tensor<3xi64>) -> (tensor<3xi64>, tensor<3xui32>)

The error message indicates that Metal only supports MPSDataTypeFloat16, MPSDataTypeBFloat16, MPSDataTypeFloat32, MPSDataTypeInt32, and MPSDataTypeInt64, so the ui32 output appears to be incompatible with Metal's allowed types. I'm trying to understand whether the ui32 output itself is the problem or whether rng_bit_generator is being used incorrectly.

Could you clarify whether there is a workaround or planned support for ui32 output in this context? Alternatively, guidance on configuring rng_bit_generator for compatibility with Metal's supported types would be greatly appreciated.
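A minimal sketch of one possible workaround, assuming the random-bit generation can be moved to the CPU backend while the rest of the computation stays on the default (Metal) device; this is a guess at a mitigation, not a documented fix:

```python
# Hedged sketch: run the RNG on the CPU backend, where the ui32 output of
# rng_bit_generator is supported, then continue on the default (Metal) device.
# The shapes and ops below are placeholders.
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)

# Force only the random sampling onto CPU.
with jax.default_device(jax.devices("cpu")[0]):
    noise = jax.random.normal(key, (1024,), dtype=jnp.float32)

# Subsequent ops run on the default device again.
out = jnp.tanh(noise) * 2.0
print(out.shape)
```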
Replies: 0 · Boosts: 0 · Views: 107 · Activity: 1w
Example Usage of sliceUpdateDataTensor
Where can I find an example of using this MPSGraph function? I'm trying to use it to paste an image into a larger canvas at certain coordinates.

func sliceUpdateDataTensor(
    _ dataTensor: MPSGraphTensor,
    update updateTensor: MPSGraphTensor,
    starts: [NSNumber],
    ends: [NSNumber],
    strides: [NSNumber],
    startMask: UInt32,
    endMask: UInt32,
    squeezeMask: UInt32,
    name: String?
) -> MPSGraphTensor
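For what it's worth, a minimal sketch of how this call could be used for the paste-into-canvas case; the [batch, height, width, channels] layout, the placeholder shapes, and the exclusive-end convention are assumptions on my part, not taken from documentation:

```swift
// Hedged sketch: paste a 64x64 RGB patch into a 256x256 canvas at row 32, column 48.
// Assumes a [batch, height, width, channels] layout and exclusive `ends`.
import MetalPerformanceShadersGraph

let graph = MPSGraph()

let canvas = graph.placeholder(shape: [1, 256, 256, 3], dataType: .float32, name: "canvas")
let patch  = graph.placeholder(shape: [1, 64, 64, 3], dataType: .float32, name: "patch")

// One entry per dimension; masks of 0 mean "use the starts/ends as given".
let pasted = graph.sliceUpdateDataTensor(
    canvas,
    update: patch,
    starts: [0, 32, 48, 0],
    ends: [1, 96, 112, 3],
    strides: [1, 1, 1, 1],
    startMask: 0,
    endMask: 0,
    squeezeMask: 0,
    name: "pasteIntoCanvas"
)
print(pasted.shape ?? [])
```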
Replies: 0 · Boosts: 0 · Views: 140 · Activity: 1w
Help with TensorFlow to CoreML Conversion: AttributeError: 'float' object has no attribute 'astype'
Hello,

I'm attempting to convert a TensorFlow model to Core ML using the coremltools package, but I'm encountering an error during the conversion process. The traceback points to an issue within the Cast operation in MIL (the Model Intermediate Language) when it tries to perform type inference:

AttributeError: 'float' object has no attribute 'astype'

Here is the relevant part of the traceback:

File ~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/coremltools/converters/mil/mil/ops/defs/iOS15/elementwise_unary.py", line 896, in get_cast_value
    return input_var.val.astype(dtype=type_map[dtype_val])

I'm trying to convert a model from the yamnet-tensorflow2 repository, and this error occurs when Core ML tries to cast a float value during the conversion of certain operations. I'm currently using Python 3.10 and coremltools version 6.0.1, with TensorFlow 2.x. Has anyone encountered a similar issue, or can anyone offer suggestions on how to resolve it? I've also considered that this might be related to mismatches in the model's data types, but I'm not sure how to proceed.

Platform and package versions:
coremltools 6.1
tensorflow 2.10.0
tensorflow-estimator 2.10.0
tensorflow-hub 0.16.1
tensorflow-io-gcs-filesystem 0.37.1
Python 3.10.12
pip 24.3.1 from ~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/pip (python 3.10)
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64

Any help or pointers would be greatly appreciated!
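A sketch of how I'd retry the conversion on a newer coremltools release with an explicit input type, which sometimes avoids bad dtype inference in Cast ops; the SavedModel path, the input name "waveform", and the 15600-sample shape are assumptions for illustration, not verified against the yamnet-tensorflow2 repo:

```python
# Hedged sketch: retry the conversion with an explicit input type pinned.
# Path, input name, and shape below are placeholders/assumptions.
import numpy as np
import coremltools as ct

saved_model_dir = "yamnet_saved_model"  # hypothetical path to the exported SavedModel

mlmodel = ct.convert(
    saved_model_dir,
    source="tensorflow",
    inputs=[ct.TensorType(name="waveform", shape=(15600,), dtype=np.float32)],
    minimum_deployment_target=ct.target.iOS15,
)
mlmodel.save("YAMNet.mlpackage")
```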
Replies: 1 · Boosts: 0 · Views: 168 · Activity: 1w
Issues with Statsmodels
When I import statsmodels in a Jupyter notebook, I get the following error:

ImportError: dlopen(/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: <5ACBAA79-2387-3BEF-9F8E-6B7584B0F5AD> /opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so
Reason: tried: '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache).

What should I do?
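A small diagnostic sketch, just to confirm that the failure is in SciPy's LAPACK linkage inside the Anaconda environment rather than in statsmodels itself (reinstalling SciPy in that environment is then the usual next step):

```python
# Hedged sketch: the ImportError comes from SciPy's compiled BLAS/LAPACK module,
# which statsmodels merely imports. These calls show which BLAS/LAPACK the
# installed NumPy/SciPy builds expect.
import numpy as np
np.show_config()

import scipy
scipy.show_config()

# Importing the failing module directly reproduces the dlopen error without statsmodels:
from scipy.linalg import _fblas  # noqa: F401
```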
Replies: 1 · Boosts: 0 · Views: 170 · Activity: 3w
NLTagger not filtering words such as "and", "to", "a", "in"
What am I not understanding here? In short, the view loads text from the JSON descriptions and should then filter out common function words, returning and displaying a list of the most-used words. Debugging shows the words being identified by the code, but they are not filtered out.

private func loadWordCounts() {
    DispatchQueue.global(qos: .background).async {
        let fileManager = FileManager.default
        guard let documentsDirectory = try? fileManager.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false) else { return }
        let descriptions = loadDescriptions(fileManager: fileManager, documentsDirectory: documentsDirectory)
        var counts = countWords(in: descriptions)
        let tagsToRemove: Set<NLTag> = [
            .verb, .pronoun, .determiner, .particle, .preposition, .conjunction, .interjection, .classifier
        ]
        for (word, _) in counts {
            let tagger = NLTagger(tagSchemes: [.lexicalClass])
            tagger.string = word
            let (tag, _) = tagger.tag(at: word.startIndex, unit: .word, scheme: .lexicalClass)
            if let unwrappedTag = tag, tagsToRemove.contains(unwrappedTag) {
                counts[word] = 0
            }
        }
        DispatchQueue.main.async {
            self.wordCounts = counts
        }
    }
}
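One thing that may be worth trying (a sketch under my own assumptions, not a confirmed fix): tag the full description text once and count only the lexical classes you want to keep, instead of re-tagging each counted word in isolation, since a single word without sentence context can be classified differently. Also note that setting counts[word] = 0 keeps the entry in the dictionary; removing it entirely may be closer to what the display expects.

```swift
// Hedged sketch: count only nouns/adjectives/unclassified words by tagging the
// full text, rather than tagging isolated words after counting.
import NaturalLanguage

func countInterestingWords(in text: String) -> [String: Int] {
    let keep: Set<NLTag> = [.noun, .adjective, .otherWord]
    var counts: [String: Int] = [:]

    let tagger = NLTagger(tagSchemes: [.lexicalClass])
    tagger.string = text
    tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                         unit: .word,
                         scheme: .lexicalClass,
                         options: [.omitPunctuation, .omitWhitespace]) { tag, range in
        if let tag = tag, keep.contains(tag) {
            counts[String(text[range]).lowercased(), default: 0] += 1
        }
        return true
    }
    return counts
}
```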
Replies: 0 · Boosts: 0 · Views: 170 · Activity: 3w
Playground (early access)
Is it just me, or is early-access Image Playground not available? I've been waiting a little over 24 hours and still have no access. (No rush for the team if something is wrong.) They might be busy rolling out the first few Apple Intelligence features with the iOS 18.1 public release.
Replies: 4 · Boosts: 2 · Views: 1.4k · Activity: 3w
Keras 3 with TensorFlow does not have GPU support on Apple silicon
Hi, I am currently running an LSTM model on TensorFlow. However, when I switched from Keras 2 to Keras 3, the running time increased about 10x; it seems there is no GPU acceleration. Here is my setup:

batch size = 256
optimiser = adam
activation = tanh

Layer (type)                      Output Shape        Param #
input_1 (InputLayer)              [(None, 7, 16)]     0
bidirectional (Bidirectional)     (None, 7, 320)      226560
bidirectional_1 (Bidirectional)   (None, 7, 512)      1181696
bidirectional_2 (Bidirectional)   (None, 256)         656384
dense (Dense)                     (None, 1)           257
Total params: 2064897 (7.88 MB)
Trainable params: 2064897 (7.88 MB)
Non-trainable params: 0 (0.00 Byte)

Training status with keras 3.6.0 + tensorflow 2.17.0 + tensorflow-metal 1.1.0:

Epoch 1/200
28/681 ━━━━━━━━━━━━━━━━━━━━ 8:13 756ms/step - loss: 0.5901 - mape: 338.6876 - mse: 0.8591

Training status with keras 2.14.0 + tensorflow 2.14.0 + tensorflow-metal 1.1.0:

Epoch 1/200
681/681 [==============================] - 37s 49ms/step - loss: 3.6345 - mape: 499038.7500 - mse: 34.4148 - val_loss: 3.5452 - val_mape: 41.7964 - val_mse: 32.0133 - lr: 0.0010

Is this because Keras 3 has no GPU support on macOS? Apart from that, if I change the LSTM activation from tanh to sigmoid in Keras 2, it loses GPU support as well. My system is macOS 15.0.1 and the code was running on Python 3.11. I am not sure why these happen. Thanks
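For reference, a sketch of the same architecture written against the Keras 3 API with the LSTM layers left at their default activations, since the fused (GPU-friendly) LSTM kernel is only selected when the defaults (tanh activation, sigmoid recurrent activation, no unrolling) are kept; whether tensorflow-metal then actually accelerates it under Keras 3 is exactly the open question here:

```python
# Hedged sketch: the same bidirectional LSTM stack with default LSTM settings,
# which is the configuration eligible for the fused/accelerated kernel.
import keras
from keras import layers

inputs = keras.Input(shape=(7, 16))
x = layers.Bidirectional(layers.LSTM(160, return_sequences=True))(inputs)   # -> (None, 7, 320)
x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)        # -> (None, 7, 512)
x = layers.Bidirectional(layers.LSTM(128))(x)                                # -> (None, 256)
outputs = layers.Dense(1)(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mape", "mse"])
model.summary()
```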
Replies: 2 · Boosts: 0 · Views: 326 · Activity: 4w
Unable to Get Result from DetectHorizonRequest - Result is nil
I am using Apple's Vision framework with DetectHorizonRequest to detect the horizon in an image. Here is my code:

func processHorizonImage(_ ciImage: CIImage) async {
    let request = DetectHorizonRequest()
    do {
        let result = try await request.perform(on: ciImage)
        print(result)
    } catch {
        print(error)
    }
}

After calling the perform method, the result is nil. To ensure the request's correctness, I have verified the following:

The input CIImage is valid and contains a visible horizon.
No errors are being thrown.
The relevant frameworks are properly imported.

Given that my image contains a clear horizon, why am I still not getting any results? I would appreciate any help or suggestions to resolve this issue. Thank you for your support!
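A small sketch of how I'd distinguish "no horizon detected" from a failed request, on the assumption that perform(on:) returns an optional HorizonObservation (which the printed nil suggests); passing a different photo or an explicit orientation may also be worth checking:

```swift
// Hedged sketch: treat nil as "no horizon found" rather than an error, and log the
// two cases separately. Assumes the request's result type is optional.
import Vision
import CoreImage

func processHorizonImage(_ ciImage: CIImage) async {
    let request = DetectHorizonRequest()
    do {
        if let observation = try await request.perform(on: ciImage) {
            print("Horizon observation:", observation)
        } else {
            print("No horizon was detected in this image.")
        }
    } catch {
        print("Horizon request failed:", error)
    }
}
```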
Replies: 0 · Boosts: 0 · Views: 210 · Activity: Oct ’24
Integer arithmetic with Accelerate
Almost all the functions in Accelerate are for single precision (Float) and double precision (Double) operations. However, I stumbled upon three integer arithmetic functions which operate on Int32 values. Are there any more functions in Accelerate that operate on integer values? If not, then why aren't there more functions that work with integers?
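For context, a sketch calling one of those Int32 routines (vDSP_vaddi, element-wise integer addition) from Swift; whether there are many more beyond this handful is exactly the question, since most of vDSP is Float/Double only:

```swift
// Hedged sketch: element-wise Int32 addition with vDSP_vaddi.
import Accelerate

let a: [Int32] = [1, 2, 3, 4]
let b: [Int32] = [10, 20, 30, 40]
var c = [Int32](repeating: 0, count: a.count)

vDSP_vaddi(a, 1, b, 1, &c, 1, vDSP_Length(a.count))
print(c)  // [11, 22, 33, 44]
```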
Replies: 1 · Boosts: 0 · Views: 175 · Activity: Oct ’24
New Vision API
Hey everyone, I've been updating my code to take advantage of the new Vision API for text recognition in macOS 15. I'm noticing some very odd behavior, though: in general, the new Vision API consistently produces worse results than the old API. For reference, here is how I'm setting up my request:

var request = RecognizeTextRequest()
request.recognitionLevel = getOCRMode() // generally accurate
request.usesLanguageCorrection = !disableLanguageCorrection // generally true
request.recognitionLanguages = language.split(separator: ",").map { Locale.Language(identifier: String($0)) } // generally 'en'
let observations = try? await request.perform(on: image) as [RecognizedTextObservation]

Then I process the results and just take the top candidate, which, as mentioned above, is typically of worse quality than the same request formed with the old API. Am I doing something wrong here?
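In case it helps narrow things down, a sketch of pulling the top candidate and its confidence out of the new API's observations, so quality can be compared with confidences attached; the fixed English-only configuration is an assumption made for the example:

```swift
// Hedged sketch: collect (top candidate, confidence) pairs from the new API.
import Vision

func recognizeText(in image: CGImage) async -> [(text: String, confidence: Float)] {
    var request = RecognizeTextRequest()
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true
    request.recognitionLanguages = [Locale.Language(identifier: "en")]

    guard let observations = try? await request.perform(on: image) else { return [] }
    return observations.compactMap { observation in
        guard let top = observation.topCandidates(1).first else { return nil }
        return (top.string, top.confidence)
    }
}
```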
Replies: 0 · Boosts: 0 · Views: 208 · Activity: Oct ’24
Vision framework OCR missing Swedish support?
WWDC 2024 mentioned that the OCR feature of the Vision framework supports "Korean, Swedish, and Chinese", but the Swedish support does not seem to be available. Running either

print(try? VNRecognizeTextRequest().supportedRecognitionLanguages())

or

var ocrRequest = RecognizeTextRequest(.revision3)
print(ocrRequest.supportedRecognitionLanguages)

does not list Swedish among the supported languages, but Korean and Chinese are there. Tested on early versions of the iOS 18 developer beta and on the latest version of iOS 18.1 (22B5054e).
Replies: 1 · Boosts: 0 · Views: 285 · Activity: Oct ’24
Kernel dying issue after installing tensorflow
I was working on my project, and when I tried to train a model the kernel crashed. I restarted the kernel and tried again, and still got the same crash. Then I read a thread about the same issue where Apple support recommended installing tensorflow-macos and tensorflow-metal, following the guide at https://developer.apple.com/metal/tensorflow-plugin/ . I did so and tried every single step, but when I ran the test code provided on that page I got the same error. Here are the code and the output.

Code:

import tensorflow as tf

cifar = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar.load_data()
model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100,)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)

Output:

Epoch 1/5
The Kernel crashed while executing code in the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. Click here for more info. View Jupyter log for further details.

And here is part of the log file (the full log would not come through):

metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M1
2024-10-06 23:30:49.894405: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 8.00 GB
2024-10-06 23:30:49.894420: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 2.67 GB
2024-10-06 23:30:49.894444: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-10-06 23:30:49.894460: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
2024-10-06 23:30:56.701461: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:117] Plugin optimizer for device_type GPU is enabled.
[libprotobuf FATAL google/protobuf/message_lite.cc:353] CHECK failed: target + size == res:
libc++abi: terminating due to uncaught exception of type google::protobuf::FatalException: CHECK failed: target + size == res:

Please respond as soon as possible, as I am working on my project now and keep hitting this error again and again.

Device: Apple MacBook Air M1.
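A sketch of a much smaller sanity check I'd run first, to separate a broken tensorflow-metal installation from the large ResNet-50 workload on an 8 GB M1 (the protobuf CHECK failure is sometimes tied to the plugin itself, so this is a diagnostic, not a fix):

```python
# Hedged sketch: a tiny model and batch size. If even this crashes the kernel,
# the problem is the installation rather than the workload size.
import tensorflow as tf

print("TF version:", tf.__version__)
print("GPU devices:", tf.config.list_physical_devices("GPU"))

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[:2048].astype("float32") / 255.0
y_train = y_train[:2048]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=32)
```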
Replies: 0 · Boosts: 0 · Views: 311 · Activity: Oct ’24
Many inputs to `MPSNNGraph::encodeBatchToCommandBuffer`
I understand we can use MPSImageBatch as input to the [MPSNNGraph encodeBatchToCommandBuffer: ...] method. That said, all inputs to the MPSNNGraph need to be encapsulated in MPSImage(s).

Suppose I have a machine learning application that trains/infers on thousands of input data points, where each input has 4 feature channels, and Metal Performance Shaders is chosen as the primary AI backbone for real-time use. Due to the nature of the encodeBatchToCommandBuffer method, I have to create a MTLTexture first as a 2D texture array. The texture has a pixel width of 1, a height of 1, and a pixel format of RGBA32Float. The general setup is:

#define NumInputDims 4

MPSImageBatch * infBatch = @[];
const uint32_t totalFeatureSets = N;

// Each slice is 4 (RGBA) channels.
const uint32_t totalSlices = (totalFeatureSets * NumInputDims + 3) / 4;

MTLTextureDescriptor * descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat: MTLPixelFormatRGBA32Float
                                                                                       width: 1
                                                                                      height: 1
                                                                                   mipmapped: NO];
descriptor.textureType = MTLTextureType2DArray;
descriptor.arrayLength = totalSlices;

id<MTLTexture> texture = [mDevice newTextureWithDescriptor: descriptor];

// bytes per row is `4 * sizeof(float)` since we're doing one pixel of RGBA32F.
[texture replaceRegion: MTLRegionMake3D(0, 0, 0, 1, 1, totalSlices)
           mipmapLevel: 0
             withBytes: inputFeatureBuffers[0].data()
           bytesPerRow: 4 * sizeof(float)];

MPSImage * infQueryImage = [[MPSImage alloc] initWithTexture: texture
                                             featureChannels: NumInputDims];
infBatch = [infBatch arrayByAddingObject: infQueryImage];

The training/inference will be:

MPSNNGraph * mInferenceGraph = /*some MPSNNGraph setup*/;
MPSImageBatch * returnImage = [mInferenceGraph encodeBatchToCommandBuffer: commandBuffer
                                                             sourceImages: @[infBatch]
                                                             sourceStates: nil
                                                       intermediateImages: nil
                                                        destinationStates: nil];
// Commit and wait...
// Read the return image for the inferred result.

As you can see, the setup is really ad hoc - a lot of 1x1 pixels just for this sole purpose. Is there any better way I can achieve the same result while still using Metal Performance Shaders? A further question: can MPS handle general machine learning cases other than CNNs? I can see the APIs revolve around convolution networks, both in the online documentation and in the header files. Any response will be helpful, thank you.
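One alternative worth looking at (a Swift sketch under my own assumptions, not a drop-in replacement for the MPSNNGraph pipeline above): the newer MPSGraph API works directly on N-dimensional tensors, so per-sample feature vectors do not need to be packed into 1x1 MPSImages, and it is not limited to convolution-style networks:

```swift
// Hedged sketch: feed a [1024, 4] feature tensor to MPSGraph instead of a batch of
// 1x1 RGBA images. The toy matmul and shapes are placeholders.
import Metal
import MetalPerformanceShadersGraph

let device = MTLCreateSystemDefaultDevice()!
let graph = MPSGraph()

let x = graph.placeholder(shape: [1024, 4], dataType: .float32, name: "features")
let w = graph.placeholder(shape: [4, 1], dataType: .float32, name: "weights")
let y = graph.matrixMultiplication(primary: x, secondary: w, name: "prediction")

var features = [Float](repeating: 0.5, count: 1024 * 4)
var weights: [Float] = [0.1, 0.2, 0.3, 0.4]

let mpsDevice = MPSGraphDevice(mtlDevice: device)
let xData = MPSGraphTensorData(device: mpsDevice,
                               data: Data(bytes: &features, count: features.count * MemoryLayout<Float>.stride),
                               shape: [1024, 4],
                               dataType: .float32)
let wData = MPSGraphTensorData(device: mpsDevice,
                               data: Data(bytes: &weights, count: weights.count * MemoryLayout<Float>.stride),
                               shape: [4, 1],
                               dataType: .float32)

let results = graph.run(feeds: [x: xData, w: wData], targetTensors: [y], targetOperations: nil)
print(results[y]!.shape)  // [1024, 1]
```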
Replies: 0 · Boosts: 0 · Views: 263 · Activity: Oct ’24