Machine Learning


Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

Posts under Machine Learning tag

81 Posts
Post not yet marked as solved
1 Reply
519 Views
I converted a toy PyTorch regression model to a Core ML mlmodel using coremltools and set it to be updatable with mean_squared_error_loss. But when testing the training, context.metrics[.lossValue] can give a negative value, which is impossible. Furthermore, the context.metrics[.lossValue] result is very different from my own computed training loss, as shown in the attached screenshot. I was wondering if I used the wrong way to extract the training loss from the context? Does context.metrics[.lossValue] really give MSE if I used the coremltools function set_mean_squared_error_loss to set the loss? Any suggestion is appreciated. Since the validation loss decreases as the epochs go by, the model should indeed be updated correctly. I am using coremltools==7.0, xcode==15.0.1.

Here is my code to convert the PyTorch model to an updatable Core ML model:

import coremltools
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams, AdamParams
from coremltools.models import datatypes

# Load the model specification
spec = coremltools.utils.load_spec('regression.mlmodel')
builder = NeuralNetworkBuilder(spec=spec)
builder.inspect_output_features()  # Name: linear_1

# Make layers updatable
builder.make_updatable(['linear_0', 'linear_1'])

# Manually add a mean squared error loss layer
feature = ('linear_1', datatypes.Array(1))
builder.set_mean_squared_error_loss(name='lossLayer', input_feature=feature)

# Define the optimizer (Adam in this example)
adam_params = AdamParams(lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, batch=16)
builder.set_adam_optimizer(adam_params)

# Set the number of epochs
builder.set_epochs(100)

# Save the updated model
updated_model = coremltools.models.MLModel(spec)
updated_model.save('updatable_regression30.mlmodel')

Here is the code I use to try to update the saved updatable_regression30.mlmodel:

import CoreML
import GameKit

func generateSampleData(numSamples: Int, seed: UInt64) -> ([MLMultiArray], [MLMultiArray]) {
    // simple regression: y = 10 * sum(x) + 1
    var inputArray = [MLMultiArray]()
    var outputArray = [MLMultiArray]()

    // Create a random number generator with a fixed seed
    let randomSource = GKLinearCongruentialRandomSource(seed: seed)
    let randomDistribution = GKRandomDistribution(randomSource: randomSource, lowestValue: 0, highestValue: 1000)

    for _ in 0..<numSamples {
        do {
            let input = try MLMultiArray(shape: [1, 2], dataType: .float32)
            let output = try MLMultiArray(shape: [1], dataType: .float32)

            var sumInput: Float = 0
            for i in 0..<input.shape[1].intValue {
                // Generate random value using the fixed seed generator
                let inputValue = Float(randomDistribution.nextInt()) / 1000.0
                input[[0, i] as [NSNumber]] = NSNumber(value: inputValue)
                sumInput += inputValue
            }
            output[0] = NSNumber(value: 10.0 * sumInput + 1.0)

            inputArray.append(input)
            outputArray.append(output)
        } catch {
            print("Error occurred while creating MLMultiArrays: \(error)")
        }
    }
    return (inputArray, outputArray)
}

func computeLoss(model: MLModel, data: ([MLMultiArray], [MLMultiArray])) -> Double {
    let (inputData, outputData) = data
    var totalLoss: Double = 0
    for (index, input) in inputData.enumerated() {
        let output = outputData[index]
        if let prediction = try? model.prediction(from: MLDictionaryFeatureProvider(dictionary: ["x": MLFeatureValue(multiArray: input)])),
           let predictedOutput = prediction.featureValue(for: "linear_1")?.multiArrayValue {
            let loss = (output[0].doubleValue - predictedOutput[0].doubleValue)
            totalLoss += loss * loss // squared error
        }
    }
    return totalLoss / Double(inputData.count) // mean of squared errors
}

func trainModel() {
    // Load the updatable model
    guard let updatableModelURL = Bundle.main.url(forResource: "updatable_regression30", withExtension: "mlmodelc") else {
        print("Failed to load the updatable model")
        return
    }

    // Generate sample data
    let (inputData, outputData) = generateSampleData(numSamples: 200, seed: 8)
    let validationData = generateSampleData(numSamples: 100, seed: 18)

    // Create an MLArrayBatchProvider from the sample data
    var featureProviders = [MLFeatureProvider]()
    for (index, input) in inputData.enumerated() {
        let output = outputData[index]
        let dataPointFeatures: [String: MLFeatureValue] = [
            "x": MLFeatureValue(multiArray: input),
            "linear_1_true": MLFeatureValue(multiArray: output)
        ]
        if let provider = try? MLDictionaryFeatureProvider(dictionary: dataPointFeatures) {
            featureProviders.append(provider)
        }
    }
    let batchProvider = MLArrayBatchProvider(array: featureProviders)

    // Define progress handlers
    let progressHandlers = MLUpdateProgressHandlers(forEvents: [.trainingBegin, .epochEnd],
                                                    progressHandler: { context in
        switch context.event {
        case .trainingBegin:
            print("Training began.")
        case .epochEnd:
            let loss = context.metrics[.lossValue] as! Double
            let validationLoss = computeLoss(model: context.model, data: validationData)
            let computedTrainLoss = computeLoss(model: context.model, data: (inputData, outputData))
            print("Epoch \(context.metrics[.epochIndex]!) ended. Training Loss: \(loss), Computed Training Loss: \(computedTrainLoss), Validation Loss: \(validationLoss)")
        default:
            break
        }
    })

    // Create an update task with progress handlers
    let updateTask = try! MLUpdateTask(forModelAt: updatableModelURL,
                                       trainingData: batchProvider,
                                       configuration: nil,
                                       progressHandlers: progressHandlers)

    // Start the update task
    updateTask.resume()
}

// call trainModel() to start training
Posted by bcxiao. Last updated.
Post not yet marked as solved
1 Reply
505 Views
Objective: I am in the process of developing an application that utilizes machine learning (Core ML) to interact with photographs of documents, specifically focusing on those containing tables.

Step 1: Capturing the Image
The application will initiate by allowing users to take photos of documents. The key here is not just any part of the document, but specifically the sections where tables are present.

Step 2: Image Analysis through Machine Learning
Upon capturing the image, the next phase involves a machine learning model. Using Apple's Create ML tool with Swift, the application will analyze the image. The model's task is two-fold:
- Identifying the Table: distinguish the table from other document information, ensuring it recognizes and isolates the table structure within the photograph.
- Ignoring Irrelevant Information: concurrently, the model will disregard all non-table content, focusing the application's resources on the table data.

Step 3: Data Extraction and Training
Once the table is identified, the real work begins. The application will engage in detailed scrutiny, where it's trained to understand and recognize row and column data based on specific datasets. This training will enable the application to 'read' the table accurately, much like a human would, by identifying the organization of information into rows and columns.

Step 4: Information Storage
Post-analysis, the application will extract this critical data, storing it in a structured format. Each piece of identifiable information from the rows and columns will be systematically organized into a Dictionary or an Object. This structure is not just for immediate use but also efficient for future data operations within the app.

Conclusion: Through these sequential steps, the application transitions from merely capturing an image to intelligently recognizing, deciphering, and storing table data from within a physical document. This streamlined process is all courtesy of integrating machine learning into the app's functionality, promising significant efficiency and accuracy in data handling. Realistically, I have not found any good examples out there, so I am attempting to create my own ML model (with no experience 😅), so any guidance or help would be very much appreciated.
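As a starting point before training a custom model, here is a minimal sketch of how Vision's built-in text recognition could pull text out of a captured photo and group it into rows by vertical position. The image URL, the row tolerance, and the grouping heuristic are illustrative assumptions, not the trained Create ML approach described in the post.

import Vision

// Recognize text in a document photo and bucket observations into rows by the
// vertical midpoint of their bounding boxes (Vision uses a bottom-left origin).
// A trained table-detection model would replace this simple heuristic.
func extractRows(from imageURL: URL) throws -> [[String]] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(url: imageURL, options: [:])
    try handler.perform([request])

    let observations = request.results ?? []

    // Sort top-to-bottom, then put observations with similar midY into the same row.
    let sorted = observations.sorted { $0.boundingBox.midY > $1.boundingBox.midY }
    var rows: [[VNRecognizedTextObservation]] = []
    for observation in sorted {
        if let last = rows.last?.first,
           abs(last.boundingBox.midY - observation.boundingBox.midY) < 0.02 {
            rows[rows.count - 1].append(observation)
        } else {
            rows.append([observation])
        }
    }

    // Within each row, order cells left-to-right and keep the best candidate string.
    return rows.map { row in
        row.sorted { $0.boundingBox.minX < $1.boundingBox.minX }
            .compactMap { $0.topCandidates(1).first?.string }
    }
}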
Posted. Last updated.
Post not yet marked as solved
2 Replies
506 Views
I followed the video Composing advanced models with Create ML Components. I created the model with:

let urlParameter = URL(fileURLWithPath: "/path/to/model.pkg")

let (training, validation) = dataFrame.randomSplit(by: 0.8)
let model = try await transformer.fitted(to: DataFrame(training), validateOn: DataFrame(validation)) { event in
    guard let tAccuracy = event.metrics[.trainingAccuracy] as? Double else { return }
    print(tAccuracy)
}
try transformer.write(model, to: urlParameter)
print("done")

The next goal is to read the model back and update it with a new DataFrame:

let urlCSV = URL(fileURLWithPath: "path/to/newData.csv")

var model = try transformer.read(from: urlParameter)         // load the created model
let newDataFrame = try DataFrame(contentsOfCSVFile: urlCSV)   // new DataFrame with features and annotations
try await transformer.update(&model, with: newDataFrame)      // I want to keep the previously learned data and update the model with the new data
try transformer.write(model, to: urlParameter)                // the model saves, but only the last added DataFrame is kept; the previous one is simply replaced by the new one

But it looks like I am only replacing the old data with the new data.

The question: how can I add new data to the model I created without losing the old data?
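One workaround to consider, sketched below under assumptions: if the fitted transformer replaces its parameters on update, keep the accumulated training data around and refit on the combined old and new rows, then overwrite the saved parameters. The append(_:) call for row-wise concatenation is an assumption about the TabularData API; if it is not available, merge the CSV files before loading. transformer and urlParameter are the values from the snippet above, and the file paths are placeholders.

import TabularData

// Refit on old rows + new rows instead of updating incrementally.
let oldCSV = URL(fileURLWithPath: "/path/to/trainingData.csv")   // hypothetical path to the original data
let newCSV = URL(fileURLWithPath: "/path/to/newData.csv")

var combined = try DataFrame(contentsOfCSVFile: oldCSV)
let newRows = try DataFrame(contentsOfCSVFile: newCSV)
combined.append(newRows)   // assumed row-wise concatenation; verify against the TabularData docs

let (training, validation) = combined.randomSplit(by: 0.8)
let refitted = try await transformer.fitted(to: DataFrame(training),
                                            validateOn: DataFrame(validation))
try transformer.write(refitted, to: urlParameter)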
Posted by griffenk. Last updated.
Post not yet marked as solved
2 Replies
662 Views
My app allows the user to select different Stable Diffusion models, and I noticed a very strange issue concerning memory management. When using the StableDiffusionPipeline (https://github.com/apple/ml-stable-diffusion) with cpu+gpu, around 1.5 GB of memory is not properly released after generateImages is called and the pipeline is released. When generating more images with a new StableDiffusionPipeline object, memory is reused and stays stable at around 1.5 GB after inference is complete. Everything, especially the MLModels, is released properly. My guess is that MLModel creates a persistent cache.

Here is the problem: when using a different MLModel afterwards, another 1.5 GB is not released and stays resident. Using a third model, this totals 4.5 GB of unreleased, persistent memory. At first I thought it was a bug in the StableDiffusionPipeline, but I was able to reproduce the behaviour in a very minimal Objective-C sample without ARC:

MLArrayBatchProvider *batchProvider = [[MLArrayBatchProvider alloc] initWithFeatureProviderArray:@[<VALID FEATURE PROVIDER>]];

MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
config.computeUnits = MLComputeUnitsCPUAndGPU;
MLModel *model = [[MLModel modelWithContentsOfURL:[NSURL fileURLWithPath:<VALID PATH TO .mlmodelc SD 1.5 FILE>]
                                    configuration:config
                                            error:&error] retain];

id<MLBatchProvider> returnProvider = [model predictionsFromBatch:batchProvider error:&error];

[model release];
[config release];
[batchProvider release];

After running this minimal code, 1.5 GB of persistent memory is present that is not released during the lifetime of the app. This only happens on macOS 14(.1) Sonoma and on iOS 17(.1), but not on macOS 13 Ventura. On Ventura, everything works as expected and the memory is released when predictionsFromBatch: is done and the model is released.

Some observations:
- This only happens using cpu+gpu, not cpu+ane (since that memory is allocated out of process) and not using cpu-only.
- It does not matter which Stable Diffusion model is used; I tried custom SD-derived models as well as the Apple-provided SD 1.5 models.
- I reproduced the issue on an MBP 16" M1 Max with macOS 14.1, an iPhone 12 mini with iOS 17.0.3, and an iPad Pro M2 with iPadOS 17.1.
- The memory that "leaks" is mostly huge malloc blocks of 100-500 MB in size OR IOSurfaces.
- This memory is allocated during predictionsFromBatch:, not while loading the model.
- Loading and unloading a model does not leak memory; only when predictionsFromBatch: is called is the huge memory chunk allocated, and it is never freed during the lifetime of the app.

Does anybody have any clue what is going on? I highly suspect that I am missing something crucial, but my colleagues and I looked everywhere trying to find a method of releasing this leaked/cached memory.
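One diagnostic worth ruling out, sketched below as an assumption rather than a confirmed fix: run the batch prediction inside an explicit autorelease pool, so that any autoreleased buffers produced during the call are drained deterministically before measuring resident memory. The equivalent of the non-ARC repro in Swift might look like this; the model URL and feature provider are placeholders.

import CoreML

// Hedged diagnostic sketch: if memory still sits at ~1.5 GB after this returns,
// the allocation is held by the framework itself rather than by pending
// autoreleases in the calling code.
func runOnce(modelURL: URL, provider: MLFeatureProvider) throws {
    try autoreleasepool {
        let config = MLModelConfiguration()
        config.computeUnits = .cpuAndGPU

        let model = try MLModel(contentsOf: modelURL, configuration: config)
        let batch = MLArrayBatchProvider(array: [provider])
        _ = try model.predictions(from: batch, options: MLPredictionOptions())
    }
}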
Posted by MendelK. Last updated.
Post not yet marked as solved
0 Replies
461 Views
Hi, when I try to train ResNet-50 with tensorflow-metal, I found that the l2 regularizer makes each epoch take almost 4x as long (~220 ms instead of 60 ms). I'm on an M1 Max 16" MBP. It seems like regularization shouldn't add that much time; is there anything I can do to make it faster? Here's some sample code that reproduces the issue:

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, ZeroPadding2D, \
    Flatten, BatchNormalization, AveragePooling2D, Dense, Activation, Add
from tensorflow.keras.regularizers import l2
from tensorflow.keras.models import Model
from tensorflow.keras import activations
import random
import numpy as np

random.seed(1234)
np.random.seed(1234)
tf.random.set_seed(1234)

batch_size = 64

(train_im, train_lab), (test_im, test_lab) = tf.keras.datasets.cifar10.load_data()
train_im, test_im = train_im/255.0, test_im/255.0
train_lab_categorical = tf.keras.utils.to_categorical(train_lab, num_classes=10, dtype='uint8')
train_DataGen = tf.keras.preprocessing.image.ImageDataGenerator()
train_set_data = train_DataGen.flow(train_im, train_lab, batch_size=batch_size, shuffle=False)

# Change this to l2 for it to train much slower
regularizer = None  # l2(0.001)

def res_identity(x, filters):
    x_skip = x
    f1, f2 = filters

    x = Conv2D(f1, kernel_size=(1, 1), strides=(1, 1), padding='valid', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)

    x = Conv2D(f1, kernel_size=(3, 3), strides=(1, 1), padding='same', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)

    x = Conv2D(f2, kernel_size=(1, 1), strides=(1, 1), padding='valid', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)

    x = Add()([x, x_skip])
    x = Activation(activations.relu)(x)
    return x

def res_conv(x, s, filters):
    x_skip = x
    f1, f2 = filters

    x = Conv2D(f1, kernel_size=(1, 1), strides=(s, s), padding='valid', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)

    x = Conv2D(f1, kernel_size=(3, 3), strides=(1, 1), padding='same', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)

    x = Conv2D(f2, kernel_size=(1, 1), strides=(1, 1), padding='valid', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)

    x_skip = Conv2D(f2, kernel_size=(1, 1), strides=(s, s), padding='valid', use_bias=False, kernel_regularizer=regularizer)(x_skip)
    x_skip = BatchNormalization()(x_skip)

    x = Add()([x, x_skip])
    x = Activation(activations.relu)(x)
    return x

input = Input(shape=(train_im.shape[1], train_im.shape[2], train_im.shape[3]), batch_size=batch_size)

x = ZeroPadding2D(padding=(3, 3))(input)
x = Conv2D(64, kernel_size=(7, 7), strides=(2, 2), use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation(activations.relu)(x)
x = MaxPooling2D((3, 3), strides=(2, 2))(x)

x = res_conv(x, s=1, filters=(64, 256))
x = res_identity(x, filters=(64, 256))
x = res_identity(x, filters=(64, 256))

x = res_conv(x, s=2, filters=(128, 512))
x = res_identity(x, filters=(128, 512))
x = res_identity(x, filters=(128, 512))
x = res_identity(x, filters=(128, 512))

x = res_conv(x, s=2, filters=(256, 1024))
x = res_identity(x, filters=(256, 1024))
x = res_identity(x, filters=(256, 1024))
x = res_identity(x, filters=(256, 1024))
x = res_identity(x, filters=(256, 1024))
x = res_identity(x, filters=(256, 1024))

x = res_conv(x, s=2, filters=(512, 2048))
x = res_identity(x, filters=(512, 2048))
x = res_identity(x, filters=(512, 2048))

x = AveragePooling2D((2, 2), padding='same')(x)
x = Flatten()(x)
x = Dense(10, activation='softmax', kernel_initializer='he_normal')(x)

model = Model(inputs=input, outputs=x, name='Resnet50')

opt = tf.keras.optimizers.legacy.SGD(learning_rate=0.01)
model.compile(loss=tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE),
              optimizer=opt)
model.fit(x=train_im, y=train_lab_categorical, batch_size=batch_size, epochs=150,
          steps_per_epoch=train_im.shape[0]/batch_size)
Posted. Last updated.
Post not yet marked as solved
9 Replies
9.9k Views
Hi everyone, I found that the performance of the GPU is not as good as I expected (as slow as a turtle), so I want to switch from GPU to CPU, but the mlcompute module cannot be found, which is so weird. The same code takes 156 s per epoch on Colab versus 40 minutes per epoch on my computer (JupyterLab). I only used a small dataset (a few thousand data points), and each epoch only has 20 batches. I am so disappointed, and it seems like the "powerful" GPU is a joke. I am using macOS 12.0.1 and tensorflow-macos 2.6.0. Can anyone tell me why this happens?
Posted. Last updated.
Post not yet marked as solved
6 Replies
6.3k Views
Hi, I installed sklearn successfully and ran the MNIST toy example successfully, then I started to run my own project. The funny thing is that everything seems fine at first (at least no ImportError occurs), but when I make some changes to my code and try to run all cells again (I use JupyterLab), an ImportError occurs:

ImportError: dlopen(/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/qhull.cpython-39-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
  Referenced from: /Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/qhull.cpython-39-darwin.so
  Reason: tried: '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/../../../../liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/../../../../liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file)

Then I have to uninstall scipy, sklearn, etc. and reinstall all of them, and my code can be run again... magically, I hate to say. Does anyone know how to permanently solve this problem and make sklearn more stable?
Posted. Last updated.
Post not yet marked as solved
0 Replies
494 Views
I enrolled in a Data Science bachelor's degree. As per the curriculum, they only focus on Python; Swift and Xcode are not part of the curriculum. I am wondering if I will be able to apply what I learn in my degree, like data analysis and machine learning, in Xcode. As I am learning Swift, I am liking it; it's easy and clear. I would like to do data analytics and build models in Xcode in the future. Will it be possible?
Posted. Last updated.
Post not yet marked as solved
4 Replies
615 Views
Hi. I have followed the instructions here to install TensorFlow with GPU support for my 16-inch 2019 Intel MacBook Pro (with an AMD GPU). The installation process seems to be successful (I get no error), but when I try to test it, after running import tensorflow as tf, I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/mahonik/.virtualenvs/tf-metal-new/lib/python3.11/site-packages/tensorflow/__init__.py", line 445, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/mahonik/.virtualenvs/tf-metal-new/lib/python3.11/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/mahonik/.virtualenvs/tf-metal-new/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN10tensorflow16TensorShapeProtoC1ERKS0_
  Referenced from: <C62E0AB4-567E-3E14-8F96-9F07A746C4DC> /Users/mahonik/.virtualenvs/tf-metal-new/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: <0B1F231A-6766-3F61-81D9-6782129807A9> /Users/mahonik/.virtualenvs/tf-metal-new/lib/python3.11/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so

My env's packages:
...
numpy                        1.26.1
tensorboard                  2.14.1
tensorboard-data-server     0.7.1
tensorflow                   2.14.0
tensorflow-estimator         2.14.0
tensorflow-io-gcs-filesystem 0.34.0
tensorflow-metal             1.0.0
...
Posted. Last updated.
Post not yet marked as solved
5 Replies
694 Views
I had code that ran 7x faster on Ventura compared to how it runs now on Sonoma. For the basic model training I used:

let pmst = MLBoostedTreeRegressor.ModelParameters(validation: .split(strategy: .automatic), maxIterations: 10000)
let model = try MLBoostedTreeRegressor(trainingData: trainingdata, targetColumn: columntopredict, parameters: pmst)

which took around 2 seconds on Ventura and now takes between 10 and 14 seconds on Sonoma. I have tried to investigate why, and have noticed that when I run the training I see these results:

useWatchSPIForScribble: NO, allowLowPrecisionAccumulationOnGPU: NO, allowBackgroundGPUComputeSetting: NO, preferredMetalDevice: (null), enableTestVectorMode: NO, parameters: (null), rootModelURL: (null), profilingOptions: 0, usePreloadedKey: NO, trainWithMLCompute: NO, parentModelName: , modelName: Unnamed_Model, experimentalMLE5EngineUsage: Enable, preparesLazily: NO, predictionConcurrencyHint: 0

Why is the preferred Metal device null? If I do

let devices = MTLCopyAllDevices()
for device in devices {
    config.preferredMetalDevice = device
    print(device.name)
}

I can see that the M1 chipset is available but not selected (from reading the literature, the default should be nil?). Is this the reason why it is so slow? Is there a way to force a change in the config or elsewhere? Why has the default changed, if it has?
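For completeness, here is a minimal sketch of pinning an explicit Metal device on an MLModelConfiguration rather than leaving preferredMetalDevice as nil. Whether the Create ML boosted-tree training path actually honors this configuration, and whether it explains the Sonoma regression, is an assumption rather than something the documentation confirms.

import CoreML
import Metal

// Explicitly select the system default Metal device instead of relying on the
// (null) default seen in the training log above.
let pinnedConfig = MLModelConfiguration()
if let device = MTLCreateSystemDefaultDevice() {
    pinnedConfig.preferredMetalDevice = device
    print("Pinned Metal device: \(device.name)")
}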
Posted. Last updated.
Post not yet marked as solved
7 Replies
1.7k Views
Hello - I have been struggling to find a solution online and I hope you can help me in a timely manner. I have installed the latest tensorflow and tensorflow-metal; I even went as far as installing tensorflow-nightly. My app generates the following as a result of my fit function on a CNN model with 8 layers:

2023-09-29 22:21:06.115768: I metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M1 Pro
2023-09-29 22:21:06.115846: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 16.00 GB
2023-09-29 22:21:06.116048: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 5.33 GB
2023-09-29 22:21:06.116264: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2023-09-29 22:21:06.116483: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )

Most importantly, the learning process is very slow, and I'd like to take advantage of all the new features of the latest versions. What can I do?
Posted by erezkatz. Last updated.
Post not yet marked as solved
0 Replies
880 Views
Hello, I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 vs iOS 16, but I'm posting it here just in case.

TL;DR: The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.

Longer description
The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and an iPhone 13 Pro (both use the A15 Bionic).

iOS 16 - iPhone SE 3rd gen (A15 Bionic): iOS 16 uses the ANE and results in fast prediction, load and compilation times.

iOS 17 - iPhone 13 Pro (A15 Bionic): iOS 17 doesn't seem to use the ANE, thus the prediction, load and compilation times are all slower.

Code to reproduce
The following is the code I'm using to export my PyTorch vision model (using coremltools). I've used the same code for the past few months with sensational results on iOS 16.

# Convert to Core ML using the Unified Conversion API
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    # compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL
)

System environment:
- Xcode version: 15.0
- coremltools version: 7.0.0
- OS (e.g. macOS version or Linux type): Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
- Any other relevant version information (e.g. PyTorch or TensorFlow version): PyTorch 2.0

Additional context
This happens across both "neuralnetwork" and "mlprogram" type models; neither uses the ANE on iOS 17, but both use the ANE on iOS 16.

If anyone has a similar experience, I'd love to hear more. Otherwise, if I'm doing something wrong when exporting models for iOS 17+, please let me know. Thank you!
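A small on-device check that may help narrow this down (a sketch under assumptions, not a fix): load the same compiled model with different compute-unit settings and compare prediction latency. If .cpuAndNeuralEngine is no faster than .cpuOnly on iOS 17 while it is on iOS 16, the ANE is likely not being scheduled for this model. The model URL and input provider are placeholders.

import CoreML

// Compare prediction latency across compute-unit settings for one model.
func compareComputeUnits(modelURL: URL, input: MLFeatureProvider) throws {
    for units in [MLComputeUnits.all, .cpuAndNeuralEngine, .cpuOnly] {
        let config = MLModelConfiguration()
        config.computeUnits = units

        let model = try MLModel(contentsOf: modelURL, configuration: config)
        let start = Date()
        _ = try model.prediction(from: input)
        // rawValue is printed because MLComputeUnits is an imported Objective-C enum.
        print("computeUnits \(units.rawValue): \(Date().timeIntervalSince(start) * 1000) ms")
    }
}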
Posted by mrdbourke. Last updated.
Post not yet marked as solved
41 Replies
29k Views
Device: MacBook Pro 16 M1 Max, 64 GB, running macOS 12.0.1. I tried setting up GPU-accelerated TensorFlow on my Mac using the following steps:

Setup: Xcode CLI / Homebrew / Miniforge
Conda env: Python 3.9.5

conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal
brew install libjpeg
conda install -y matplotlib jupyterlab

In JupyterLab, I try to execute this code:

from tensorflow.keras import layers
from tensorflow.keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()

The code executes, but I get this warning, indicating no GPU acceleration can be used as it defaults to a 0 MB GPU.

Error:
Metal device set to: Apple M1 Max
2021-10-27 08:23:32.872480: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-10-27 08:23:32.872707: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)

Does anyone have any idea how to fix this? I came across a bunch of posts around here related to the same issue but with no solid fix. I created a new question as I found the other questions less descriptive of the issue, and wanted to comprehensively depict it. Any fix would be of much help.
Posted. Last updated.
Post not yet marked as solved
0 Replies
436 Views
I added the word "sactional" to speechRequest.contextualStrings, but when speaking it always autocorrects to "sectional". I even tried training a model for speech by adding SFCustomLanguageModelData.PhraseCount(phrase: "sactional", count: 10) and generating a model, but that didn't work either. Is there a better way to make it work?
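For reference, here is a sketch of how the generated custom language model gets attached to a recognition request, following the WWDC23 "Customize on-device speech recognition" flow from memory; the exact prepare/attach calls, the bundled file name, and the client identifier are assumptions worth checking against the current Speech framework headers.

import Speech

// Prepare the exported custom language model data and attach it to the request.
// "CustomLMData.bin" is assumed to be the asset exported from SFCustomLanguageModelData.
func attachCustomLanguageModel(to request: SFSpeechAudioBufferRecognitionRequest) async throws {
    guard let assetURL = Bundle.main.url(forResource: "CustomLMData", withExtension: "bin") else { return }

    // Writable location where the prepared model is stored (placeholder path).
    let preparedURL = FileManager.default.temporaryDirectory.appendingPathComponent("sactional.lm")
    let configuration = SFSpeechLanguageModel.Configuration(languageModel: preparedURL)

    try await SFSpeechLanguageModel.prepareCustomLanguageModel(for: assetURL,
                                                               clientIdentifier: "com.example.app",
                                                               configuration: configuration)

    // Custom language models are on-device only.
    request.requiresOnDeviceRecognition = true
    request.customizedLanguageModel = configuration
    request.contextualStrings = ["sactional"]
}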
Posted. Last updated.
Post not yet marked as solved
1 Reply
1.8k Views
I'm interested in using CatBoost and XGBoost for some machine learning projects on my Mac, and I was wondering if it's possible to run these algorithms on my GPU(s) to speed up training times. I have a Mac with AMD Radeon Pro 5600M and Intel UHD Graphics 630 GPUs, and I'm running macOS Ventura 13.2.1. I've read that both CatBoost and XGBoost support GPU acceleration, but I'm not sure if this is possible on my system. Can anyone point me in the right direction for getting started with GPU-accelerated CatBoost/XGBoost on macOS? Are there any specific drivers or tools I need to install, or any other considerations I should be aware of? Thank you.
Posted. Last updated.
Post not yet marked as solved
1 Reply
877 Views
In the video Explore Natural Language multilingual models (https://developer.apple.com/videos/play/wwdc2023/10042/), it's said at 6:24 that there are three models. I wonder if it is possible to find semantic similarity across models? For example, English and Japanese belong to different models (Latin and CJK); can we compare the vectors produced by the different models to find out whether two sentences have similar meanings?
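For context, here is a minimal sketch of getting sentence vectors and distances out of NLEmbedding. Each script family is served by its own model, so the English and Japanese vectors come from separate embedding spaces; whether distances between them are meaningful is exactly the open question in the post, and the sketch only shows what the API exposes.

import NaturalLanguage

if let english = NLEmbedding.sentenceEmbedding(for: .english),
   let japanese = NLEmbedding.sentenceEmbedding(for: .japanese) {
    // Vectors from two different models; compare dimensionality first.
    let en = english.vector(for: "The weather is nice today.")
    let ja = japanese.vector(for: "今日はいい天気です。")
    print(en?.count ?? 0, ja?.count ?? 0)

    // Within a single model, distance(between:and:distanceType:) is well defined.
    let d = english.distance(between: "The weather is nice today.",
                             and: "It is sunny outside.",
                             distanceType: .cosine)
    print("English-English cosine distance: \(d)")
}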
Posted by oldhuhu. Last updated.
Post not yet marked as solved
12 Replies
4.0k Views
Hello, I'm new to using Core ML and I'm trying to build a test app with the models that already exist. When I try to classify the image, I get the following error: [coreml] Failed to get the home directory when checking model path. I would like to receive your help to solve this error. Thanks.
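To help isolate whether the model itself loads and classifies, here is a minimal Vision-based classification sketch; "MobileNetV2" is a placeholder for whichever bundled model the app actually uses, and this is only a sanity-check path, not a fix for the home-directory error.

import Vision
import CoreML

// Load a bundled Core ML classifier through its generated class and classify one image.
func classify(imageURL: URL) throws {
    let config = MLModelConfiguration()
    let coreMLModel = try MobileNetV2(configuration: config).model   // generated class, placeholder name
    let vnModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(url: imageURL, options: [:])
    try handler.perform([request])
}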
Posted. Last updated.
Post not yet marked as solved
0 Replies
364 Views
Hello, I am Pieter Bikkel. I study Software Engineering at the HAN, University of Applied Sciences, and I am working on an app that can recognize volleyball actions using Machine Learning. A volleyball coach can put an iPhone on a tripod and analyze a volleyball match. For example, where the ball always lands in the field, how hard the ball is served. I was inspired by this session and wondered if I could interview one of the experts in this field. This would allow me to develop my App even better. I hope you can help me with this.
Posted by pbikkel. Last updated.