Integrate machine learning models into your app using Core ML.

Core ML Documentation

Post | Replies | Boosts | Views | Activity

Custom Model Not Working Correctly in the Application #56
I created a model that classifies certain objects using YOLOv8. The model is not working properly in my application: while it works fine in the Xcode preview, in the app it either returns the same result with 99% confidence for every classification or returns no result at all. In Preview it looks like this: Predictions:

extension CameraVC: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
        guard let data = photo.fileDataRepresentation() else { return }
        guard let image = UIImage(data: data) else { return }
        guard let cgImage = image.cgImage else { fatalError("Unable to create CIImage") }
        let handler = VNImageRequestHandler(cgImage: cgImage, orientation: CGImagePropertyOrientation(image.imageOrientation))
        DispatchQueue.global(qos: .userInitiated).async {
            do {
                try handler.perform([self.viewModel.detectionRequest])
            } catch {
                fatalError("Failed to perform detection: \(error)")
            }
        }
    }
}

lazy var detectionRequest: VNCoreMLRequest = {
    do {
        let model = try VNCoreMLModel(for: bestv720().model)
        let request = VNCoreMLRequest(model: model) { [weak self] request, error in
            self?.processDetections(for: request, error: error)
        }
        request.imageCropAndScaleOption = .centerCrop
        return request
    } catch {
        fatalError("Failed to load Vision ML model: \(error)")
    }
}()

This is where I print the recognized objects:

func processDetections(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        var label = ""
        var all_results: [String] = []
        var all_confidence: [Float] = []
        var true_results: [String] = []
        var true_confidence: [Float] = []
        for result in results {
            for i in 0...results.count {
                all_results.append(result.labels[i].identifier)
                all_confidence.append(result.labels[i].confidence)
                for confidence in all_confidence {
                    if confidence > 0.7 {
                        true_results.append(result.labels[i].identifier)
                        true_confidence.append(confidence)
                    }
                }
            }
            label = result.labels[0].identifier
        }
        print("True Results ", true_results)
        print("True Confidence ", true_confidence)
        self.output?.updateView(label: label)
    }
}

I converted the model like this:

from ultralytics import YOLO
model = YOLO(model_path)
model.export(format='coreml', nms=True, imgsz=[720, 1280])
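One detail worth noting in processDetections above: the inner loop for i in 0...results.count indexes result.labels by the number of observations rather than by the number of labels, and the confidence check re-scans the whole accumulated array on every pass, so a single high-confidence label can be appended many times. Below is a minimal per-observation filtering sketch, keeping the names from the post (output and updateView are assumed to exist as above); it is one way to make the printed results easier to interpret, not a confirmed fix for the 99%-same-class symptom.

func processDetections(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }

        // Each observation carries its own ranked label list; keep the top label
        // of every observation whose confidence clears the threshold.
        let confident: [(identifier: String, confidence: Float)] = results.compactMap { observation in
            guard let top = observation.labels.first, top.confidence > 0.7 else { return nil }
            return (top.identifier, top.confidence)
        }

        print("True Results ", confident.map { $0.identifier })
        print("True Confidence ", confident.map { $0.confidence })

        // Show the best remaining label, if any (updateView comes from the original post).
        self.output?.updateView(label: confident.first?.identifier ?? "")
    }
}

With per-observation output like this, it should be easier to see whether the app path and the Xcode preview are actually receiving comparable observations.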
Replies: 2 | Boosts: 1 | Views: 661 | Activity: Jun ’24
PyTorch to CoreML Model inaccuracy
I am currently working on a 2D pose estimator. I developed a PyTorch vision-transformer-based model with 17 joints in COCO format, trained on a custom dataset, and converted it to CoreML using coremltools 6.2. However, upon running the converted model on iOS, I observed a significant drop in accuracy. This video (https://youtu.be/EfGFrOZQGtU) demonstrates the outputs of the PyTorch model (on the left) and the CoreML model (on the right). Could you please confirm whether this drop in accuracy is expected and suggest any possible solutions? Please note that all preprocessing and post-processing steps are identical between the two models.

P.S. While converting, I also got the following warning:

TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):

P.P.S. When we initialize the CoreML model on iOS 17.0, we get these errors:

Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.
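A hedged first check for a drop like this (not a confirmed diagnosis) is whether it comes from reduced-precision execution on the GPU/Neural Engine rather than from the conversion itself: load the same converted model restricted to the CPU, which typically runs float32 for neural-network models, and compare the joints against the PyTorch reference. A minimal Swift sketch; PoseModel is a hypothetical name for the class Xcode generates from the .mlmodel.

import CoreML

// Load the converted model twice: once on the CPU-only (typically float32) path
// and once with all compute units, so their outputs can be compared offline.
func makeComparisonModels() throws -> (cpuOnly: PoseModel, allUnits: PoseModel) {
    let cpuConfig = MLModelConfiguration()
    cpuConfig.computeUnits = .cpuOnly

    let allConfig = MLModelConfiguration()
    allConfig.computeUnits = .all

    // PoseModel is a placeholder for whatever class was generated from the .mlmodel.
    return (cpuOnly: try PoseModel(configuration: cpuConfig),
            allUnits: try PoseModel(configuration: allConfig))
}

If the CPU-only outputs track PyTorch closely, the drop points at float16 execution; if they are equally off, the conversion itself is more suspect (the TracerWarning indicates a data-dependent branch was frozen during tracing, which can also change behaviour for inputs unlike the traced one).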
Replies: 2 | Boosts: 0 | Views: 1.3k | Activity: Jun ’24
Loading CoreML model increases app size?
Hi, I have been noticing some strange behaviour when using CoreML models in my app. I am using the whisper.cpp implementation, which has a CoreML option. This speeds up transcription compared with Metal. However, every time I use it, the app size shown in iPhone Settings -> General -> Storage increases, specifically the "Documents & Data" portion, while the bundle size stays constant. The app seems to grow by roughly the size of the CoreML model, and after a few reloads it can exceed 3-4 GB! I thought that maybe the CoreML model (which is in the bundle) was being saved to a file somewhere, but I can't see where. I have tried Instruments and Xcode, plus lots of printing of the cache and temp directories, deleting the caches, and so on, with no effect. I downloaded the app's container from Xcode and inspected it: there are some files stored in the cache, but only a few KB, and even though Settings -> Storage shows a few GB, the container is only a few MB. Can someone please help or give me some guidance on how to figure out why Documents & Data keeps increasing? Where could this data live if it is not in the container downloaded from Xcode? This is the repo I am using: https://github.com/ggerganov/whisper.cpp. The SwiftUI app and the Objective-C app both behave the same way when using CoreML. Thanks in advance for any help; I am totally baffled by this behaviour.
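One hedged thing to rule out (a guess based on the symptom, not a confirmed cause): if the CoreML model is compiled from an .mlmodel on device, the compiled .mlmodelc ends up wherever the caller stores it, and compiling again on every run without reusing the previous result can accumulate copies under Documents & Data. A minimal compile-once-and-reuse sketch follows; the resource names are hypothetical and should be replaced with whatever whisper.cpp actually ships.

import CoreML

// Compile the bundled .mlmodel once, keep the .mlmodelc in Application Support,
// and load the existing compiled copy on later launches instead of producing a new one.
func loadCompiledModel() throws -> MLModel {
    let fm = FileManager.default
    let support = try fm.url(for: .applicationSupportDirectory, in: .userDomainMask,
                             appropriateFor: nil, create: true)
    let compiledURL = support.appendingPathComponent("whisper-encoder.mlmodelc") // hypothetical name

    if !fm.fileExists(atPath: compiledURL.path) {
        // Hypothetical bundled resource name; substitute the real one.
        guard let sourceURL = Bundle.main.url(forResource: "whisper-encoder", withExtension: "mlmodel") else {
            throw CocoaError(.fileNoSuchFile)
        }
        let temporaryCompiledURL = try MLModel.compileModel(at: sourceURL) // compiles to a temporary location
        try fm.moveItem(at: temporaryCompiledURL, to: compiledURL)
    }
    return try MLModel(contentsOf: compiledURL)
}

If the project already ships a precompiled .mlmodelc this is not the cause, in which case comparing the container contents immediately after a transcription run may show where the space is going.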
Replies: 6 | Boosts: 3 | Views: 1.1k | Activity: May ’24
The CoreML runtime is inconsistent.
for (int i = 0; i < 1000; i++) {
    double st_tmp = CFAbsoluteTimeGetCurrent();
    retBuffer = [self.enhancer enhance:pixelBuffer error:&error];
    double et_tmp = CFAbsoluteTimeGetCurrent();
    NSLog(@"[enhance once] %f ms ", (et_tmp - st_tmp) * 1000);
}

When I run a CoreML model using the above code, I notice that the runtime gradually decreases at the beginning. Output:

[enhance once] 14.965057 ms
[enhance once] 12.727022 ms
[enhance once] 12.818098 ms
[enhance once] 11.829972 ms
[enhance once] 11.461020 ms
[enhance once] 10.949016 ms
[enhance once] 10.712981 ms
[enhance once] 10.367990 ms
[enhance once] 10.077000 ms
[enhance once] 9.699941 ms
[enhance once] 9.370089 ms
[enhance once] 8.634090 ms
[enhance once] 7.659078 ms
[enhance once] 7.061005 ms
[enhance once] 6.729007 ms
[enhance once] 6.603003 ms
[enhance once] 6.427050 ms
[enhance once] 6.376028 ms
[enhance once] 6.509066 ms
[enhance once] 6.452084 ms
[enhance once] 6.549001 ms
[enhance once] 6.616950 ms
[enhance once] 6.471038 ms
[enhance once] 6.462932 ms
[enhance once] 6.443977 ms
[enhance once] 6.683946 ms
[enhance once] 6.538987 ms
[enhance once] 6.628990 ms
...

In most deep learning inference frameworks there is a warm-up phase, but typically only the first inference is slower. Why does CoreML show a gradually decreasing runtime at the beginning? Is there a way to make only the first inference slower, while keeping the rest consistent? I call the CoreML model in the (void)display_pixels:(IJKOverlay *)overlay function.
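Regarding the last question: a common way to get "only the first call is slow" behaviour is to run a handful of throwaway predictions right after the model loads, before any timed or user-visible work, so the ramp happens at load time instead of on real frames. A minimal Swift sketch of that idea; the enhance closure and dummy buffer are stand-ins for the enhancer in the post.

import CoreVideo

// Create a blank pixel buffer with the model's input dimensions (720p here as an example).
func makeDummyPixelBuffer(width: Int = 1280, height: Int = 720) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA, nil, &buffer)
    return buffer
}

// Run a few throwaway predictions at load time so the model reaches its
// steady-state latency before real frames are timed or displayed.
func warmUp(iterations: Int = 15, enhance: (CVPixelBuffer) throws -> CVPixelBuffer?) {
    guard let dummy = makeDummyPixelBuffer() else { return }
    for _ in 0..<iterations {
        _ = try? enhance(dummy)
    }
}

// Usage (assuming the Objective-C enhancer bridges to Swift as enhance(_:)):
// warmUp { try self.enhancer.enhance($0) }

This does not explain why the ramp is gradual rather than a single slow first call; it only moves the ramp away from the frames you measure.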
Replies: 1 | Boosts: 1 | Views: 655 | Activity: May ’24
CoreML model using excessive RAM during prediction
I have an mlprogram of 127.2 MB; it was created with TensorFlow and then converted to CoreML. When I request a prediction, memory usage shoots up to 2-2.5 GB every time. I've tried the optimization techniques in coremltools, but nothing seems to work; it still shoots up to the same 2-2.5 GB of RAM every time. I've attached a graph: it doesn't appear to be a leak, as the memory drops back down afterwards.
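As a hedged experiment rather than a fix: loading the same mlprogram under each compute-unit setting and watching allocations in Instruments can show which backend is responsible for the spike, since the CPU, GPU, and Neural Engine paths each plan their intermediate-tensor memory differently. MyModel below is a placeholder for the generated model class.

import CoreML

// Load the model restricted to a specific backend so the memory spike can be
// attributed to the CPU, GPU, or Neural Engine path in Instruments.
func loadModel(computeUnits: MLComputeUnits) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = computeUnits  // try .cpuOnly, .cpuAndGPU, .cpuAndNeuralEngine, .all
    return try MyModel(configuration: config).model  // MyModel is a hypothetical generated class
}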
Replies: 4 | Boosts: 0 | Views: 884 | Activity: Apr ’24
PyTorch convert function for op 'intimplicit' not implemented
I am trying to convert a traced PyTorch model with coremltools.converters.convert and I get this error: PyTorch convert function for op 'intimplicit' not implemented. I am trying to convert an RVC model from GitHub. I traced the model with torch.jit.trace and the conversion fails, so I tracked the problematic part down to the ** layer: https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer/lib/infer_pack/modules.py#L188

import torch
import coremltools as ct
from infer.lib.infer_pack.modules import **

model = **(192, 5, dilation_rate=1, n_layers=16, ***_channels=256, p_dropout=0)
model.remove_weight_norm()
model.eval()

test_x = torch.rand(1, 192, 200)
test_x_mask = torch.rand(1, 1, 200)
test_g = torch.rand(1, 256, 1)

traced_model = torch.jit.trace(model, (test_x, test_x_mask, test_g), check_trace=True)

x = ct.TensorType(name='x', shape=test_x.shape)
x_mask = ct.TensorType(name='x_mask', shape=test_x_mask.shape)
g = ct.TensorType(name='g', shape=test_g.shape)

mlmodel = ct.converters.convert(traced_model, inputs=[x, x_mask, g])

I get the error: RuntimeError: PyTorch convert function for op 'intimplicit' not implemented. How could I modify **::forward so that it does not generate an intimplicit operator? Thanks, David
Replies: 1 | Boosts: 0 | Views: 900 | Activity: Feb ’24
Vision Pro CoreML inference 10x slower than M1 Mac/seems to run on CPU
I have a CoreML model that I run in my app, Spatial Media Toolkit, which lets you convert 2D photos to Spatial. Running the model on my 13" M1 Mac gives 70 ms inference. Running the exact same code on my Vision Pro takes 700 ms. I'm working on adding video support, but Vision Pro inference feels impossible at 700 ms per frame (about 20x slower than real time for 30 fps: 1 second of video takes 20 seconds). There is an MLModelConfiguration you can provide, and when I force CPU I get exactly the same performance. Either it's only running on the CPU, the Neural Engine is throttled, or maybe the GPU isn't allowed to help out. Disappointing, but it also feels like a software issue. I'd be curious whether anyone else has hit this or has any workarounds.
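For anyone trying to reproduce this, a small sketch that times one prediction under each compute-unit setting makes it easy to see whether the GPU or Neural Engine changes the latency at all on a given device; if every setting lands near 700 ms, the work really is staying on the CPU. The model URL and input feature provider are whatever the app already uses.

import CoreML
import Foundation

// Time a prediction under each compute-unit setting to see which backends
// actually change the latency on this device.
func profileComputeUnits(modelURL: URL, input: MLFeatureProvider) throws {
    // modelURL points at the compiled .mlmodelc in the app bundle.
    let settings: [(name: String, units: MLComputeUnits)] = [
        ("all", .all),
        ("cpuAndGPU", .cpuAndGPU),
        ("cpuAndNeuralEngine", .cpuAndNeuralEngine),
        ("cpuOnly", .cpuOnly)
    ]
    for setting in settings {
        let config = MLModelConfiguration()
        config.computeUnits = setting.units
        let model = try MLModel(contentsOf: modelURL, configuration: config)
        _ = try model.prediction(from: input)  // warm-up, not timed
        let start = CFAbsoluteTimeGetCurrent()
        _ = try model.prediction(from: input)
        print("\(setting.name): \((CFAbsoluteTimeGetCurrent() - start) * 1000) ms")
    }
}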
Replies: 0 | Boosts: 0 | Views: 684 | Activity: Feb ’24
CoreML Conversion of TensorFlow Keras NN fails on Iris Data set
On tf version 2.11.0, I have tried to follow a fairly standard NN example in order to convert it to a CoreML model. However, I cannot get this to work and I'm not clear where it is going wrong. It would seem to be a fairly standard task, a toy example, and I can't see why the conversion would fail. Any help would be appreciated. I have tried the different approaches listed below, but it seems the conversion should just work. I have also tried running the same code pinned to:

tensorflow==2.6.2
scikit-learn==0.19.2
pandas==1.1.1

and get a different sequence of errors. The Python code I used mostly comes from this example: https://lnwatson.co.uk/posts/intro_to_nn/

import pandas as pd
import numpy as np
import tensorflow as tf
import torch
from sklearn.model_selection import train_test_split
from tensorflow import keras
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'
np.bool = np.bool_
np.int = np.int_

print("tf version", tf.__version__)

csv_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
col_names = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Class']
df = pd.read_csv(csv_url, names=col_names)

labels = df.pop('Class')
labels = pd.get_dummies(labels)

X = df.values
y = labels.values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2)

model = keras.Sequential()
model.add(keras.layers.Dense(16, activation='relu', input_shape=(4,)))
model.add(keras.layers.Dense(3, activation='softmax'))
model.summary()

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=12, epochs=200, validation_data=(X_val, y_val))

import coremltools as ct
# Pass in `tf.keras.Model` to the Unified Conversion API
mlmodel = ct.convert(model, convert_to="mlprogram")
# mlmodel = ct.convert(model, source="tensorflow")
# mlmodel = ct.convert(model, convert_to="neuralnetwork")
# mlmodel = ct.convert(
#     model,
#     source="tensorflow",
#     inputs=[ct.TensorType(name="input")],
#     outputs=[ct.TensorType(name="output")],
#     minimum_deployment_target=ct.target.iOS14,
# )

When using any of these three:

mlmodel = ct.convert(model, convert_to="mlprogram")
mlmodel = ct.convert(model, source="tensorflow")
mlmodel = ct.convert(model, convert_to="neuralnetwork")

I get:

mlmodel2 = ct.convert(model, source="tensorflow")
ValueError: Const node 'sequential_5/dense_10/MatMul/ReadVariableOp' cannot have no value
ERROR:root:sequential_5/dense_11/BiasAdd/ReadVariableOp:0
ERROR:root:[ 0.34652767 0.16202268 -0.3554725 ]
Running TensorFlow Graph Passes: 100%| 5/5 [00:00<00:00, 28.76 passes/s]
Converting Frontend ==> MIL Ops: 8%| 1/12 [00:00<00:00, 16710.37 ops/s]
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File ~/Documents/CoreML Basic Models/NN_Keras_Iris.py:142
    130 import coremltools as ct
    131 # Pass in `tf.keras.Model` to the Unified Conversion API
    132 # mlmodel = ct.convert(model, convert_to="mlprogram")
    133 (...)
    140
    141 # ct.convert(mymodel(), source="tensorflow")
--> 142 mlmodel2 = ct.convert(model, source="tensorflow")
    144 mlmodel = ct.convert(
    145     model,
    146     source="tensorflow",
    (...)
    153     minimum_deployment_target=ct.target.iOS14,
    154 )
....
File ~/opt/anaconda3/envs/coreml_env/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/ops.py:430, in Const(context, node)
    427 @register_tf_op
    428 def Const(context, node):
    429     if node.value is None:
--> 430         raise ValueError("Const node '{}' cannot have no value".format(node.name))
    431     mode = get_const_mode(node.value.val)
    432     x = mb.const(val=node.value.val, mode=mode, name=node.name)
ValueError: Const node 'sequential_5/dense_10/MatMul/ReadVariableOp' cannot have no value

Second approach: I also tried specifying the input type with TensorType. However, when specifying the inputs and outputs I get a different error. I have tried variations on this initialiser, but all produce the same error; the variations revolve around adding input_shape and dtype=np.float32.

mlmodel = ct.convert(
    model,
    source="tensorflow",
    inputs=[ct.TensorType(name="input")],
    outputs=[ct.TensorType(name="output")],
    minimum_deployment_target=ct.target.iOS14,
)

File ~/opt/anaconda3/envs/coreml_env/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/load.py:106, in <listcomp>(.0)
    104 logging.debug(msg.format(outputs))
    105 outputs = outputs if isinstance(outputs, list) else [outputs]
--> 106 outputs = [i.split(":")[0] for i in outputs]
    107 if _get_version(tf.__version__) < _StrictVersion("1.13.1"):
    108     return tf.graph_util.extract_sub_graph(graph_def, outputs)
AttributeError: 'TensorType' object has no attribute 'split'
Replies: 0 | Boosts: 0 | Views: 759 | Activity: Jan ’24
CoreML model crashes on a real Vision Pro device during App Store Connect review
I run a MiDaS CoreML model on device. It runs well on the Vision Pro simulator and on a real iOS device, but crashes on a real Vision Pro. Crash message:

/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm:550: failed assertion `MPSKernel MTLComputePipelineStateCache unable to load function ndArrayConvolution2DA14.

Crashlog_com.moemiku.VisionMagicPhoto_2024-01-21-16-01-07.txt
Crashlog_com.moemiku.VisionMagicPhoto_2024-01-21-16-00-39.txt
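Since the assertion is raised inside MetalPerformanceShaders, one hedged way to narrow it down is to keep the GPU out of the compute plan on visionOS and see whether the model then loads; that does not fix the missing MPS kernel, but it tells you whether the crash is confined to the GPU path. MiDaSModel is a placeholder for the generated class name.

import CoreML

// Diagnostic configuration: exclude the GPU (and therefore the MPS kernels the
// assertion points at) on visionOS, while leaving other platforms unchanged.
func makeModelConfiguration() -> MLModelConfiguration {
    let config = MLModelConfiguration()
    #if os(visionOS)
    config.computeUnits = .cpuAndNeuralEngine
    #else
    config.computeUnits = .all
    #endif
    return config
}

// Usage (MiDaSModel is a hypothetical generated class name):
// let model = try MiDaSModel(configuration: makeModelConfiguration())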
Replies: 2 | Boosts: 0 | Views: 870 | Activity: Jan ’24
Color Format Requirements for Input to Apple's DeepLabV3 MLModel
I am sending CVPixelBuffers to the input of the DeepLabV3 MLModel. My understanding is that it requires the pixel color format 32ARGB or 32RGBA. Is that correct? Can 32BGRA be used as input? CVPixelBuffers support 32BGRA, and OpenCV does as well. Please note that I want to use the MLModel as trained. Neither 32RGBA nor 32ARGB is working for the CVPixelBuffer type.

32ARGB: an unsupported-runtime error occurs with the configuration as follows:

func configureOutput() {
    videoOutput.setSampleBufferDelegate(self, queue: bufferQueue)
    videoOutput.alwaysDiscardsLateVideoFrames = true
    videoOutput.videoSettings = [String(kCVPixelBufferPixelFormatTypeKey): kCMPixelFormat_32ARGB]
}

32RGBA: "Cannot find 'kCMPixelFormat_32rgba' in scope."

The app's pipeline: video-captured pixel buffers are sent to C++ code where OpenCV operations are performed, creating up to three smaller Mats, which are then converted back into pixel buffers in Objective-C. These converted pixel buffers are used in three ways: all are sent to the MLModel for image segmentation to identify people; the files may be sent to the photo library; or they may simply be viewed on screen. I need a color format that can support all of these downstream operations/pipelines.
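For what it's worth, kCVPixelFormatType_32BGRA is the BGRA format that AVCaptureVideoDataOutput and OpenCV both handle well, and when frames reach the model through Vision, VNImageRequestHandler converts and scales them to whatever the MLModel input expects, so the capture format does not have to match the model's native format. A sketch of that route, assuming Apple's published DeepLabV3 model class and the capture objects from the post:

import AVFoundation
import CoreML
import Vision

// Capture in 32BGRA, which CVPixelBuffer, OpenCV, the photo-library path,
// and on-screen display all handle.
func configureOutput(_ videoOutput: AVCaptureVideoDataOutput,
                     delegate: AVCaptureVideoDataOutputSampleBufferDelegate,
                     queue: DispatchQueue) {
    videoOutput.setSampleBufferDelegate(delegate, queue: queue)
    videoOutput.alwaysDiscardsLateVideoFrames = true
    videoOutput.videoSettings = [
        String(kCVPixelBufferPixelFormatTypeKey): kCVPixelFormatType_32BGRA
    ]
}

// Run segmentation on a BGRA buffer; Vision handles the conversion to the
// model's expected input format and size.
func segmentPeople(in pixelBuffer: CVPixelBuffer) throws {
    let model = try VNCoreMLModel(for: DeepLabV3(configuration: MLModelConfiguration()).model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        // DeepLabV3 returns its class map as a multi-array feature value.
        guard let observation = request.results?.first as? VNCoreMLFeatureValueObservation else { return }
        print(observation.featureValue)
    }
    request.imageCropAndScaleOption = .scaleFill
    try VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up).perform([request])
}

In a real pipeline the VNCoreMLModel and request would be created once rather than per frame; they are inline here only to keep the sketch self-contained.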
Replies: 1 | Boosts: 0 | Views: 598 | Activity: Jan ’24