Hi, I am trying to create a multi label image classifier model using CreateML (the one included in Xcode 16.1).
However, my annotations.json file won't get accepted by the app.
I get the following error: annotations.json file contains field "Index 0" that is not of type String
Here is a JSON example which results in said error:
[
    {
        "image": "image1.jpg",
        "annotations": [
            {
                "label": "car-license-plate",
                "coordinates": {
                    "x": 160, "y": 108, "width": 190, "height": 200
                }
            }
        ]
    },
    {
        "image": "image2.jpg",
        "annotations": [
            {
                "label": "car-license-plate",
                "coordinates": {
                    "x": 250, "y": 150, "width": 100, "height": 98
                }
            }
        ]
    }
]
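My understanding (which may be wrong) is that the Multi-Label Image Classifier template wants each entry in "annotations" to be a plain label string rather than an object with coordinates, i.e. something like the following, reusing my file names:
[
    {
        "image": "image1.jpg",
        "annotations": ["car-license-plate"]
    },
    {
        "image": "image2.jpg",
        "annotations": ["car-license-plate"]
    }
]
The "label"/"coordinates" format above looks like what the Object Detection template expects, so maybe that is the mismatch? Is the string-only format the correct one here?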
Hi Apple Developer Community,
I’m exploring ways to fine-tune the SNSoundClassifier to allow users of my iOS app to personalize the model by adding custom sounds or adjusting predictions. While Apple’s WWDC session on sound classification explains how to train from scratch, I’m specifically interested in using SNSoundClassifier as the base model and building/fine-tuning on top of it.
Here are a few questions I have:
1. Fine-Tuning on SNSoundClassifier:
Is there a way to fine-tune this model programmatically through APIs? The manual approach on macOS, as shown in this documentation, is clear, but how can it be done dynamically, within the app for users or in a cloud backend (AWS/iCloud)?
Are there APIs or classes that support such on-device/cloud-based fine-tuning or incremental learning? If not directly, can the classifier’s embeddings be used to train a lightweight custom layer?
Training is likely computationally intensive and would drain too much battery on device, so doing it in the cloud may be the right approach, but I need the right APIs to get this done. Sample code would help (see the sketch after this list).
2. Recommended Approach for In-App Model Customization:
If SNSoundClassifier doesn’t support fine-tuning, would transfer learning on models like MobileNetV2, YAMNet, OpenL3, or FastViT be more suitable?
Given these models (SNSoundClassifier, MobileNetV2, YAMNet, OpenL3, FastViT), which one would be best for accuracy and performance/efficiency on iOS? I aim to maintain real-time performance without sacrificing battery life. It is also important to know how well each architecture and its accuracy are retained after conversion to a Core ML model.
3. Cost-Effective Backend Setup for Training:
Mac EC2 instances on AWS have a 24-hour minimum billing, which can become expensive for limited user requests. Are there better alternatives for deploying and training models on demand when a user uploads files (training data)?
4. TensorFlow vs PyTorch:
Between TensorFlow and PyTorch, which framework would you recommend for iOS Core ML integration? TensorFlow Lite offers mobile-optimized models, but I’m also curious about PyTorch’s performance when converted to Core ML.
5. Metrics:
The metrics I have in mind while picking a model are: publisher, accuracy, fine-tuning capability, real-time/live use, suitability for iPhone 16, architectural retention after Core ML conversion, reasons for unsuitability, and recommended use case.
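For reference, the closest programmatic path I have found so far is sketched below: it uses the CreateML framework on macOS to train a new MLSoundClassifier on top of Apple's built-in audio feature extractor (transfer learning), rather than fine-tuning SNSoundClassifier itself. The directory and output URLs are placeholders.
import CreateML
import Foundation

// Hypothetical paths; each subfolder of trainingDirURL is one sound class.
let trainingDirURL = URL(fileURLWithPath: "/path/to/TrainingSounds")
let outputURL = URL(fileURLWithPath: "/path/to/CustomSoundClassifier.mlmodel")

// Trains on top of the built-in feature extractor, which is the closest
// programmatic equivalent I know of to "fine-tuning" a sound classifier.
let dataSource = MLSoundClassifier.DataSource.labeledDirectories(at: trainingDirURL)
let classifier = try MLSoundClassifier(trainingData: dataSource)
try classifier.write(to: outputURL)
As far as I can tell this is intended for macOS (or a Mac-based backend), which is part of why I am asking about the on-device and cloud options above.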
Any insights or recommended approaches would be greatly appreciated.
Thanks in advance!
It has been about 5 days since I sent the request and it still hasn’t been accepted. I am quite disappointed as Apple usually accepts requests within a couple of hours. Please fix this ASAP.
I have been stuck at “Early Access Requested” for about 48 hours. Usually they take about an hour or less to accept your request, but it seems like this one is very slow. Is this an issue on my end or Apple's?
Please let me know if there is a solution.
I'm trying to use the Spatial model to perform Object Tracking on a .usdz file that I create.
After loading the file, which I can view correctly in the console, I start the training.
Initially, I notice that disk usage on my machine increases. After several GB, the usage stops growing, but the training progress remains at 0.00% for hours with the message "About 8hr."
How can I understand what the issue is? Has anyone else experienced the same problem?
Thanks
Diego
Hi,
I'm training a model that should detect a forehand and a backhand stroke.
The data looks like this:
activity,timestamp,Acceleration_X,Acceleration_Y,Acceleration_Z,Rotation_X,Rotation_Y,Rotation_Z
forehand,0.0,0.08,-0.08,0.03,0.18,0.26,0.32
I can load it in Create ML, but it's showing the acceleration and rotation x, y, z as separate Doubles and not as one feature.
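For reference, this is a minimal sketch of how I am inspecting the CSV with the CreateML framework (the file path is a placeholder); each axis shows up as its own Double column:
import CreateML
import Foundation

// Placeholder path to the exported sensor recording.
let csvURL = URL(fileURLWithPath: "/path/to/strokes.csv")
let table = try MLDataTable(contentsOf: csvURL)

// Prints activity, timestamp, and each acceleration/rotation axis as a separate column.
print(table.columnNames)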
What do I have to change to make this work?
Thank you
Hi,
I'm working on training a Create ML object detector model; I've run into an issue that has me stumped: somewhere between 100,000 and 150,000 iterations, my model stops training and errors out.
More Details:
Create ML gives me an error prompt saying that it is unable to train the model and that I should delete the model source and start from the beginning, or duplicate the model and start from the beginning (slightly paraphrased).
I see the following error in the Create ML console (my user name and UUIDs have been redacted):
Unable to load model from file:///Users/<my user name>/Library/Caches/com.apple.dt.createml/projects/<UUID HERE>/sessions/checkpoint.sessions/<UUID Here>//training-000132500.checkpoint: Cannot open file:///Users/<my user name>/Library/Caches/com.apple.dt.createml/projects/<UUID Here>/sessions/checkpoint.sessions/<uuid here> //training-000132500.checkpoint/dir_archive.ini for read. Cannot open /Users/<my username>/Library/Caches/com.apple.dt.createml/projects/<UUID>/sessions/checkpoint.sessions/<UUID>//training-000132500.checkpoint/dir_archive.ini for reading
I've gone into Caches in my Library directory and I can see each piece of the file path in Finder until the //training-00132500 piece of the path, so I can at least confirm that Create ML appears to be unable to create or open the file it needs for this training session.
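In case it is useful, here is the rough check I ran to list what was actually written to the session directory (the project and session UUIDs are placeholders):
import Foundation

// Placeholder UUIDs standing in for my real project/session identifiers.
let sessionsDir = URL(fileURLWithPath: NSHomeDirectory())
    .appendingPathComponent("Library/Caches/com.apple.dt.createml/projects/<project-UUID>/sessions/checkpoint.sessions/<session-UUID>")

// Lists whatever checkpoints Create ML managed to write before failing.
let contents = try FileManager.default.contentsOfDirectory(atPath: sessionsDir.path)
print(contents)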
Technology Used:
Xcode 16
Apple M1 Pro
MacOS 14.6.1 (23G93)
I've also verified that Xcode and Terminal have full disk permissions in my System Preferences; I didn't see an option to add Create ML to this list.
I've also ensured that my createML project and its data sources are not in iCloud and are indeed local on my desktop.
Lastly, I made more space on my machine, so I should have a little over 1 TB of space.
Has anybody experienced this before? Any advice? I am majorly blocked on this issue, so I hope somebody can help shed some light on it!
Thanks!
I'm trying to generate a JSON file for my training data. I tried manually first and then tried using Roboflow, and I still get the same error:
_annotations.createml.json file contains field "Index 0" that is not of type String.
The JSON format provided by Roboflow was:
[{"image":"menu1_jpg.rf.44dfacc93487d5049ed82952b44c81f7.jpg","annotations":[{"label":"100","coordinates":{"x":497,"y":431.5,"width":32,"height":10}}]}]
Any help would be greatly appreciated.
Hi folks, I'm trying to import data to train a model and getting the above error. I'm using the latest Xcode, have double checked the formatting in the annotations file, and used jpgrepair to remove any corruption from the data files. Next step is to try a different dataset, but is this a particular known error? (Or am I doing something obviously wrong?)
2019 Intel Mac, Xcode 15.4, macOS Sonoma 14.1.1
Thanks
I'm getting this error again and again, even after reinstalling:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/__init__.py", line 439, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: _OBJC_CLASS_$_MPSGraphRandomOpDescriptor
  Referenced from: /Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: /System/Library/Frameworks/MetalPerformanceShadersGraph.framework/Versions/A/MetalPerformanceShadersGraph
I can successfully train an ActionClassifier using CreateML. However, I get crashes when I attempt to do the same asynchronously.
The model parameters and training data sources are the same in both cases:
let modelParameters = MLActionClassifier.ModelParameters(validation: validationDataSet, batchSize: 5, maximumIterations: 10, predictionWindowSize: 120, targetFrameRate: 30)
let trainingDataSource = MLActionClassifier.DataSource.directoryWithVideosAndAnnotation(at: myStudyParticipantURLFinal, annotationFile: documentURLFinal, videoColumn: "file", labelColumn: "category", startTimeColumn: "startTime", endTimeColumn: "endTime")
The only thing I add to attempt asynchronous training is sessionParameters:
let sessionDirectory = URL(fileURLWithPath: "\(NSHomeDirectory())/test")
// Session parameters can be provided to `train` method.
let sessionParameters = MLTrainingSessionParameters(
    sessionDirectory: sessionDirectory,
    reportInterval: 10,
    checkpointInterval: 100,
    iterations: 10
)
To the final method:
let trainJob = try MLActionClassifier.train(trainingData: trainingDataSource, parameters: modelParameters, sessionParameters: sessionParameters)
The job crashes saying it cannot find plist files. I notice that only one plist file is written: meta.plist
It seems there should also be a parameters.plist written, but it is not there.
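For completeness, here is how I collect the trained model from the returned job, a rough sketch based on my understanding of MLJob and Combine, reusing the trainJob from above (the output path is a placeholder):
import Combine
import CreateML
import Foundation

var subscriptions = Set<AnyCancellable>()

// Placeholder output location for the trained model.
let modelURL = URL(fileURLWithPath: "\(NSHomeDirectory())/test/ActionClassifier.mlmodel")

trainJob.result
    .sink { completion in
        // The failure about the missing plist files surfaces here as a .failure completion.
        print("Training finished: \(completion)")
    } receiveValue: { model in
        try? model.write(to: modelURL)
    }
    .store(in: &subscriptions)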
I dragged a folder containing two subfolders directly into CreateML. One subfolder contains images, and the other contains labeled datasets. The number of files in the labeled dataset matches the number of image files. However, it shows "Missing data for label dianjiaoyise.jsons. Detailed list of labels missing files: ["dianjiaoyise.jsons"]."
I have created and trained a Hand Pose classifier model and am trying to test it. I noticed in the WWDC2021 session "Classify hand poses and actions with Create ML" that the preview window has a prediction result, giving you the prediction based on the live preview or the imported images. Mine does not have that. When I try to import pictures or do the live test, there is no result; it's just the wireframe view, and under it there is nothing.
How do I fix this please?
Thanks.
I'm trying to use the Create ML Spatial template, but an unexpected error occurs within 1-3 minutes. I have tried several times with the same result. Is the Spatial template not available on an M1 Mac?
My development environment is
Apple M1 Pro
macOS: 15.0
Xcode: 16.0 beta
CreateML: 6.0 beta
We can use the Create ML app to build an object tracking model in Xcode 16, but is it possible to use the CreateML framework as well?
I haven't found any documentation for Create ML object tracking yet. The latest documentation I can find is for Xcode 15:
https://developer.apple.com/documentation/CreateML?changes=latest_minor
I really appreciate the new object tracking feature, thank you Apple team.
How do I use either of these data sources with MLHandActionClassifier on visionOS?
MLHandActionClassifier.DataSource.labeledKeypointsDataFrame
MLHandActionClassifier.DataSource.labeledKeypointsData
visionOS ARKit hand tracking provides us with 27 joints and 3D coordinates, which differs from the 21 joints and 2D coordinates that these two data sources mention in their documentation.
Hello,
I’m currently working on TinyML (ML on the edge) using the Google Colab platform. Having exhausted my free compute units, I’m being prompted to pay. I’ve been considering leveraging the GPU capabilities of my M1 iPad and my Intel-based Mac. Both devices have Thunderbolt ports capable of sharing connections of up to 30 GB/s. Since I’m primarily using a classification model, extensive GPU usage isn’t necessary.
I’m looking for assistance or guidance on utilizing the iPad’s processor as an eGPU on my Mac, possibly through an API or Apple technology. Any help would be greatly appreciated!
How do I directly input landmarks to the activity classifier rather than inputting an image/video?
I’ve created a text classification project and selected the BERT algorithm with 100 iterations for a JSON file. The JSON file is valid, but training always cancels on iteration 37…
Because the tool does not provide any cancellation reason, I have no clue why this happens. Can I check the reason somehow? Or does anyone know possible reasons or solutions for this?
Hi everyone, I attempted to use the MultivariateLinearRegressor from the Create ML Components framework to fit some multi-dimensional data linearly (4 dimensions in my example). I aim to obtain multi-dimensional output points (2 points in my example). However, when I fit the model with my training data and test it, it appears that only the first element of my training data is used for training, regardless of whether I use CreateMLComponents.AnnotatedBatch or [CreateMLComponents.AnnotatedFeature<CoreML.MLShapedArray<Double>, CoreML.MLShapedArray<Double>>] as input.
let sourceMatrix: [[Double]] = [
    [0, 0.1, 0.2, 0.3],
    [0.5, 0.2, 0.6, 0.2]
]
let referenceMatrix: [[Double]] = [
    [0.2, 0.7],
    [0.9, 0.1]
]
Here is some test code to exercise the function (iOS 18.0 beta, Xcode 16.0 beta).
In this example I train the model to learn 2 multidimensional points (4 dimensions) and here are the results of the predictions:
▿ 2 elements
▿ 0 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
▿ prediction : 0.20000000298023224 0.699999988079071
▿ _storage : <StandardStorage<Double>: 0x600002ad8270>
▿ annotation : 0.2 0.7
▿ _storage : <StandardStorage<Double>: 0x600002b30600>
▿ 1 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
▿ prediction : 0.23158159852027893 0.9509953260421753
▿ _storage : <StandardStorage<Double>: 0x600002ad8c90>
▿ annotation : 0.9 0.1
▿ _storage : <StandardStorage<Double>: 0x600002b55f20>
0.23158159852027893 0.9509953260421753 seems essentially random and should be far closer to [0.9, 0.1].
Here is the test code (I run it on "My Mac (Designed for iPad)"):
ContentView.swift
import SwiftUI
import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import CoreGraphics
import Accelerate
import Foundation
import CoreML
import CreateML
import CreateMLComponents
func createMLShapedArray(from array: [Double], shape: [Int]) -> MLShapedArray<Double> {
    return MLShapedArray<Double>(scalars: array, shape: shape)
}

func calculateTransformationMatrixWithNonlinearity(sourceRGB: [[Double]], referenceRGB: [[Double]], degree: Int = 3) async throws -> MultivariateLinearRegressor<Double>.Model {
    let annotatedFeatures2 = zip(sourceRGB, referenceRGB).map { (featureArray, targetArray) -> AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>> in
        let featureMLShapedArray = createMLShapedArray(from: featureArray, shape: [featureArray.count])
        let targetMLShapedArray = createMLShapedArray(from: targetArray, shape: [targetArray.count])
        return AnnotatedFeature(feature: featureMLShapedArray, annotation: targetMLShapedArray)
    }

    // Flatten sourceRGB into a single-dimensional array
    var flattenedArray = sourceRGB.flatMap { $0 }
    let featuresMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 4])

    flattenedArray = referenceRGB.flatMap { $0 }
    let targetMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 2])

    // Create AnnotatedFeature instances
    /* let annotatedFeatures2: [AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>>] = [
        AnnotatedFeature(feature: featuresMLShapedArray, annotation: targetMLShapedArray)
    ] */

    let annotatedBatch = AnnotatedBatch(features: featuresMLShapedArray, annotations: targetMLShapedArray)

    var regressor = MultivariateLinearRegressor<Double>()
    regressor.configuration.learningRate = 0.1
    regressor.configuration.maximumIterationCount = 5000
    regressor.configuration.batchSize = 2

    let model = try await regressor.fitted(to: annotatedBatch, validateOn: nil)
    //var model = try await regressor.fitted(to: annotatedFeatures2)

    // Proceed to prediction once the model is fitted
    let predictions = try await model.prediction(from: annotatedFeatures2)

    // Process or use the predictions
    print(predictions)
    print("Predictions:", predictions)
    return model
}
struct ContentView: View {
    var body: some View {
        VStack {}
            .onAppear {
                Task {
                    do {
                        let sourceMatrix: [[Double]] = [
                            [0, 0.1, 0.2, 0.3],
                            [0.5, 0.2, 0.6, 0.2]
                        ]
                        let referenceMatrix: [[Double]] = [
                            [0.2, 0.7],
                            [0.9, 0.1]
                        ]
                        let model = try await calculateTransformationMatrixWithNonlinearity(sourceRGB: sourceMatrix, referenceRGB: referenceMatrix, degree: 2)
                        print("Model fitted successfully:", model)
                    } catch {
                        print("Error:", error)
                    }
                }
            }
    }
}