For example: we use DockKit for birdwatching, so the distance and direction fields are unknown.
Distance = ?
Direction = ?
For example, the rock from which the observation is made. The task is to recognize the number of birds caught in the frame, draw a detection box around them, and collect statistics.
Questions:
What is the maximum number of frames that can be processed with custom object recognition?
If that is not enough, can I do the calculations myself and hand the results to DockKit for fast movement?
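For reference, here is a minimal sketch of the detection-and-counting part, assuming a custom Create ML object detector (the BirdDetector model class and the 0.5 confidence threshold are hypothetical, and the hand-off of the observations to DockKit is left out, since that is exactly what the question asks about):

import Vision
import CoreML
import CoreVideo

// Count the birds in a single camera frame using a custom Create ML object detector.
// "BirdDetector" is a hypothetical model class generated by Xcode from an .mlmodel file.
func countBirds(in pixelBuffer: CVPixelBuffer) throws -> Int {
    let detector = try BirdDetector(configuration: MLModelConfiguration())
    let coreMLModel = try VNCoreMLModel(for: detector.model)
    let request = VNCoreMLRequest(model: coreMLModel)

    // Run the request synchronously on the frame.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
    try handler.perform([request])

    // Each VNRecognizedObjectObservation is one detected object with a bounding box.
    let observations = request.results as? [VNRecognizedObjectObservation] ?? []
    return observations.filter { $0.confidence > 0.5 }.count
}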
Create ML
Create machine learning models for use in your app using Create ML.
Hi everyone, is it possible to use a 3D USDZ file to train a model in Create ML? I see there is an image option, but it would be good to use these files from Reality Composer's Object Capture. Or is this in the works for forthcoming Xcode updates? Many thanks, Stuart
Note: I posted this to Feedback Assistant but haven't gotten a response for 3 months =( FB13482199
I am trying to train a large image classifier. I have a training run of ~300,000 images across 381 classes; each class has its own folder and the file names within the folders are somewhat random. I am on an M2 Pro, Sonoma 14.0, running Create ML Version 5.0 (121.1). I would prefer not to pursue the PyTorch/HF -> coremltools route.
Create ML seems to consistently crash about 25,000-30,000 images into the feature extraction phase with an "Unexpected Error". It does not seem to be an out-of-memory issue. I am looking for guidance, since it seems impossible to debug why this is consistently crashing.
My initial assumption was that it could be due to blank or corrupt files, but I do not think that is the case. I also checked whether there were any special characters in the data/folders; I wasn't able to go through all of them, but I did try some programmatic regex checks, and I don't think this is the case either.
I attached the sysdiagnose results in Feedback Assistant after the crash happened. I did notice, when going into /var/logs, a write issue saying the Mac had written too much to disk. Note: I also tried Xcode 15.2-beta this time and the associated Core ML version.
My questions:
How can I fix this?
How should I go about debugging CreateML errors in the future?
'Unexpected Error': where can I find the exact Create ML logs on my device? This is far too broad an error message.
Please let me know. As a note, I did successfully train a past model on ~100,000 images, and I am planning to 10-15x that if this run is successful. Please help; I've spent a lot of time gathering the extra data and to date have been an occasional power user of Create ML. I haven't heard back from Apple since December =/. I assume I'm not the only one with this problem, so I'm looking for instructions on how to debug this hands-on and help others. Thanks!
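One way to rule out the corrupt-file theory before another long run is to pre-scan the dataset and flag anything ImageIO cannot decode. A rough sketch (the dataset path is a placeholder, and this only catches files that cannot be decoded at all, not every failure mode Create ML might hit):

import Foundation
import ImageIO

// Walk the training folder and report any image file that cannot be decoded.
let datasetURL = URL(fileURLWithPath: "/path/to/training-data") // placeholder path
let imageExtensions: Set<String> = ["jpg", "jpeg", "png", "heic"]

let enumerator = FileManager.default.enumerator(at: datasetURL, includingPropertiesForKeys: nil)
while let fileURL = enumerator?.nextObject() as? URL {
    guard imageExtensions.contains(fileURL.pathExtension.lowercased()) else { continue }

    // CGImageSource fails to produce an image for blank or corrupt files.
    if let source = CGImageSourceCreateWithURL(fileURL as CFURL, nil),
       CGImageSourceCreateImageAtIndex(source, 0, nil) != nil {
        continue // decodable, move on
    }
    print("Possibly corrupt: \(fileURL.path)")
}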
Context
So basically I've trained my model for object detection with 4k+ images. In the preview I'm able to check the prediction for image "A", which detects two labels at 100%, and its bounding boxes look accurate.
The problem itself
However, inside the Swift Playground, when I try to perform object detection using the same model and the same image, I don't get the same results.
What I expected
That after performing the request and processing the array of VNRecognizedObjectObservation results, I would see the very same results that appear in the Create ML preview.
Notes:
The way I'm importing the model into the playground is just by drag and drop.
I've trained on images in JPEG format.
The test image is rotated so that it looks vertical, using the macOS Finder rotation tool.
I've tried passing a different orientation while creating the VNImageRequestHandler, with the same result.
Swift Playground code
This is the code I'm using.
import UIKit
import Vision
import CoreML

do {
    // Load the Create ML model and wrap it for Vision.
    let model = try MYMODEL_FROMCREATEML(configuration: MLModelConfiguration())
    let mlModel = model.model
    let coreMLModel = try VNCoreMLModel(for: mlModel)

    // Print the labels and bounding box of every detected object.
    let request = VNCoreMLRequest(model: coreMLModel) { request, error in
        guard let results = request.results as? [VNRecognizedObjectObservation] else {
            return
        }
        results.forEach { result in
            print(result.labels)
            print(result.boundingBox)
        }
    }

    let image = UIImage(named: "TEST_IMAGE.HEIC")!
    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!)
    try requestHandler.perform([request])
} catch {
    print(error)
}
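One thing worth checking (an assumption on my part, not a confirmed cause): image.cgImage drops the UIImage's orientation metadata, so the handler may be seeing the pixels in their un-rotated layout. A sketch that forwards the orientation explicitly, using a small mapping helper that the SDK does not provide:

import UIKit
import Vision
import ImageIO

// Map UIKit's orientation to the EXIF-style orientation Vision expects.
// (This initializer is a common hand-written helper, not part of the SDK.)
extension CGImagePropertyOrientation {
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up:            self = .up
        case .down:          self = .down
        case .left:          self = .left
        case .right:         self = .right
        case .upMirrored:    self = .upMirrored
        case .downMirrored:  self = .downMirrored
        case .leftMirrored:  self = .leftMirrored
        case .rightMirrored: self = .rightMirrored
        @unknown default:    self = .up
        }
    }
}

let image = UIImage(named: "TEST_IMAGE.HEIC")!
let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!,
                                           orientation: CGImagePropertyOrientation(image.imageOrientation))
// Then perform the request as before: try requestHandler.perform([request])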
Additional Notes & Uncertainties
Not sure if this is relevant, but just in case: I trained the model using pictures I took with my iPhone in 48MP HEIC format. All photos were taken in portrait orientation. With a Python script I overwrote the EXIF orientation to 1 (Normal), so that I could annotate the images with the CVAT tool and then convert the annotations to the Create ML format.
Assumption #1
I've read that object detection in Create ML is based on the YOLOv3 architecture, whose first layer resizes the input image, meaning that I don't have to worry about using very large images to train my model. Is this correct?
Assumption #2
That also makes me assume the same resizing happens when I try to make a prediction?
I am trying to implement an ML model with Core ML in a playground for a Student Challenge project, but I cannot get it to work. I have already tried everything I found online, but nothing seems to work (the tutorials were posted a long time ago). Does anyone know how to do this with Xcode 15 and the most recent updates?
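Not an official recipe, but one approach that has worked in playgrounds (where Xcode does not always generate the model's Swift class for you) is to add the .mlmodel file to the playground's resources and load it through MLModel directly; the file name Classifier below is a placeholder:

import CoreML

// Playgrounds don't always auto-generate the Swift class for an .mlmodel,
// so compile and load the model manually from the playground's resources.
// "Classifier" is a placeholder for your model's file name.
do {
    guard let modelURL = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodel") else {
        fatalError("Add Classifier.mlmodel to the playground's Resources folder")
    }
    let compiledURL = try MLModel.compileModel(at: modelURL)   // produces an .mlmodelc
    let model = try MLModel(contentsOf: compiledURL)
    print(model.modelDescription)                              // inspect inputs and outputs
} catch {
    print("Failed to load model: \(error)")
}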
I am working on the neural network classifier provided on coremltools.readme.io in the updatable neural network section (https://coremltools.readme.io/docs/updatable-neural-network-classifier-on-mnist-dataset).
I am using the same code, but I get an error saying that coremltools.converters.keras.convert does not exist. I know this can be a coremltools version issue; right now I am using coremltools version 6.2. I converted the model to an mlmodel with .convert only, and it converted successfully.
But I get an error in the make_updatable function saying the loss layer input must be a softmax output. From the coremltools package API reference I found that this is because the layer type is softmaxND when it should be softmax.
The problem is that when I convert the model from a Keras Sequential model to a Core ML model, the layer name and type change, and the softmax becomes softmaxND.
Has anyone faced this issue?
If I execute builder.inspect_layers(last=4),
I get this output:
[Id: 32], Name: sequential/dense_1/Softmax (Type: softmaxND)
Updatable: False
Input blobs: ['sequential/dense_1/MatMul']
Output blobs: ['Identity']
[Id: 31], Name: sequential/dense_1/MatMul (Type: batchedMatmul)
Updatable: False
Input blobs: ['sequential/dense/Relu']
Output blobs: ['sequential/dense_1/MatMul']
[Id: 30], Name: sequential/dense/Relu (Type: activation)
Updatable: False
Input blobs: ['sequential/dense/MatMul']
Output blobs: ['sequential/dense/Relu']
In the make_updatable function, when I execute
builder.set_categorical_cross_entropy_loss(name='lossLayer', input='Identity')
I get this error:
ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.