Create ML

Create machine learning models for use in your app using Create ML.

Posts under Create ML tag

50 Posts

Post · Replies · Boosts · Views · Activity

Core ML Hand Pose classification: effect doesn't appear on the camera
I created a Hand Pose model using Create ML and integrated it into my SwiftUI app. While coding, I referred to the Apple Developer documentation app for the necessary code. However, when I ran the app on an iPhone 14, the camera didn't display any effects or finger numbers as expected.

Note: I've already tested the ML model separately, and it works fine.

The code:

import CoreML
import SceneKit
import SwiftUI
import Vision
import ARKit

struct ARViewContainer: UIViewControllerRepresentable {
    let arViewController: ARViewController
    let model: modelHand

    func makeUIViewController(context: UIViewControllerRepresentableContext<ARViewContainer>) -> ARViewController {
        arViewController.model = model
        return arViewController
    }

    func updateUIViewController(_ uiViewController: ARViewController, context: UIViewControllerRepresentableContext<ARViewContainer>) {
        // Update the view controller if needed
    }
}

class ARViewController: UIViewController, ARSessionDelegate {
    var frameCounter = 0
    let handPosePredictionInterval = 10
    var model: modelHand!
    var effectNode: SCNNode?

    override func viewDidLoad() {
        super.viewDidLoad()
        let arView = ARSCNView(frame: view.bounds)
        view.addSubview(arView)
        let session = ARSession()
        session.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics = .personSegmentationWithDepth
        arView.session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pixelBuffer = frame.capturedImage
        let handPoseRequest = VNDetectHumanHandPoseRequest()
        handPoseRequest.maximumHandCount = 1
        handPoseRequest.revision = VNDetectHumanHandPoseRequestRevision1
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        do {
            try handler.perform([handPoseRequest])
        } catch {
            assertionFailure("Hand Pose Request failed: \(error)")
        }
        guard let handPoses = handPoseRequest.results, !handPoses.isEmpty else { return }
        if frameCounter % handPosePredictionInterval == 0 {
            if let handObservation = handPoses.first as? VNHumanHandPoseObservation {
                do {
                    let keypointsMultiArray = try handObservation.keypointsMultiArray()
                    let handPosePrediction = try model.prediction(poses: keypointsMultiArray)
                    let confidence = handPosePrediction.labelProbabilities[handPosePrediction.label]!
                    print("Confidence: \(confidence)")
                    if confidence > 0.9 {
                        print("Rendering hand pose effect: \(handPosePrediction.label)")
                        renderHandPoseEffect(name: handPosePrediction.label)
                    }
                } catch {
                    fatalError("Failed to perform hand pose prediction: \(error)")
                }
            }
        }
    }

    func renderHandPoseEffect(name: String) {
        switch name {
        case "One":
            print("Rendering effect for One")
            if effectNode == nil {
                effectNode = addParticleNode(for: "One")
            }
        default:
            print("Removing all particle nodes")
            removeAllParticleNode()
        }
    }

    func removeAllParticleNode() {
        effectNode?.removeFromParentNode()
        effectNode = nil
    }

    func addParticleNode(for poseName: String) -> SCNNode {
        print("Adding particle node for pose: \(poseName)")
        let particleNode = SCNNode()
        return particleNode
    }
}

struct ContentView: View {
    let model = modelHand()
    var body: some View {
        ARViewContainer(arViewController: ARViewController(), model: model)
    }
}

#Preview {
    ContentView()
}
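A minimal sketch (not from the post itself) of how the delegate wiring and frame-counter throttle for this kind of setup are typically arranged, assuming the frame callbacks are meant to come from the ARSCNView's own session; the modelHand class and the prediction code above belong to the poster, and everything here is only illustrative.

import ARKit
import SceneKit
import UIKit

class HandPoseViewController: UIViewController, ARSessionDelegate {
    private var arView: ARSCNView!
    private var frameCounter = 0
    private let handPosePredictionInterval = 10

    override func viewDidLoad() {
        super.viewDidLoad()
        arView = ARSCNView(frame: view.bounds)
        view.addSubview(arView)

        // Set the delegate on the session that actually runs the configuration,
        // so session(_:didUpdate:) is called for every captured frame.
        arView.session.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics = .personSegmentationWithDepth
        arView.session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Advance the counter so the prediction throttle actually cycles.
        frameCounter += 1
        guard frameCounter % handPosePredictionInterval == 0 else { return }
        // ... run the VNDetectHumanHandPoseRequest and the model prediction here ...
    }
}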
Replies: 0 · Boosts: 0 · Views: 709 · Activity: Jan ’24

Need Help with Create ML in Xcode - Unexpected App Closure
Hello Apple Developer community,

I hope this message finds you well. I am currently facing an issue with Create ML in Xcode, and I am seeking assistance from the knowledgeable members of this forum. Any help or guidance would be greatly appreciated.

Problem Description: I am encountering an unexpected issue when attempting to create a classification model for images using Create ML in Xcode. Upon opening Create ML, the application closes unexpectedly when I choose to create a new image classification model.

Steps I Have Taken: I have already tried the following steps to troubleshoot the issue:
- Updated Xcode and macOS to the latest versions.
- Restarted Xcode and my computer.
- Created a new sample project to isolate the issue.
Despite these efforts, the problem persists.

System Information:
- Xcode Version: 15.2
- macOS Version: Sonoma 14.0

I am on a tight deadline for a project, and resolving this issue quickly is crucial. Your help is invaluable, and I thank you in advance for any support you can provide. Best regards.
Replies: 1 · Boosts: 0 · Views: 868 · Activity: Jan ’24

Swift Playground Bundle can't find Compiled CoreML Model (.mlmodelc)
I have been attempting to debug this for over 10 hours... I am working on implementing Apple's MobileNetV2 Core ML model in a Swift Playground. I performed the following steps:

1. Compiled the Core ML model in a regular Xcode project.
2. Moved the compiled model (MobileNetV2.mlmodelc) to the Resources folder of the Swift Playground.
3. Copied the model class (MobileNetV2.swift) into the Sources folder of the Swift Playground.
4. Used UIImage extensions to resize and convert the UIImage into a CVPixelBuffer.
5. Implemented basic code to run the model.

However, every time I run this, it keeps giving me this error:

MobileNetV2.swift:100: Fatal error: Unexpectedly found nil while unwrapping an Optional value

From the automatically generated model class function:

    /// URL of model assuming it was installed in the same bundle as this class
    class var urlOfModelInThisBundle: URL {
        let bundle = Bundle(for: self)
        return bundle.url(forResource: "MobileNetV2", withExtension: "mlmodelc")!
    }

The model builds perfectly. This is my ContentView code:

import SwiftUI

struct ContentView: View {
    func test() -> String {
        // 1. Load the image from the 'Resources' folder.
        let newImage = UIImage(named: "img")

        // 2. Resize the image to the required input dimension of the Core ML model.
        //    Method from UIImage+Extension.swift
        let newSize = CGSize(width: 224, height: 224)
        guard let resizedImage = newImage?.resizeImageTo(size: newSize) else {
            fatalError("⚠️ The image could not be found or resized.")
        }

        // 3. Convert the resized image to CVPixelBuffer as it is the required input
        //    type of the Core ML model. Method from UIImage+Extension.swift
        guard let convertedImage = resizedImage.convertToBuffer() else {
            fatalError("⚠️ The image could not be converted to CVPixelBuffer")
        }

        // 1. Create the ML model instance from the model class in the 'Sources' folder.
        let mlModel = MobileNetV2()

        // 2. Get the prediction output.
        guard let prediction = try? mlModel.prediction(image: convertedImage) else {
            fatalError("⚠️ The model could not return a prediction")
        }

        // 3. Check the results of the prediction.
        let mostLikelyImageCategory = prediction.classLabel
        let probabilityOfEachCategory = prediction.classLabelProbs
        var highestProbability: Double {
            let probability = probabilityOfEachCategory[mostLikelyImageCategory] ?? 0.0
            let roundedProbability = (probability * 100).rounded(.toNearestOrEven)
            return roundedProbability
        }
        return "\(mostLikelyImageCategory): \(highestProbability)%"
    }

    var body: some View {
        VStack {
            let _ = print(test())
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundColor(.accentColor)
            Text("Hello, world!")
            Image(uiImage: UIImage(named: "img")!)
        }
    }
}

Upon printing my bundle contents, I get these:

["_CodeSignature", "metadata.json", "__PlaceholderAppIcon76x76@2x~ipad.png", "Info.plist", "__PlaceholderAppIcon60x60@2x.png", "coremldata.bin", "{App Name}", "PkgInfo", "Assets.car", "embedded.mobileprovision"]

Anything would help 🙏

For additional reference, here are my UIImage extensions in ExtImage.swift:

// Huge thanks to @mprecke on GitHub for these UIImage extension functions.
import Foundation
import UIKit

extension UIImage {
    func resizeImageTo(size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        self.draw(in: CGRect(origin: CGPoint.zero, size: size))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return resizedImage
    }

    func convertToBuffer() -> CVPixelBuffer? {
        let attributes = [
            kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
            kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue
        ] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(
            kCFAllocatorDefault,
            Int(self.size.width),
            Int(self.size.height),
            kCVPixelFormatType_32ARGB,
            attributes,
            &pixelBuffer)
        guard status == kCVReturnSuccess else { return nil }
        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(
            data: pixelData,
            width: Int(self.size.width),
            height: Int(self.size.height),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
            space: rgbColorSpace,
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        context?.translateBy(x: 0, y: self.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)
        UIGraphicsPushContext(context!)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pixelBuffer
    }
}
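Not from the post itself: a small sketch of loading the compiled model from an explicit URL instead of the force-unwrapped urlOfModelInThisBundle accessor, under the assumption that the MobileNetV2.mlmodelc folder ends up in the playground's main bundle under its own name. The function name is illustrative.

import CoreML
import Foundation

func loadMobileNet() throws -> MobileNetV2 {
    // Look up the compiled model explicitly so a missing or misplaced
    // resource produces a thrown error rather than a force-unwrap crash.
    guard let url = Bundle.main.url(forResource: "MobileNetV2", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try MobileNetV2(contentsOf: url)
}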
Replies: 5 · Boosts: 1 · Views: 2k · Activity: Feb ’24

FB13516799: Training Tabular Regression ML Models on large datasets in Xcode 15 continuously "Processing"
Hi,

In Xcode 14 I was able to train linear regression models with Create ML using large CSV files (I tested on about 30,000 items and 5 features). However, in Xcode 15 (I tested on 15.0.1 and 15.1), the training continuously stays in the "Processing" state. When using a dataset with 900 items, everything works fine.

I filed a feedback for this issue: FB13516799. Does anybody else have this issue / can reproduce it?
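Not part of the original report: a rough sketch of training the same kind of model through the CreateML framework instead of the Create ML app, as a way to check whether the hang is specific to the app's UI. The CSV path, output path, and the "price" target column name are placeholders.

import CreateML
import TabularData
import Foundation

// Load the CSV into a DataFrame (placeholder path).
let csvURL = URL(fileURLWithPath: "/path/to/data.csv")
let dataFrame = try DataFrame(contentsOfCSVFile: csvURL)

// Train a linear regressor on the loaded table; "price" is a placeholder target column.
let regressor = try MLLinearRegressor(trainingData: dataFrame, targetColumn: "price")
print(regressor.trainingMetrics)

// Export the trained model (placeholder path).
try regressor.write(to: URL(fileURLWithPath: "/path/to/Regressor.mlmodel"))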
Replies: 3 · Boosts: 1 · Views: 1k · Activity: Jan ’24

Word Tagging Model – How to change the tagging unit
I created a word tagging model in Create ML and am trying to make predictions with it using the following code:

let text = "$30.00 7/1/2023"
let model = TaggingModel()
let input = TaggingModelInput(text: text)
guard let output = try? model.prediction(input: input) else {
    fatalError("Unexpected runtime error.")
}

However, the output separates "$" and "30.00" as separate tokens, as well as "7", "/", "1", "/", etc. Is there any way to make sure prices and dates get grouped together and to simply separate tokens based on whitespace? Any help is appreciated!
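Not from the post itself: a hedged sketch of one way to control tokenization, by wrapping the Create ML word tagger in an NLModel and handing it whitespace-split tokens so the model labels exactly the tokens you provide instead of re-tokenizing the string. TaggingModel is the poster's generated class; whether the labels are meaningful still depends on how the model was trained.

import NaturalLanguage
import CoreML

let text = "$30.00 7/1/2023"

// Wrap the generated Create ML word tagger in an NLModel.
let tagger = try NLModel(mlModel: TaggingModel().model)

// Split on whitespace so "$30.00" and "7/1/2023" stay single tokens,
// then ask the model to label exactly those tokens.
let tokens = text.split(separator: " ").map(String.init)
let labels = tagger.predictedLabels(forTokens: tokens)

for (token, label) in zip(tokens, labels) {
    print(token, label ?? "none")
}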
Replies: 1 · Boosts: 0 · Views: 733 · Activity: Jan ’24

DataFrame's Column doesn't support an array of dictionaries
I'm following the Apple WWDC video (https://developer.apple.com/videos/play/wwdc2021/10037/) about how to create a recommendation model, but I'm getting this error when I run the project on that line of code from their tutorial:

"Column keywords has element of unsupported type Dictionary<String, Double>."

Here is the block of code, taken from the transcript of the WWDC video, that causes the issue:

func featuresFromMealAndKeywords(meal: String, keywords: [String]) -> [String: Double] {
    // Capture interactions between content (the dish keywords) and context (meal) by
    // adding a copy of each keyword modified to include the meal.
    let featureNames = keywords + keywords.map { meal + ":" + $0 }
    // For each keyword, create an entry in a dictionary of features with a value of 1.0.
    return featureNames.reduce(into: [:]) { features, name in
        features[name] = 1.0
    }
}

var trainingKeywords: [[String: Double]] = []
var trainingTargets: [Double] = []

for item in userPurchasedItems {
    // Add in the positive example.
    trainingKeywords.append(
        featuresFromMealAndKeywords(meal: item.meal, keywords: item.keywords))
    trainingTargets.append(1.0)

    // Add in the negative example.
    let negativeKeywords = allKeywords.subtracting(item.keywords)
    trainingKeywords.append(
        featuresFromMealAndKeywords(meal: item.meal, keywords: Array(negativeKeywords)))
    trainingTargets.append(-1.0)
}

// Create the training data.
var trainingData = DataFrame()
trainingData.append(column: Column(name: "keywords", contents: trainingKeywords))
trainingData.append(column: Column(name: "target", contents: trainingTargets))

// Create the model.
let model = try MLLinearRegressor(trainingData: trainingData, targetColumn: "target")

Did the DataFrame implementation change since then so that it no longer supports Dictionary? I'm at a loss right now on how to reproduce their example.
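Not part of the original post: a small sketch of one possible workaround, assuming the goal is just to get the same numbers into a DataFrame that MLLinearRegressor will accept. It expands the per-example dictionaries into one Double column per feature name (which TabularData supports), reusing the trainingKeywords and trainingTargets arrays built above; it is illustrative, not necessarily what the WWDC sample intends.

import TabularData
import CreateML

// Union of every feature name that appears in any training example.
let allFeatureNames = Set(trainingKeywords.flatMap { $0.keys }).sorted()

var flattenedData = DataFrame()
for featureName in allFeatureNames {
    // One Double column per feature; examples without that keyword get 0.0.
    let values = trainingKeywords.map { $0[featureName] ?? 0.0 }
    flattenedData.append(column: Column(name: featureName, contents: values))
}
flattenedData.append(column: Column(name: "target", contents: trainingTargets))

// All non-target columns are treated as features.
let flattenedModel = try MLLinearRegressor(trainingData: flattenedData, targetColumn: "target")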
Replies: 4 · Boosts: 6 · Views: 894 · Activity: Jan ’24

Create ML API training soft-locks at 90%
Hello,

I'm trying to train an MLImageClassifier dataset in Swift using the MLImageClassifier.train function. The dataset size doesn't seem to matter (I have the same problem with a smaller one), but when training reaches 9 of 10 completedUnitCount, even though CPU usage is still high, a soft lock seems to occur that never brings the model to completion (or to an error). The dataset is made of JPG images, and when using the Create ML app no problem appears during training.

Is there any known issue with the Create ML training APIs around part 9 of the process? Is there any information about this part of the training job? Thank you.
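Not part of the original question: a rough sketch of the MLJob-based flow that MLImageClassifier.train returns, kept minimal so the progress and completion points around the reported 9-of-10 stall are visible. The directory paths are placeholders, and the availability of default values for the omitted training parameters is an assumption.

import CreateML
import Combine
import Foundation

let trainingDirectory = URL(fileURLWithPath: "/path/to/TrainingData") // placeholder
let sessionDirectory = URL(fileURLWithPath: "/path/to/Session")       // placeholder

var subscriptions = Set<AnyCancellable>()

// Kick off asynchronous training; the returned MLJob exposes progress and a result publisher.
let job = try MLImageClassifier.train(
    trainingData: .labeledDirectories(at: trainingDirectory),
    sessionParameters: MLTrainingSessionParameters(sessionDirectory: sessionDirectory)
)

// Observe fraction completed so a stall near the end is visible in the log.
job.progress
    .publisher(for: \.fractionCompleted)
    .sink { print("progress:", $0) }
    .store(in: &subscriptions)

// Completion (or failure) of the whole job.
job.result
    .sink(receiveCompletion: { print("completion:", $0) },
          receiveValue: { classifier in print("finished:", classifier) })
    .store(in: &subscriptions)

// In a command-line tool, keep the process alive while training runs.
RunLoop.main.run()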
Replies: 1 · Boosts: 0 · Views: 852 · Activity: Nov ’23

Getting ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
I am working on the updatable neural network classifier provided on coremltools.readme.io (https://coremltools.readme.io/docs/updatable-neural-network-classifier-on-mnist-dataset). I am using the same code, but I get an error saying that coremltools.converters.keras.convert does not exist. I know this can be a coremltools version issue; right now I am using coremltools version 6.2. I converted the model to an mlmodel with .convert only, and it converted successfully. But I then hit an error in the make_updatable function saying the loss layer input must be a softmax layer output. From the coremltools package API reference I found that it's because the layer type is softmaxND, but it should be softmax. The problem is that when I convert the model from a Keras Sequential model to a Core ML model, the layer name and type change, and the softmax becomes softmaxND. Has anyone faced this issue?

If I execute builder.inspect_layers(last=4), I get this output:

[Id: 32], Name: sequential/dense_1/Softmax (Type: softmaxND)
          Updatable: False
          Input blobs: ['sequential/dense_1/MatMul']
          Output blobs: ['Identity']
[Id: 31], Name: sequential/dense_1/MatMul (Type: batchedMatmul)
          Updatable: False
          Input blobs: ['sequential/dense/Relu']
          Output blobs: ['sequential/dense_1/MatMul']
[Id: 30], Name: sequential/dense/Relu (Type: activation)
          Updatable: False
          Input blobs: ['sequential/dense/MatMul']
          Output blobs: ['sequential/dense/Relu']

In the make_updatable function, when I execute

builder.set_categorical_cross_entropy_loss(name='lossLayer', input='Identity')

I get this error:

ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
Replies: 2 · Boosts: 0 · Views: 1.1k · Activity: 2w