I have been attempting to debug this for over 10 hours...
I am working on integrating Apple's MobileNetV2 Core ML model into a Swift Playground. I performed the following steps:
- Compiled the Core ML model in a regular Xcode project
- Moved the compiled model (MobileNetV2.mlmodelc) to the Resources folder of the Swift Playground
- Copied the model class (MobileNetV2.swift) into the Sources folder of the Swift Playground
- Used UIImage extensions to resize the UIImage and convert it into a CVPixelBuffer
- Implemented basic code to run the model
However, every time I run this, it keeps giving me this error:
MobileNetV2.swift:100: Fatal error: Unexpectedly found nil while unwrapping an Optional value
It comes from this property in the automatically generated model class:
/// URL of model assuming it was installed in the same bundle as this class
class var urlOfModelInThisBundle : URL {
    let bundle = Bundle(for: self)
    return bundle.url(forResource: "MobileNetV2", withExtension: "mlmodelc")!
}
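To get a clearer diagnostic, the force unwrap can be swapped for a guard (a throwaway sketch, not the generated code) that reports which bundle is being searched:

class var urlOfModelInThisBundle: URL {
    let bundle = Bundle(for: self)
    guard let url = bundle.url(forResource: "MobileNetV2", withExtension: "mlmodelc") else {
        // Sketch: surface the bundle path instead of crashing on nil,
        // so it's obvious where the lookup is happening.
        fatalError("MobileNetV2.mlmodelc not found in \(bundle.bundlePath)")
    }
    return url
}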
The model builds perfectly. This is my ContentView code:
import SwiftUI

struct ContentView: View {
    func test() -> String {
        // 1. Load the image from the 'Resources' folder.
        let newImage = UIImage(named: "img")

        // 2. Resize the image to the required input dimensions of the Core ML model.
        //    Method from UIImage+Extension.swift
        let newSize = CGSize(width: 224, height: 224)

        guard let resizedImage = newImage?.resizeImageTo(size: newSize) else {
            fatalError("⚠️ The image could not be found or resized.")
        }

        // 3. Convert the resized image to a CVPixelBuffer, the required input
        //    type of the Core ML model. Method from UIImage+Extension.swift
        guard let convertedImage = resizedImage.convertToBuffer() else {
            fatalError("⚠️ The image could not be converted to a CVPixelBuffer.")
        }

        // 4. Create the ML model instance from the model class in the 'Sources' folder.
        let mlModel = MobileNetV2()

        // 5. Get the prediction output.
        guard let prediction = try? mlModel.prediction(image: convertedImage) else {
            fatalError("⚠️ The model could not return a prediction.")
        }

        // 6. Check the results of the prediction.
        let mostLikelyImageCategory = prediction.classLabel
        let probabilityOfEachCategory = prediction.classLabelProbs

        var highestProbability: Double {
            let probability = probabilityOfEachCategory[mostLikelyImageCategory] ?? 0.0
            let roundedProbability = (probability * 100).rounded(.toNearestOrEven)
            return roundedProbability
        }

        return "\(mostLikelyImageCategory): \(highestProbability)%"
    }

    var body: some View {
        VStack {
            let _ = print(test())
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundColor(.accentColor)
            Text("Hello, world!")
            Image(uiImage: UIImage(named: "img")!)
        }
    }
}
Upon printing my bundle contents, I get these:
["_CodeSignature", "metadata.json", "__PlaceholderAppIcon76x76@2x~ipad.png", "Info.plist", "__PlaceholderAppIcon60x60@2x.png", "coremldata.bin", "{App Name}", "PkgInfo", "Assets.car", "embedded.mobileprovision"]
Anything would help 🙏
For additional reference, here are my UIImage extensions in ExtImage.swift:
// Huge thanks to @mprecke on GitHub for these UIImage extension functions.
import Foundation
import UIKit

extension UIImage {
    func resizeImageTo(size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        self.draw(in: CGRect(origin: CGPoint.zero, size: size))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return resizedImage
    }

    func convertToBuffer() -> CVPixelBuffer? {
        let attributes = [
            kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
            kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue
        ] as CFDictionary

        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(
            kCFAllocatorDefault,
            Int(self.size.width),
            Int(self.size.height),
            kCVPixelFormatType_32ARGB,
            attributes,
            &pixelBuffer)

        guard status == kCVReturnSuccess else {
            return nil
        }

        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(
            data: pixelData,
            width: Int(self.size.width),
            height: Int(self.size.height),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
            space: rgbColorSpace,
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

        context?.translateBy(x: 0, y: self.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)

        UIGraphicsPushContext(context!)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        UIGraphicsPopContext()

        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pixelBuffer
    }
}
I believe I finally found an overall solution that works for both compiled (.mlmodelc) and uncompiled (.mlmodel, .mlpackage) models, plus an explanation of why .mlmodelc files may not be correctly copied into the bundle during the build process (even though the build succeeds).
I also posted a similar answer in another thread, with a few more details about an issue unique to Xcode and uncompiled models.
You will need to edit the Package.swift file manually. Go to the playground in Finder, Control-click it and select Show Package Contents. Then, open the Package.swift file. At the bottom of this file, you'll find the following targets section, which might look different for you:
targets: [
    .executableTarget(
        name: "AppModule",
        path: ".",
        resources: [
            .process("Resources") // copy resources to the app bundle without keeping folder structure
        ]
    )
]
The .process("Resources")
line indicates that the resources (in your case, your model) must be copied without keeping the original folder structure. Say you have 2 folders inside the Resources folder, and in each of these you have a few files. Only the files inside will be copied in the bundle, and the 2 original folders won't appear.
You will need to replace (or add, if you don't have the process() function) process()
with copy()
, which maintains the internal folder structure (I explained below why). You can either use .copy("Resources")
, which require you to change the model's bundle URL like this:
class var urlOfModelInThisBundle : URL {
    let bundle = Bundle(for: self)
    return bundle.url(forResource: "Resources/MobileNetV2", withExtension: "mlmodelc")!
}
Or you can directly copy the model folder into the bundle using .copy("Resources/MobileNetV2.mlmodelc") and keep the original URL, as shown in the sketch below. Note that this way, any other files in Resources might not be available unless you add other process()/copy() entries to your Package.swift file.
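Putting that together, the edited targets section would look something like this (assuming the model folder inside Resources is named MobileNetV2.mlmodelc, as in your setup):

targets: [
    .executableTarget(
        name: "AppModule",
        path: ".",
        resources: [
            // copy the compiled model as-is, preserving its internal folder structure
            .copy("Resources/MobileNetV2.mlmodelc")
        ]
    )
]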
For more information about process() and copy(), check the docs for bundling resources with Swift packages (a Swift Playground is a special type of Swift package).
Why compiled models are not correctly copied into the bundle using process()
Compiled ML models are essentially folders containing one or more coremldata.bin files (and possibly other files as well); in fact, they appear as folders on the filesystem (you can cd into them in Terminal). The reason these .mlmodelc folders aren't correctly copied into your bundle when using .process("Resources"), or may produce a build error with certain models containing multiple coremldata.bin files, is that the internal folder structure is not preserved: only the files inside are copied. That's why you found that your bundle contained only a coremldata.bin file.
Using the previously mentioned .copy() method keeps the internal folder structure intact.
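After rebuilding, a quick check along these lines should show the model folder itself, with its contents, rather than a lone coremldata.bin:

// Sketch: verify the .mlmodelc folder survived the build intact.
// (Adjust forResource to "Resources/MobileNetV2" if you copied the whole Resources folder.)
if let modelURL = Bundle.main.url(forResource: "MobileNetV2", withExtension: "mlmodelc") {
    let contents = (try? FileManager.default.contentsOfDirectory(atPath: modelURL.path)) ?? []
    print("Model folder contents:", contents)
} else {
    print("MobileNetV2.mlmodelc not found in bundle")
}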
As for why uncompiled models cannot be built in Xcode: something similar might be happening, but since these don't appear as folders on the filesystem, it is probably a separate build issue. It also seems that .mlmodel files are no longer compiled automatically at run time, which I believe used to happen in Swift Playgrounds / Xcode versions from last year.
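If you do want to ship an uncompiled model, Core ML can compile it manually at run time via MLModel.compileModel(at:). A minimal sketch, assuming a MobileNetV2.mlmodel resource made it into the bundle (the function name is a placeholder):

import CoreML

// Sketch: compile a bundled .mlmodel at run time and load the result.
func loadRuntimeCompiledModel() throws -> MLModel {
    guard let rawURL = Bundle.main.url(forResource: "MobileNetV2", withExtension: "mlmodel") else {
        throw CocoaError(.fileNoSuchFile)
    }
    // compileModel(at:) writes a .mlmodelc to a temporary location and returns its URL.
    let compiledURL = try MLModel.compileModel(at: rawURL)
    return try MLModel(contentsOf: compiledURL)
}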
Let me know if this works!