CoreML Inference Error: "Could not create Espresso context"

Hello everybody,

I am trying to run inference on a CoreML model I created with CreateML. I am following the sample code from Apple's CoreML documentation page, and every time I try to classify an image I get this error: "Could not create Espresso context".

Has this ever happened to anyone? How did you solve it?

Here is my code:

Code Block
import Foundation
import Vision
import UIKit
import ImageIO
final class ButterflyClassification {
    
    var classificationResult: Result?
    
    lazy var classificationRequest: VNCoreMLRequest = {
        
        do {
            let model = try VNCoreMLModel(for: ButterfliesModel_1(configuration: MLModelConfiguration()).model)
            
            return VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                
                self?.processClassification(for: request, error: error)
            })
        }
        catch {
            fatalError("Failed to load model.")
        }
    }()
    func processClassification(for request: VNRequest, error: Error?) {
        
        DispatchQueue.main.async {
            
            // Downcast safely instead of force-casting with as!
            guard let classifications = request.results as? [VNClassificationObservation] else {
                print("Unable to classify image.")
                return
            }
            
            if classifications.isEmpty {
                
                print("No classification was provided.")
                return
            }
            else {
                
                let firstClassification = classifications[0]
                self.classificationResult = Result(speciesName: firstClassification.identifier, confidence: Double(firstClassification.confidence))
            }
        }
    }
    func classifyButterfly(image: UIImage) -> Result? {
        
        guard let ciImage = CIImage(image: image) else {
            fatalError("Unable to create ciImage")
        }
        
        DispatchQueue.global(qos: .userInitiated).async {
            
            let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
            do {
                try handler.perform([self.classificationRequest])
            }
            catch {
                print("Failed to perform classification.\n\(error.localizedDescription)")
            }
        }
        
        // Note: perform runs asynchronously above, so this usually
        // returns before classificationResult has been set.
        return classificationResult
    }
}
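
Separate from the Espresso error: classifyButterfly returns classificationResult immediately while the Vision request is still running on the background queue, so callers will usually get nil or a stale value. A completion-based variant might look like the sketch below (it reuses the Result type and classificationRequest from the class above, and assumes VNImageRequestHandler.perform runs the request's completion handler before it returns, which is how Vision behaves for synchronous perform calls):

Code Block
import Vision
import UIKit

extension ButterflyClassification {

    /// Sketch: hand the classification back through a completion closure
    /// instead of returning before the async request has finished.
    func classifyButterfly(image: UIImage, completion: @escaping (Result?) -> Void) {
        guard let ciImage = CIImage(image: image) else {
            completion(nil)
            return
        }

        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
            do {
                try handler.perform([self.classificationRequest])
                // processClassification has enqueued its main-queue work by
                // now, so reading classificationResult on main is safe here.
                DispatchQueue.main.async {
                    completion(self.classificationResult)
                }
            } catch {
                print("Failed to perform classification.\n\(error.localizedDescription)")
                DispatchQueue.main.async { completion(nil) }
            }
        }
    }
}
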


Thank you for your help!
Answered by ashinthetray in 679086022

Hi there,

I ran into the same error on my M1 Mac. I ran it on an Intel Mac and had no issues.

This is an M1-specific bug (when using the simulator); please file a bug report with Apple.

Did you ever find a solution? I just got the same error feeding an image to a VNImageRequestHandler, or rather to a VNGenerateAttentionBasedSaliencyImageRequest. I am on Simulator 14.5.
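
Not a fix, but a workaround worth trying while the simulator bug stands: restrict the model to the CPU so Core ML never tries to bring up the GPU path. This is only a sketch against the ButterfliesModel_1 class from the question, and it is an assumption that .cpuOnly sidesteps this particular Espresso error on your setup:

Code Block
import CoreML
import Vision

// Workaround sketch: force CPU-only inference in the hope that Core ML
// skips whatever context creation fails in the simulator.
let config = MLModelConfiguration()
config.computeUnits = .cpuOnly

do {
    let coreMLModel = try ButterfliesModel_1(configuration: config).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    // Use visionModel in a VNCoreMLRequest exactly as before.
    _ = visionModel
} catch {
    print("Failed to load model: \(error)")
}
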


Any news?

Still having this same issue on an M1 Mac.

Having the issue with M1.

Still having issues, and on an M1. When using a model posted on Apple's website I have no issues. When using my own model I have no issues on a physical device or in the arm64 simulator; I only have issues when using my model in the iPhone simulator.
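
If the failure really is simulator-only, one option is to keep device builds on the full compute stack and drop to CPU only when compiled for the simulator. A sketch, again assuming .cpuOnly avoids the error:

Code Block
import CoreML

// Sketch: full compute units on device, CPU-only in the simulator,
// where the Espresso context reportedly fails to initialize.
func makeModelConfiguration() -> MLModelConfiguration {
    let config = MLModelConfiguration()
    #if targetEnvironment(simulator)
    config.computeUnits = .cpuOnly
    #else
    config.computeUnits = .all
    #endif
    return config
}
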
