Encrypted CoreML model load fails with vague error

Xcode Version: 12.4 (used Xcode 12.5 to create the encryption key, as 12.4 has a bug that prevents it)

I am bundling an ML model into my app and have encrypted it using the Xcode 12 model-encryption method shown here. The model loads and works fine without encryption. After adding the model encryption key and setting it as a compiler flag, encryption/decryption works fine in a demo app, but the same load method in my main app fails with this error message:

Thread 1: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Unable to load model at /path_to_model/my_model.mlmodelc/ with error: failed to invoke mremap_encrypted with result = -1, error = 12"

I have tried all variations of the load method available in my Swift-generated model class; all produce the same result as above.
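For reference, a minimal sketch of the load variants tried ("MyModel" is a placeholder for the Xcode-generated model class, not the actual project name):

```swift
import CoreML

// Variant 1: synchronous throwing initializer generated by Xcode.
do {
    let model = try MyModel(configuration: MLModelConfiguration())
    print("model loaded: \(model)")
} catch {
    print("load failed: \(error)")
}

// Variant 2: asynchronous load, which Apple recommends for encrypted
// models because the decryption key is fetched from Apple's servers
// on first load.
MyModel.load(configuration: MLModelConfiguration()) { result in
    switch result {
    case .success(let model):
        print("model loaded: \(model)")
    case .failure(let error):
        print("load failed: \(error)")
    }
}
```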

Replies

Hi, what is the difference between the demo app and the main app in this case? Could you please check if the main app is code signed?

  • Hi, it appears the issue has to do with how the model is bundled. The encrypted model loads fine from an app bundle, but not from a framework bundle. Can you comment on that? While placing the models into the app bundle is a possible workaround, we would still like to bundle our models into the framework.


Do you by any chance bundle the model directly in your demo app, but bundle it into a separate framework in your main app?

We have the same error specifically when we place our encrypted models into a framework bundle and let the framework load them. The problem appeared in iOS 14.5 and is currently present in 14.6 as well. If we move the models out from the framework bundle, and let the framework load them from the app bundle, it works fine. Everything also works with non-encrypted models.
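To make the failing path concrete, here is a hedged sketch of how the framework locates and loads its own copy of the model. "FrameworkClass" and "MyModel" are placeholder names, not the actual project's API:

```swift
import CoreML

final class FrameworkClass {}  // stands in for any class defined in the framework

// Resolve the framework's own bundle, then look up the compiled model in it.
let frameworkBundle = Bundle(for: FrameworkClass.self)
if let url = frameworkBundle.url(forResource: "MyModel", withExtension: "mlmodelc") {
    do {
        // With an encrypted model on iOS 14.5/14.6, this is the call that
        // fails with "failed to invoke mremap_encrypted with result = -1".
        let model = try MLModel(contentsOf: url)
        print("model loaded: \(model)")
    } catch {
        print("load failed: \(error)")
    }
}
```

The same code succeeds when the .mlmodelc lives in the app bundle (i.e. `Bundle.main`), or when the model is not encrypted.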

I've checked our app, and the framework is set to "Embed & Sign" in the project settings. OurFramework.framework/_CodeSignature/CodeResources file seems to be generated correctly too. Yet we still get the error.

  • Yes, it appears to be linked to this. I bundled the model into my main app target rather than the framework, and used a protocol to pass the loaded model from the main app target to the framework. This worked, and the encrypted model loaded successfully. However, I now face a new issue: the unencrypted model loaded this way works fine, but the decrypted model cannot run any predictions, and fails with:

    The VNCoreMLTransform request failed
    [coreml] Error in declaring input -6.
    [coreml] inputLayer: input_1 <- my input layer
    [coreml] Failure in resetSizes.
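The app-loads-then-hands-off approach described above can be sketched roughly as follows. All names here are hypothetical placeholders, assuming the framework only sees an abstract provider:

```swift
import CoreML

// Declared in the framework: an abstraction the framework uses to obtain
// a model without knowing where it is bundled.
public protocol ModelProviding {
    func loadModel(completion: @escaping (Result<MLModel, Error>) -> Void)
}

// Implemented in the app target, where the encrypted .mlmodelc is bundled.
final class AppModelProvider: ModelProviding {
    func loadModel(completion: @escaping (Result<MLModel, Error>) -> Void) {
        guard let url = Bundle.main.url(forResource: "MyModel",
                                        withExtension: "mlmodelc") else {
            completion(.failure(NSError(domain: "AppModelProvider", code: -1)))
            return
        }
        // Asynchronous load is required for encrypted models, since the
        // decryption key is fetched over the network on first use.
        MLModel.load(contentsOf: url,
                     configuration: MLModelConfiguration(),
                     completionHandler: completion)
    }
}
```

The app injects an `AppModelProvider` into the framework at startup, so the framework never touches its own bundle for model resources.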


I'm facing the same issue. When the model is bundled in a framework it fails to load. Any help is appreciated!

I had the same issue with a model in a .bundle folder and found a workaround. It should also work for frameworks.

Instead of using

Bundle(identifier: "com.company.bundle")?.url(forResource: "model", withExtension: "mlmodelc")

you can use

Bundle.main.url(forResource: "model", withExtension: "mlmodelc", subdirectory: "bundle.bundle")

This is definitely hacky, but it works in my case.