CoreML model load failed with this error: "Failed to set up decrypt context for /private/var/mobile/Containers/Data/Application/ACB94507-F8DE-494B-8499-B0CF75FC3B55/Library/Caches/temp.m/***.mlmodelc. error:-42905"

Hi there.

We use a Core ML model for image processing, and because loading the model takes a long time (~10 s), we preload it when the app starts.

However, on some devices, loading the model fails with the error above.

We download the Core ML model from our server and then load it from local storage. The loading code is typical and looks like this:

MLModel.load(contentsOf: compiledUrl, configuration: config)
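For completeness, the surrounding preload path looks roughly like this (a sketch; `ModelProvider`, `preload`, and the error handling are illustrative names from our side, while `MLModel.load` is the actual Core ML call we make):

```swift
import CoreML

// Sketch of our preload path at app launch (illustrative structure;
// only the MLModel.load call is the real API usage in question).
final class ModelProvider {
    static let shared = ModelProvider()
    private(set) var model: MLModel?

    func preload(compiledUrl: URL) async {
        let config = MLModelConfiguration()
        do {
            // On some devices this throws error -42905
            // ("Failed to set up decrypt context") and keeps failing
            // until the device is restarted.
            model = try await MLModel.load(contentsOf: compiledUrl,
                                           configuration: config)
        } catch {
            print("Core ML model load failed: \(error)")
        }
    }
}
```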

Once this error happens, loading keeps failing until we restart the device.

(+) In this thread, I saw that it may be related to a "limitation of decrypt sessions": https://developer.apple.com/forums/thread/707622. But it also happens in in-house TestFlight builds used by fewer than 5 people.

Can you tell me why this happens?

Replies

When this happens, it typically means there is a leak somewhere in the system. If you are holding on to the MLModel instance that corresponds to the encrypted model, you can run out of decrypt sessions, resulting in this error. Could you please check to make sure the MLModel is not being leaked?
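As a hypothetical sketch of the kind of leak meant here (all names are illustrative), a retain cycle can keep an MLModel, and its decrypt session, alive even after you think you have released it:

```swift
import CoreML

// Illustrative example of an MLModel leak via a retain cycle.
final class Predictor {
    var model: MLModel?
    var onUpdate: (() -> Void)?

    func configure() {
        // Leak: the closure captures `self` strongly, so the Predictor
        // (and its loaded model) can never be deallocated.
        onUpdate = { self.runInference() }

        // Fix: capture self weakly so dropping the Predictor
        // also frees the model and its decrypt session.
        // onUpdate = { [weak self] in self?.runInference() }
    }

    func runInference() { /* run predictions with `model` */ }
}
```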

By "leak", do you mean keeping a reference to the MLModel instance? In our current use case, we prefer to keep this reference for the whole app life cycle, because MLModel.load() is very slow (~10 s). Do you mean we need to load the model only on demand, release the reference when we are done, and then reload it (another ~10 s) whenever we need it again?

Hi, Apple.

Let me double-check your guidance for avoiding an MLModel leak:

  • Can we load our ML model when the app launches and keep it loaded for the whole app life cycle?
  • Or should we load the model on demand, only when the user actually uses it, and unload it as soon as the user is done?
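To make the second option concrete, we imagine something like the following (a sketch under our assumptions; `OnDemandModel`, `withModel`, and `unload` are hypothetical names, not an API):

```swift
import CoreML

// Sketch of on-demand loading: load when needed, drop the reference when
// done so the decrypt session can presumably be reclaimed.
actor OnDemandModel {
    private var model: MLModel?
    private let compiledUrl: URL
    private let config = MLModelConfiguration()

    init(compiledUrl: URL) {
        self.compiledUrl = compiledUrl
    }

    func withModel<T>(_ body: (MLModel) throws -> T) async throws -> T {
        if model == nil {
            // Pays the ~10 s load cost on each first use after an unload.
            model = try await MLModel.load(contentsOf: compiledUrl,
                                           configuration: config)
        }
        return try body(model!)
    }

    func unload() {
        // Releasing the last reference should free the model.
        model = nil
    }
}
```

The trade-off we are asking about is exactly this: option 1 avoids the repeated ~10 s load cost but holds the reference (and possibly a decrypt session) for the whole app life cycle, while option 2 releases it but makes every first use after an unload slow again.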