
Post not yet marked as solved
3 Replies
When this happens, it typically indicates a leak in the system. If you are holding on to the MLModel that corresponds to the encrypted model, you can run out of decrypt sessions, resulting in this error. Could you please check that the MLModel is not being leaked?
Post marked as solved
2 Replies
Sorry about the inconvenience. This issue has been addressed, and a fix should be available for verification in the next iOS beta release. Off-device compilation is definitely a recommended path, and most apps use this approach. In fact, Xcode does this if you include a model in your project. You can also use the coremlcompiler command-line tool from Xcode's toolchain to compile your model off device:

xcrun coremlcompiler compile </path/to/mlmodel/or/mlpackage> </path/to/destination/directory>
Post marked as solved
2 Replies
Sorry about the disruption. We root-caused and addressed the issue, and there are guardrails in place to ensure this does not repeat.

Q: Does this occur often? During such an outage, new customers of my app are unable to use the app.
A: No, this is the first time since the feature went public more than two years ago, and we are taking steps to make sure it won't happen again.

Q: Does anyone have sample code showing how to identify this situation / extract the status code from the error object? (Instead of the error "You need an internet connection", I would like to report "Apple server currently not available - try again later".)
A: The returned error should be MLModelErrorModelDecryptionKeyFetch. Yes, this would also be the error when the device is offline.

Q: Does anyone know how to force/simulate this situation so that I can analyse the error object and see how I can identify it?
A: There's no clear way to simulate / force this scenario.
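A minimal Swift sketch of checking for this specific error when loading an encrypted model. The function and modelURL names are placeholders, and the "hasText"-style handling of your UI message is up to you; the only API facts assumed from the thread are MLModelError and its modelDecryptionKeyFetch code:

```swift
import CoreML

// Hypothetical helper: load an encrypted model and distinguish a
// decryption-key-fetch failure from other load errors.
func loadEncryptedModel(at modelURL: URL) -> MLModel? {
    do {
        return try MLModel(contentsOf: modelURL)
    } catch let error as MLModelError where error.code == .modelDecryptionKeyFetch {
        // Device is offline or the key server was unreachable:
        // surface "Apple server currently not available - try again later".
        print("Could not fetch the model decryption key: \(error)")
        return nil
    } catch {
        print("Model failed to load: \(error)")
        return nil
    }
}
```

This requires an on-device run with an actual encrypted model, so it is a sketch rather than something you can exercise in isolation.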
Post not yet marked as solved
1 Reply
Hello, this seems like a genuine bug in CoreML / MetalPerformanceShaders. Do you mind filing a bug report on http://feedbackassistant.apple.com/ with a sysdiagnose captured from the device after reproducing the issue, along with the error message you posted above?
Post not yet marked as solved
2 Replies
No, CoreML can only run models that are on device - either bundled with your app or downloaded OTA and stored on device. Let us know your use case (if you have specifics in mind) and we can see how to support it!
Post not yet marked as solved
1 Reply
CoreML on device training does not support multiple loss functions. Please file a feature request on feedbackassistant.apple.com.
Post not yet marked as solved
1 Reply
Depending on the compute unit preference specified at model load time (MLModelConfiguration.computeUnits), CoreML performs additional optimisations when the model is loaded - which explains the difference in load time. Note that many of these optimisations happen only on the first load and should not impact subsequent loads. Please file a bug on feedbackassistant.apple.com if you observe otherwise.
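For reference, the compute unit preference mentioned above is set on the configuration passed at load time. A minimal Swift fragment (modelURL is a placeholder for your compiled model's location):

```swift
import CoreML

// Configuration fragment: pick the compute units before loading.
// .all enables Neural Engine/GPU specialization (more first-load work);
// .cpuOnly skips it, trading inference speed for a faster first load.
let config = MLModelConfiguration()
config.computeUnits = .all   // or .cpuOnly / .cpuAndGPU

let model = try MLModel(contentsOf: modelURL, configuration: config)
```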
Post not yet marked as solved
1 Reply
Hello, this is a valid observation. Support for range flexibility on the Neural Engine does have a few limitations. Do you mind filing a bug report on feedbackassistant.apple.com? One way to work around this issue is to use enumerated flexibility. This lets you enumerate all shapes ahead of time, so CoreML can prepare the model for Neural Engine inference for all of those shapes in advance. Is that possible for your use case?
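Enumerated flexibility is declared at conversion time with coremltools. A hedged sketch, assuming a traced PyTorch model named traced_model and an input named "image" (both illustrative); the shapes themselves are examples:

```python
import coremltools as ct

# Enumerate the exact shapes the model will see, plus a default.
# CoreML can then specialize the model for each shape ahead of time.
input_shape = ct.EnumeratedShapes(
    shapes=[[1, 3, 256, 256], [1, 3, 512, 512]],
    default=[1, 3, 256, 256],
)

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="image", shape=input_shape)],
)
mlmodel.save("FlexibleModel.mlpackage")
```

At prediction time, inputs must match one of the enumerated shapes exactly.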
Post not yet marked as solved
3 Replies
"Command CoreMLModelCodegen failed with a nonzero exit code" - that explains the issue you are facing, but it is not expected. Could you share a sample project on feedbackassistant.apple.com? We can take a quick look and get back to you here.
Post not yet marked as solved
1 Reply
Thank you for the post. I saw a feature request come through feedbackassistant.apple.com and it seems to be related to this. Thanks for the feature request - we will look into this use case.
Post marked as solved
2 Replies
This is expected. We limit the number of active decrypt sessions for every app. The issue is that your model is being retained in the autorelease pool, and that is holding on to an active decrypt session. If you move the @autoreleasepool { to just after while (keepRunning) - so the pool drains on every iteration - you should be able to resolve this issue.
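A minimal sketch of the restructuring described above, in Objective-C to match the post's own snippet; keepRunning is from the original post and modelURL is a placeholder:

```objc
NSError *error = nil;
while (keepRunning) {
    @autoreleasepool {
        // The model is autoreleased into this per-iteration pool...
        MLModel *model = [MLModel modelWithContentsOfURL:modelURL error:&error];
        // ... run predictions with the model here ...
    } // ...and the pool drains here, releasing the model's decrypt session.
}
```

With the pool inside the loop, each iteration's model is released before the next decrypt session is opened, so the per-app session limit is never exhausted.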
Post not yet marked as solved
1 Reply
Replied in "CoreML problem"
We should be able to take a closer look and help. Could you please file a bug report on feedbackassistant.apple.com along with TabRegLoanModel.mlmodel?
Post not yet marked as solved
1 Reply
CoreML supports image input / output. You can feed a CVPixelBuffer through the CoreML API and read back the prediction. Here's a good starting point for reading CVPixelBuffers from videos: https://developer.apple.com/documentation/accelerate/reading_from_and_writing_to_core_video_pixel_buffers. You first need a model that takes an image and produces a boolean indicating whether there's text in the image. I don't have a specific pointer for that, but you might be able to find existing model architectures / trained models that can do this. Once you have a model, you can use coremltools to convert it into the CoreML format: https://coremltools.readme.io.
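Once the model is converted, the prediction step can be sketched in Swift as follows. The feature names "image" and "hasText" are hypothetical - use whatever names your model's interface actually declares:

```swift
import CoreML
import CoreVideo

// Hypothetical helper: feed a CVPixelBuffer to a CoreML model and read
// back a boolean-style output indicating whether the frame contains text.
func containsText(_ pixelBuffer: CVPixelBuffer, model: MLModel) throws -> Bool {
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["image": MLFeatureValue(pixelBuffer: pixelBuffer)])
    let output = try model.prediction(from: input)
    return output.featureValue(for: "hasText")?.int64Value == 1
}
```

The generated model class from Xcode gives you a typed wrapper over the same calls; the MLDictionaryFeatureProvider form shown here is the generic path.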
Post marked as solved
2 Replies
Hello, could you please file a bug report on feedbackassistant.apple.com along with the code / model you are trying to debug so one of the Apple engineers can take a look? I would also recommend trying this on a real device if you can. There are certain limitations on the simulator.