Does iOS DockKit from WWDC 2023 include sample code for an app like the custom camera app shown in the session video?
When I use Xcode to generate a model encryption key, it fails with the error 'Failed to Generate Encryption Key and Sign in with your iCloud account in System Preferences and Retry'. It worked fine a month or two ago, but now it suddenly fails, and models that were already encrypted can no longer be decrypted. iCloud itself is fine, I have signed out and back in, and I have also tried other Xcode teams, but none of them work.
Xcode version 14.2
macOS Monterey 12.6.2
1. The original PyTorch model converted to a Core ML .mlpackage runs its operators on the ANE when the ALL compute units setting is used. But when I cut out only part of the original model and convert that sub-model, with the same ALL setting the operators run only on CPU+GPU.
2. Also, when converting with coremltools and minimum_deployment_target=ct.target.iOS15, the ALL setting does run on the ANE, but with minimum_deployment_target=ct.target.iOS16 it runs only on CPU+GPU (see the load-time check sketched below).
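One way to narrow this down on the device side is to load the same compiled model once with unrestricted compute units and once restricted to CPU + Neural Engine (available from iOS 16), then compare whether predictions still succeed and how fast they are. A minimal Swift sketch, assuming the compiled model is bundled as parsing.mlmodelc (hypothetical name):

```swift
import CoreML

// Minimal sketch: load the same compiled model with two compute-unit
// settings and compare prediction behavior. Model name is hypothetical.
@available(iOS 16.0, macOS 13.0, *)
func loadForANECheck() throws {
    guard let url = Bundle.main.url(forResource: "parsing", withExtension: "mlmodelc") else {
        fatalError("parsing.mlmodelc not found in bundle")
    }

    // Allow everything (CPU, GPU, ANE); Core ML decides the split.
    let allConfig = MLModelConfiguration()
    allConfig.computeUnits = .all
    let modelAll = try MLModel(contentsOf: url, configuration: allConfig)

    // Restrict to CPU + Neural Engine (iOS 16+). Operators that cannot
    // run on the ANE fall back to CPU here, which makes the difference
    // visible when you time predictions.
    let aneConfig = MLModelConfiguration()
    aneConfig.computeUnits = .cpuAndNeuralEngine
    let modelANE = try MLModel(contentsOf: url, configuration: aneConfig)

    _ = (modelAll, modelANE) // run predictions with both and compare latency/results
}
```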
Core ML model compiled and encrypted with xcrun reports an error and produces abnormal results on iOS 15 devices
I compile and encrypt the .mlpackage with `xcrun coremlcompiler compile parsing.mlpackage . --encrypt parsing.mlmodelkey`, which produces an .mlmodelc that I then add to the project. It works on iOS 16.0 devices and the results are correct, but on iOS 15 devices loading the model reports an error: [espresso] [Espresso::handle_ex_plan] exception=ANECF error: _ANEEspressoIRTranslator : error at unknown location: File at path "/var/mobile/Library/Caches/com.apple.aned/clones/6BBA6DAB4BAF638CB557CACD5A878D143CE22E4F9896A3A341F24A3787910D71/parsing.mlmodelc/model.mil" cannot be read. In addition, the results are slightly different from those of the non-encrypted model, which has no such problems on iOS 15.
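As far as I understand, an encrypted model has to be loaded with one of the asynchronous MLModel.load methods, because the decryption key may need to be fetched from Apple's servers the first time. A minimal Swift sketch, assuming the compiled parsing.mlmodelc (hypothetical name) is bundled with the app:

```swift
import CoreML

// Sketch: asynchronously load an encrypted, precompiled .mlmodelc.
// Paths and names are hypothetical.
func loadEncryptedModel(completion: @escaping (Result<MLModel, Error>) -> Void) {
    guard let url = Bundle.main.url(forResource: "parsing", withExtension: "mlmodelc") else {
        return completion(.failure(CocoaError(.fileNoSuchFile)))
    }
    let config = MLModelConfiguration()
    config.computeUnits = .all

    MLModel.load(contentsOf: url, configuration: config) { result in
        // .success(MLModel) once the key has been fetched and the model
        // decrypted, or .failure with the underlying load error.
        completion(result)
    }
}
```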
Complete steps: I didn't write any code such as MLModelCollection. I just opened the model in Xcode, generated the encryption key (.mlmodelkey), and then generated the .mlarchive. Then I opened https://ml.developer.apple.com/core-ml/ in Safari, clicked "+", entered "test" for Model Collection ID and "parsing" for Model IDs, and clicked "Create". Then I clicked "+" again, entered "test" for Deployment ID, uploaded the .mlarchive file under Choose file, and clicked "Deploy" in the upper right corner. The upper left corner then shows a red "Load failed!". I have also tried a non-encrypted model, and the problem exists with both personal and corporate accounts. What is the problem? I see a lot of feedback from other developers about this.
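For reference, once a deployment succeeds, the app side would retrieve the model roughly as in this hedged Swift sketch; the identifiers "test" and "parsing" mirror the ones in the post:

```swift
import CoreML

// Sketch of the app-side counterpart to a Core ML Model Deployment:
// fetch the deployed collection and load a model from it.
func fetchDeployedModel() {
    _ = MLModelCollection.beginAccessing(identifier: "test") { result in
        switch result {
        case .success(let collection):
            guard let entry = collection.entries["parsing"] else {
                print("Model 'parsing' is not in the collection yet")
                return
            }
            do {
                let model = try MLModel(contentsOf: entry.modelURL)
                print("Loaded deployed model:", model.modelDescription)
            } catch {
                print("Failed to load deployed model:", error)
            }
        case .failure(let error):
            print("beginAccessing failed:", error)
        }
    }
}
```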
How can I automatically get the names and tensor shapes/types of a Core ML model's inputs and outputs from Objective-C code?
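This information is exposed through MLModel's modelDescription (inputDescriptionsByName, outputDescriptionsByName, multiArrayConstraint), and the same properties are available from Objective-C. A Swift sketch of the idea:

```swift
import CoreML

// Sketch: enumerate a model's input/output feature names plus the
// multi-array shape and data type where one is defined.
func printIO(of model: MLModel) {
    let desc = model.modelDescription

    for (name, feature) in desc.inputDescriptionsByName {
        if let constraint = feature.multiArrayConstraint {
            print("input \(name): shape=\(constraint.shape) dataType=\(constraint.dataType.rawValue)")
        } else {
            print("input \(name): type=\(feature.type.rawValue)")
        }
    }
    for (name, feature) in desc.outputDescriptionsByName {
        if let constraint = feature.multiArrayConstraint {
            print("output \(name): shape=\(constraint.shape) dataType=\(constraint.dataType.rawValue)")
        } else {
            print("output \(name): type=\(feature.type.rawValue)")
        }
    }
}
```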
On macOS I use coremltools to convert the model from PyTorch to a Core ML .mlpackage, with convert_to="mlprogram", minimum_deployment_target=ct.target.iOS16, and compute_units=ct.ComputeUnit.ALL; the output model is fp16. At inference time the CPU and GPU results are very close and correct, but the results are seriously wrong when the NPU (ANE) backend is used. The phone is on iOS 16.2, and the model's operators are running on the NPU (the inference time is shorter).
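A common way to confirm that the discrepancy comes from the ANE path is to load the same compiled model with different computeUnits settings and compare the outputs. A minimal Swift sketch; the model URL and input provider are placeholders:

```swift
import CoreML

// Sketch: run the same input through the model under different
// compute-unit settings and compare the numeric outputs.
func compareBackends(modelURL: URL, input: MLFeatureProvider) throws {
    let units: [MLComputeUnits] = [.cpuOnly, .cpuAndGPU, .all]
    for unit in units {
        let config = MLModelConfiguration()
        config.computeUnits = unit
        let model = try MLModel(contentsOf: modelURL, configuration: config)
        let output = try model.prediction(from: input)
        print("computeUnits=\(unit.rawValue):", output.featureNames)
        // Compare the MLMultiArray values here, e.g. the max absolute
        // difference of each output against the .cpuOnly result.
    }
}
```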
In the Xcode performance report, my transformer model shows no operators running on the ANE. I would like to know what is wrong and how to fix it. PS: the original model is PyTorch.
How can I quickly write the contents of an MLMultiArray to a text (.txt) file as a String?
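A sketch of one straightforward approach, assuming a float32 array with the default contiguous layout: read the values through the raw buffer, build one string, and write it in a single call (function name and file URL are hypothetical):

```swift
import CoreML

// Sketch: dump a float32 MLMultiArray to a .txt file. Building one big
// string and writing once is much faster than appending NSNumber
// subscripts to a file in a loop. Assumes contiguous (default) layout.
func writeToTxt(_ array: MLMultiArray, url: URL) throws {
    precondition(array.dataType == .float32)
    let count = array.count
    let pointer = array.dataPointer.bindMemory(to: Float32.self, capacity: count)
    let buffer = UnsafeBufferPointer(start: pointer, count: count)

    let text = buffer.map { String($0) }.joined(separator: "\n")
    try text.write(to: url, atomically: true, encoding: .utf8)
}
```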
How can I get the exact time spent in each layer of a Core ML model? Is there an API for this like TensorFlow's?
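As far as I know, Core ML does not expose per-layer timings through a public runtime API; the Xcode model performance report and the Core ML template in Instruments are the usual tools. End-to-end prediction latency can be measured directly; a minimal Swift sketch:

```swift
import CoreML
import QuartzCore

// Sketch: measure end-to-end prediction latency (not per-layer time).
// Warm up first so one-time compilation/ANE preparation does not skew
// the numbers.
func timePredictions(model: MLModel, input: MLFeatureProvider, runs: Int = 50) throws {
    _ = try model.prediction(from: input)   // warm-up

    var total: CFTimeInterval = 0
    for _ in 0..<runs {
        let start = CACurrentMediaTime()
        _ = try model.prediction(from: input)
        total += CACurrentMediaTime() - start
    }
    print("average prediction time: \(total / Double(runs) * 1000) ms over \(runs) runs")
}
```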
The data in my MLMultiArray is float32. How do I convert its contents to int?
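A minimal Swift sketch of one way to do it, assuming a float32 array with the default contiguous layout: allocate an int32 MLMultiArray of the same shape and copy the values across (function name is hypothetical):

```swift
import CoreML

// Sketch: copy a float32 MLMultiArray into a new int32 MLMultiArray of
// the same shape. Assumes contiguous (default) layout.
func toInt32(_ floats: MLMultiArray) throws -> MLMultiArray {
    precondition(floats.dataType == .float32)
    let ints = try MLMultiArray(shape: floats.shape, dataType: .int32)

    let src = floats.dataPointer.bindMemory(to: Float32.self, capacity: floats.count)
    let dst = ints.dataPointer.bindMemory(to: Int32.self, capacity: ints.count)
    for i in 0..<floats.count {
        dst[i] = Int32(src[i])   // truncates toward zero; round first if needed
    }
    return ints
}
```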