We're trying to bring the prediction models in-app using JSVirtualMachine; the AI used in our modelling can't be expressed with iOS Core ML as it currently stands.
Everything was working fine until we hit a memory limit with a model that weighs in at a whopping 350 MB.
This JavaScript file is mostly data objects (arrays, some nested).
That huge file works on a 2020 iPad Pro (which has 6 GB of memory) but crashes every other device we have (iPod touch 7th generation, iPad mini 2, iPhone X, iPhone 8 Plus).
Are there any documented memory limitations for JSVirtualMachine that we can reference?
We're looking at splitting the larger files up and running the predictor not as a single file but as a sequence of smaller files, with the results passed down the line.
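Roughly what we have in mind is a sketch like the one below: each stage script runs in the same JSContext and hands its result to the next stage through a shared global. The stage file names, stagesPath, and the stageOutput variable are all hypothetical placeholders.

```objc
#import <JavaScriptCore/JavaScriptCore.h>

// Sketch: run the model as a sequence of stage scripts inside one
// JSContext, passing results along via a shared global.
JSContext *ctx = [[JSContext alloc] initWithVirtualMachine:[[JSVirtualMachine alloc] init]];
ctx.exceptionHandler = ^(JSContext *context, JSValue *exception) {
    NSLog(@"JS exception: %@", exception);
};

NSString *stagesPath = @"/path/to/stages";  // placeholder
NSArray<NSString *> *stageFiles = @[@"stage1.js", @"stage2.js", @"stage3.js"];  // hypothetical split
for (NSString *stage in stageFiles) {
    @autoreleasepool {
        NSString *script = [NSString stringWithContentsOfFile:[stagesPath stringByAppendingPathComponent:stage]
                                                     encoding:NSUTF8StringEncoding
                                                        error:NULL];
        // Each stage reads the global `stageOutput` left by its predecessor
        // and overwrites it with its own result.
        [ctx evaluateScript:script];
    }
}
JSValue *finalResult = ctx[@"stageOutput"];
```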
Any other suggestions welcome!
The app is written in Objective-C, and the split-file approach didn't work initially until we added @autoreleasepool { ... } around the code that loads the JSON from file, through Foundation objects, and into the JSContext.
```objc
// Assign each JSON data file to the JSContext as a global named after the file.
for (NSString *file in dataFiles) {
    if ([[[file pathExtension] lowercaseString] isEqualToString:@"json"] == NO) {
        continue;
    }
    NSString *variableName = [file stringByDeletingPathExtension];
    @autoreleasepool {
        // Drain the temporary Foundation objects created while bridging
        // the JSON into the context, rather than letting them pile up
        // until the end of the run loop iteration.
        self.jsContext[variableName] = [FileUtils loadJSONFile:[dataFilesPath stringByAppendingPathComponent:file]];
    }
}
```
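For completeness, invoking the predictor against the populated context looks roughly like this; predictor.js, runPrediction, and inputFeatures are placeholders for our actual names.

```objc
// Once the data globals are in place, load the predictor logic into the
// same context and call its entry point by name.
NSString *predictorSource =
    [NSString stringWithContentsOfFile:[dataFilesPath stringByAppendingPathComponent:@"predictor.js"]
                              encoding:NSUTF8StringEncoding
                                 error:NULL];
[self.jsContext evaluateScript:predictorSource];

NSDictionary *inputFeatures = @{ @"age": @42 };  // placeholder input
JSValue *result = [self.jsContext[@"runPrediction"] callWithArguments:@[inputFeatures]];
NSDictionary *prediction = [result toDictionary];
```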
The JS code itself benefited from using var instead of const, which made a significant difference to memory usage.
Load time is slow on an iPad mini 2, at around 76 seconds, but if the context is preserved, execution time is only a few seconds on each analysis run. The iPad Pro loads in 14 seconds, which isn't far off what we got when loading the single-file JS model.
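Preserving the context is just a matter of holding a strong reference so the load cost is paid once per process, e.g. via a lazily initialized property. This is only a sketch; loadModelIntoContext: is a hypothetical helper wrapping the loading loop above.

```objc
// Lazily build and cache the context so the expensive model load happens
// once; every subsequent analysis reuses the populated context.
// Assumes a `modelContext` property backed by the `_modelContext` ivar.
- (JSContext *)modelContext {
    if (_modelContext == nil) {
        _modelContext = [[JSContext alloc] initWithVirtualMachine:[[JSVirtualMachine alloc] init]];
        [self loadModelIntoContext:_modelContext];  // hypothetical helper
    }
    return _modelContext;
}
```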
So we appear to have a very acceptable solution.