Has anyone tried Core ML model conversion for ML models other than image recognition?

Hi, has anyone tried Core ML model conversion for models other than image and number recognition, say R-CNN or image segmentation? I am facing a lot of difficulty converting those types of models from my existing code base to the Apple-supported format. It is also getting very difficult to convert a pure TensorFlow model to the Keras 1.2.2 version. Is there any idea of when Apple will support the newer Keras version, and also core TensorFlow?

Replies

We are working hard to get Keras 2.0 support. Stay tuned for an update soon. If you have specific issues with models, please do provide us with details. Thanks for trying out the beta.

Looking forward to the day all this guff is out of date due to upgrades to CoreML to support tensorflow out of the box.


there is this: https://github.com/xmartlabs/bender

+ https://github.com/xmartlabs/benderthon

I spent a couple of days grooming this code, but as you noted, a lot of it is focused on image inference.

It seems they strip down / bend TensorFlow operations to a bare minimum.


The latest TensorFlow code has CocoaPods support (> 450 MB). It still has only a subset of operations,
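For reference, pulling the pod in looked roughly like this (a sketch; `TensorFlow-experimental` was the pod name TensorFlow shipped at the time, and `MyApp` is a placeholder target):

```ruby
# Podfile - experimental TensorFlow mobile pod
target 'MyApp' do
  pod 'TensorFlow-experimental'
end
```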

but it would more easily support any advanced graph operations. Unfortunately, their samples are (currently) all in Objective-C++.

To get it onto mobile, you have to freeze the graph.
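Freezing here means baking the checkpoint variables into constants so a single self-contained .pb can ship with the app. A sketch of the stock freeze_graph tool that comes with TensorFlow 1.x (all file names and the output node name are placeholders for your own model):

```shell
# Convert graph definition + checkpoint into one frozen .pb
python -m tensorflow.python.tools.freeze_graph \
  --input_graph=model.pbtxt \
  --input_checkpoint=model.ckpt \
  --output_node_names=output \
  --output_graph=frozen_model.pb
```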


I had a crack at porting the server-side Golang library to Swift (see the C API wrapper), and also had a crack at programmatically generating the operations file in Swift using Stencil: https://github.com/johndpope/tensorflow/tree/swift/tensorflow/swift

but I abandoned it when I found out they don't allow training in any language except Python:

https://github.com/tensorflow/tensorflow/issues/19


I'm wondering if a (programmatically generated) C++ -> C -> Swift wrapper around TensorFlow would bear more fruit.


The key to getting RNNs working is freezing the graph and optimizing it for inference:

https://github.com/xmartlabs/benderthon/issues/3
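A sketch of the companion optimize_for_inference pass over a frozen graph, which strips training-only nodes before mobile deployment (file and node names are placeholders; flags as in the TF 1.x tool):

```shell
# Strip training-only nodes from an already-frozen graph
python -m tensorflow.python.tools.optimize_for_inference \
  --input=frozen_model.pb \
  --output=optimized_model.pb \
  --frozen_graph=True \
  --input_names=input \
  --output_names=output
```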


From here, load the .pb and throw it at TensorFlow's tf_session->Run:

https://gist.github.com/johndpope/39952c9b15f6c39c535e58b638e6f639

```
if (tf_session.get()) {
  std::vector<tensorflow::Tensor> outputs;
  // Feed the image tensor into the named input, fetch the named output layer.
  tensorflow::Status run_status = tf_session->Run(
      {{input_layer_name, image_tensor}}, {output_layer_name}, {}, &outputs);
  if (!run_status.ok()) {
    LOG(ERROR) << "Running model failed: " << run_status;
  } else {
    tensorflow::Tensor *output = &outputs[0];
    auto predictions = output->flat<float>();
  }
}
```

Thank you for your response @srikris. I tried converting Faster R-CNN from already-available Caffe models to Core ML models, but no luck. Here is what I have done so far (source for all the links below: https://github.com/rbgirshick/py-faster-rcnn/tree/master/models):

  1. I downloaded the models' caffemodel files from COCO (https://dl.dropboxusercontent.com/s/cotx0y81zvbbhnt/coco_vgg16_faster_rcnn_final.caffemodel?dl=0) and PascalVOC (https://dl.dropboxusercontent.com/s/o6ii098bu51d139/faster_rcnn_models.tgz?dl=0).
  2. I moved the models to respective folders in the repo's directory ./models/ .
  3. I initialised my conda environment, which works perfectly for the Caffe models provided at http://pythonhosted.org/coremltools/
  4. I ran the following script:

```
import coremltools
coreml_model = coremltools.converters.caffe.convert(
    ('ZF_faster_rcnn_final.caffemodel', 'test.prototxt'))
```


And this is the output on the terminal:

```
[libprotobuf ERROR /git/coreml/deps/protobuf/src/google/protobuf/text_format.cc:298] Error parsing text-format caffe.NetParameter: 290:21: Message type "caffe.LayerParameter" has no field named "roi_pooling_param".
Traceback (most recent call last):
  File "coreml.py", line 19, in <module>
    ('ZF_faster_rcnn_final.caffemodel', 'test.prototxt'))
  File "/
    predicted_feature_name)
  File "/
    predicted_feature_name
RuntimeError: Unable to load caffe network Prototxt file: test.prototxt
```


In this particular case, roi_pooling_param happens to be added in a newer version of Caffe, but when I try to convert other Caffe models for bounding-box detection, I get other errors, all related to failing to load the .prototxt files. I would be more flexible designing networks for a specific task using TensorFlow, and Keras 2.0 is easier to adapt for converting the layers. But it would be great if your team could create a .mlmodel for any of the R-CNN, Fast R-CNN, Faster R-CNN, SSD, or MobileNet SSD networks. 🙂

The mlmodel format supports generic neural networks, i.e. NNs that are not classifiers. I'm currently porting YOLO to Core ML, for example. This does not output a classification result (a probability distribution) but a multi-dimensional array, which the app then processes further (on the CPU).
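That CPU post-processing step can be sketched in plain NumPy. This is an illustrative decoder for a YOLO-style (S, S, 5 + C) grid output, not the exact layout any particular YOLO version emits; the per-cell encoding, image size, and threshold are assumptions:

```python
import numpy as np

def decode_grid(output, num_classes, img_size=416, conf_threshold=0.3):
    """Decode a (S, S, 5 + C) YOLO-style tensor into boxes.

    Each cell is assumed to hold [tx, ty, tw, th, objectness, class scores...];
    real YOLO versions differ in the exact encoding.
    """
    S = output.shape[0]
    cell = img_size / S
    boxes = []
    for row in range(S):
        for col in range(S):
            tx, ty, tw, th, obj = output[row, col, :5]
            scores = output[row, col, 5:5 + num_classes]
            cls = int(np.argmax(scores))
            confidence = float(obj * scores[cls])
            if confidence < conf_threshold:
                continue
            # Box centre is an offset inside its grid cell; size is relative to the image.
            cx = (col + tx) * cell
            cy = (row + ty) * cell
            w = tw * img_size
            h = th * img_size
            # (x, y, w, h, class index, confidence), with (x, y) the top-left corner
            boxes.append((cx - w / 2, cy - h / 2, w, h, cls, confidence))
    return boxes
```

In an app, `output` would come from the model's MLMultiArray copied into a buffer, and a non-maximum-suppression pass would normally follow.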

Hi, I have failed to get an FCN model working with Core ML: if the model contains a Deconvolution layer, it fails with the error:


Error Domain=com.apple.CoreML Code=0 "Error computing NN outputs." UserInfo={NSLocalizedDescription=Error computing NN outputs.}

If I delete the Deconvolution layer in deploy.prototxt, it works fine.

How can I use Core ML framework with FCN model containing a Deconvolution layer?
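To isolate the failure, it can help to reproduce it with just the upsampling layer. This is the shape a Deconvolution layer typically takes in an FCN deploy.prototxt (the FCN-32s-style names and parameters below are illustrative):

```
layer {
  name: "upscore"
  type: "Deconvolution"
  bottom: "score_fr"
  top: "upscore"
  convolution_param {
    num_output: 21     # e.g. 21 PASCAL VOC classes
    kernel_size: 64    # kernel for 32x bilinear-style upsampling
    stride: 32
    bias_term: false
  }
}
```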

Care to share more about your port of YOLO to CoreML?

Did you get this working?

Hi, I have been working on the object detection pipeline and finally achieved some decent results on iPhone 7 using Core ML. I have implemented Tiny YOLO v1 by converting already-available pretrained weights from Darknet into a Core ML model.


You can find the code here - https://github.com/r4ghu/iOS-CoreML-Yolo .

I also documented the challenges I faced during the conversion and the link to that blog is - https://sriraghu.com/2017/07/12/computer-vision-in-ios-object-detection/


Hope you will find it useful. 🙂

All, our iDetection app runs YOLOv3-SPP at 30 FPS on A12 iOS devices. It is free to download here:

https://itunes.apple.com/app/id1452689527


The project is here:

https://github.com/ultralytics/yolov3