CNN for classification - fake content

Hi guys,


Just to share a concern that I personally realised an hour ago through the following article: "www.cleverhans.io/security/privacy/ml/2016/12/16/breaking-things-is-easy.html". I have to say I was surprised by how easy it is to force classification models to make mistakes.
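For anyone curious about the mechanics: the best-known attack in this family is the fast gradient sign method (FGSM), which nudges every input feature by a small epsilon in the direction that increases the model's loss. Below is a minimal sketch on a toy logistic-regression classifier (the weights and inputs are made up for illustration); with a real CNN the gradient would come from backpropagation, but the perturbation step is identical.

```swift
import Foundation

// Toy logistic-regression "classifier"; weights and bias are made up.
let weights: [Double] = [0.8, -1.2, 0.5, 2.0]
let bias = 0.1

func sigmoid(_ z: Double) -> Double { 1.0 / (1.0 + exp(-z)) }

// Probability that input x belongs to class 1.
func predict(_ x: [Double]) -> Double {
    sigmoid(zip(weights, x).map { $0 * $1 }.reduce(0, +) + bias)
}

// Gradient of the cross-entropy loss with respect to the input:
// for logistic regression this is (p - y) * w.
func inputGradient(_ x: [Double], label y: Double) -> [Double] {
    let p = predict(x)
    return weights.map { (p - y) * $0 }
}

// FGSM: x' = x + epsilon * sign(dLoss/dx).
func fgsm(_ x: [Double], label y: Double, epsilon: Double) -> [Double] {
    let grad = inputGradient(x, label: y)
    return zip(x, grad).map { $0 + epsilon * ($1 >= 0 ? 1.0 : -1.0) }
}

let x: [Double] = [1.0, 0.5, -0.3, 0.2]
let adversarial = fgsm(x, label: 1.0, epsilon: 0.25)
print(predict(x))           // ~0.63: classified as class 1
print(predict(adversarial)) // ~0.36: a small perturbation flips the class
```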

In iOS 12 there is the option to run classification with most of the model shipped as part of the operating system, which makes the task of crafting fake images that produce wrong classifications easier. Even if Apple keeps this in mind during the training process to minimize this type of attack, we should all be aware of it when designing apps based on deep learning in general and CNNs in particular.
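For reference, this is roughly what that on-device classification path looks like with Vision and Core ML. `FlowerClassifier` here is a hypothetical stand-in for whatever Create ML model you bundle; the heavy feature-extraction layers ship with the OS, so the bundled model stays small.

```swift
import UIKit
import CoreML
import Vision

// Hypothetical: FlowerClassifier stands in for any Create ML
// image-classifier model bundled with the app.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: FlowerClassifier().model) else { return }

    // Vision runs the model and hands back ranked class labels.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```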


cheers

Manuel

Accepted Reply

Interesting article.


But in your conclusion, why care specifically about CNNs (you mean convolutional neural networks, I suppose)?

I see it more as a general robustness issue:

- for images, the training should be done with additional noised images to increase robustness (a Create ML sketch follows this list)

- for numeric data sets, the same applies
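For the image case, Create ML already exposes augmentation options (noise, blur, rotation, and so on), so the training run also sees perturbed copies of every image. A quick sketch, with placeholder paths:

```swift
import Foundation
import CreateML

// Placeholder path; point this at a real directory of labeled images.
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingImages")

// Train with built-in augmentations, including added noise, so the
// model also sees perturbed variants of every training image.
let parameters = MLImageClassifier.ModelParameters(
    augmentationOptions: [.noise, .blur, .rotation, .flip]
)
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir),
    parameters: parameters
)
try classifier.write(to: URL(fileURLWithPath: "/path/to/Robust.mlmodel"))
```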

Replies


CNNs are usually the kings of parameters, and it is quite easy to end up with overfitting issues. Besides that, Apple now offers the amazing option to train models on a small set of images with Create ML. The resulting model shares most of its layers with every other model trained through the same process.
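As a quick sanity check for that, Create ML lets you compare the training error against the error on a held-out set; a big gap between the two is the classic overfitting signal. A sketch, with placeholder paths:

```swift
import Foundation
import CreateML

// Placeholder paths; point these at real directories of labeled images.
let trainDir = URL(fileURLWithPath: "/path/to/TrainingImages")
let testDir  = URL(fileURLWithPath: "/path/to/TestImages")

let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainDir)
)

// Near-zero training error combined with a much higher held-out error
// is the classic sign of overfitting on a small image set.
let heldOut = classifier.evaluation(on: .labeledDirectories(at: testDir))
print("training error: \(classifier.trainingMetrics.classificationError)")
print("held-out error: \(heldOut.classificationError)")
```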

Producing images to "hack" models can become an interesting task. It could easily scale within the Apple ecosystem, and we should all be aware of this possibility. Note that not long ago some camera brands were sued because they shipped face-tracking systems that did not work well for people with dark skin, which was a well-known technical shortcoming...