Hi guys,
Just to share a concern that I personally realised an hour ago through the following article: "www.cleverhans.io/security/privacy/ml/2016/12/16/breaking-things-is-easy.html". I have to say I was surprised by how easy it is to force classification models to make mistakes.
In iOS 12 there is the option to run classification with most of the model shipped on the operating system, which makes it easier for an attacker to craft fake images that produce wrong classifications, since they can inspect the model directly. Even if Apple takes this into account during training to mitigate this type of attack, we should all be aware of it when designing apps based on deep learning in general and CNNs in particular.
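To see how little it takes, here is a minimal sketch of the fast gradient sign method the article describes, applied to a hand-rolled logistic classifier so the gradient is explicit. The weights, input, and epsilon below are made up purely for illustration, not taken from any real model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = [1.0, -2.0, 0.5]      # made-up weights of a toy linear classifier
b = 0.1
x = [0.9, 0.1, 0.4]       # an input the model classifies as class 1

def predict(v):
    return sigmoid(sum(wi * vi for wi, vi in zip(w, v)) + b)

p = predict(x)

# For cross-entropy loss with true label y = 1, the gradient of the loss
# w.r.t. the input is (p - y) * w; FGSM nudges each input coordinate by
# epsilon in the direction of the sign of that gradient.
y, epsilon = 1.0, 0.6
sign = lambda t: (t > 0) - (t < 0)
x_adv = [xi + epsilon * sign((p - y) * wi) for xi, wi in zip(x, w)]

p_adv = predict(x_adv)
print(p > 0.5, p_adv > 0.5)   # True False: the small nudge flips the class
```

The same idea scales to deep networks: with access to the model, the attacker gets the gradient for free via backpropagation, which is exactly why having the model on-device lowers the bar.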
cheers
Manuel