I am interested in creating an iOS project that uses CoreML and Vision, and I would really appreciate some advice on getting started. My main concern is actually creating a machine learning model. Ideally, I would like a tool that takes a substantial number of photos of, say, a dog, and lets me label them as such, so that CoreML could use this data to recognize other dogs in a given photo. I could then supply photos of a tree, for instance, and the application would distinguish between a tree and a dog. Is there any tool like this that exists?
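For context, here is roughly what I imagine the recognition step looking like once a model exists. This is just a sketch: `DogClassifier` is a hypothetical name standing in for whatever trained `.mlmodel` file gets added to the Xcode project, and I may have details of the Vision API wrong.

```swift
import CoreML
import Vision
import UIKit

// "DogClassifier" is hypothetical — it stands in for the Swift class
// Xcode generates from whatever trained .mlmodel is added to the project.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: DogClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Each observation pairs a label ("dog", "tree", ...) with a confidence.
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```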
Though this seems like a very simple example, is it possible to easily create a machine learning model for CoreML that achieves this behavior? I understand this may be a huge oversimplification of CoreML, machine learning tools, and how one uses them, but I am really just trying to get started with machine learning. Thanks in advance!