Lots of popular web apps (e.g. deepdreamgenerator.com) support Style Transfer in a way that lets the user upload an image to be used as the "input style".
Based on the current Style Transfer model creation flow in Create ML, it seems that you can only train a model on a single, fixed input style. A model that accepts arbitrary style inputs doesn't seem possible from the Create ML interface.
Is there a way to do it?
Maybe I just need to download a deep-dream-style model that accepts any style image and convert it to a Core ML model?