Style transfer models

I’m a visual artist / developer who uses style transfer as one of the important tools in my visual toolkit. After a quick test on Catalina, I love the speed and ease of use that Create ML provides, and I want to try out the Create ML style transfer. The idea of doing some programming to ramp up output and extend the search for visual results also appeals to me. I want to get started.

Background: I am using a much-loved older Macintosh laptop without significant graphics acceleration as my daily driver, and I want to start using these tools during the beta period, before Apple Silicon is released to the general public.

Question: Until the Apple Silicon devices are released, what do folks recommend in terms of hardware for working with these tools? At this point I have to do this on a personal budget, with a reasonable dollar outlay over time.

I’ve considered:
  • Requesting the Mac Mini with Apple Silicon, and then looking at options when official hardware becomes available.

  • Getting a regular Intel Mac Mini and adding an eGPU if the CPU doesn’t cut it.

  • Getting the current 16” i7 MacBook Pro and up-speccing the built-in GPU.

  • Giving up on macOS and speccing out a gaming laptop with an Nvidia GPU running Linux or Windows, and using Python and fast.ai.

  • Doing the work in the cloud. (I like this the least because I can’t easily control the dollar outlay.)

Unknowns:
  • I’m not clear on how much horsepower the visual style transfer models actually take to run. I’ve seen some discussions in the documentation about moving model training to Nvidia hardware, which sounds like it’s outside the Apple ecosystem, so I’m a bit confused. At WWDC I saw Macs doing style transfer, but no specs were mentioned.

What hardware would you choose to create art with style transfer?

Any help would be appreciated. Thanks.

—Tom

If I were a visual artist, I'd skip Create ML and learn to train my own style transfer models using TensorFlow or PyTorch. Create ML is fun for simple stuff, but you'll quickly run into its limitations, and there's no way to work around them.

For example, the Create ML style transfer models look like they're limited to 512×512 images (perhaps that's just the one for video; I didn't look closely).

If you don't already have a Linux computer with a nice GPU lying around, that means doing work in the cloud (which is often free for a number of hours).

You can use any machine that supports macOS Big Sur: update it to the latest macOS and try the new style transfer template in the Create ML app.

The training time depends on the number of iterations. With the default training settings, newer MacBooks will complete training in approximately 5-10 minutes, whereas previous-generation MacBooks will take a bit longer.

You can configure a few training parameters, such as style strength, which determines how much of your style the learned model applies to the output, and style density, which determines how coarse or fine the elements of the style will be.
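
If you'd rather script training than use the app, the same knobs are exposed through the CreateML framework's MLStyleTransfer API. Below is a minimal Swift sketch: the file paths are placeholders, and the parameter names and values are written from memory, so verify them against the MLStyleTransfer documentation before relying on them.

```swift
import CreateML
import Combine
import Foundation

// Placeholder paths -- substitute your own style image, content folder,
// validation image, and output location.
let styleURL      = URL(fileURLWithPath: "/path/to/style.jpg")
let contentURL    = URL(fileURLWithPath: "/path/to/content-images")
let validationURL = URL(fileURLWithPath: "/path/to/validation.jpg")
let outputURL     = URL(fileURLWithPath: "/path/to/MyStyle.mlmodel")

// One style image is applied across a directory of content images.
let dataSource = MLStyleTransfer.DataSource.images(
    styleImage: styleURL,
    contentDirectory: contentURL,
    processingOption: nil)

// styleStrength: how strongly the style is applied to the content.
// textelDensity: the "style density" -- coarser vs. finer style elements.
// .cnn is the higher-quality algorithm; .cnnLite is the lighter one aimed at video.
let parameters = MLStyleTransfer.ModelParameters(
    algorithm: .cnn,
    validation: .content(validationURL),
    maxIterations: 500,
    textelDensity: 256,
    styleStrength: 5)

// Training runs as an asynchronous job; save the model when it finishes.
// In a command-line script, keep the process alive (e.g. RunLoop.main.run())
// until the job completes.
let job = try MLStyleTransfer.train(trainingData: dataSource, parameters: parameters)

var subscriptions = Set<AnyCancellable>()
job.result.sink(
    receiveCompletion: { completion in
        print("Training ended: \(completion)")
    },
    receiveValue: { model in
        try? model.write(to: outputURL)
    })
    .store(in: &subscriptions)
```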

Most importantly, you don't need to wait for all 500 iterations. Every 5 iterations, your validation image is stylized, and you can take a model snapshot and use that as the model in your app.
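
Once you've exported a snapshot as an .mlmodel and added it to an Xcode project, applying it to an image takes only a few lines of Vision/Core ML code. Here's a rough sketch, assuming the model file is named MyStyle (a placeholder), so Xcode generates a MyStyle class for it:

```swift
import Vision
import CoreML
import CoreImage

// "MyStyle" is a placeholder: Xcode generates this class from whatever
// you name the .mlmodel snapshot you drag into your project.
let coreMLModel = try MyStyle(configuration: MLModelConfiguration()).model
let visionModel = try VNCoreMLModel(for: coreMLModel)

/// Stylizes a CGImage with the trained style transfer model.
func stylize(_ image: CGImage) throws -> CIImage? {
    let request = VNCoreMLRequest(model: visionModel)
    request.imageCropAndScaleOption = .scaleFill  // match the model's fixed input size

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // Style transfer models output an image, so the result is a pixel buffer.
    guard let output = request.results?.first as? VNPixelBufferObservation else {
        return nil
    }
    return CIImage(cvPixelBuffer: output.pixelBuffer)
}
```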

You're encouraged to take multiple snapshots throughout the training process so that you can compare how they perform on multiple test images in the preview tab.

Moreover, you can continue training for more iterations until the results start to fit your visual needs.