I started playing around with RunwayML recently, and one of its most compelling features is the ability to train your own models. You essentially start with a pre-trained StyleGAN model, upload your own dataset, and set the number of steps the algorithm will take to train your model.
This model is trained on a little over 2,500 photos of pigeons. The quickest way I found to harvest the sample material was to head over to flickr.com and do an image search. To download that many images, I left Charles Proxy running in the background and then saved the session to my desktop. Since the session comes straight from a CDN, the images are nested in various folders; to flatten them into a single folder and prepare them for upload, you can run a command such as: find ~/Desktop/MYFOLDER -mindepth 2 -type f -exec mv -i '{}' ~/Desktop/MYFOLDER ';'
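If you'd rather do the flattening in Python, and avoid `mv -i` prompting you interactively whenever two files share a name, a minimal sketch is below. The folder path mirrors the one in the find command, and the size threshold for skipping CDN thumbnails is just a guess you'd want to tune.

```python
import shutil
from pathlib import Path

SRC = Path.home() / "Desktop" / "MYFOLDER"  # same folder the find command targets
MIN_BYTES = 20 * 1024                       # skip tiny files; likely thumbnails/icons (assumed threshold)

for f in sorted(SRC.rglob("*")):
    # only move files that sit in nested subfolders
    if not f.is_file() or f.parent == SRC:
        continue
    if f.stat().st_size < MIN_BYTES:
        continue
    # rename on collision instead of prompting, unlike `mv -i`
    dest = SRC / f.name
    counter = 1
    while dest.exists():
        dest = SRC / f"{f.stem}_{counter}{f.suffix}"
        counter += 1
    shutil.move(str(f), dest)
```

After it runs, the nested subfolders are left empty and everything usable sits at the top level, ready to zip up and upload to RunwayML.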
The results aren’t that great; they’re based on two attempts, one with StyleGAN 1 at 7,500 steps and one with StyleGAN 2 at 3,000 steps. Nevertheless, it was an interesting introduction to playing with RunwayML.