The goal of this project was to explore making art with GANs using RunwayML. Coming into it, I chose Runway because I didn't feel comfortable taking on something lower level, and I wanted to see the capabilities of a tool as accessible as Runway. At the same time, I didn't want to just "CREATE IMPOSSIBLE VIDEO," as Runway's homepage puts it. I didn't want to use Runway to apply a flashy, machine-learn-y video effect and call it a day. Limited by my introductory knowledge of GANs, I stuck with Runway, but tried to stretch and "break" the tool as much as I could.

The concept behind this project was to repeatedly train StyleGAN2 with progressively more (and varying) data, and then render enough latent space walks that I could also do a "walk" across the successive models. My hypothesis was that the output would gradually show less variety and more specific features of my face, which was the subject.

The input training data was videos of my face, each roughly 40 seconds long, converted to JPEGs. I trained the models in Runway, downloaded some random latent vectors from it, and then hosted all of my models and rendered the output videos in Processing:
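A minimal sketch of that rendering step, assuming the Runway-hosted model is reachable at http://localhost:8000/query and accepts a JSON body with a "z" vector and a "truncation" value, returning a base64-encoded image under "image" (the port, route, and field names vary by Runway version and model, so treat them as placeholders):

// Hypothetical endpoint for the model hosted from Runway; the real
// port and route are shown in Runway's network panel for the workspace.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

String endpoint = "http://localhost:8000/query";

void setup() {
  size(512, 512);
  // One 512-dimensional latent vector (StyleGAN2's z size); random here,
  // but in the real sketch these come from the vectors downloaded from Runway.
  float[] z = new float[512];
  for (int i = 0; i < z.length; i++) z[i] = randomGaussian();
  PImage frame = queryModel(z, 0.8);
  if (frame != null) image(frame, 0, 0, width, height);
}

// POST the latent vector to the hosted model and decode the returned frame.
PImage queryModel(float[] z, float truncation) {
  try {
    JSONObject body = new JSONObject();
    JSONArray zArr = new JSONArray();
    for (int i = 0; i < z.length; i++) zArr.setFloat(i, z[i]);
    body.setJSONArray("z", zArr);
    body.setFloat("truncation", truncation);

    URL url = new URL(endpoint);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    conn.getOutputStream().write(body.toString().getBytes("UTF-8"));

    JSONObject response = parseJSONObject(new String(loadBytes(conn.getInputStream()), "UTF-8"));
    // Assumes the response holds a base64 (possibly data-URI) image under "image".
    String b64 = response.getString("image");
    if (b64.contains(",")) b64 = b64.substring(b64.indexOf(',') + 1);
    byte[] png = Base64.getDecoder().decode(b64);
    saveBytes("frame.png", png);       // simplest route back to a PImage
    return loadImage("frame.png");
  } catch (Exception e) {
    println("Query failed: " + e);
    return null;
  }
}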

The vectors, which I put into an array in a random order:

Screen Shot 2021-12-15 at 12.23.42 PM.png
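For reference, this is roughly how that array could be built in Processing, assuming each vector downloaded from Runway was saved as a bare JSON array of 512 floats (the file layout is an assumption; the shuffle is a plain Fisher-Yates to get the random order):

// Load each saved vector into a 2D array, then shuffle the rows
// so the walk visits them in a random order.
float[][] loadAndShuffleVectors(String[] fileNames) {
  float[][] vectors = new float[fileNames.length][];
  for (int i = 0; i < fileNames.length; i++) {
    JSONArray arr = loadJSONArray(fileNames[i]);
    vectors[i] = new float[arr.size()];
    for (int j = 0; j < arr.size(); j++) {
      vectors[i][j] = arr.getFloat(j);
    }
  }
  // Fisher-Yates shuffle of the vector order
  for (int i = vectors.length - 1; i > 0; i--) {
    int j = int(random(i + 1));
    float[] tmp = vectors[i];
    vectors[i] = vectors[j];
    vectors[j] = tmp;
  }
  return vectors;
}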

Some of the code (adapted from one of Dan's latent-walk demos) showing how I lerped from vector to vector through the array ("% vectors.length" ensures that the output loops):

Screen Shot 2021-12-15 at 12.23.59 PM.png
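A rough reconstruction of that loop, not the exact code: each frame lerps component-wise between the current vector and the next one, and the "% vectors.length" wrap on the index sends the last vector back toward the first so the rendered video loops. The step size, frame saving, and the loadAndShuffleVectors() and queryModel() helpers from the earlier sketches are assumptions:

float[][] vectors;   // filled with the shuffled vectors loaded above
int current = 0;     // index of the vector we are walking away from
float amt = 0;       // interpolation amount between the two vectors
float step = 0.02;   // speed of the walk (1 / step frames per segment)

void setup() {
  size(512, 512);
  // File names are placeholders for the vectors downloaded from Runway.
  vectors = loadAndShuffleVectors(new String[] { "vec1.json", "vec2.json", "vec3.json" });
}

void draw() {
  float[] a = vectors[current];
  float[] b = vectors[(current + 1) % vectors.length];  // wraps so the walk loops

  // Component-wise lerp between the two latent vectors
  float[] z = new float[a.length];
  for (int i = 0; i < z.length; i++) {
    z[i] = lerp(a[i], b[i], amt);
  }

  // queryModel() is the helper sketched earlier (assumed here)
  PImage frame = queryModel(z, 0.8);
  if (frame != null) image(frame, 0, 0, width, height);
  saveFrame("output/frame-####.png");   // frames later assembled into the video

  amt += step;
  if (amt >= 1) {
    amt = 0;
    current = (current + 1) % vectors.length;  // move on to the next pair
  }
}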

My plan:

untitled (6).png

Training yield from the first data set (step 500):

Screen Shot 2021-11-09 at 8.00.39 PM.png

Example input video:

VID_20211201_165516.mp4

Experimenting with truncation: