For this assignment, I used RunwayML's image training model on pictures of my face. I've seen a lot of GAN art that seems to be made with smaller datasets, and I've always wanted to explore that myself. Since I don't know how to scrape images, and I don't have a giant folder of pictures of my face at the ready, I decided to take a video of myself and convert it into individual frames. To mimic (to an extent) a more "proper" dataset, and to see whether I could get away with using a video instead of separately collected images, I shot a timelapse while moving around a lot and making different faces to give the model some variation. To be fair, I don't believe this approach is better; I just wanted to see what the results would be and eventually compare them with other methods, like actually collecting a larger dataset, adding more (or less) variation, or duplicating and flipping my data.
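For anyone curious how the video-to-pictures step works, here's a minimal sketch in Python using OpenCV. The file names and the every-Nth-frame interval are my own hypothetical choices, not anything RunwayML requires; keeping only every Nth frame helps avoid flooding the dataset with near-identical images from adjacent frames.

```python
import os
import cv2

# Hypothetical paths -- swap in your own video and output folder.
VIDEO_PATH = "face_timelapse.mov"
OUT_DIR = "frames"
KEEP_EVERY = 5  # keep every 5th frame to reduce near-duplicates

os.makedirs(OUT_DIR, exist_ok=True)

cap = cv2.VideoCapture(VIDEO_PATH)
count = 0   # frames read
saved = 0   # frames written
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    if count % KEEP_EVERY == 0:
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{saved:05d}.jpg"), frame)
        saved += 1
    count += 1
cap.release()
print(f"Saved {saved} frames to {OUT_DIR}/")
```

The same idea also covers the "duplicating and flipping" experiment I mentioned: adding `cv2.flip(frame, 1)` and writing it out as a second image would double the dataset with mirrored copies.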

Results:

[GIF: ezgif.com-gif-maker.gif]

[Image: Screen Shot 2021-11-09 at 8.00.39 PM.png]