There is a lot of noise and hand-wringing about AI in general right now: it’s going to steal jobs, it’s going to be smarter than us, it’s going to kill us all, etc. The discussion is especially loud around generative AI – neural nets that can restyle an existing image, turn text into new images, or perform any number of similar “artistic” feats. Hopefully everyone has by now seen the stories of an AI-generated image winning an art competition – a result that shocked many, but was inevitable if you were paying attention to the rapid progress these tools are making. So I want to share a personal experience I just had with this kind of AI, because it surprised me, and hopefully it adds some positive news to the intense worry that almost everyone seems to have about all this.
I had been idly checking out Midjourney, Craiyon, Dream, Dall-e, and various Stable Diffusion services over the past few months, but it was more play than anything else. I got some cool images out of it, made some jokes with it, and came away with the vague sense that it would be useful for mood boards, decks, and other early-stage creative work. But I moved on without any real plan to use it for anything more than personal amusement.
Then I started working on a VR180 short film that I had paused ~3 years ago. One of the unfinished pieces of the film was the end credits, which needed a ’90s cartoon look and feel and featured an animated pig. The main challenge was that I can do a lot with computers, but I am terrible at drawing. I was struggling to figure out what to do about this title sequence when it struck me that maybe I could get some sort of baseline out of these AI tools – something I could build on top of, or at least be inspired by.
I logged into Dall-e and typed in “90’s cartoon animated pig wearing a tophat smiling and waving.” I expected something like this (a result I got later from Craiyon):
But what I got was THIS:
Mind. Blown. It was exactly what I was hoping for, right off the bat – with the one exception that the image was clipped: I would have to draw in the top of his hat and the rest of his legs. But OpenAI had just released their new outpainting feature, which hallucinates plausible details beyond the borders of an image, so I decided to see if it was any good:
Uh, yes, it’s good. So I had one pig image that I could clean up in Illustrator and bring into After Effects for puppet-warp animation, but I really needed more than one. After a few hours of trying different prompts and using the outpainting tool to fill in missing details, I had 5 relatively well-matched pig images ready to rock:
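For the technically curious: everything above happened in Dall-e’s web interface, but the same generate-then-outpaint loop can also be scripted against OpenAI’s image API. Here’s a minimal sketch of that idea, assuming the official openai Python client, Pillow, and an OPENAI_API_KEY in the environment – the file names and canvas sizes are illustrative, not what I actually used:

```python
# Minimal sketch of the generate-then-outpaint loop via OpenAI's image
# API (I used the web UI for the actual project). Assumes the official
# `openai` Python client, Pillow, and OPENAI_API_KEY in the environment.
# File names and canvas sizes are illustrative.
from openai import OpenAI
from PIL import Image

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "90's cartoon animated pig wearing a tophat smiling and waving"

# Step 1: generate a few candidate pigs to pick from.
result = client.images.generate(
    model="dall-e-2",
    prompt=PROMPT,
    n=4,
    size="1024x1024",
)
for i, image in enumerate(result.data):
    print(f"candidate {i}: {image.url}")

# Step 2: outpaint a clipped candidate. The edit endpoint fills in
# transparent pixels, so pasting the (smaller) clipped image onto a
# larger transparent canvas lets the model invent the missing hat and
# legs at the borders.
pig = Image.open("pig_clipped.png").convert("RGBA")
canvas = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
canvas.paste(pig, ((1024 - pig.width) // 2, (1024 - pig.height) // 2))
canvas.save("pig_padded.png")

outpainted = client.images.edit(
    model="dall-e-2",
    image=open("pig_padded.png", "rb"),
    prompt=PROMPT,
    n=1,
    size="1024x1024",
)
print(f"outpainted: {outpainted.data[0].url}")
```

The whole trick to outpainting through the edit endpoint is that transparent pixels mean “fill this in,” so extending an image is just editing a version of it that has transparent borders.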
Then I tried similar prompts in Craiyon and Stable Diffusion. While neither of them got a usable pig, they did provide great inspiration for backgrounds and graphical elements for the sequence as a whole:
I then animated the titles using the pigs from Dall-e and the inspiration from Stable Diffusion, and I created something that I’m truly proud of. The animated titles go into a 3D scene I made in Blender, which needed some artwork on the walls. Thirty minutes of work with Dall-e later, I had paintings for the (very dark) background that matched the style and themes I wanted:
And finally, a frame of the finished piece (this is an equirectangular 360° × 180° image – if you’re not used to looking at this kind of thing, it’s like a flat map designed to wrap onto a spherical globe):
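If the projection math is unfamiliar: in an equirectangular frame, the horizontal axis maps linearly to longitude (the full 360°) and the vertical axis to latitude (180°), so every pixel corresponds to exactly one viewing direction on the globe. A quick illustrative sketch of that mapping (the function name and resolution here are my own, just for demonstration):

```python
# Illustrative sketch of why an equirectangular frame is "a flat map
# for a sphere": pixel x maps linearly to longitude and pixel y to
# latitude, so each pixel is one direction on the globe.
import math

def pixel_to_direction(x: float, y: float, width: int, height: int):
    """Convert an equirectangular pixel to a unit 3D view direction."""
    lon = (x / width - 0.5) * 2.0 * math.pi   # -180 deg .. +180 deg
    lat = (0.5 - y / height) * math.pi        # +90 deg (top) .. -90 deg
    return (
        math.cos(lat) * math.sin(lon),  # x: right
        math.sin(lat),                  # y: up
        math.cos(lat) * math.cos(lon),  # z: forward
    )

# The center pixel of a 4096x2048 frame looks straight ahead:
print(pixel_to_direction(2048, 1024, 4096, 2048))  # ~(0.0, 0.0, 1.0)
```

This also explains why the top and bottom of a flat equirectangular frame look so stretched: every pixel row is the same width, but rows near the poles wrap around much smaller circles on the sphere.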
So what did I learn from all of this? Are artists doomed? Did I help put anyone out of business? No, and no. These are tools, and like every tool they cannot replace their user, nor do they possess any intelligence or intention. They are only as good as the human using them, and while they can surprise and inspire, they cannot independently create anything. An AI cannot decide what is a good idea, nor can it create anything completely from scratch – it mutates some input based on how it was trained, so really it is just the ultimate mashup machine. Despite the hand-wringing going on all around the world over this stuff, we are not about to have all artistic and entertainment creativity ripped from our hands.
As a creative, I am madly in love with the potential here – this new ability to work with an external “intelligence” to collaboratively generate imagery that you can further edit and build upon using other tools is so exciting and empowering. I am especially excited by the tools now in development that put Stable Diffusion inside Photoshop, Blender, and even Figma – I would absolutely have used those for this project.
I feel like I had a similar experience with these new generative AIs as the author of this article: “Will DALL-E the AI Artist Take My Job?” This passage especially resonated with me:
As I refined my techniques, the process began to feel shockingly collaborative; I was working with DALL-E rather than using it. DALL-E would show me its work, and I’d adjust my prompt until I was satisfied. No matter how specific I was with my language, I was often surprised by the results — DALL-E had final say in what it generated.
Megan Paetzhold
I for one welcome our new AI overlords – this has already revolutionized my personal creative process, and I’m so excited for what will come next as the tools improve and I find better ways to work with them.