Artists amaze with AI-generated film stills from a parallel universe

Since last year, a group of artists have been using an artificial intelligence image generator called Midjourney to create still photos from films that don’t exist. They call the trend “AI cinema.” We spoke to one of its practitioners, Julie Wieland, and asked her about her synthetic photography technique, which she calls “synthography.”

Origins of “AI cinema” as a still-image art form

In the past year, image synthesis models such as DALL-E 2, Stable Diffusion, and Midjourney have made it possible for anyone with a textual description (called a “prompt”) to create a still image in a wide variety of styles. The technique has been controversial among some artists, but others have embraced the new tools and are working with them.

While anyone with a prompt can create an AI-generated image, it soon became clear that some people have a special talent for coaxing better results out of these new AI tools. As with painting or photography, a human creative spark is still needed to consistently produce notable results.

Shortly after single-image generation arrived, some artists began creating multiple AI-generated images on the same theme, rendered in a wide, cinematic aspect ratio. They strung them together to tell a story and posted them on Twitter with the hashtag #aicinema. Due to technological limitations, the images did not move (yet), but a set of them gave the aesthetic impression that they were all taken from the same movie.

The most interesting thing is that these films do not exist.

Super advanced monkeys. #aicinema #midjourney pic.twitter.com/QlZTlkblWk

December 29, 2022

The first #aicinema tweet we found with the now-familiar set of four movie-style images came from John Finger on September 28, 2022. Wieland acknowledges Finger’s pioneering role in the art form, along with another artist. “Maybe I first saw it from John Meta and John Finger,” she says.

It’s worth noting that the AI cinema movement in its current still-image form may not be long-lived once text2video models like Runway’s Gen-2 become more capable and widespread. In the meantime, we’ll try to capture the zeitgeist of this brief AI moment.

Julie Wieland’s AI art story

For more insight into the #aicinema movement, we spoke to Wieland, who lives in Germany and has amassed a large following on Twitter by posting eye-catching artwork created with Midjourney. We previously covered her work in an article about Midjourney v5, a recent model update that adds more realism.

AI art has been a fruitful field for Wieland, who says Midjourney not only gives her a creative outlet but also speeds up her professional workflow. This interview was conducted via direct messages on Twitter, and her responses have been edited for clarity and length.

Ars: What inspired you to create film stills using AI?

Wieland: It all started with me messing around with DALL-E when I finally got access after being on the waiting list for weeks. To be honest, I’m not too fond of the “drawn dog astronaut in space” aesthetic that was very popular in the summer of 2022, so I wanted to check out what else was out there in the AI universe. I thought photography and film stills would be very difficult to handle, but I found ways to get good results and applied them pretty quickly to my day job as a graphic designer for moodboards and presentations.

With Midjourney, I have cut the time I spend looking for inspiration on Pinterest and stock sites from two days of work to maybe 2-4 hours, because I can create exactly the feeling I need to convey to clients so they know what it will look like. Since then, working with illustrators, photographers, and videographers has become even easier.