In this post on Medium, the Lumen Prize describes some of the conceptual and political background to my work, Fifty Sisters. The work's title references the original "Seven Sisters" big oil cartel, more recently known as the "Supermajors". The Seven Sisters were a collection of seven global oil giants that dominated oil production from the 1940s to the 1970s. It's poignant to remember that the emissions released by oil burnt back then are still impacting our climate today, and will continue to do so beyond my lifetime.
The work was awarded the Lumen Prize for digital art in the still image category in 2016. Thanks to Lumen Art for featuring my work again!
At first glance, “Fifty Sisters” appears to be a collection of visually captivating images, each portraying intricate, plant-like forms. However, delving deeper into McCormack’s creation unveils a multifaceted exploration of themes ranging from ecological consciousness to the transformative power of technology.
The issue also features a conversational interview between me, special issue editor Bruce Campbell, and Francesca Samsel from the University of Texas. Thanks to Francesca and Bruce for such an interesting conversation and for featuring my work in this issue!
A work from the Morphogenesis Series is now in Sydney's new Capella Hotel. Morphogenesis #10 was commissioned as part of the hotel's art collection, curated by The Artling. For this work, the 1.3m × 1.3m print size required re-rendering the image with an improved lighting model over the original version.
Like thousands of others, over the last few weeks I've been exploring the possibilities of prompt-based generative AI systems as a creative medium. The main tools I've been working with are MidJourney and Stable Diffusion (via DiffusionBee). I've also played with RunwayML, Leonardo and DALL·E 2.
Each system has its own benefits and quirks, but the way they all work is essentially the same: you enter a text description of the image you want generated, wait for a short time (from 10 seconds to 1 minute or so) and the image appears. Tools such as MidJourney have additional features where you can include images in the prompts or blend different images together.
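For the technically curious, the programmatic equivalent is only a few lines of code. Here's a minimal sketch using the open-source diffusers library with a public Stable Diffusion checkpoint – to be clear, this illustrates the general technique, not the actual internals of MidJourney or any of the apps above, and the model choice and settings are my assumptions:

```python
# A minimal text-to-image sketch using Hugging Face's open-source
# diffusers library. Illustrative only: model choice and settings
# are assumptions, not the internals of any commercial tool.
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (weights download on first run)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # move to GPU; generation on CPU is much slower

# Enter a text description of the image you want generated...
prompt = "a black and white photograph of a spiky biomorphic form"

# ...wait a short time, and the image appears
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("generated.png")
```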
The results produced using these tools are, at first, impressive. In particular, the technical quality and fidelity of MidJourney make it almost impossible to generate a poor-quality image (though the image may not be what you imagined when you wrote the prompt). It feels quite empowering to be able to summon up an image of anything you can express in words, in an instant.
MidJourney also has a "/describe" command that lets the AI describe any image as a prompt. You get four alternative descriptions that you can then use to synthesise new images. Using an image from the Megaforms series as an example, here is the original image and some examples of what was synthesised from the resultant prompts:
Here are the AI's descriptions, expressed as prompts:
a black and white image of a sunflower root, in the style of hyperrealistic marine life, darkroom photography, large-scale sculpture, radiant clusters, spiky mounds, bentwood, human anatomy
a close up of a flower with many sticks in it, in the style of darkroom printing, human anatomy, sculpted, spiky mounds, tabletop photography, ambient occlusion, biomorphic
a close view of a rotting sunflower, in the style of ambient occlusion, biomorphic abstraction, darkroom printing, kinetic installations, human anatomy, infinity nets, shot on 70mm
san lorenzo cactus t, black & white, 3d, in the style of reduction of canine anatomy, darkroom photography, scattered composition, patrick dougherty, detailed anatomy, rusty debris, willem claesz. heda
An interesting feature is what I would call "style theft": the AI suggests visual styles by naming popular artists, photographers, illustrators and designers.
It also seems incapable of any biological accuracy, failing to identify the species or even the common name of the subject.
Nonetheless, the /describe command is an interesting feature.
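MidJourney doesn't document how /describe works, but conceptually it's image captioning: a model maps pixels back to text. As a rough sketch of the same idea (my analogy, not MidJourney's actual system), here's how you might caption an image with the open-source BLIP model:

```python
# Image-to-text sketch using the open-source BLIP captioning model.
# An analogy for what /describe does conceptually; MidJourney's
# actual model and prompt styling are not public.
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

image = Image.open("megaform.jpg").convert("RGB")  # hypothetical file name
inputs = processor(images=image, return_tensors="pt")

# Generate a short caption, usable as the seed of a new prompt
out = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(out[0], skip_special_tokens=True))
```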
The images below are examples generated with these AI-interpreted prompts:
While the AI's interpretations are interesting and each captures something of the original appearance of the photograph, they seem more like fantasy versions that lack the intent and qualities of the original image.
MidJourney uses Discord as its interface and, unless you pay for the top tier, all your prompts and the images they produce are public. Once you've generated an image, you can search for "similar" images – quite a useful feature for learning the language necessary to achieve certain visual styles or compositional effects. After some more experimentation, I managed to change the original prompts into something quite visually different.
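MidJourney doesn't say how its similarity search works either. One plausible mechanism (an assumption on my part, sketched below with the open CLIP model) is nearest-neighbour search over image embeddings – images whose embedding vectors point in similar directions tend to look alike:

```python
# Sketch of embedding-based image similarity, one plausible way a
# "similar images" search could work. This is an assumption, not
# MidJourney's documented method; file names are hypothetical.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

query = Image.open("my_image.png")
candidates = [Image.open(p) for p in ("a.png", "b.png", "c.png")]

inputs = processor(images=[query] + candidates, return_tensors="pt")
with torch.no_grad():
    features = model.get_image_features(**inputs)

# Normalise, then rank candidates by cosine similarity to the query
features = features / features.norm(dim=-1, keepdim=True)
scores = features[1:] @ features[0]
print(scores.argsort(descending=True), scores)
```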
As someone who has spent much of my working life involved in the computer synthesis of images, I find the complexity and "otherworldly" nature of these images very impressive. The fact that I could "find" them so easily made me wonder how hard it is to generate any subject in an image. However, my feelings of uniqueness quickly fell away when I looked at the "similar images" produced by other users of the system. There were pages and pages of similar – and often better – images than those I had managed to produce from my simple prompting. Getting exactly what you want through a simple language description is often difficult. The initial feelings of empowerment soon changed to disappointment about the opportunities these systems present as a new creative medium. If anyone can generate visually rich and complex imagery just by knowing a few keywords, where does that leave the techne, skill and embodied knowledge of making?
After more experimentation, I managed to generate a series of images for which the "similar images" search did not return visually or conceptually similar results made by other users of the system. I called this series The World We Made, as a reflection more on the nature of prompt-based imagery and its biases than any serious use of these systems as an artistic medium.
Jon McCormack, "The World We Made" (detail) – AI generated image.
As it turned out, it is relatively easy to make something different from the vast majority of the banal, sexist and derivative imagery shown in MidJourney's showcase. However, any uniqueness is largely inconsequential – all images made by generative AI are derivative. Derivative of the billions of hours of work behind past human art and culture, rendered flat into pixels by a neural network. In this sense, I find them morally and ethically questionable as creative tools.
Jon McCormack, untitled, AI generated image made using MidJourney, 2023.
We are living in a golden age, a new gilded age where words can conjure up images of almost anything, provided someone else has already imagined and probably laboured to make it real. There is no need to spend your life learning a skill, training your hand and eyes, understanding the nuances of lighting, cinematography, tone, value or representation in the hope that you may one day produce something truly unique and innovative. Sit back and let the machines do the work for you as they parasitically feed on prior human labour.
What is the meaning of an image and the role of image makers if we can synthesise any image via machine learning? AI systems know the cost of everything but the value of nothing. If it costs virtually nothing to create any image previously imagined, what value can an image possibly have anymore?
Jon McCormack, untitled, AI generated image made using MidJourney, 2023.