Fall 2023 Issue

AI and the Arts

Reflections on the tension between artificial intelligence and human creativity

Compiled by Benjamin Whiting and Michael Gaynes

IF YOU GIVE A ROBOT A PAINTBRUSH: This AI illustration was generated with the prompt "watercolor art of a robot painting a picture."

“What do we want AI to do for us and how do we get there?” Those were the questions posed by Dan Runfola, associate professor of applied science and data science at William & Mary, at the opening of the spring 2023 Tack Faculty Lecture. His talk explored the radical changes that artificial intelligence promises to bring to the way society operates. Watch the recording at magazine.wm.edu/tack-lecture-2023.

With the rise of the language model ChatGPT, university professors around the world have already learned to be on the lookout for machine-generated writing submissions. What impacts might AI have on other creative endeavors? Could it be used to enhance works of art? As we launch into the Year of the Arts at William & Mary, we asked music and art faculty for their perspectives on how AI could affect their disciplines.


By Benjamin Whiting

Shortly after Lejaren Hiller’s “Illiac Suite” premiered at the University of Illinois in 1957, the music world was abuzz with excitement, tinged with more than a smidgen of apprehension. Hiller’s fourth string quartet, “Illiac Suite,” came about not in the traditional fashion of putting pen to manuscript paper, but instead was dictated by the output of a computer program. This program consisted of several algorithms that encapsulated stylistic tendencies of a variety of Western musical genres from the 16th to the 20th centuries.

The result was a work that sounded surprisingly human, if also a tad pedantic, to audiences and critics alike. Even though “Illiac Suite” did not use techniques that we associate with artificial intelligence — AI had only been established as an academic discipline the year before, after all — it nevertheless led some to question if computers would eventually supplant human beings as the primary vehicles of musical expression.

For some, this was a question of when, not if.

However, the use of computers as a generative tool for composition has generally remained an augmenting force for human creativity. While there has been the occasional experiment over the years to discover whether a computer could become the next Beethoven, Boulez or the Beatles, most work in this field has focused on helping composers realize experimental pieces that would be all but impossible to accomplish by hand. Current efforts, primarily in Europe, aim to create AI-powered accompanists that add another dimension of depth and spontaneity to improvisational electronic music and live coding performances.

There is certainly no hurry or desire to outsource musical expression to a CPU, and those who immerse themselves in AI see their relationship with the technology as symbiotic and collaborative.

This isn’t to say that advancements in artificial intelligence are free from ethical considerations. While music generation has yet to see the same level of activity as the visual arts or the written word, what has sparked controversy is the relative ease with which the human voice can be synthesized by AI. Despite legal precedent regarding the appropriation of others’ likenesses without their consent, there is a larger worry at play here: If video killed the radio star, what will AI do to the future of human vocal talent in popular music?

It can be a scary prospect, especially for those who have devoted their lives to honing their artistic craft. However, we should take heart in the fact that art and music generated by artificial intelligence alone invariably carries with it that unmistakable uncanniness which just feels … off. Furthermore, when AI is approached as a collaborator and not as a factory churning out a product, it can enable humanity to reach new and exciting heights, and that is a very happy thought.

Internationally award-winning composer Benjamin Whiting is an assistant teaching professor of music at William & Mary.


By Michael Gaynes

Let’s engage in a mental exercise and trace the history of art through technology. We could draw a line to represent the refinement of materials such as pigments, dyes and binders from natural to synthetic, or from wood and stone to reinforced concrete, steel and glass. Perhaps we could run our finger along a timeline following the inventions of photography, film, video and digital technologies. What we might see is that each subsequent discovery or invention presented an opportunity not only to invent new forms but, more importantly, new ways of seeing.

Artificial intelligence promises to be one of those inflection points. It is widely reported (and feared) that AI will profoundly affect, even eliminate, many industries and take over routine tasks. AI generators, such as DALL-E or Midjourney, can create a photorealistic image from just a text description. ChatGPT can create snippets of code to automatically generate forms based on differential growth patterns within a 3D modeling program. The potential seems limitless and, frankly, daunting. As a tool, AI presents an intriguing opportunity to work through compositional ideas and schemes. One of my current art students asked an AI generator to manipulate an existing image, which he then hand-painted: a human/machine interface, a call-and-response between artist and algorithm.

And yet, as an artist and educator, the question foremost in my mind is: Are these simply tools for making, or instruments for seeing?

It’s been written that Galileo, when first looking at the moon through his telescope, understood craters for what they were, three-dimensional forms, because of his familiarity with Renaissance art and the technique of chiaroscuro, the use of light and shadow to denote form. Galileo’s innovation was transforming the telescope from a utilitarian tool into a way of seeing, a human activity, an instrument of inquiry and reframing. How might AI be such an instrument? Can we ask not what AI can do, but what it can reveal, and engage in what the Romantic poet John Keats referred to as “negative capability” — “that is when man is capable of being in uncertainties, mysteries, doubts, without any irritable reaching after fact and reason”?

The challenge for visual artists interacting with AI will be to remain embodied in the world through our perceptions and senses, and to engage with AI not to solve problems but to generate better questions — to dwell in the mystery.

Michael Gaynes is an associate teaching professor of art at William & Mary. He teaches interdisciplinary courses in sculpture focusing on concepts of time and memory, force and motion, embodiment and the nature of the self.