How Do You Map an AI Art World?

Co-Creation with AI

Judy Heflin
Published in Immerse
Mar 2, 2020 · 6 min read


In 2018, Ross Goodwin's 1 the Road was marketed as the "first real book written by an AI." Subjective qualifiers like "real" and "AI" can prop up modern-day claims of novelty, but the history of physically printed computer-generated literature traces back at least to Jean Baudot's 1964 work, La machine à écrire. 2018 was not the first year a computer-generated artwork was marketed as the first of its kind, and it won't be the last.

Conversations around AI often emphasize first-ness, sensationalism, and threats to human authorship. However, there is an element of co-creation at every level of machine writing. Exploring the potential for collaborative intervention complicates the idea of automation and expands notions of co-creation with AI systems.

Personification, seamlessness, and daily interactions

We readily anthropomorphize machines, especially if they are framed by stories. Imagine a small robot is on a table in front of you. It’s about the size and shape of a toy car. You are given this script to read:

This is Frank. Frank is really friendly but he gets distracted easily. He’s lived at the Lab for a few months now. He likes to play and run around. Sometimes he escapes, but he never gets far. Frank’s favorite color is red. Last week, he played with some other bugs and he’s been excited ever since.

After looking up from this text, you are asked to strike the robot with a mallet. Would you hesitate? A 2015 study at the MIT Media Lab found that people were much less likely to strike a robot with a mallet when instructed to do so if they had first been given a story that personified the robot and described its experiences.

By definition, AI systems exhibit intelligence. Industry best practices encourage anthropomorphizing personal AI systems as a design strategy for increasing seamlessness and obscuring technical processes. Tech giants often employ novelists, poets, and screenwriters to help personify their public-facing AI systems. They have names like Alexa, Siri, or Cortana, with personalities that embody the brands they speak for. Although such human-shaped AI is commonplace, it's important to investigate how each aspect of an AI system is deeply embedded in networks of human labor.

Mapping of an AI art world

Consider the work of publishers, assistants, printers, critics, librarians, manufacturers and all of the people required to create an artistic work. These forms of collaboration produce what Howard S. Becker calls “an art world.” Similarly, we can map out an AI art world that accounts for the networks of human labor that go into every aspect of a creative AI system.

The AI Now Institute's 2018 'Anatomy of an AI System' is a good place to start. It maps out the vast network of human labor that creates seamless interaction with Amazon's Alexa. It begins with the mining of minerals needed to manufacture a smart speaker, travels through the work of data engineering, modeling, deployment, and infrastructure, and ends with the disposal of the device in order to address "the truly planetary scale of extraction" underlying seamless user interactions.

Kate Crawford and Vladan Joler, “Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources,” AI Now Institute and Share Lab, (September 7, 2018) https://anatomyof.ai

We can imagine such a visualization mapping opportunities for co-creation at every level of development, each point offering collaborative potential to intervene in currently exploitative processes.

Here are a few points on that map:

Data: All creative AI systems require data to learn from, and that data will most likely come from the labor of other people. GPT-2, for example, was trained on WebText, a corpus of text scraped from outbound Reddit links that received at least 3 karma. Google's BERT was trained on the (mostly) human-written text of English Wikipedia and the BookCorpus dataset.
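To make that curation step concrete, here is a minimal, hypothetical sketch of the karma-threshold filtering behind a WebText-style corpus. The field names and the example submissions are illustrative, not OpenAI's actual pipeline.

```python
# Hypothetical sketch of WebText-style curation: keep outbound links from
# Reddit posts that earned at least 3 karma. Field names are illustrative.
KARMA_THRESHOLD = 3

def collect_outbound_urls(submissions):
    """Return the outbound URLs from sufficiently upvoted, non-self posts."""
    urls = set()
    for post in submissions:
        if post["score"] >= KARMA_THRESHOLD and not post["is_self"]:
            urls.add(post["url"])
    return urls

# Every URL that survives the filter points to text some person wrote:
# the "AI author" begins with thousands of human authors.
example_posts = [
    {"score": 41, "is_self": False, "url": "https://example.com/essay"},
    {"score": 1, "is_self": False, "url": "https://example.com/ignored"},
]
print(collect_outbound_urls(example_posts))
```

Even in this toy version, the corpus is defined by human judgments (upvotes) long before any model sees it.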

Sometimes that data may come from other creatures. As part of National Novel Generation Month (NaNoGenMo), people working on computational literary art curate interesting datasets for use in creative projects. One of the datasets, categorized under “generic corpora,” is an archive of all the times one NaNoGenMo participant’s cat tried to sit on their keyboard while they were writing or coding.

Cat vs. Keyboard dataset

Datasets can be quirky, intimate, sprawling, big, or small, but they will always be connected to others in some way, whether that connection comes from designing a classification system, sourcing data from workers on Mechanical Turk, or letting your cat sleep on your keyboard.

Cat vs. Keyboard data visualization

Software and hardware: Development is an intensely collaborative process, and your choices of programming language, API, server, sensor, and so on all have collaborative implications. Are your tools proprietary? Open-source? Custom-made? Do they only work with certain systems? Are you using a pre-trained model or developing your own? What modes of being do your sensors prioritize? How will you display your project? Who do these choices include and exclude?
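The pre-trained-versus-custom question alone carries much of this weight. Below is a minimal sketch of the two paths, using PyTorch and a Hugging Face GPT-2 pipeline as stand-ins; the article does not prescribe either stack, and the tiny custom model is purely illustrative.

```python
# Two collaborative postures, sketched with assumed tools (PyTorch, Transformers).
import torch
from transformers import pipeline

# Path 1: a pre-trained model. Fast to start, but you inherit its training
# data, its license terms, and the labor embedded in both.
generator = pipeline("text-generation", model="gpt2")
print(generator("The museum of generated art", max_length=30)[0]["generated_text"])

# Path 2: a custom model. You control the architecture and the corpus,
# but you take on the cost of collecting and cleaning that corpus yourself.
class TinyCharModel(torch.nn.Module):
    def __init__(self, vocab_size=128, hidden=256):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, hidden)
        self.rnn = torch.nn.LSTM(hidden, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, vocab_size)

    def forward(self, x):
        hidden_states, _ = self.rnn(self.embed(x))
        return self.head(hidden_states)
```

Neither path is neutral: the first leans on other people's data and infrastructure, the second on your own (or your collaborators') labor.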

Interaction: While some artists using AI focus on unedited output, others emphasize heavy editing and constant collaboration with the system before the final work ever comes in contact with a reader (or user or viewer).

During Robin Sloan's experimental writing workshop at the Frank-Ratchye Studio for Creative Inquiry at CMU, I was part of a collective of writers and programmers writing a speculative history of an artistic movement using Sloan's autocomplete system, powered by a recurrent neural network built with Torch. We followed Sloan's approach, beginning with the computational generation of sentences, characters, and places, and saving any that were especially intriguing. At the end of the day, we chose which computer-generated topics to expand on and include in our story. The next day, we worked with the machine to elaborate on the generated characters and settings. Finally, we assembled everything by hand and made our final edits.
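The rhythm of that workflow, in which the machine proposes and the humans dispose, can be sketched in a few lines. This is not Sloan's actual tool (his ran a recurrent neural network behind a custom server); a stock GPT-2 pipeline stands in here so the sketch is self-contained.

```python
# A rough sketch of the generate-then-curate loop; GPT-2 via Hugging Face
# stands in for the workshop's Torch RNN, which is an assumption of this sketch.
from transformers import pipeline, set_seed

set_seed(7)
generator = pipeline("text-generation", model="gpt2")

prompt = "The founder of the movement wrote that"
candidates = generator(
    prompt, max_length=40, num_return_sequences=5, do_sample=True
)

kept = []
for candidate in candidates:
    text = candidate["generated_text"]
    # The human half of the collaboration: keep only what intrigues you.
    if input(f"Keep this line? (y/n)\n{text}\n> ").strip().lower() == "y":
        kept.append(text)

print(f"{len(kept)} machine-written fragments saved for tomorrow's editing.")
```

The model supplies raw material; every sentence in the finished story still passes through a person's judgment at least twice, once at selection and once at revision.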

Whether or not you choose to edit the output, you must make artistic decisions about editing, interaction, and moderation. Sometimes an AI system you build will produce work that you find offensive. And if you extend your collaborative process to the public, people may use the system in ways you didn't anticipate, as was the case with Microsoft's Tay experiment.

In conclusion

Once you have made all of these decisions about opportunities for co-creation during development, you still have to choose how to represent your system as it interacts with others in the world. Will it be human-shaped? Will it have a name? Why or why not? Are you playing on the mystification of AI systems, or trying to demystify them?

This article is part of Collective Wisdom, an Immerse series created in collaboration with Co-Creation Studio at MIT Open Documentary Lab. Immerse’s series features excerpts from MIT Open Documentary Lab’s larger field study — Collective Wisdom: Co-Creating Media within Communities, across Disciplines and with Algorithms — as well as bonus interviews and exclusive content.

Immerse is an initiative of the MIT Open DocLab and The Fledgling Fund, and it receives funding from Just Films | Ford Foundation and the MacArthur Foundation. IFP is our fiscal sponsor. Learn more here. We are committed to exploring and showcasing media projects that push the boundaries of media and tackle issues of social justice — and rely on friends like you to sustain ourselves and grow. Join us by making a gift today.
