Text-to-image generators like DALL-E have sparked controversy among artistic creators, who are concerned about how generative artificial intelligence (Gen AI) models have been trained with copyright-protected materials. A new white paper from the UC Berkeley Center for Long-Term Cybersecurity presents findings from a survey of creative professionals “who have resisted the integration of text-to-image generators in their creative practice, with a summary of their mechanisms and aims for resistance.”
The report, Resistance to Text-to-Image Generators in Creator Communities, was authored by Janiya Peters, a PhD student at the UC Berkeley School of Information with a Designated Emphasis in New Media and a Fellow at the AI Policy Hub; Bonwoo Kuh, an undergraduate research assistant with the Berkeley Center for New Media; and Isabel Li, a human-computer interaction researcher at the Berkeley Institute of Design.
“By studying resistance as a site of dispute and value differentiation, this report cites breakdowns between technological implementations and expectations, and helps illuminate how policy might facilitate alignment between creative practitioners and their value preferences,” the authors write.
The report includes an overview of how generative AI technologies have affected creators, how copyright laws are starting to address the issue, and how creators have adopted their own means of resistance, defined as “refusal to engage, participate, or contribute to a technological system, and the subversion of that system through use of medium, artistic techniques, and workflows.”
Focusing specifically on text-to-image generators, which allow users to enter a text-based prompt to generate an image of a particular topic and style, the researchers spoke with different creators about how they have pushed back against these tools. The artists explained that they have “reduced their online visibility, obfuscated their work from algorithmic surveillance, and created smaller communities of engagement for their work,” and they have “adjusted their privacy and data settings to align with their preferences.”
“These strategies revealed that many creators are distrustful of creative platforms due to the potential misuse of data in developing generative artificial intelligence systems,” the authors write.
The report also includes a series of recommendations that other stakeholders — including policymakers and industry practitioners — can pursue to support artists’ needs, including by improving copyright and data management and helping artists identify misuse of their work in generative AI systems. These include:
- Adoption of H.R. 7913, the Generative AI Copyright Disclosure Act of 2024, a Congressional bill that would obligate AI developers to disclose copyrighted materials used in training sets to the Register of Copyrights;
- A survey addendum by the U.S. Copyright Office and online service providers to take-down notice procedures documenting misuse of text-to-image generators;
- Default opt-out policies on creative software programs, social media, and image distribution platforms that restrict the use of artists’ works for AI training;
- Enhancement of working groups, research labs, and educational seminars led by artists, sponsored by the Library of Congress; and
- Interface options on creative software programs, social media, and image distribution platforms that delineate and separate generative artificial intelligence features from primary services.
“Despite the potential benefits of generative image programs in creative production, including quick iteration and modification of ideas, the training history and economic impacts of these tools remain a critical divider in creator communities,” the authors explain. “This paper offers a thematic analysis of resistance to text-to-image generators in creator communities that may help shape protocols for U.S. copyright law and set standards for creative production.”