Artists and computer scientists are exploring a new method to prevent artificial intelligence from reproducing copyrighted images: "poisoning" AI models with doctored pictures, such as dog photos the models misread as cats.
A tool known as Nightshade, introduced by University of Chicago researchers in January, subtly alters images in ways that are nearly imperceptible to humans but significantly affect how AI platforms interpret them.
Artists like Karla Ortiz are now using this technique, called "nightshading," to protect their creations from being scraped and imitated by text-to-image programs like DeviantArt’s DreamUp and Stability AI’s Stable Diffusion.
Ortiz, a concept artist and illustrator whose work includes designs for film, TV, and video games such as "Star Wars," "Black Panther," and "Final Fantasy XVI," has expressed concern about the unauthorized use of her work and that of her peers.

Nightshade takes advantage of the fact that AI models perceive images differently from humans.
According to research lead Shawn Shan, AI models interpret images as arrays of pixel values ranging from 0 to 255. By altering thousands of pixels by small amounts, Nightshade can trick the AI into perceiving the image as something entirely different while leaving it looking unchanged to a human viewer.
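To make the idea concrete, here is a toy sketch (not the actual Nightshade algorithm) of how an image stored as 0–255 pixel values can be nudged: each pixel shifts by at most a few units, imperceptible to a person, yet thousands of values change in the array a model actually reads. The image data and the `perturb` helper are invented for illustration.

```python
import random

def perturb(image, max_delta=3, seed=42):
    """Return a copy of a grayscale image (rows of 0-255 ints) with each
    pixel nudged by at most max_delta, clamped back into [0, 255]."""
    rng = random.Random(seed)
    return [
        [min(255, max(0, px + rng.randint(-max_delta, max_delta))) for px in row]
        for row in image
    ]

original = [[120, 121, 119], [118, 122, 120], [121, 119, 118]]
poisoned = perturb(original)

# No pixel moved more than 3 units, so the picture looks the same to a human,
# but a model consuming raw pixel arrays sees a systematically altered input.
max_change = max(
    abs(a - b)
    for orig_row, new_row in zip(original, poisoned)
    for a, b in zip(orig_row, new_row)
)
print(max_change)  # at most 3
```

Real perturbations are optimized rather than random, chosen specifically to push the image toward a different concept in the model's feature space, but the budget-per-pixel idea is the same.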
In a paper to be presented in May, the research team explains how Nightshade selects concepts to confuse AI programs. For instance, it may embed distortions in "dog" photos that cause the AI to interpret them as "cat" images.
After feeding 1,000 subtly altered dog photos into a text-to-image model and then prompting it for a dog, the team found the model generated something far from canine.
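The mechanism behind that result can be sketched with a deliberately simple stand-in model. Everything here is invented for illustration: a nearest-centroid "model" over 2D feature vectors, with poisoned samples still labeled "dog" but shifted toward the "cat" cluster. Real text-to-image models are vastly more complex, but the effect is analogous: enough poisoned samples drag the learned concept into foreign territory.

```python
def centroid(points):
    """Average of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

# Clean training data: "dog" and "cat" occupy separate regions of feature space.
clean = {
    "dog": [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)],
    "cat": [(5.0, 5.0), (4.8, 5.1), (5.2, 4.9)],
}

# Poisoned data: extra samples still labeled "dog", but with features shifted
# toward the cat cluster, dragging the learned "dog" concept with them.
poisoned = dict(clean, dog=clean["dog"] + [(4.9, 5.0)] * 10)

clean_dog = centroid(clean["dog"])        # where real dogs live
clean_cat = centroid(clean["cat"])        # where real cats live
learned_dog = centroid(poisoned["dog"])   # the poisoned model's idea of "dog"

# Prompting the poisoned model for a dog yields a point closer to real cats
# than to real dogs: the "dog" concept has been hijacked.
print(dist2(learned_dog, clean_cat) < dist2(learned_dog, clean_dog))  # True
```

In this sketch ten poisoned samples outweigh three clean ones; in practice the paper's point is that a surprisingly small number of optimized samples can skew a concept inside a model trained on billions of images.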
While Nightshade primarily aims to protect artists' work from being misused, its creator, computer science professor Ben Zhao, believes it could also serve as a deterrent.
He does not anticipate widespread adoption of the tool but sees it as a potential means to make certain applications of AI image generation economically unviable, thus compelling companies to respect artists' rights.
While tools like Stable Diffusion offer "opt-outs" for artists to exclude their content from datasets, many copyright holders feel these measures are insufficient given the rapid advancement of AI technology.
Some experts are concerned that as tools like Nightshade emerge, AI developers will develop countermeasures to neutralize them.
Despite the likelihood of AI platforms evolving to defend against Nightshade, Zhao believes it is unjust to burden individuals with the responsibility of protecting their images.
Meanwhile, Ortiz views Nightshade as a valuable tool to deter unauthorized use of her work while she pursues stronger legal protections.
The fight to protect intellectual property in the age of AI continues to raise ethical questions, including concerns about deepfakes and the limitations of watermarking.
While Nightshade and similar tools pose a meaningful challenge to AI models, experts caution that safeguarding creative works will remain an ongoing effort as developers adapt.