Nightshade is the data poisoning tool for artists fighting back against generative AI

Nightshade is a ‘data poisoning’ tool that allows artists to fight back against generative AI systems that use their work without permission.

Developed by a team led by Ben Zhao, a computer science professor at the University of Chicago, Nightshade works by making pixel-level changes to images that are invisible to the human eye but disrupt training if the images are scraped into the data set used to train an image-generating AI. Samples poisoned in this way manipulate AI models into learning that images of dogs are cats, that cars are cows, or even that Cubist-style art is anime, for example.
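The general idea can be illustrated with a minimal sketch. This is not Nightshade's published algorithm; it is a generic feature-space poisoning example under an imperceptibility constraint, where the perturbation is optimized so the image's features (under a stand-in encoder) resemble those of an unrelated concept. The encoder, image sizes, and hyperparameters below are illustrative assumptions.

```python
# Sketch of feature-space poisoning with a bounded, imperceptible perturbation.
# NOT Nightshade's actual method; the encoder is a random placeholder for the
# image encoder a real text-to-image model would use.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder image encoder (stand-in for a real model's feature extractor).
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
encoder.eval()

original = torch.rand(1, 3, 64, 64)   # the artist's image (e.g. a dog)
anchor = torch.rand(1, 3, 64, 64)     # an image of the target concept (e.g. a cat)
with torch.no_grad():
    target_features = encoder(anchor)

epsilon = 8 / 255                     # max per-pixel change: the "invisible" budget
delta = torch.zeros_like(original, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-2)

for step in range(200):
    poisoned = (original + delta).clamp(0, 1)
    # Pull the poisoned image's features toward the anchor concept's features.
    loss = nn.functional.mse_loss(encoder(poisoned), target_features)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Project the perturbation back into the imperceptibility budget.
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)

poisoned_image = (original + delta.detach()).clamp(0, 1)
# `poisoned_image` looks unchanged to a person, but its features now resemble the
# anchor concept; a model trained on many such samples learns the wrong mapping.
```

In practice the effect only becomes pronounced when many such poisoned samples end up in the training data, which is the cumulative behaviour the article describes next.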

The technique exploits a vulnerability in generative AI models – namely that they are trained using vast quantities of data, often taken from the internet and used without permission. The more poisoned samples that make their way into the data sets used, the more pronounced the malfunctions will become.

Nightshade has been integrated into Glaze, an earlier tool from the same team that lets artists “mask” their personal artistic style so it cannot be mimicked by AI companies. Artists can choose whether or not to enable Nightshade.

21/11/2023 United States