AI tools are revolutionary and can now hold conversations, generate human-like text, and create images from a simple text prompt. However, the training data these tools rely on often comes from copyrighted sources, especially in the case of text-to-image generators like DALL-E and Midjourney.

Stopping generative AI tools from training on copyrighted images is difficult, and artists from all walks of life have struggled to keep their work out of AI training datasets. But that’s changing with the advent of Nightshade, a free tool built to poison the data that generative AI models train on, degrading their output—and finally letting artists take some power back.

What Is AI Poisoning?

AI poisoning is the act of “poisoning” the training dataset of an AI algorithm: deliberately feeding it wrong information so that the trained model malfunctions or misidentifies what it sees. Tools like Nightshade alter the pixels of a digital image so that it looks completely different to an AI training on it, yet largely unchanged from the original to the human eye.

For example, if you upload a poisoned image of a car to the internet, it will look the same to us humans, but an AI attempting to train itself to identify cars by looking at images of cars on the internet will see something completely different.
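
To make that idea more concrete, here is a rough, self-contained Python sketch of the general technique. This is not Nightshade’s actual algorithm, which optimizes perturbations against the feature space of real text-to-image models; it only illustrates the principle with a stand-in “feature extractor”: a small, bounded change to the pixels is optimized so the extractor reads the “car” image as something else, while the image itself barely changes.

```python
# Toy illustration of image poisoning (NOT Nightshade's actual algorithm).
# A small, bounded perturbation is optimized so that a stand-in "feature
# extractor" reads the image as a different concept, while the pixels stay
# within an imperceptible budget of the original.
import numpy as np

rng = np.random.default_rng(0)
SIZE = 32  # toy 32x32 grayscale "images"

def features(img, W):
    """Stand-in feature extractor: a fixed random linear map.
    Real poisoning tools target the feature space of actual vision models."""
    return W @ img.ravel()

car = rng.random((SIZE, SIZE))   # the image we want to protect ("car")
dog = rng.random((SIZE, SIZE))   # an unrelated concept ("dog")
W = rng.standard_normal((64, SIZE * SIZE)) / SIZE

target = features(dog, W)        # where we want the model to "see" the image
epsilon = 8 / 255                # maximum per-pixel change (imperceptibility budget)
step = 0.5 / 255
poisoned = car.copy()

for _ in range(200):
    diff = features(poisoned, W) - target
    grad = (W.T @ diff).reshape(SIZE, SIZE)   # gradient of 0.5 * ||f(x) - target||^2
    poisoned -= step * np.sign(grad)          # nudge the features toward the target
    poisoned = np.clip(poisoned, car - epsilon, car + epsilon)  # stay near the original
    poisoned = np.clip(poisoned, 0.0, 1.0)    # stay a valid image

print("max pixel change:", np.abs(poisoned - car).max())
print("feature distance before:", np.linalg.norm(features(car, W) - target))
print("feature distance after: ", np.linalg.norm(features(poisoned, W) - target))
```

The whole trick is in the clipping step: the perturbation is free to confuse the model’s features, but it is never allowed to move any pixel far enough for a person to notice.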

A large enough number of these poisoned images in an AI’s training data can damage its ability to generate accurate images from a given prompt, because the model’s understanding of the depicted object becomes compromised.

There are still open questions about what the future holds for generative AI, but protecting original digital work is a clear priority. Poisoning can even damage future iterations of a model, since the training data its foundation is built on is no longer fully accurate.
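
To show the effect in the simplest possible terms, here is a toy Python sketch; it is an assumed setup, not how text-to-image training actually works. The model’s “car” concept is taken to be the average of its car-labeled training features, and it drifts toward “dog” as the share of poisoned samples grows.

```python
# Toy sketch of concept corruption (an assumed setup, not real text-to-image
# training): the model's "car" concept is the average of its car-labeled
# training features, and it drifts toward "dog" as poisoned samples increase.
import numpy as np

rng = np.random.default_rng(1)
d = 20                                   # dimensionality of the toy feature space
car_mean = rng.normal(size=d)            # what genuine car images look like
dog_mean = rng.normal(size=d)            # what genuine dog images look like

def learned_car_concept(n, poison_fraction):
    n_poison = int(n * poison_fraction)
    clean = rng.normal(car_mean, 1.0, size=(n - n_poison, d))   # genuine cars
    poisoned = rng.normal(dog_mean, 1.0, size=(n_poison, d))    # dog-like images captioned "car"
    return np.vstack([clean, poisoned]).mean(axis=0)            # the model's idea of a "car"

for fraction in (0.0, 0.1, 0.3, 0.5):
    concept = learned_car_concept(5000, fraction)
    print(f"poisoned {fraction:>4.0%}:  "
          f"distance to real cars {np.linalg.norm(concept - car_mean):.2f},  "
          f"distance to dogs {np.linalg.norm(concept - dog_mean):.2f}")
```

As the poisoned fraction rises, the learned “car” sits farther from real cars and closer to dogs, which is exactly the kind of corrupted association that produces wrong images at prompt time.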

Using this technique, digital creators who do not consent to their images being used in AI datasets can protect them from being fed to generative AI without permission. Some platforms give creators the option to opt out of having their artwork included in AI training datasets. However, such opt-out lists have been disregarded by AI model trainers in the past and continue to be, with little to no consequence.

Compared to other digital artwork protection tools like Glaze, Nightshade is offensive. Glaze prevents AI algorithms from mimicking the style of a particular image, while Nightshade changes how the image appears to the AI. Both tools were built by a team led by Ben Zhao, Professor of Computer Science at the University of Chicago.

How to Use Nightshade

While the creator of the tool recommends using Nightshade alongside Glaze, it can also be used on its own to protect your artwork. Using the tool is fairly easy, as there are only three steps to protecting your images with Nightshade.

However, there are a few things you need to keep in mind before getting started.

As far as protecting your images with Nightshade goes, here’s what you need to do. Keep in mind that this guide uses the Windows version, but these steps also apply to the macOS version.

Optionally, you can also select a poison tag. Nightshade will automatically detect and suggest a single-word tag if you don’t, but you can change it if it’s incorrect or too general. Keep in mind that this setting is only available when you process a single image in Nightshade.

If all goes well, you should get an image that looks identical to the original one to the human eye but completely different to an AI algorithm—protecting your artwork from generative AI.
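
If you’d like to check that numerically rather than just by eye, a short comparison script works. This sketch assumes Python with Pillow and NumPy installed, and the file names are placeholders for your own original and Nightshade output.

```python
# Optional sanity check (not part of Nightshade): compare your original image
# with the shaded output to confirm the pixel changes are small. The file names
# below are placeholders; both files must have the same dimensions.
# Requires: pip install pillow numpy
from PIL import Image
import numpy as np

original = np.asarray(Image.open("artwork.png").convert("RGB"), dtype=np.float32)
shaded = np.asarray(Image.open("artwork-nightshade.png").convert("RGB"), dtype=np.float32)

diff = np.abs(original - shaded)
mse = float((diff ** 2).mean())
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")

print(f"mean pixel change: {diff.mean():.2f} / 255")
print(f"max pixel change:  {diff.max():.0f} / 255")
print(f"PSNR: {psnr:.1f} dB")   # higher means the shaded copy is closer to the original
```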