Quick Tips: How Artists Can Prevent AI From Training on Their Work

As generative AI continues its march onward, music artists face the risk of their work being used as training material for future systems. Many don't want this unauthorized use of their works but don't know where to start. Here are four steps you can take as a musician and artist to help protect your music, cover art, and lyrics:

1. Register Your Works for Copyright

As an artist, you should seriously consider registering the copyright in your musical works if you are worried about AI training on them. We haven't seen clear legal precedent in most countries yet, but the sooner you register, the better your chances of being able to act against AI training. In the US, for instance, a work generally must be registered before the infringement begins, or within three months of first publication, to qualify for statutory damages and attorney's fees. AI companies have potentially been violating your copyright for over a year now, which means registering your most important works should be a proactive step from now on, not a reactive one. For more detailed guidance on registering your songs for copyright in the US, check out this article here.

2. Apply Glaze and/or Nightshade to Your Cover Art

Glaze and Nightshade are tools developed by researchers at the University of Chicago to protect digital art from unauthorized AI use.

  • Glaze works by adding subtle perturbations to your artwork that are imperceptible to the human eye but confuse AI models. When an AI scrapes the web and encounters Glazed images, it misreads the artistic style, preventing accurate replication in its generated outputs. You can learn more and download Glaze here.
  • Nightshade goes a step further by "poisoning" the data used by AI models. It introduces subtle changes that cause AI algorithms to learn incorrect associations. For instance, if an AI model scrapes poisoned images of dogs, it might generate distorted outputs, such as dogs with extra limbs, rendering the data ineffective for training. You can learn more and download Nightshade here.

You can apply either tool to your cover art, whether in the release itself or in the versions you post on your website and social media, to throw off AI tools that will inevitably train on them.

3. Use Robots.txt to Block AI Web Scraper Bots

You can set up a robots.txt file on your website, essentially a "here's who is and isn't welcome" list for web crawling programs, to help prevent AI companies from downloading your content for training. The file lives at the root of your site (e.g. https://yourdomain.com/robots.txt). Here's an example setup that blocks most of the big-name AI bots, including GPTBot (OpenAI's ChatGPT), Google-Extended (Google Gemini), and ClaudeBot (Anthropic's Claude):

User-agent: anthropic-ai
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Claude-Web
Disallow: /

User-agent: FacebookBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: PiplBot
Disallow: /

This configuration tells these bots not to access your website's content, reducing the risk of your work being directly scraped for AI training. One caveat: robots.txt is a voluntary standard, so it only works against crawlers that choose to honor it. It's a "no trespassing" sign, not a locked door.
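Once the file is live, you can sanity-check that it actually blocks a given bot using Python's built-in urllib.robotparser module. Here's a minimal sketch, assuming your site is at the hypothetical example-band.com:

import urllib.robotparser

# Hypothetical domain; replace with your own site.
SITE = "https://example-band.com"

parser = urllib.robotparser.RobotFileParser()
parser.set_url(SITE + "/robots.txt")
parser.read()  # fetch and parse the live robots.txt

# Ask whether each AI crawler may fetch your homepage.
for bot in ["GPTBot", "Google-Extended", "ClaudeBot", "CCBot"]:
    verdict = "blocked" if not parser.can_fetch(bot, SITE + "/") else "allowed"
    print(bot, "->", verdict)

If a bot comes back "allowed" when you expected it to be blocked, check the User-agent spelling in your robots.txt and confirm the file is reachable at the site root.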
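And because robots.txt only asks politely, some site owners also refuse these bots outright at the server level. Here's a rough sketch of that idea using Flask, assuming your site happens to run on Python; the bot list mirrors the robots.txt above, and a real deployment would more likely do this in the web server or CDN configuration:

from flask import Flask, abort, request

app = Flask(__name__)

# User-agent substrings to refuse; mirrors the robots.txt list above.
BLOCKED_BOTS = ("anthropic-ai", "ccbot", "claudebot", "claude-web",
                "facebookbot", "google-extended", "gptbot", "piplbot")

@app.before_request
def block_ai_scrapers():
    agent = request.headers.get("User-Agent", "").lower()
    if any(bot in agent for bot in BLOCKED_BOTS):
        abort(403)  # hard refusal, unlike robots.txt's polite request

@app.route("/")
def home():
    return "Band homepage"

Unlike robots.txt, this returns a 403 error to matching bots instead of trusting them to stay away, though a scraper can still dodge it by lying about its User-Agent.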

4. Manually Opt-Out Where You Can

Websites like Have I Been Trained? give you an opportunity to remove your images from public datasets used to train models like Stable Diffusion, while companies like Meta and OpenAI offer more labor-intensive options. Meta has an in-app "Right to Object" form, located in a different place in each app, where you must manually explain why your information should not be trained on (unless you are in the EU, where the right to object carries more legal weight). OpenAI, meanwhile, has an opt-out process for images that involves uploading your images to them one at a time, without ever knowing whether they were used for training in the first place.

Protecting your artwork from AI training involves a combination of legal and technical measures. By registering your work for copyright, using tools like Glaze and Nightshade, setting up a robots.txt file, and opting out where platforms allow it, you can better safeguard your creative output from being used without your consent. At the end of the day, these models training on artists' works may feel inevitable, but the louder we push back, the harder it will be for them to get away with profiteering off copyrighted works without compensation.