Researchers from the Technical University of Darmstadt, the University of Cambridge, and the University of Texas at San Antonio have introduced LightShed, a powerful new method capable of bypassing state-of-the-art image protection tools designed to defend artists’ work against unauthorized AI training. Among the most prominent of these tools are Glaze and NightShade, which have collectively been downloaded over 8.8 million times [1] and have been featured in well-known news outlets such as The New York Times [2], the World Economic Forum [3], and NPR [4]. They are popular among digital artists who want to prevent AI models such as Stable Diffusion from replicating their unique styles without consent.
These tools returned to the spotlight in March 2025, when OpenAI rolled out a ChatGPT image mode that could instantly produce artwork “in the style of Studio Ghibli,” sparking not only viral memes but also debates about image copyright [5,6,7]. Legal analysts noted that Studio Ghibli would have limited options, because copyright law protects specific expression rather than “style” itself [6], drawing renewed attention to tools such as Glaze and NightShade [1]. OpenAI subsequently announced prompt safeguards to block some user requests to generate images in the artistic styles of living artists [8], but companies training these models may not always be so cooperative, as ongoing copyright trials such as Getty Images v. Stability AI show [9]. Furthermore, these safeguards are not bulletproof: the problem remains as long as protected images can be scraped and added to training datasets. LightShed makes it clear that state-of-the-art protections such as Glaze and NightShade cannot reliably prevent AI models from training on the images they are meant to protect.
How Glaze and NightShade Work
Both Glaze and NightShade operate by adding subtle, invisible distortions, known as poisoning perturbations, to digital images. These perturbations are designed to confuse AI models during training (a simplified sketch of the general idea follows the list):
- Glaze: takes a passive approach, hindering the model’s ability to extract stylistic features.
- NightShade: goes further, actively corrupting the learning process by causing the AI to associate an artist’s style with unrelated concepts.
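As a rough illustration only (not Glaze’s or NightShade’s actual algorithm), a protection tool of this kind can be thought of as adding a small, bounded change to the pixel values: large enough to mislead a model’s feature extractor, small enough to stay imperceptible to a human viewer. The Python sketch below captures just this generic “small, bounded perturbation” idea, with an assumed L-infinity budget `epsilon`:

```python
# Conceptual sketch only, NOT the actual Glaze/NightShade algorithm:
# add a small perturbation to an image while keeping it visually imperceptible.
import numpy as np

def add_poisoning_perturbation(image: np.ndarray, delta: np.ndarray,
                               epsilon: float = 0.03) -> np.ndarray:
    """Apply a perturbation bounded by an assumed L-infinity budget `epsilon`.

    `image` and `delta` are float arrays with pixel values in [0, 1].
    """
    delta = np.clip(delta, -epsilon, epsilon)   # keep the change imperceptible
    return np.clip(image + delta, 0.0, 1.0)     # stay within the valid pixel range
```

In the real tools, the perturbation is carefully optimized against a feature extractor rather than chosen arbitrarily; the sketch only conveys why the result looks unchanged to people while affecting what a model learns.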
LightShed’s Three-Step Process
- Detection: Identifies whether an image has been altered with known poisoning techniques.
- Reverse Engineering: Learns the characteristics of the perturbations using publicly available poisoned examples.
- Removal: Eliminates the poison to restore the image to its original, unprotected form (an illustrative outline of the pipeline is sketched below).
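Purely to illustrate how these three steps fit together, the outline below shows a pipeline of this shape in Python. The function bodies are simplified stand-ins (for example, the paired-image averaging assumes access to clean counterparts, which is an extra assumption of this sketch); they do not reproduce LightShed’s actual method.

```python
# Illustrative outline of a detect / reverse-engineer / remove pipeline.
# The internals are simplified stand-ins and do NOT reproduce LightShed's method.
import numpy as np

def detect(image: np.ndarray, detector) -> bool:
    """Step 1: flag whether the image appears to carry a known poisoning perturbation."""
    return bool(detector(image))

def reverse_engineer(poisoned: list[np.ndarray], clean: list[np.ndarray]) -> np.ndarray:
    """Step 2 (simplified): estimate the perturbation pattern from paired examples."""
    return np.mean([p - c for p, c in zip(poisoned, clean)], axis=0)

def remove(image: np.ndarray, estimated_perturbation: np.ndarray) -> np.ndarray:
    """Step 3: subtract the estimated perturbation and clip back to the valid pixel range."""
    return np.clip(image - estimated_perturbation, 0.0, 1.0)
```

According to the authors, the reverse-engineering step works from publicly available poisoned examples; the averaging above is only a placeholder for that learning step.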
In experimental evaluations, LightShed successfully detected NightShade-protected images with 99.98% accuracy and effectively removed the embedded protections from those images.
“We see this as a chance to co-evolve defenses,” Prof. Ahmad-Reza Sadeghi states. “Our goal is to collaborate with other scientists in this field and support the artistic community in developing tools that can withstand advanced adversaries.”
This research highlights the urgent need for stronger, more adaptive defenses in the evolving landscape of AI and digital creativity, and it offers a roadmap toward more resilient, artist-centered protection strategies. The LightShed paper will be presented at the renowned security conference USENIX Security 2025 in Seattle; see the USENIX Security ’25 presentation for the full paper.
The LightShed codebase can be accessed by email request to Ms. Hanna Foerster at the University of Cambridge. Access is granted for research purposes only. You can download the request form here.
Contact
Ms. Hanna Foerster, Department of Computer Science & Technology, University of Cambridge, UK, hf390@cam.ac.uk
Prof. Murtuza Jadliwala, Department of Computer Science, University of Texas at San Antonio, US, murtuza.jadliwala@utsa.edu
Prof. Ahmad-Reza Sadeghi, System Security Lab, Technical University of Darmstadt, Germany, ahmad.sadeghi@trust.informatik.tu-darmstadt.de
References
1. Hessie Jones, “Generative AI is a crisis for copyright law,” Forbes, Apr 3, 2025.
2. Kashmir Hill, “This tool could protect artists from A.I.-generated art that steals their style,” The New York Times, Feb 13, 2023.
3. Victoria Masterson, “What is Nightshade - the new tool allowing artists to ‘poison’ AI models?,” World Economic Forum, Nov 14, 2023.
4. Chloe Veltman, “New tools help artists fight AI by directly disrupting the systems,” NPR, Nov 3, 2023.
5. Thomas Urbain, “Copyright questions loom as ChatGPT’s Ghibli-style images go viral,” The Japan Times, Mar 28, 2025.
6. Jacob Shamsian, “Studio Ghibli has few legal options to stop OpenAI from ripping off its style,” Business Insider, Mar 28, 2025.
7. Saishruti Mutneja & Raghav Gurbaxani, “ChatGPT’s Ghibli-Style Images Are Testing Copyright Law,” Law Journal Newsletters, Apr 30, 2025.
8. Lee Chong Ming, “OpenAI just made it harder to turn your pics into Studio Ghibli-style image,” Business Insider, Mar 27, 2025.
9. Kelvin Chan & Matt O’Brien, “Getty Images and Stability AI face off in British copyright trial that will test AI industry,” AP News, Jun 9, 2025.