
AI Image Watermarking Faces New Threat From “Unmarker”

The tool defeats leading AI watermarks


Matthew S. Smith is a contributing editor for IEEE Spectrum and the former lead reviews editor at Digital Trends.

Illustration of a pipe, inspired by Magritte's work, The Treachery of Images. "This is not AI" is written below, the word "AI" is superimposed.
Nicole Millman; Source images: iStock

As AI image generators advance, telling real images from AI-generated images has proven close to impossible. A recent study from Microsoft with 12,500 global participants found that people can detect AI images with an average success rate of 62 percent—not much better than a coin flip.

Watermarking is one proposed solution. The European Union’s AI Act mandates watermarking for most AI image generators, and many companies with AI image generators have implemented a watermark or plan to do so soon.

Yet this approach might be a dead end, at least according to a paper presented at the 2025 IEEE Symposium on Security and Privacy. It reveals a new universal attack, UnMarker, which defeats leading watermarking techniques.

“All the leaders in the field are promoting and investing in [watermarking], with whole teams dedicated to that,” said Andre Kassis, creator of UnMarker and a Ph.D. candidate at the University of Waterloo, in Canada. “Naturally, we want to know, do these systems deliver on the promise they’re marketed for?”

How AI image watermarking works

To understand how UnMarker removes AI image watermarks, it’s first necessary to understand how they work.

A robust AI image watermark must be detectable by computers, effective across the trillions of possible images an AI image generator might create, and resistant to simple editing techniques like cropping or blurring. To meet these requirements, watermarks hide in a portion of the image most people don’t think about: the spectral domain.

“Spectral characterization is about how, relative to each other, the pixels in the image change their values,” explained Kassis.

Consider a portrait or illustration of a person, such as the one shown below. Busy portions of the image, like the person’s hair, have high spectral frequencies as pixels rapidly change in value. Smoother portions of the image, like the person’s cheek or forehead, have low spectral frequencies.
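The distinction can be made concrete with a toy experiment. Below, a smooth gradient patch and a noisy patch (invented stand-ins for a cheek and for hair, not real image data) are compared by how much of their spectral energy sits away from the lowest frequencies, using NumPy's 2D Fourier transform:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for image regions: a smooth gradient patch ("cheek")
# and a noisy patch ("hair"). Values are in the 0-1 range.
smooth = np.linspace(0.4, 0.6, 64).reshape(1, -1).repeat(64, axis=0)
busy = rng.random((64, 64))

def high_freq_energy(patch):
    """Fraction of spectral energy outside the lowest frequencies."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    c = patch.shape[0] // 2
    low = spectrum[c - 4:c + 5, c - 4:c + 5].sum()  # 9x9 block around DC
    return 1 - low / spectrum.sum()

print(f"smooth patch: {high_freq_energy(smooth):.2f}")
print(f"busy patch:   {high_freq_energy(busy):.2f}")
```

The busy patch concentrates most of its energy in high frequencies, while the smooth patch keeps most of its energy near the center of the spectrum.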

Collage showing AI-generated images with differing watermark visibility. The UnMarker researchers generated unwatermarked and watermarked images, then used the UnMarker tool to remove the watermark by changing the image’s spectral frequencies. Counterclockwise from top: Google Imagen; Google Imagen with SynthID; Google Imagen with UnMarker

Importantly, these spectral frequencies describe pixel values across the image, not the value of a single pixel or its neighbors. That makes the watermark invisible to the human eye, which, though great at finding patterns in pixels, isn't equipped for spectral analysis.

The image triplet above, which demonstrates Google DeepMind's SynthID, shows a notable difference between the watermarked and nonwatermarked images. While Google hasn't shared details of how SynthID works, it's likely a semantic watermark. This type of watermark is embedded in the low spectral frequencies that, as explained earlier, describe smoother portions of the image, and embedding it there may influence how an image is generated and what it ultimately looks like. The differences may also stem from the probabilistic nature of AI generation, or from subtle differences in the image-generation model Google uses for images with and without SynthID.

But this doesn't mean the watermark is visible to humans. Why? In the real world, a user creating an image with a watermarked AI image generator wouldn't receive two images, one with a watermark and one without, for comparison. And viewers of the image would likewise have no basis for comparison.

A watermark detector finds the mark by analyzing an image's spectral frequencies, where the watermark is expressed as a hidden pattern in the spectral domain. Watermark detectors aren't typically universal tools, however (though some researchers have investigated that possibility). Each specific watermark is meant for use with its own detector, which looks for that watermark's hidden spectral pattern.
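As an illustration of that idea, here is a deliberately simplified spectral watermark. It is not SynthID or any published scheme; the frequency bins, key, and strength below are all invented for the sketch. A secret key picks a sign for each of a few fixed frequency bins, embedding nudges those bins, and a matched detector correlates the same bins against the key:

```python
import numpy as np

SIZE = 64
rng = np.random.default_rng(1)

# Hypothetical scheme (illustration only): a secret key assigns a sign
# to each of a few fixed frequency bins.
key = np.random.default_rng(42)              # shared secret
bins = [(3, 5), (7, 2), (4, 9), (10, 4)]     # invented frequency bins
signs = key.choice([-1.0, 1.0], size=len(bins))

def embed(img, strength=2.0):
    F = np.fft.fft2(img)
    for (u, v), s in zip(bins, signs):
        F[u, v] += s * strength * SIZE       # nudge the chosen bin
        F[-u, -v] += s * strength * SIZE     # mirror bin keeps pixels real
    return np.real(np.fft.ifft2(F))

def detect(img):
    """Correlation score; high only if the keyed pattern is present."""
    F = np.fft.fft2(img)
    score = sum(s * np.real(F[u, v]) for (u, v), s in zip(bins, signs))
    return score / (len(bins) * SIZE)

img = rng.random((SIZE, SIZE))               # stand-in for a generated image
marked = embed(img)

print(f"plain image score:  {detect(img):.2f}")
print(f"marked image score: {detect(marked):.2f}")
```

The marked image scores far higher than the plain one, while the pixel changes themselves are spread thinly across the whole image. Without the key, nothing in the pixel values points at the handful of bins that carry the mark.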

How UnMarker defeats watermarking

Knowing that a robust yet invisible watermark must exist in the spectral domain, UnMarker specifically targets it. It ignores an image’s pixel values and instead makes changes to spectral information across the entire image, effectively scrambling the watermark.

“UnMarker doesn’t try to look for where the watermark is hidden. It doesn’t look exactly for the spectral bands where the watermark is encoded. It can just disrupt the image to remove it,” explained Kassis.
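A miniature version of that strategy, again invented for illustration rather than taken from the UnMarker paper: jitter the magnitude of every frequency bin slightly. An attacker who does this disturbs every possible hiding place in the spectrum while changing pixel values only a little.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for a watermarked image

# Jitter every frequency bin by a random factor, leaving the DC term
# (overall brightness) alone. No bin keeps its original value, so any
# watermark pattern hidden in the spectrum is disturbed too.
F = np.fft.fft2(img)
jitter = 1 + 0.2 * rng.standard_normal(F.shape)
jitter[0, 0] = 1.0
attacked = np.real(np.fft.ifft2(F * jitter))

# Measure the damage in both domains.
F2 = np.fft.fft2(attacked)
ratio = np.abs(F2 - F) / np.abs(F)
spectral_change = ratio.flat[1:].mean()      # skip the untouched DC bin
pixel_rmse = np.sqrt(np.mean((attacked - img) ** 2))

print(f"pixel RMSE (0-1 scale):   {pixel_rmse:.3f}")
print(f"mean relative bin change: {spectral_change:.3f}")
```

Every bin shifts by several percent on average, enough to corrupt a pattern hidden there, yet the per-pixel change stays small relative to the 0-to-1 pixel range.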

And it’s effective. UnMarker removed anywhere from 57 percent to 100 percent of detectable watermarks from watermarked images, depending on the watermark method used.

The HiDDeN and Yu2 watermarks were entirely defeated. When tested on images marked with Google’s SynthID, the technique used in the example images above, Kassis says that UnMarker successfully removed 79 percent of watermarks. However, a Google DeepMind representative contested that claim, saying that the company tried the tool and found its success rate to be significantly lower. Newer watermarks, like StegaStamp and Tree-Ring Watermarks, were fairly robust, with UnMarker removing about 60 percent.

While those newer watermarks sometimes held up to UnMarker, successfully removing even a portion of watermarks is enough to make a watermarking technique questionable. Someone looking to pass off a watermarked AI image as real could simply generate numerous images, trying the attack repeatedly until the watermark is successfully removed.

That's not to say UnMarker is flawless. While the changes are usually unnoticeable, Kassis said some images can have "slightly visible changes" that cause the image to look more artificial and might tip off a human on close inspection. The attack also works best with slight image cropping, though against most watermarking techniques it remains effective without it.

Are AI watermarks already doomed?

UnMarker's source code is available on GitHub. Using UnMarker isn't entirely trivial, as it requires some basic knowledge of how to use a command-line interface to download and install the tool. Still, that's hardly a hurdle for anyone motivated to pass off AI-generated images as authentic.

The attack doesn't require exotic hardware, either. The testing conducted for the paper was performed with an Nvidia A100 GPU with 40 gigabytes of memory. While that GPU retails for thousands of dollars, it's widely available to rent through cloud services like Amazon AWS and Microsoft Azure, with hourly rental rates often at US $30 or less. With this setup, UnMarker was able to remove a watermark in roughly 5 minutes.

Kassis also noted the GitHub project includes the full UnMarker attack and watermark detectors to verify whether the attack worked. Using UnMarker without verification is "far less computationally intense." While he has yet to attempt using UnMarker on consumer-grade GPU hardware, like the Nvidia RTX 5090, he expects such hardware "should be able to run it" with some effort.

If this sounds like the death knell for AI image watermarks, that's warranted. UnMarker demonstrates that the properties that make leading watermarks robust and invisible—the embedding in an image's spectral domain—also create a predictable avenue for attack. Organizations looking to watermark AI-generated images may need to rethink their approach. Or, perhaps preferably, they may need to turn away from watermarks and toward tactics that can positively prove an image's authenticity, such as content credentials.

This article was updated on 15 August 2025 to note Google DeepMind’s dispute of the reported success rate for UnMarker’s removal of SynthID watermarks.

This article appears in the October 2025 print issue as “UnMarker Undoes AI Image Identifiers.”
