AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
-
Building on an anti-spam cybersecurity tactic known as tarpitting, Aaron created Nepenthes, malicious software named after a carnivorous plant that will "eat just about anything that finds its way inside."
He clearly warns users that Nepenthes is aggressive malware. It's not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck" and "thrash around" for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That's likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
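The mechanism is simple enough to sketch. Here is a minimal, hypothetical Python version of the general idea (this is not Nepenthes' actual code, and it serves plain word salad rather than real Markov babble): every URL returns a deterministic page of gibberish plus a couple of links that only lead deeper into the maze, with no exit.

```python
# Minimal tarpit sketch: every page is generated from its own URL, so the
# maze needs no storage and can mint an effectively unlimited number of pages.
import hashlib
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["lorem", "ipsum", "pitcher", "nectar", "trap", "maze",
         "crawler", "babble", "static", "endless", "syrup", "nepenthes"]

def babble(seed: str, n: int = 80) -> str:
    """Deterministic word salad: the same URL always yields the same page."""
    rng = random.Random(hashlib.sha256(seed.encode()).hexdigest())
    return " ".join(rng.choice(WORDS) for _ in range(n))

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each page links to two deeper pages and nothing else: no way out.
        base = self.path.rstrip("/")
        links = " ".join(f'<a href="{base}/{i}">more</a>' for i in range(2))
        body = f"<html><body><p>{babble(self.path)}</p>{links}</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```

Because each page is derived from its URL, the server stores nothing and the "site" is effectively infinite, which is what lets it swallow a crawler that ignores robots.txt indefinitely.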
-
[email protected] replied to [email protected]
I hope it's effective.
-
[email protected] replied to [email protected]
They're framing it as "AI haters" instead of what it actually is, which is people who do not like that robots have been programmed to completely ignore the robots.txt files on a website.
No AI system in the world would get stuck in this if it simply obeyed the robots.txt files.
-
[email protected] replied to [email protected]
The disingenuous phrasing is like "pro life" instead of what it is, "anti-choice"
-
[email protected] replied to [email protected]
Does it also trap search engine crawlers? That would be a problem
-
[email protected] replied to [email protected]
Why bother wasting resources on an infinite maze instead of just doing what the old-school .htaccess bot traps do: ban any IP that hits the no-go zone defined in robots.txt?
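For reference, the old-school trap that comment describes works roughly like this; a hypothetical Python sketch rather than actual .htaccess rules, with a made-up /honeypot/ path: robots.txt disallows a path that no legitimate link points to, and any client that requests it anyway gets its IP banned from then on.

```python
# Bot-trap sketch: hitting the robots.txt no-go zone earns a permanent 403.
from http.server import BaseHTTPRequestHandler, HTTPServer

ROBOTS_TXT = b"User-agent: *\nDisallow: /honeypot/\n"
banned_ips = set()  # in-memory for the sketch; a real setup would persist this

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]
        if ip in banned_ips:
            self.send_error(403, "Banned")           # caught earlier
        elif self.path == "/robots.txt":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(ROBOTS_TXT)
        elif self.path.startswith("/honeypot/"):
            banned_ips.add(ip)                        # ignored the rules: ban
            self.send_error(403, "Banned")
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"real content would be served here")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TrapHandler).serve_forever()
```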
-
[email protected] replied to [email protected]
It's not. If it were, every search engine out there would be belly up at the first nested link.
Google/Bing just consume their own crawling traffic, and you don't want to NOT show up in search queries, right?
-
[email protected] replied to [email protected]
I imagine if those obey the robots.txt thing, it's not a problem.
-
[email protected] replied to [email protected]
The problem is that infinite loop detection is a well-known coding issue with well-known, freely available solutions, so this approach will only affect the lamest AI implementations.
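For what it's worth, those freely available solutions amount to something like the sketch below (the fetch_links function is a stand-in for the actual HTTP and HTML-parsing code): deduplicate URLs and budget depth and pages per host. A dedup set alone doesn't stop a maze that mints unique URLs, which is why the per-host budget is the part that actually bounds the damage.

```python
# Crawler-side loop protection: URL dedup plus depth and per-host budgets.
from collections import deque
from urllib.parse import urlparse

def crawl(start_url, fetch_links, max_depth=5, max_pages_per_host=1000):
    """fetch_links(url) -> list of outgoing URLs (network code omitted)."""
    seen = {start_url}
    pages_per_host = {}
    queue = deque([(start_url, 0)])
    while queue:
        url, depth = queue.popleft()
        host = urlparse(url).netloc
        pages_per_host[host] = pages_per_host.get(host, 0) + 1
        if depth >= max_depth or pages_per_host[host] > max_pages_per_host:
            continue                      # budget exhausted: stop digging here
        for link in fetch_links(url):
            if link not in seen:          # dedup breaks genuine loops
                seen.add(link)
                queue.append((link, depth + 1))
    return seen
```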
-
[email protected] replied to [email protected]
It's unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft's director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI "has been quite vigilant" and excels at detecting the "first signs of data poisoning attempts."
Despite these efforts, he concluded that data poisoning was "a serious threat to machine learning models." And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.
"A link to a Nepenthes location from your site will flood out valid URLs within your site's domain name, making it unlikely the crawler will access real content," a Nepenthes explainer reads.
-
[email protected] replied to [email protected]
That's the reason for the maze. These companies have multiple IP addresses and bots that communicate with each other.
They can go through multiple entries in the robots.txt file. Once they learn they are banned, they go scrape the old-fashioned way with another IP address.
But if you create a maze, they just continually scrape useless data, rather than scraping data you don't want them to get.
-
[email protected] replied to [email protected]
Only if they are stupid and scrape serially. The AI can have one "thread" caught in the tar while other "threads" continue to steal your content.
With a ban, they would have to keep track of what banned them so they don't hit it again and get yet another of their IP ranges banned.
-
[email protected] replied to [email protected]
Same problems with tarpitting. The search engines are doing the crawling for their own companies, and you don't want to poison your own search results.
Conceptually, they'll stop being search crawls altogether, and if you expect to get any traffic, it'll come from AI crawls.
-
[email protected] replied to [email protected]
It might be initially, but they'll figure out a way around it soon enough.
Remember those articles about "poisoning" images? That didn't get very far either.
-
[email protected] replied to [email protected]
Only if they are stupid and scrape serially. The AI can have one “thread” caught in the tar while other “threads” continue to steal your content.
Why would it be only one thread stuck in the tarpit? If the tarpit maze has more than one choice (like a forked road), then the AI would have to spawn another thread to follow that path, yes? Then another thread would be spawned at the next fork in the road, ad infinitum, until the AI stops spawning threads or exhausts the resources of the web server (a DoS).
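As a rough illustration of that growth, assuming the crawler queues every new link it sees, the pending frontier grows geometrically with the maze's branching factor:

```python
# Frontier growth in a maze where every page links to b deeper pages.
def frontier_size(branching: int, depth: int) -> int:
    return branching ** depth            # pages discovered at a given depth

for b in (2, 5):
    print(b, [frontier_size(b, d) for d in range(1, 8)])
# 2 [2, 4, 8, 16, 32, 64, 128]
# 5 [5, 25, 125, 625, 3125, 15625, 78125]
```

In practice most crawlers pull from a shared queue with a fixed pool of workers rather than spawning a thread per link, so it's the queue, not the thread count, that balloons; and tarpitting in the original anti-spam sense means answering each request as slowly as possible, which also caps the load the trap itself puts on the server.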
-
[email protected] replied to [email protected]
Don't make me tap the sign
-
[email protected] replied to [email protected]
Banning IP ranges isn't going to work. A lot of these companies rent home IP addresses.
Also, the point isn't just protecting content, it's data poisoning.
-
[email protected] replied to [email protected]
The big search engine crawlers like Google's or Microsoft's should respect your robots.txt file. This trick affects those who don't honor the file and just scrape your website even if you told them not to.
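Honoring the file is a trivial check for a well-behaved crawler; here is a sketch using Python's standard-library robotparser (the site URL and user agent are placeholders):

```python
# A polite crawler consults robots.txt before fetching anything.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")   # placeholder site
rp.read()                                      # fetch and parse the rules

url = "https://example.com/some/page"
if rp.can_fetch("MyCrawler/1.0", url):
    print("allowed: go ahead and request", url)
else:
    print("disallowed: a crawler that stops here never meets the tarpit")
```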
-
[email protected] replied to [email protected]
So they will have threads caught in the pit and other threads stealing content. Not only did you waste time on a tarpit, your content still gets stolen.
Any scraper worth its salt, especially one feeding LLMs, would have garbage detection of some sort, so poisoning the model is likely not effective. They likely have more resources than you, so a few spinning threads are trivial. All the while, your server still has to service all these requests for garbage that is likely ineffective, wasting bandwidth you have to pay for and cycles that could be better spent actually doing something, and your content STILL gets stolen.
-
Waiting for Apache or Nginx to import a robots.txt and send crawlers down a rabbit hole instead of trusting them.
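That kind of server-side enforcement could, in principle, look something like the hypothetical sketch below (it assumes a robots.txt file next to the script and a made-up /rabbit-hole prefix for the maze): the server reads its own rules and redirects any request for a disallowed path into the tarpit instead of trusting the crawler to stay out.

```python
# Enforce robots.txt at the server instead of trusting crawlers to obey it.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.robotparser import RobotFileParser

rules = RobotFileParser()
with open("robots.txt") as f:          # the site's own rules
    rules.parse(f.read().splitlines())

class EnforcingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/rabbit-hole"):
            # Inside the maze: every page just links one level deeper.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(f'<a href="{self.path}/0">deeper</a>'.encode())
        elif not rules.can_fetch("*", self.path):
            # A disallowed path was requested anyway: off to the rabbit hole.
            self.send_response(302)
            self.send_header("Location", "/rabbit-hole" + self.path)
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"real content would be served here")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), EnforcingHandler).serve_forever()
```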