Cloudflare announces AI Labyrinth, which uses AI-generated content to confuse and waste the resources of AI Crawlers and bots that ignore “no crawl” directives.
-
So the world is now wasting energy and resources to generate AI content in order to combat AI crawlers, by making them waste more energy and resources. Great!
-
So they rewrote Nepenthes (or Iocaine, Spigot, Django-llm-poison, Quixotic, Konterfai, Caddy-defender, plus inevitably some Rust versions)
It's the consequences of the MIT and Apache licenses showing up in real time.
GPL your software, people!
-
Not exactly how I expected the AI wars to go, but I guess since we're in a cyberpunk world, we take what we get
-
Not exactly how I expected the AI wars to go, but I guess since we're in a cyberpunk world, we take what we get
Next step is an AI that detects AI labyrinth.
It gets trained on labyrinths generated by another AI.
So you have an AI generating labyrinths to train an AI to detect labyrinths which are generated by another AI so that your original AI crawler doesn't get lost.
It's gonna be AI all the way down.
-
Any accessibility service will also see the "hidden links", and while a blind person with a screen reader will notice if they wander off into generated pages, it will waste their time too.
Also, I don't know about you, but I absolutely have a use for crawling X, Google Maps, Reddit, YouTube, and getting information from there without interacting with the service myself.
yeah. it's pretty fucked. hopefully it's temporary.
so do we make everything inaccessible to everyone, or just inaccessible to disabled people? we don't have a way to include them yet. we should work on it, but we are not the ones who fucked accessibility.
yeah. search engine web crawlers are a public service. they are responsible. but we are in a conflict. we must struggle tooth and nail against capital for every nice thing.
-
Especially since the solution I cooked up for my site was to identify the incoming requests from these damn bots -- which is not difficult, since they ignore all directives and sanity and try to slam your site with like 200+ requests per second, that makes 'em easy to spot -- and simply IP ban them.
In fact, anybody who doesn't exhibit a sane crawl rate gets blocked from my site automatically. For a while, most of them were coming from Russian IP address zones for some reason. These days Amazon is the worst offender, I guess their Rufus AI or whatever the fuck it is tries to pester other retail sites to "learn" about products rather than sticking to its own domain.
Fuck 'em. Route those motherfuckers right to /dev/null.
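For anyone curious what that kind of threshold auto-ban looks like, here's a minimal sketch; the window size and limit below are made-up illustrative values, not the commenter's actual configuration:

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds (not from the comment): the bots described hit
# 200+ req/s, so anything sustained well above a human rate gets banned.
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100   # ~10 req/s sustained

recent = defaultdict(deque)     # ip -> timestamps of recent requests
banned = set()

def should_block(ip: str) -> bool:
    """Reject the request if the IP is banned or over the rate threshold."""
    if ip in banned:
        return True
    now = time.time()
    hits = recent[ip]
    hits.append(now)
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()          # drop timestamps outside the window
    if len(hits) > MAX_REQUESTS_PER_WINDOW:
        banned.add(ip)          # in practice, hand this off to the firewall
        del recent[ip]
        return True
    return False
```

In a real deployment the ban set would typically feed a firewall rule or a reverse-proxy deny list rather than living in application memory, but the detection logic is this simple.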
Geez, that's a lot of requests!
-
I swear someone released this exact thing a few weeks ago
-
Geez, that's a lot of requests!
It sure is. Needless to say, I noticed it happening.
-
Especially since the solution I cooked up for my site was to identify the incoming requests from these damn bots -- which is not difficult, since they ignore all directives and sanity and try to slam your site with like 200+ requests per second, that makes 'em easy to spot -- and simply IP ban them.
In fact, anybody who doesn't exhibit a sane crawl rate gets blocked from my site automatically. For a while, most of them were coming from Russian IP address zones for some reason. These days Amazon is the worst offender, I guess their Rufus AI or whatever the fuck it is tries to pester other retail sites to "learn" about products rather than sticking to its own domain.
Fuck 'em. Route those motherfuckers right to /dev/null.
and try to slam your site with like 200+ requests per second
Your solution would do nothing to stop the crawlers operating at 10ish rps. There are ones out there operating at a mere 2 rps, but when multiple companies are doing it at the same time, 24x7x365, it adds up.
Some incredibly talented people have been battling this since last year, and your solution has been tried multiple times. It's not effective in all instances and can require a LOT of manual intervention and SysAdmin time.
https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/
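To put rough numbers on "it adds up" (illustrative figures, not measurements from the linked article):

```python
# A single "polite" crawler at 2 requests/second, running around the clock:
per_crawler_daily = 2 * 60 * 60 * 24     # 172,800 requests/day

# Ten companies doing the same thing in parallel:
total_daily = 10 * per_crawler_daily     # 1,728,000 requests/day

print(f"{per_crawler_daily:,} req/day each, {total_daily:,} req/day combined")
```

Each crawler individually stays well under any per-IP rate threshold, which is exactly why the 200-rps ban doesn't catch them.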
-
nothing can be improved while capitalism or authority exist; all improvement will be seized and used to oppress.
How can authority not exist? That's staggeringly broad
-
Especially since the solution I cooked up for my site was to identify the incoming requests from these damn bots -- which is not difficult, since they ignore all directives and sanity and try to slam your site with like 200+ requests per second, that makes 'em easy to spot -- and simply IP ban them.
In fact, anybody who doesn't exhibit a sane crawl rate gets blocked from my site automatically. For a while, most of them were coming from Russian IP address zones for some reason. These days Amazon is the worst offender, I guess their Rufus AI or whatever the fuck it is tries to pester other retail sites to "learn" about products rather than sticking to its own domain.
Fuck 'em. Route those motherfuckers right to /dev/null.
Cloudflare offers that too, but you can't always tell
-
So they rewrote Nepenthes (or Iocaine, Spigot, Django-llm-poison, Quixotic, Konterfai, Caddy-defender, plus inevitably some Rust versions)
Cloudflare is providing the service, not libraries.
-
Damned ~~Arasaka~~ Cloudflare ice walls are such a pain
-
Especially since the solution I cooked up for my site was to identify the incoming requests from these damn bots -- which is not difficult, since they ignore all directives and sanity and try to slam your site with like 200+ requests per second, that makes 'em easy to spot -- and simply IP ban them.
In fact, anybody who doesn't exhibit a sane crawl rate gets blocked from my site automatically. For a while, most of them were coming from Russian IP address zones for some reason. These days Amazon is the worst offender, I guess their Rufus AI or whatever the fuck it is tries to pester other retail sites to "learn" about products rather than sticking to its own domain.
Fuck 'em. Route those motherfuckers right to /dev/null.
the only problem with applying that solution to generic websites is that schools and institutions can have many legitimate users behind one IP address, and many sites don't want to risk accidentally blocking them.
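One common mitigation (a sketch, not anything the commenters describe) is to give known shared egress ranges a looser limit instead of the default per-IP threshold; the network below is a made-up documentation range:

```python
import ipaddress

# Hypothetical: egress ranges known to be shared (a school NAT, say)
# get a much higher threshold instead of the default per-client limit.
SHARED_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

DEFAULT_LIMIT = 100    # requests per window for a normal single client
SHARED_LIMIT = 2000    # many legitimate users behind one address

def limit_for(ip: str) -> int:
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in SHARED_RANGES):
        return SHARED_LIMIT
    return DEFAULT_LIMIT
```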
-
I think the point you're missing is that without the monetary incentive that arises under capitalism, there would be very little drive for anyone to build these wasteful AI systems. It's difficult to imagine a group of people voluntarily amassing and then using the resources necessary for "AI" absent the desire to cash in on their investment. So you're correct that an alternative economic system won't "magically" make LLMs go away. I think it unlikely, however, that such wasteful nonsense would be used on any meaningful scale absent the perverse incentives of capitalism.
It’s difficult to imagine a group of people voluntarily amassing and then using the resources necessary for “AI” absent the desire to cash in on their investment.
I mean Dmitry Pospelov was arguing for AI control in the Soviet Union clear back in the 70s.
-
I don't need it to not exist. I need it to stay the fuck out of everyone's lives unless they work in a lab of some kind.
see, it's not actually useful. it's a Tamagotchi. do you remember those? no, you fucking don't.
everyone remembers Tamagotchis, they were like a digital houseplant.
-
and try to slam your site with like 200+ requests per second
Your solution would do nothing to stop the crawlers operating at 10ish rps. There are ones out there operating at a mere 2 rps, but when multiple companies are doing it at the same time, 24x7x365, it adds up.
Some incredibly talented people have been battling this since last year, and your solution has been tried multiple times. It's not effective in all instances and can require a LOT of manual intervention and SysAdmin time.
https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/
It's worked alright for me. Your mileage may vary.
If someone is scraping my site at a low crawl rate, I honestly don't care so long as it doesn't impact performance for everyone else. If I hosted anything that was not just public knowledge or copy regurgitated verbatim from the bumf provided by the vendors of the brands I sell, I might object to it ideologically. But I don't. So I don't.
If parallel crawling from multiple organizations legitimately becomes a concern for us I will have to get more creative. But thus far it hasn't, and honestly just wholesale blocking Amazon from our shit instantly solved 90% of the problem.
-
the only problem with applying that solution to generic websites is that schools and institutions can have many legitimate users behind one IP address, and many sites don't want to risk accidentally blocking them.
This is fair in those applications. I only run an ecommerce web site, though, so that doesn't come into play.
-
How can authority not exist? That's staggeringly broad
given what domains we're hosted on; i think we've both had a version of this conversation about a thousand times, and both ended up where we ended up. do you want us to explain hypothetically-at-but-mostly-past each other again? I can do it while un-sober, if you like.
-
everyone remembers Tamagotchis, they were like a digital houseplant.