Cloudflare announces AI Labyrinth, which uses AI-generated content to confuse and waste the resources of AI Crawlers and bots that ignore “no crawl” directives.
-
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
-
-
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
I find this amusing. I had a conversation with an older relative who asked about AI because I am "the computer guy" he knows. I explained, as best I understand it, how LLMs operate: they pattern-match to guess what the next token should be based on statistical probability. I explained that they sometimes hallucinate or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding, just respinning fragments to try to generate a response that pleases the asker.
He observed, "oh we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That's good, religions that have become untethered from day to day practical life have never caused problems for anyone."
Which I found scarily insightful.
-
I'm glad we're burning the forests even faster in the name of identity politics.
-
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
I mean, this is just designed to thwart AI bots that refuse to follow the robots.txt rules of people who specifically blocked them.
-
Definitely falls under the category of a Trap ICE card.
-
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
-
No, it is far less environmentally friendly than RC bots made of metal, plastic, and electronics full of nasty little things like batteries, blasting, sawing, burning, and smashing one another to pieces.
-
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
Here's the key distinction:
This only makes AI models unreliable if they ignore "don't scrape my site" requests. If they respect the requests of the sites whose data they're profiting from, then there's no issue.
People want AI models not to be unreliable, but they also want them to operate with integrity in the first place, and not profit from the work of people who have explicitly opted their work out of training.
-
I find this amusing. I had a conversation with an older relative who asked about AI because I am "the computer guy" he knows. I explained, as best I understand it, how LLMs operate: they pattern-match to guess what the next token should be based on statistical probability. I explained that they sometimes hallucinate or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding, just respinning fragments to try to generate a response that pleases the asker.
He observed, "oh we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That's good, religions that have become untethered from day to day practical life have never caused problems for anyone."
Which I found scarily insightful.
Oh good.
Now I can add digital jihad by hallucinating AI to the list of my existential terrors.
Thank your relative for me.
-
Here's the key distinction:
This only makes AI models unreliable if they ignore "don't scrape my site" requests. If they respect the requests of the sites whose data they're profiting from, then there's no issue.
People want AI models not to be unreliable, but they also want them to operate with integrity in the first place, and not profit from the work of people who have explicitly opted their work out of training.
I'm a person.
I don't want AI, period.
We can't even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet Judgment Day.
-
I'm a person.
I don't want AI, period.
We can't even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet Judgment Day.
We can't even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet Judgment Day.
That is simply not how "AI" models today are structured; that is entirely a fabrication based on science fiction media.
An LLM is a series of matrix multiplication problems that the tokens from a query are run through. It does not have the capability to be overworked, to know whether it's been used before (outside of its context window, which is itself just previously stored tokens added to the math problem), to change itself, or to arbitrarily access system resources.
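The statelessness being described can be sketched in a few lines. This is a deliberately toy "model" (the weight matrix and bag-of-tokens encoding are illustrative stand-ins, not how a real LLM encodes context): the point is only that the forward pass is a pure function of its input tokens, with no memory between calls.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained model: one fixed weight matrix mapping a
# context vector to logits over an 8-token vocabulary.
VOCAB = 8
W = rng.standard_normal((VOCAB, VOCAB))

def next_token_distribution(tokens):
    """Run a context (list of token ids) through the 'model'.

    A pure function of its input: no state survives between calls,
    and nothing outside the arguments is read or written.
    """
    # Encode the context as a bag-of-tokens vector (illustrative only).
    ctx = np.zeros(VOCAB)
    for t in tokens:
        ctx[t] += 1.0
    logits = W @ ctx
    exp = np.exp(logits - logits.max())   # softmax over the vocabulary
    return exp / exp.sum()

# Identical input always yields the identical output distribution -
# the "model" cannot know it has been called before.
a = next_token_distribution([1, 2, 3])
b = next_token_distribution([1, 2, 3])
assert np.allclose(a, b)
```

The only way a real LLM "remembers" a conversation is that the caller feeds the previous tokens back in as part of the next input, exactly as the comment above describes.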
-
Oh good.
Now I can add digital jihad by hallucinating AI to the list of my existential terrors.
Thank your relative for me.
Not if we go Butlerian Jihad on them first.
-
while allowing legitimate users and verified crawlers to browse normally.
What is a "verified crawler," though? What I worry about is: is it now only big companies like Google that are allowed to have them?
Cloudflare isn't the best at blocking things. As long as your crawler isn't horribly misconfigured, you shouldn't have many issues.
-
This post did not contain any content.
Joke's on them. I'm going to use AI to estimate the value of content, and now I'll get the kind of content I want, though fake, that they will have to generate.
-
That's what I do too, just with less accuracy and knowledge. I don't get why I have to hate this. Feels like a bunch of cavemen telling me to hate fire because it might burn the food.
Because we have better methods that are easier, cheaper, and less damaging to the environment. They are solving nothing and wasting a fuckton of resources to do so.
It's like telling cavemen they don't need fire because you can mount an expedition to the nearest volcano to cook food without the need for fuel, then bring it back to them.
The best case scenario is the LLM tells you information that is already available on the internet, but 50% of the time it just makes shit up.
-
I'm glad we're burning the forests even faster in the name of identity politics.
Well, that was a swing and a miss. Back to the dugout with you, dumbass.
-
I have no idea why the makers of LLM crawlers think it's a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than "well, we just don't want you to do that". They're usually more like "why would you even do that?"
Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not bazillion random old page revisions from ages ago is that Wikipedia said "please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)". Again: Why would anyone index those?
Because it takes work to obey the rules, and you get less data for it. A theoretical competitor could get more by ignoring them and gain some vague advantage from it.
I wouldn't be surprised if the crawlers they used were bare-basic utilities set up to just grab everything without worrying about rules and the like.
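For reference, the kind of rules being discussed live in a site's robots.txt. A sketch of the "crawl canonical pages, stay out of the technical ones" policy described above might look like this (the paths are illustrative, not any site's actual file):

```
# Illustrative example only - paths are made up.
User-agent: *
# Canonical article pages are fine to crawl.
Allow: /wiki/
# Stay out of the technical pages: edit forms, page histories, diffs.
Disallow: /w/
Disallow: /wiki/Special:
```

Honoring this is trivial for a well-built crawler, which is exactly why ignoring it reads as a deliberate choice rather than an accident.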
-
This post did not contain any content.
This is getting ridiculous. Can someone please ban AI? Or at least regulate it somehow?
-
Surprised at the level of negativity here. Having had my sites repeatedly DDoSed offline by ClaudeBot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures that help.
thousands of times a second
Modify your Nginx (or whatever web server you use) config to rate limit requests to dynamic pages, and cache them. For Nginx, you'd use either fastcgi_cache or proxy_cache depending on how the site is configured. Even if the pages change a lot, a cache with a short TTL (say 1 minute) can still help reduce load quite a bit while not letting them get too outdated.
Static content (and cached content) shouldn't cause issues even if requested thousands of times per second. Following best practices like pre-compressing content using gzip, Brotli, and zstd helps a lot, too.
Of course, this advice is just for "unintentional" DDoS attacks, not intentionally malicious ones. Those are often much larger and need different mitigation, often at the network or load-balancer level before traffic even hits the server.
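The advice above can be sketched as an Nginx config fragment. This assumes a PHP-FPM backend behind `fastcgi_pass`; the zone names, paths, and rate numbers are all illustrative, and the cache/limit zones must sit in the `http` context:

```nginx
# Illustrative sketch - zone names, paths, socket, and rates are assumptions.

# Shared-memory zones: one for the page cache, one for per-IP rate limiting.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m
                   max_size=1g inactive=10m;
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;

    location ~ \.php$ {
        # Throttle bursts per client IP before they reach the backend.
        limit_req zone=perip burst=20 nodelay;

        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        # Short-TTL cache: repeated hits on the same page are served
        # from cache instead of regenerating the page every time.
        fastcgi_cache pagecache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 1m;
        fastcgi_cache_use_stale updating error timeout;
    }
}
```

For a proxied backend, the same shape applies with `proxy_cache_path`/`proxy_cache` in place of the `fastcgi_cache_*` directives.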