The Open-Source Software Saving the Internet From AI Bot Scrapers
-
I get that website admins are desperate for a solution, but Anubis is fundamentally flawed.
It is hostile to the user, because it is very slow on older hardware and forces you to use JavaScript.
It is bad for the environment, because it wastes energy on useless computations similar to mining crypto. If more websites start using this, that really adds up.
But most importantly, it won't work in the end. These scraping tech companies have much deeper pockets and can use specialized hardware that is much more efficient at solving these challenges than a normal web browser.
It takes like half a second on my Fairphone 3, and the CPU in this thing is absolute dogshit. I also doubt that the power consumption is particularly significant compared to the overhead of parsing, executing and JIT-compiling the 14MiB of JavaScript frameworks on the actual website.
-
This post did not contain any content.
Just recently there was a guy on the NANOG list ranting that Anubis is the wrong approach and people should just cache properly; then their servers would handle thousands of users and the bots wouldn't matter. Anyone who puts Git online has no one to blame but themselves, e-commerce should just be made cacheable, etc. Seemed a bit idealistic, a bit detached from the current reality.
-
Ooh can this work with Lemmy without affecting federation?
As long as it's not configured improperly. When the Forgejo devs added it, it briefly broke downloading images with Kubernetes. Basically you'd need to make sure the user-agent header used for federation is allowed.
-
I get that website admins are desperate for a solution, but Anubis is fundamentally flawed.
It is hostile to the user, because it is very slow on older hardware and forces you to use JavaScript.
It is bad for the environment, because it wastes energy on useless computations similar to mining crypto. If more websites start using this, that really adds up.
But most importantly, it won't work in the end. These scraping tech companies have much deeper pockets and can use specialized hardware that is much more efficient at solving these challenges than a normal web browser.
A JavaScript-less check was released recently; I just read about it. It uses a refresh HTML tag and a delay. It's not the default though, since it's new.
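As I understand it, the shape of that JavaScript-less approach is roughly "serve a page with a refresh tag, and only accept the follow-up request once the delay has actually passed." A minimal Go sketch of that idea, with my own toy token handling and made-up paths, not Anubis's actual implementation:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

// issued records when each (toy) token was handed out; a mutex keeps
// the map safe across concurrent requests.
var (
	mu     sync.Mutex
	issued = map[string]time.Time{}
)

// challenge serves a plain HTML page whose refresh tag sends the
// browser back after a delay. No JavaScript is involved.
func challenge(w http.ResponseWriter, r *http.Request) {
	token := fmt.Sprintf("%d", time.Now().UnixNano()) // toy token, not secure
	mu.Lock()
	issued[token] = time.Now()
	mu.Unlock()
	w.Header().Set("Content-Type", "text/html")
	fmt.Fprintf(w, `<meta http-equiv="refresh" content="5;url=/verify?token=%s">Checking your browser...`, token)
}

// verify only lets the request through if the token exists and the
// delay has actually elapsed.
func verify(w http.ResponseWriter, r *http.Request) {
	mu.Lock()
	t, ok := issued[r.URL.Query().Get("token")]
	mu.Unlock()
	if !ok || time.Since(t) < 5*time.Second {
		http.Error(w, "come back later", http.StatusForbidden)
		return
	}
	fmt.Fprintln(w, "ok") // real code would set a signed cookie here
}

func main() {
	http.HandleFunc("/", challenge)
	http.HandleFunc("/verify", verify)
	http.ListenAndServe(":8080", nil)
}
```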
-
I’d like to use Anubis but the strange hentai character as a mascot is not too professional
Oh no why can't the web be even more boring and professional
-
Ooh can this work with Lemmy without affecting federation?
Yes.
Source: I use it on my instance and federation works fine
-
Yes.
Source: I use it on my instance and federation works fine
Thanks. Anything special configuring it?
-
I’d like to use Anubis but the strange hentai character as a mascot is not too professional
hentai character
anime != hentai
I smile whenever I encounter the Anubis character in the wild. She's holding up the free software internet on her shoulders after all.
-
This post did not contain any content.
My archive's server uses Anubis and after initial configuration it's been pain-free. Also, I'm no longer getting multiple automated emails a day about how the server's timing out. It's great.
We went from about 3000 unique "pinky swear I'm not a bot" visitors per (iirc) half a day to 20 such visitors. Twenty is much more in-line with expectations.
-
This post did not contain any content.
Non-paywalled link: https://archive.is/VcoE1
It basically boils down to making the browser do some CPU-heavy calculations before allowing access. This is no problem for a single user, but for a bot farm it would increase the amount of compute power they need 100x or more.
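For a concrete picture, here's a minimal Go sketch of the kind of hash-based proof-of-work this relies on; the challenge string and difficulty are made up for illustration, and this is not Anubis's actual code:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/bits"
)

// leadingZeroBits counts how many leading zero bits a hash has.
func leadingZeroBits(sum [32]byte) int {
	zeros := 0
	for _, b := range sum {
		if b == 0 {
			zeros += 8
			continue
		}
		zeros += bits.LeadingZeros8(b)
		break
	}
	return zeros
}

// solve brute-forces a nonce so that SHA-256(challenge || nonce)
// starts with `difficulty` zero bits. The server only has to hash
// once to verify; the client has to try roughly 2^difficulty nonces.
func solve(challenge string, difficulty int) uint64 {
	var nonce uint64
	buf := make([]byte, 8)
	for {
		binary.BigEndian.PutUint64(buf, nonce)
		sum := sha256.Sum256(append([]byte(challenge), buf...))
		if leadingZeroBits(sum) >= difficulty {
			return nonce
		}
		nonce++
	}
}

func main() {
	// Hypothetical values, purely for illustration.
	nonce := solve("example-challenge-token", 20)
	fmt.Println("found nonce:", nonce)
}
```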
-
Thanks. Anything special configuring it?
I keep my server config in a public git repo, but I don't think you have to do anything really special to make it work with Lemmy. Since I use Traefik, I followed the guide for setting up Anubis with Traefik.
I don't expect to run into issues, as Anubis specifically looks for user-agent strings that appear to come from human users (i.e. they contain the word "Mozilla", as most graphical web browsers do). Any request clearly coming from a bot that identifies itself is left alone, and Lemmy identifies itself as "Lemmy/{version} +{hostname}" in requests.
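To illustrate the triage described above, here's a simplified sketch (my own toy version, not Anubis's policy engine); the Lemmy version and hostname below are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// decide mirrors the idea described above: requests whose user agent
// looks like a graphical browser ("Mozilla") get the proof-of-work
// challenge, while clients that identify themselves honestly are let
// through untouched.
func decide(userAgent string) string {
	if strings.Contains(userAgent, "Mozilla") {
		return "challenge"
	}
	return "pass"
}

func main() {
	fmt.Println(decide("Mozilla/5.0 (X11; Linux x86_64) Firefox/128.0")) // challenge
	fmt.Println(decide("Lemmy/0.19.5 +https://example.social"))          // pass
}
```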
-
Non-paywalled link: https://archive.is/VcoE1
It basically boils down to making the browser do some CPU-heavy calculations before allowing access. This is no problem for a single user, but for a bot farm it would increase the amount of compute power they need 100x or more.
Exactly. It's called proof-of-work; it was originally invented to fight email spam and was later used by Bitcoin to control the rate at which new blocks are created.
-
I’d like to use Anubis but the strange hentai character as a mascot is not too professional
BRB gonna add a kill the corpo to the kill the boer
-
This post did not contain any content.
It won't protect more than one subdomain, I think.
-
It is basically instantaneous on my 12-year-old Kepler GPU Linux box. It is substantially less impactful on the environment than AI tar pits and other deterrents. The cryptography happening is something almost all browsers from the last 10 years can do natively, but that scrapers have to be individually programmed to do, making it several orders of magnitude beyond impractical for every single corporate bot to be repurposed. Only to then be rendered moot, because it's an open-source project that someone will just update the cryptographic algorithm for. These posts contain links to articles; if you read them, you might answer some of your own questions and have more to contribute to the conversation.
It is basically instantaneous on my 12-year-old Kepler GPU Linux box.
It depends on what the website admin sets, but I've had checks take more than 20 seconds on my reasonably modern phone. And as scrapers get more ruthless, that difficulty setting will have to go up.
The cryptography happening is something almost all browsers from the last 10 years can do natively, but that scrapers have to be individually programmed to do, making it several orders of magnitude beyond impractical for every single corporate bot to be repurposed.
At best these browsers are going to have some efficient CPU implementation. Scrapers can send these challenges off to dedicated GPU farms or even FPGAs, which are an order of magnitude faster and more efficient. This is also not complex; a team of engineers could set it up in a few days.
Only to then be rendered moot, because it's an open-source project that someone will just update the cryptographic algorithm for.
There might be something in changing to a better, GPU-resistant algorithm like Argon2, but browsers don't support those natively, so you would rely on an even less efficient implementation in JS or Wasm. Quickly changing details of the algorithm in a game of whack-a-mole could work to an extent, but that would turn this into an arms race, and the scrapers can afford far more development time than the maintainers of Anubis.
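To give a feel for what a memory-hard primitive like that looks like, here's a small sketch using Go's golang.org/x/crypto/argon2 package; the parameters and challenge values are made up, and Anubis does not currently do this:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/argon2"
)

func main() {
	// Hypothetical challenge token and salt issued by the server.
	challenge := []byte("example-challenge-token")
	salt := []byte("example-salt-1234")

	// Argon2id with 3 passes over 64 MiB of memory and 1 thread
	// (made-up tuning). The memory requirement is what makes GPU/FPGA
	// farms far less effective than they are against plain SHA-256.
	key := argon2.IDKey(challenge, salt, 3, 64*1024, 1, 32)

	fmt.Printf("derived key: %x\n", key)
}
```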
These posts contain links to articles; if you read them, you might answer some of your own questions and have more to contribute to the conversation.
This is very condescending. I would prefer if you would just engage with my arguments.
-
<Stupidquestion>
What advantage does this software provide over simply banning bots via robots.txt?
</Stupidquestion>
The difference is:
- robots.txt is a promise without a door
- Anubis is an actual closed door that opens after some time
-
It takes like half a second on my Fairphone 3, and the CPU in this thing is absolute dogshit. I also doubt that the power consumption is particularly significant compared to the overhead of parsing, executing and JIT-compiling the 14MiB of JavaScript frameworks on the actual website.
It depends on the website's setting. I have the same phone and there was one website where it took more than 20 seconds.
The power consumption is significant, because it needs to be. That is the entire point of this design. If it doesn't take a significant number of CPU cycles, scrapers will just power through the challenges. This may not be significant for an individual user, but it does add up when this reaches widespread adoption and everyone's devices have to solve those challenges.
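To make the scaling concrete: with a leading-zero-bits scheme, each extra bit of difficulty doubles the expected work, and a setting that costs a phone whole seconds costs a GPU farm almost nothing. A back-of-the-envelope sketch with assumed, purely illustrative hash rates:

```go
package main

import "fmt"

func main() {
	// Assumed, illustrative hash rates only.
	phoneHashesPerSec := 1e6  // a modest phone CPU running JS/Wasm
	farmHashesPerSec := 1e10  // a GPU rig running a native implementation

	for _, difficulty := range []int{16, 20, 24} {
		expected := float64(uint64(1) << uint(difficulty)) // ~2^difficulty attempts
		fmt.Printf("difficulty %d: phone ~%.2fs, farm ~%.6fs\n",
			difficulty, expected/phoneHashesPerSec, expected/farmHashesPerSec)
	}
}
```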
-
Non-paywalled link: https://archive.is/VcoE1
It basically boils down to making the browser do some CPU-heavy calculations before allowing access. This is no problem for a single user, but for a bot farm it would increase the amount of compute power they need 100x or more.
Thank you for the link. Good read
-
she’s working on a non-cryptographic challenge so it taxes users’ CPUs less, and also thinking about a version that doesn’t require JavaScript
Sounds like the developer of Anubis is aware and working on these shortcomings.
Still, IMO these are minor short term issues compared to the scope of the AI problem it's addressing.
To be clear, I am not minimizing the problems of scrapers. I am merely pointing out that this strategy of proof-of-work has nasty side effects and we need something better.
These issues are not short-term. PoW means entering an arms race against an adversary with bottomless pockets, and it inherently requires a ton of useless computation in the browser.
When it comes to moving towards something based on heuristics, which is what the developer was talking about there, that is much better. But that is basically what many others are already doing (like the "I am not a robot" checkmark) and fundamentally different from the PoW that I argue against.
Go do heuristics, not PoW.
-
Oh no why can't the web be even more boring and professional
It’s just not my style, OK, is all I’m saying, and it’s nothing I’d be able to get past my superiors as a recommendation of software to use.