The Open-Source Software Saving the Internet From AI Bot Scrapers
-
<Stupidquestion>
What advantage does this software provide over simply banning bots via robots.txt?
</Stupidquestion>
Well, now that y'all put it that way, I think it was pretty naive of me to think that these companies, whose business model is basically theft, would honour a lousy robots.txt file...
-
I actually really like the developer's rationale for why they use an anime character as the mascot.
The whole blog post is worth reading, but the TL;DR is this:
Of course, nothing is stopping you from forking the software to replace the art assets. Instead of doing that, I would rather you support the project and purchase a license for the commercial variant of Anubis named BotStopper. Doing this will make sure that the project is sustainable and that I don't burn myself out to a crisp in the process of keeping small internet websites open to the public.
At some level, I use the presence of the Anubis mascot as a "shopping cart test". If you either pay me for the unbranded version or leave the character intact, I'm going to take any bug reports more seriously. It's a positive sign that you are willing to invest in the project's success and help make sure that people developing vital infrastructure are not neglected.
This is a great compromise, honestly. More OSS devs need to be paid for their work, and if an anime character helps do that, I'm all for it.
-
Ooh can this work with Lemmy without affecting federation?
-
<Stupidquestion>
What advantage does this software provide over simply banning bots via robots.txt?
</Stupidquestion>
I mean, you could have read the article before asking; it's literally in there...
-
Ooh can this work with Lemmy without affecting federation?
Yeah, it's already deployed on slrpnk.net. I see it momentarily every time I load the site.
-
I get that website admins are desperate for a solution, but Anubis is fundamentally flawed.
It is hostile to the user, because it is very slow on older hardware and forces you to use JavaScript.
It is bad for the environment, because it wastes energy on useless computations similar to mining crypto. If more websites start using this, that really adds up.
But most importantly, it won't work in the end. These scraping tech companies have much deeper pockets and can use specialized hardware that is much more efficient at solving these challenges than a normal web browser.
It takes like half a second on my Fairphone 3, and the CPU in this thing is absolute dogshit. I also doubt that the power consumption is particularly significant compared to the overhead of parsing, executing and JIT-compiling the 14MiB of JavaScript frameworks on the actual website.
-
Just recently there was a guy on the NANOG list ranting that Anubis is the wrong approach: people should just cache properly, and then their servers would handle thousands of users and the bots wouldn't matter; anyone who puts Git online has no one to blame but themselves; e-commerce should just be made cacheable; etc. Seemed a bit idealistic, a bit detached from the current reality.
-
Ooh can this work with Lemmy without affecting federation?
As long as it's not configured improperly. When the Forgejo devs added it, it briefly broke pulling images with Kubernetes. Basically, you need to make sure the user-agent header used for federation is allowed.
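For reference, an allow rule in Anubis's bot policy file would look roughly like this (a sketch from memory of the policy format, so the exact field names are an assumption; check the current Anubis docs):

```yaml
bots:
  # Let Lemmy federation traffic through unchallenged. Lemmy identifies
  # itself as "Lemmy/{version} +{hostname}", so match on that prefix.
  - name: lemmy-federation
    user_agent_regex: "^Lemmy/"
    action: ALLOW
```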
-
I get that website admins are desperate for a solution, but Anubis is fundamentally flawed.
It is hostile to the user, because it is very slow on older hardware and forces you to use JavaScript.
It is bad for the environment, because it wastes energy on useless computations similar to mining crypto. If more websites start using this, that really adds up.
But most importantly, it won't work in the end. These scraping tech companies have much deeper pockets and can use specialized hardware that is much more efficient at solving these challenges than a normal web browser.
A JavaScript-less check was released recently; I just read about it. It uses an HTML refresh tag and a delay. It's not the default though, since it's new.
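If it works the way I think it does, the general idea is something like this (an illustrative sketch, not Anubis's actual markup; the token parameter is made up):

```html
<!-- Interstitial page: wait a few seconds, then follow a tokenized URL.
     A real browser follows the refresh with no JavaScript involved,
     while a scraper hammering URLs won't sit out the delay. -->
<meta http-equiv="refresh" content="5; url=/?challenge-token=abc123">
```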
-
I’d like to use Anubis but the strange hentai character as a mascot is not too professional
Oh no why can't the web be even more boring and professional
-
Ooh can this work with Lemmy without affecting federation?
Yes.
Source: I use it on my instance and federation works fine
-
Yes.
Source: I use it on my instance and federation works fine
Thanks. Anything special configuring it?
-
I’d like to use Anubis but the strange hentai character as a mascot is not too professional
hentai character
anime != hentai
I smile whenever I encounter the Anubis character in the wild. She's holding up the free software internet on her shoulders after all.
-
My archive's server uses Anubis and after initial configuration it's been pain-free. Also, I'm no longer getting multiple automated emails a day about how the server's timing out. It's great.
We went from about 3000 unique "pinky swear I'm not a bot" visitors per (iirc) half a day to 20 such visitors. Twenty is much more in line with expectations.
-
Non-paywalled link: https://archive.is/VcoE1
It basically boils down to making the browser do some CPU-heavy calculations before allowing access. This is no problem for a single user, but for a bot farm this would increase the amount of compute power they need 100x or more.
-
Thanks. Anything special configuring it?
I keep my server config in a public git repo, but I don't think you have to do anything really special to make it work with Lemmy. Since I use Traefik, I followed the guide for setting up Anubis with Traefik.
I don't expect to run into issues, as Anubis specifically looks for user-agent strings that appear to come from human users (i.e. they contain the word "Mozilla", as most graphical web browsers do). Any request clearly coming from a bot that identifies itself is left alone, and Lemmy identifies itself as "Lemmy/{version} +{hostname}" in requests.
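In code, the gating described above boils down to something like this (my own sketch of the behaviour, not Anubis's actual implementation; the example values are made up):

```typescript
// Challenge anything that looks like a graphical browser and let
// self-identifying bots (like Lemmy federation traffic) pass through.
function needsChallenge(userAgent: string): boolean {
  // Virtually every graphical browser sends "Mozilla" in its UA string;
  // honest bots identify themselves with something else entirely.
  return userAgent.includes("Mozilla");
}

console.log(needsChallenge("Mozilla/5.0 (X11; Linux x86_64)")); // true: challenged
console.log(needsChallenge("Lemmy/0.19 +https://slrpnk.net"));  // false: left alone
```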
-
Non-paywalled link: https://archive.is/VcoE1
It basically boils down to making the browser do some CPU-heavy calculations before allowing access. This is no problem for a single user, but for a bot farm this would increase the amount of compute power they need 100x or more.
Exactly. It's called proof-of-work; it was originally invented to reduce email spam and was later adopted by Bitcoin to control how fast new blocks are created.
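For anyone curious what that looks like concretely, here's a minimal hashcash-style sketch (illustrative only; Anubis's real challenge format and difficulty settings differ):

```typescript
// Minimal proof-of-work sketch: find a nonce whose SHA-256 digest,
// combined with the server's challenge, starts with N zero hex digits.
import { createHash } from "node:crypto";

function solve(challenge: string, difficulty: number): number {
  const prefix = "0".repeat(difficulty);
  // Cheap for one visitor, ruinously expensive at bot-farm scale.
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256").update(challenge + nonce).digest("hex");
    if (digest.startsWith(prefix)) return nonce;
  }
}

// The server verifies the submitted nonce with a single hash.
function verify(challenge: string, nonce: number, difficulty: number): boolean {
  const digest = createHash("sha256").update(challenge + nonce).digest("hex");
  return digest.startsWith("0".repeat(difficulty));
}

console.log(solve("example-challenge", 4)); // ~65,000 hashes on average
```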
-
I’d like to use Anubis but the strange hentai character as a mascot is not too professional
BRB gonna add a kill the corpo to the kill the boer
-
It won't protect more than one subdomain, I think.
-
It is basically instantaneous on my 12-year-old Kepler GPU Linux box. It is substantially less impactful on the environment than AI tar pits and other deterrents. The cryptography happening is something almost all browsers from the last 10 years can do natively, while scrapers have to be individually programmed to do it, making it several orders of magnitude beyond impractical for every single corporate bot to be repurposed for. Only to then be rendered moot, because it's an open-source project that someone will just update the cryptographic algorithm for. These posts contain links to articles; if you read them you might answer some of your own questions and have more to contribute to the conversation.
It is basically instantaneous on my 12-year-old Kepler GPU Linux box.
It depends on what the website admin sets, but I've had checks take more than 20 seconds on my reasonably modern phone. And as scrapers get more ruthless, that difficulty setting will have to go up.
The cryptography happening is something almost all browsers from the last 10 years can do natively, while scrapers have to be individually programmed to do it, making it several orders of magnitude beyond impractical for every single corporate bot to be repurposed for.
At best these browsers are going to have some efficient CPU implementation. Scrapers can send these challenges off to dedicated GPU farms or even FPGAs, which are an order of magnitude faster and more efficient. This is also not complex; a team of engineers could set it up in a few days.
Only to then be rendered moot, because it's an open-source project that someone will just update the cryptographic algorithm for.
There might be something in changing to a better, GPU-resistant algorithm like Argon2, but browsers don't support those natively, so you would rely on an even less efficient implementation in JS or WASM. Quickly changing details of the algorithm in a game of whack-a-mole could work to an extent, but that would turn this into an arms race, and the scrapers can afford far more development time than the maintainers of Anubis.
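To make the GPU-resistance point concrete, a memory-hard challenge might look roughly like this, using the argon2 npm package (parameter values here are arbitrary assumptions, and this is not anything Anubis actually ships):

```typescript
// Memory-hard proof-of-work sketch: each attempt forces ~64 MiB of
// memory traffic, so memory bandwidth, not raw hash rate, dominates,
// which is what erodes the GPU/FPGA advantage.
import argon2 from "argon2";

async function attempt(challenge: string, nonce: number): Promise<Buffer> {
  return argon2.hash(challenge + nonce, {
    type: argon2.argon2id,
    memoryCost: 65536, // in KiB, i.e. 64 MiB per hash
    timeCost: 3,
    parallelism: 1,
    salt: Buffer.from("fixed-demo-salt!"), // fixed so outputs are comparable
    raw: true,
  });
}

// Find a nonce whose Argon2 output starts with a zero byte
// (~256 attempts on average; tune the target for real difficulty).
async function solve(challenge: string): Promise<number> {
  for (let nonce = 0; ; nonce++) {
    const out = await attempt(challenge, nonce);
    if (out[0] === 0) return nonce;
  }
}
```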
These posts contain links to articles; if you read them you might answer some of your own questions and have more to contribute to the conversation.
This is very condescending. I would prefer if you would just engage with my arguments.