FediDB has stoped crawling until they get robots.txt support
-
ANY SITE THIS SOFTWARE IS APPLIED TO WILL LIKELY DISAPPEAR FROM ALL SEARCH RESULTS.
I’m sold
-
-
-
You can consent to a federation interface without consenting to having a bot crawl all your endpoints.
Just because something is available on the internet doesn't mean all uses are legitimate - this is effectively the same problem as AI training on stolen content.
-
Yes. I wholeheartedly agree. Not every use is legitimate. But I'd really need to know what exactly happened and the whole story to judge here. I'd say if it were a proper crawler, they'd need to read the robots.txt. That's accepted consensus. But is that what happened here?
And I mean the whole thing with consensus and arbitrary use cases is just complicated. I have a website, and a Fediverse instance. Now you visit it. Is this legitimate? We'd need to factor in why I put it there. And what you're doing with that information. If it's my blog, it's obviously there for you to read it... Or is it!? But that's implied consent. I'd argue this is how the internet works. And most of the time it's super easy to tell what's right and what is wrong. But sometimes it isn't.
-
Robots.txt started in 1994.
It's been a consensus for decades.
Why throw it out and replace it with implied consent to scrape?
That's why I said legally there's nothing they can do. If people want to scrape it they can and will.
This is strictly about consent. Just because you can doesn't mean you should, yes?
-
I just think you're making it way more simple than it is... Why not implement 20 other standards that have been around for 30 years? Why not make software perfect and without issues? Why not anticipate what other people will do with your public API endpoints in the future?
There could be many reasons. They forgot, they didn't bother, they didn't consider themselves to be the same as a commercial Google or Yandex crawler... That's why I keep pushing for information and refuse to give a simple answer. Could be an honest mistake. Could be honest and correct to do it and the other side is wrong, since it's not a crawler like Google or the AI copyright thieves... Could be done maliciously. In my opinion, it's likely that it hadn't been an issue before, the situation changed and now this needs a solution. And we're getting one. Seems at least FediDB took it offline and they're working on robots.txt support. They did not refuse to do it. So it's fine. And I can't comment on why it hadn't been in place. I'm not involved with that project and the history of its development.
-
Maybe the definition of the term "crawler" has changed but crawling used to mean downloading a web page, parsing the links and then downloading all those links, parsing those pages, etc etc until the whole site has been downloaded. If there were links going to other sites found in that corpus then the same process repeats for those. Obviously this could cause heavy load, hence robots.txt.
FediDB isn't doing anything like that, so I'm a bit bemused by this whole thing.
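For reference, the recursive process described above can be sketched in a few lines of Python. The three-page "site" is made up, and the fetcher is injected as a function so the loop runs without touching the network:

```python
# Sketch of the classic crawl loop: fetch a page, parse its links,
# fetch those, and so on until the site is exhausted.
# Breadth-first, restricted to the starting host.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkParser(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

def crawl(start_url, fetch, max_pages=100):
    """Visit pages breadth-first; `fetch(url)` returns a page's HTML."""
    host = urlparse(start_url).netloc
    seen, queue, visited = {start_url}, deque([start_url]), []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        visited.append(url)
        parser = LinkParser()
        parser.feed(fetch(url))
        for href in parser.links:
            absolute = urljoin(url, href)
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return visited

# A made-up three-page "site" served from memory instead of the network.
pages = {
    "http://example.com/": '<a href="/a">A</a> <a href="/b">B</a>',
    "http://example.com/a": '<a href="/b">B</a>',
    "http://example.com/b": "no links here",
}
print(crawl("http://example.com/", lambda u: pages.get(u, "")))
# ['http://example.com/', 'http://example.com/a', 'http://example.com/b']
```

Pinging a handful of well-known endpoints to read an MAU number is a very different access pattern from this open-ended traversal.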
-
It's been a consensus for decades
Let's see about that.
Wikipedia lists http://www.robotstxt.org as the official homepage of robots.txt and the "Robots Exclusion Protocol". In the FAQ at http://www.robotstxt.org/faq.html the first entry is "What is a WWW robot?" http://www.robotstxt.org/faq/what.html. It says:
A robot is a program that automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.
That's not FediDB. That's not even nodeinfo.
-
https://lemmyverse.net/ still crawling, baby.
-
From your own wiki link
robots.txt is the filename used for implementing the Robots Exclusion Protocol, a standard used by websites to indicate to visiting web crawlers and other web robots which portions of the website they are allowed to visit.
How is FediDB not an "other web robot"?
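Either way, honoring the file is cheap: Python ships a parser for it in the standard library. The rules and URLs below are hypothetical, just to show the check:

```python
# Checking URLs against robots.txt rules with Python's stdlib parser.
# The rules and URLs here are hypothetical, for illustration only.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /api/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("SomeBot", "https://example.com/api/v1/instance"))  # False
print(rp.can_fetch("SomeBot", "https://example.com/about"))            # True
```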
-
stoped
Well, they needed to stope. Stope, I said. Lest thy carriage spede into the crosseth-rhodes.
-
Ooh, nice.
-
We can't afford to wait at every sop, yeld, or one vay sign!
-
Ok if you want to focus on that single phrase and ignore the whole rest of the page which documents decades of stuff to do with search engines and not a single mention of api endpoints, that's fine. You can have the win on this, here's a gold star.
-
Whan that Aprill with his shoures soote
-
if you run a federated service... Is that enough to assume you consent
If she says yes to the marriage, that doesn't mean she permanently says yes to sex. I can run a fully air-gapped "federated" instance if I want to.
-
Hmmh, I don't think we'll come to an agreement here. I think marriage is a good example, since that comes with lots of implicit consent. First of all you expect to move in together after you got engaged. You do small things like expect to eat dinner together. It's not a question anymore whether everyone cooks their own meal each day. And it extends to big things. Most people expect one party cares for the other once they're old. And stuff like that. And yeah. Intimacy isn't granted. There is a protocol to it. But I'm way more comfortable to make the moves on my partner, than for example place my hands on a stranger on the bus, and see if they take my invitation...
Isn't that how it works? I mean going with your analogy... Sure, you can marry someone and never touch each other or move in together. But that's kind of a weird one, in my opinion. Of course you should be able to do that. But it might require some more explicit agreement than going the default route. And I think that's what happened here. Assumptions have been made, those turned out to be wrong and now people need to find a way to deal with it so everyone's needs are met...
-
Going by your example
Air gapping my service is the agreement you're talking about in this analogy, but otherwise I do actually agree with you. There is a lot of implied consent, but I think we have a slight misunderstanding on one part.
In this scenario (analogies are nice but let's get to reality) crawling the website to check the MAU, as harmless as it is, is still adding load to the server. A tiny amount, sure, but if you're going to increase my workload by even 1% I wanna know beforehand. Thus, I put things on my website that say "don't increase my workload" like robots.txt and whatnot.
Other people aren't this concerned with their workload, in which case it might be fine to go with implied consent. However, it's always best to follow the best practices and just make sure with the owner of a server that it's okay to do anything to their server IMO
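Concretely, that "don't increase my workload" signal is just a text file at the site root. A hypothetical example that waves off a stats crawler entirely and throttles everyone else (note that Crawl-delay is nonstandard, though widely honored):

```
# Hypothetical robots.txt at https://example.com/robots.txt
User-agent: StatsBot      # made-up crawler name
Disallow: /               # this bot may fetch nothing

User-agent: *
Disallow: /api/           # keep everyone off the API endpoints
Crawl-delay: 10           # nonstandard, but widely honored
```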
-
Okay,
So why reinvent a standard? Why replace one that serves functionally the same purpose with one of implied consent?