FediDB has stopped crawling until they get robots.txt support
-
We have paused all crawling as of Feb 6th, 2025 until we implement robots.txt support.
Stats will not update during this period.
-
Forced to use https://lemmy.fediverse.observer/list to see which instances are the most active
-
Did someone complain? Or why stop?
-
This looks more accurate than FediDB, TBH: the initial surge from Reddit back in 2023, then the slow decline of active members. I personally think the number of users drops so much because certain instances turn off the ability for outside crawlers to get their user info.
-
No idea honestly. If anyone knows, let us know!
I don't think it's necessarily a bad thing. If their crawler was being too aggressive, it could accidentally DDoS smaller servers. I'm hoping that's what they're doing: respecting the robots.txt that some sites have.
-
I think it's just one HTTP request to the nodeinfo API endpoint once a day or so. Can't really be an issue regarding load on the instances.
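For context, the whole daily check can be sketched in a few lines. This is just an illustration of the standard nodeinfo flow, not FediDB's actual code, and example.social is a placeholder instance:

```python
# Fetch an instance's nodeinfo document: one request to the well-known
# index, one to the document it points at. (Illustrative sketch only.)
import json
import urllib.request

def fetch_nodeinfo(instance: str) -> dict:
    # The well-known index lists the URLs of the actual nodeinfo documents.
    with urllib.request.urlopen(f"https://{instance}/.well-known/nodeinfo") as resp:
        links = json.load(resp)["links"]
    # Fetch the first advertised document (usually nodeinfo 2.0 or 2.1).
    with urllib.request.urlopen(links[0]["href"]) as resp:
        return json.load(resp)

info = fetch_nodeinfo("example.social")  # placeholder instance
print(info["usage"]["users"])  # e.g. {"total": ..., "activeMonth": ...}
```

Either way, a couple of tiny GETs per instance per day is negligible load.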
-
GoToSocial has a setting in development that is designed to baffle bots that don't respect robots.txt. FediDB didn't know about that feature and thought GoToSocial was trying to inflate its stats.
In the arguments that went back and forth between the devs of the apps involved, it turned out that FediDB was ignoring robots.txt, i.e., it was badly behaved.
-
Interesting! Is there a Git issue about this somewhere? That could explain quite a bit.
-
It's not about the impact; it's about consent.
-
lol FediDB isn't a crawler, though. It makes API calls.
-
Robots.txt is a lot like email in that it was built for a far simpler time.
It would be better if the server could detect bots and send them down a rabbit hole rather than trusting randos to abide by the rules.
-
Because of AI bots ignoring robots.txt (especially when you don't explicitly mention their user agent and instead use a * wildcard), more and more people are implementing exactly that. I wouldn't be surprised if that's what triggered the need for FediDB to implement robots.txt support.
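And honoring it isn't much work either. Here's a minimal sketch using Python's stdlib parser; the user agent string is a made-up placeholder, not FediDB's real one:

```python
# Check robots.txt before fetching anything from an instance.
# RobotFileParser also applies "User-agent: *" rules when the
# specific agent isn't listed, which is the wildcard case above.
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleStatsBot"  # placeholder, not FediDB's real agent

def allowed(instance: str, path: str) -> bool:
    rp = RobotFileParser()
    rp.set_url(f"https://{instance}/robots.txt")
    rp.read()  # a missing robots.txt is treated as allow-all
    return rp.can_fetch(USER_AGENT, f"https://{instance}{path}")

if allowed("example.social", "/.well-known/nodeinfo"):
    pass  # safe to fetch; otherwise skip this instance
```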
-
True. The question here is: if you run a federated service, is that enough to assume you consent to federation?
-
Why invent implied consent when explicit consent has been the standard in robots.txt for ages now?
-
It would be better if the server could detect bots and send them down a rabbit hole
Already possible: Nepenthes.
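For anyone curious what a tarpit looks like mechanically, here's a toy version (not Nepenthes itself, just the idea): every generated page links to another generated page, so a bot that ignores robots.txt wanders forever while humans never follow those links.

```python
# Toy "rabbit hole": an endless maze of generated pages for bots
# that ignore robots.txt. Real tarpits like Nepenthes also throttle
# responses and generate filler text; this shows only the maze part.
import http.server
import random
import string

class Tarpit(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Every response contains a link to a fresh, randomly named page.
        token = "".join(random.choices(string.ascii_lowercase, k=8))
        body = f'<a href="/maze/{token}">deeper</a>'.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

http.server.HTTPServer(("", 8080), Tarpit).serve_forever()
```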
-
ANY SITE THIS SOFTWARE IS APPLIED TO WILL LIKELY DISAPPEAR FROM ALL SEARCH RESULTS.
I’m sold
-
They do have a dedicated "Crawler" page.
And they do mention there that they use a website crawler for their Developer Tools and Network features.
-
I guess because it's in the specification? Or absent from it? I'm not sure. Reading the ActivityPub specification is complicated, because you also need to read ActivityStreams and lots of other references, and I frequently miss stuff that is somewhere in there.
But generally we aren't Reddit, where someone just says: we prohibit third-party use, and everyone has to use our app on our terms. The whole point of the Fediverse and ActivityPub is to interconnect, and to connect people across platforms. And it doesn't even make a lot of assumptions. Developers aren't forced to implement a Facebook clone, or to do what Mastodon or GoToSocial do. They're relatively free to come up with new ideas and adapt things to their liking and use cases. That's what makes us great and diverse.
I personally see a public API endpoint as an invitation to use it. And that's kind of opposed to the consent thing.
But with that said... we need some consensus in some areas. There are use cases where things aren't obvious from the start. I'm just sad that everyone is so agitated and seems to just escalate. I'm not sure they tried talking to each other nicely first. I suppose it's not a big deal to just implement robots.txt support so everyone can be happy, without needing drama to get there.
-
You can consent to a federation interface without consenting to having a bot crawl all your endpoints.
Just because something is available on the internet doesn't mean all uses are legitimate; this is effectively the same problem as AI training on stolen content.
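Concretely, robots.txt can express exactly that split. Something like this (a hypothetical example, with made-up paths) keeps federation working, because servers deliver posts to each other directly rather than crawling, while asking stats bots to stay out:

```
# Hypothetical robots.txt for a Fediverse instance. Federation is
# unaffected (inbox delivery isn't crawling); well-behaved crawlers
# are asked to skip the metadata endpoints.
User-agent: *
Disallow: /.well-known/nodeinfo
Disallow: /api/
```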
-
Yes, I wholeheartedly agree: not every use is legitimate. But I'd really need to know what exactly happened, and the whole story, to judge here. I'd say if it were a proper crawler, it would need to read the robots.txt; that's accepted consensus. But is that what happened here?
And the whole thing with consent and arbitrary use cases is just complicated. I have a website and a Fediverse instance. Now you visit it. Is this legitimate? We'd need to factor in why I put it there, and what you're doing with that information. If it's my blog, it's obviously there for you to read it... or is it!? But that's implied consent. I'd argue that's how the internet works, and most of the time it's super easy to tell what's right and what's wrong. But sometimes it isn't.