AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
-
[email protected] replied to [email protected]
No, you’re being a petulant, naysaying child. Leave us alone and go play with your Duplos. Adults are talking.
-
[email protected] replied to [email protected]
How many hobby website admins have load balancing for their small sites? How many have decommissioned hardware? Because if you find me a corporation willing to accept the liability that doing something like this could open them up to, I'll pay you a million dollars.
-
[email protected] replied to [email protected]
Bigotry? From a lemmy user? Never seen it before!
If you don't like what I'm saying, block me and move along. Or report my comments, if you think they're offensive enough. If I'm breaking a rule or the mods don't like what I have to say, maybe they'll remove them, or even ban me from the comm! That's the limit of your options for getting rid of me though.
-
[email protected] replied to [email protected]
Interesting. Mega supporters are now cold-blooded.
-
[email protected] replied to [email protected]
Bigotry lmao talk about a Hail Mary.
-
[email protected] replied to [email protected]
I get that the Internet doesn't contain an infinite number of domains. Max visits to each one can be limited. Hel-lo, McFly?
-
[email protected] replied to [email protected]
It's one domain. It's infinite pages under that domain. Limiting max visits per domain is a very different thing from trying to detect loops which aren't there. You are now making a completely different argument. In fact it sounds suspiciously like the only thing I said they could do: have some arbitrary threshold, beyond which they give up... because there's no way of detecting otherwise.
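To be concrete, that threshold approach is nothing smarter than a per-domain page counter. A minimal sketch, with a made-up cap, a made-up seed URL, and a stubbed-out fetch:

```python
# Per-domain budget: give up on a domain after N pages, tarpit or not.
# MAX_PAGES_PER_DOMAIN and the seed URL are invented values for illustration.
from collections import defaultdict
from urllib.parse import urlparse

MAX_PAGES_PER_DOMAIN = 10_000  # arbitrary threshold; a tarpit just burns this budget

def fetch_links(url):
    """Placeholder for a real HTTP fetch + link extraction."""
    return []

pages_fetched = defaultdict(int)
frontier = ["https://example.com/"]
seen = set(frontier)

while frontier:
    url = frontier.pop()
    domain = urlparse(url).netloc
    if pages_fetched[domain] >= MAX_PAGES_PER_DOMAIN:
        continue  # budget spent: stop crawling this domain, no loop detection needed
    pages_fetched[domain] += 1
    for link in fetch_links(url):
        if link not in seen:
            seen.add(link)
            frontier.append(link)
```

Note it never detects the tarpit; it just stops caring once the budget runs out.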
-
[email protected] replied to [email protected]
I'm a software developer responding to a coding problem. If it's all under one domain, then avoiding infinite visits is even simpler: I would create a list of known huge websites like Google and Wikipedia and limit the visits to any domain that is not on that list. This would eliminate the need to track where the honeypot is deployed.
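Roughly what I have in mind, with placeholder domains and made-up limits:

```python
# Whitelist sketch: known huge sites get a generous budget, everything else
# gets a small one. Domain names and numbers are illustrative only.
from collections import defaultdict
from urllib.parse import urlparse

KNOWN_HUGE_SITES = {"google.com", "wikipedia.org"}  # example entries
SMALL_SITE_LIMIT = 1_000       # invented cap for unlisted domains
HUGE_SITE_LIMIT = 10_000_000   # effectively unlimited for listed ones

pages_fetched = defaultdict(int)

def may_fetch(url):
    """Return True if the per-domain budget still allows fetching this URL."""
    domain = urlparse(url).netloc.removeprefix("www.")
    limit = HUGE_SITE_LIMIT if domain in KNOWN_HUGE_SITES else SMALL_SITE_LIMIT
    if pages_fetched[domain] >= limit:
        return False
    pages_fetched[domain] += 1
    return True
```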
-
[email protected] replied to [email protected]
Yes, but now you've shifted the problem again. You went from detecting infinite sites by spotting loops (in a tree that has no loops, just infinitely many distinct URLs), to somehow keeping a list of all those distinct URLs so you never visit one twice (which you wouldn't anyway, because there are infinite links), to assuming you already have a list of which sites are tarpits so you can avoid them and never have to detect them, which is the very thing you started with.
It's OK to admit that your initial idea was wrong. You did not solve a coding problem. You changed the requirements so it's not your problem anymore.
And storing a domain whitelist wouldn't work either, btw. A tarpit entrance is just one URL among lots of legitimate ones, on legitimate domains.
-
[email protected] replied to [email protected]
Okay fine, I 100% concede that you're right. Bye now.
-
[email protected] replied to [email protected]
Corrected... I guess maybe my IQ isn't on the right side of the bell curve either.
-
[email protected] replied to [email protected]
Ignorance is bliss.