Cloudflare announces AI Labyrinth, which uses AI-generated content to confuse and waste the resources of AI Crawlers and bots that ignore “no crawl” directives.
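As a rough sketch of the tarpit idea only (hypothetical names and paths, not Cloudflare's actual implementation): a client that requests a disallowed path despite the "no crawl" directive gets linked into an endless maze of machine-generated decoy pages instead of real content.

```python
# Hypothetical sketch of the tarpit concept, NOT Cloudflare's implementation.
# A crawler that ignores a "no crawl" directive gets decoy pages that only
# link to more decoys, wasting its time and compute.

DISALLOWED_PREFIXES = ("/private/",)  # paths robots.txt asks bots to skip

def respond(path: str, client_respects_robots: bool) -> str:
    ignored_directive = (
        path.startswith(DISALLOWED_PREFIXES) and not client_respects_robots
    )
    if ignored_directive or path.startswith("/maze/"):
        # Every decoy page links deeper into the machine-generated maze.
        return f"<p>plausible filler text</p><a href='/maze/{len(path)}'>more</a>"
    return "real content"
```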
-
Autocracy has been 100% guaranteed in the USA since November; Schumer never had any choice in the matter. Whether or not Congress went into recess, Trump would be acting like a God King. The only people who can do anything about it? Congress, who in this hypothetical wouldn't be able to because of the recess, and in our actual reality aren't able to because more than half of them are traitors.
We'll never know what happened because the senate couldn't even filibuster it for a day to try to force a negotiation or a shutdown. There was no reason for 10 senators to vote yes on the CR.
-
We'll never know what happened because the senate couldn't even filibuster it for a day to try to force a negotiation or a shutdown. There was no reason for 10 senators to vote yes on the CR.
I don't like that 10 dnc senators voted yes, but I absolutely place 100% of the blame on the GOP.
Reminder that failure to pass a resolution by the 14th, within 4 days of its introduction on the 10th, would have automatically triggered a government shutdown and a recess of Congress, giving Trump even more authority to discontinue payments to government programs and offices. The resolution was signed into law on the 15th of March. Republicans left no room for negotiation.
Any Democrat is better than Any Republican.
-
This post did not contain any content.
-
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
-
This post did not contain any content.
-
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
I find this amusing. I had a conversation with an older relative who asked about AI because I am "the computer guy" he knows. I explained, as best I understand them, how LLMs operate: they pattern-match to guess what the next token should be based on statistical probability. I explained that they sometimes hallucinate or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding; they simply respin fragments to try to generate a response that pleases the asker.
He observed, "oh we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That's good, religions that have become untethered from day to day practical life have never caused problems for anyone."
Which I found scarily insightful.
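The "guess the statistically likely next token" loop described above can be sketched with a toy word-frequency model (made-up counts; real LLMs use a neural network over subword tokens, but the loop has the same shape):

```python
import random

# Toy next-token predictor: given the current word, sample the next one from
# observed continuation frequencies. The counts here are invented for
# illustration.

counts = {
    "the": {"cat": 3, "dog": 2},
    "cat": {"sat": 4, "ran": 1},
}

def next_word(word, rng=random.Random(0)):
    options = counts[word]
    words, weights = zip(*options.items())
    # Weighted random choice: more frequent continuations are more likely.
    return rng.choices(words, weights=weights)[0]

print(next_word("the"))  # "cat" or "dog", weighted 3:2
```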
-
I'm glad we're burning the forests even faster in the name of identity politics.
-
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
i mean this is just designed to thwart ai bots that refuse to follow robots.txt rules of people who specifically blocked them.
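For reference, opting out looks like this in robots.txt (GPTBot and CCBot are the published user-agent tokens for OpenAI's and Common Crawl's crawlers); Labyrinth targets the bots that ignore it:

```
# Ask AI crawlers to stay out; compliant bots honor this,
# the ones Labyrinth targets do not.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```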
-
Definitely falls under the category of a Trap ICE card.
-
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
-
No, it is far less environmentally friendly than RC bots made of metal, plastic, and electronics full of nasty little things like batteries, blasting, sawing, burning, and smashing one another to pieces.
-
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
Here's the key distinction:
This only makes AI models unreliable if they ignore "don't scrape my site" requests. If they respect the requests of the sites whose data they're profiting from, then there's no issue.
People want AI models to be reliable, but they also want them to operate with integrity in the first place, and not profit from the work of people who have explicitly opted it out of training.
-
I find this amusing. I had a conversation with an older relative who asked about AI because I am "the computer guy" he knows. I explained, as best I understand them, how LLMs operate: they pattern-match to guess what the next token should be based on statistical probability. I explained that they sometimes hallucinate or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding; they simply respin fragments to try to generate a response that pleases the asker.
He observed, "oh we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That's good, religions that have become untethered from day to day practical life have never caused problems for anyone."
Which I found scarily insightful.
Oh good.
Now I can add "digital jihad by hallucinating AI" to my list of existential terrors.
Thank your relative for me.
-
Here's the key distinction:
This only makes AI models unreliable if they ignore "don't scrape my site" requests. If they respect the requests of the sites whose data they're profiting from, then there's no issue.
People want AI models to be reliable, but they also want them to operate with integrity in the first place, and not profit from the work of people who have explicitly opted it out of training.
I'm a person.
I don't want AI, period.
We can't even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet Judgment Day.
-
I'm a person.
I don't want AI, period.
We can't even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet Judgment Day.
We can't even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet Judgment Day.
That is simply not how "AI" models today are structured; that is entirely a fabrication based on science-fiction media.
An LLM is a series of matrix multiplication problems that the tokens from a query are run through. It has no capability to be overworked, to know whether it has been used before (outside of its context window, which is itself just previously stored tokens added to the math problem), to change itself, or to arbitrarily access any system resources.
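A toy illustration of that point (invented numbers, nothing like a real model's scale): the "model" is just fixed weights, and a longer context window is just more input rows averaged together before one matrix multiply; nothing in it is stateful or touches system resources.

```python
# Toy "forward pass": fixed weight matrices multiplied against the token
# context. Nothing here can get tired, remember previous runs, or touch
# the filesystem; more context is just more rows of input.

WEIGHTS = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # fixed 3x2 "model" (vocab of 3)

def matvec(matrix, vec):
    # Plain matrix-vector product.
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

def next_token_scores(context_embeddings):
    # Pool the context (a crude stand-in for attention), then one multiply.
    pooled = [sum(col) / len(context_embeddings) for col in zip(*context_embeddings)]
    return matvec(WEIGHTS, pooled)

scores = next_token_scores([[1.0, 0.0], [0.0, 1.0]])  # two-token context
print(scores)  # one score per vocabulary entry
```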
-
Oh good.
Now I can add "digital jihad by hallucinating AI" to my list of existential terrors.
Thank your relative for me.
Not if we go Butlerian Jihad on them first.
-
while allowing legitimate users and verified crawlers to browse normally.
What is a "verified crawler" though? What I worry about is, is it only big companies like Google that are allowed to have them now?
Cloudflare isn't the best at blocking things. As long as your crawler isn't horribly misconfigured, you shouldn't have many issues.
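For Googlebot specifically, "verified" isn't gated on being Google-sized: Google documents a forward-confirmed reverse DNS check that anyone can run. A sketch (this is Google's published scheme for its own crawler; other operators publish their own hostnames or IP lists):

```python
import socket

# Forward-confirmed reverse DNS: reverse-resolve the client IP, check the
# hostname is on a Google domain, then resolve that hostname forward and
# confirm it maps back to the same IP (so the PTR record can't be spoofed).

def is_verified_googlebot(ip: str) -> bool:
    try:
        host, _, _ = socket.gethostbyaddr(ip)          # reverse DNS lookup
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        forward_ips = {info[4][0] for info in socket.getaddrinfo(host, None)}
    except OSError:
        return False
    return ip in forward_ips                           # forward confirmation
```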
-
This post did not contain any content.
Joke's on them. I'm going to use AI to estimate the value of content, and now I'll get the kind of content I want, fake though it is, that they will have to generate.
-
That's what I do too, with less accuracy and knowledge. I don't get why I have to hate this. It feels like a bunch of cavemen telling me to hate fire because it might burn the food.
Because we have better methods that are easier, cheaper, and less damaging to the environment. They are solving nothing and wasting a fuckton of resources to do so.
It's like telling cavemen they don't need fire because you can mount an expedition to the nearest volcano to cook food without the need for fuel, then bring it back to them.
The best case scenario is the LLM tells you information that is already available on the internet, but 50% of the time it just makes shit up.
-
I'm glad we're burning the forests even faster in the name of identity politics.
Well, that was a swing and a miss; back to the dugout with you, dumbass.