Google's AI made up a fake cheese fact that wound up in an ad for Google's AI, perfectly highlighting why relying on AI is a bad idea
-
An LLM is a random person on the internet, or the first link on a search.
If you wouldn't blindly trust them, don't trust it.
-
I find LLMs very useful for setting up tech stuff. "How do I xyz in Docker?" They do a great job of boiling several disjointed how-tos that don't quite get me there down into one actually usable answer. I use them when googling and following articles isn't getting me anywhere, and it's often saved me a lot of time.
-
begs the question
Not it doesn't. Did an Ai slop this story too?
-
Especially considering that the "pointing out of said hallucinations" comes much later than when they're shared, and NEVER makes it as far and wide as the initial bullshit.
-
Not it doesn't. Did an Ai slop this story too?
No it doesn't. Did an AI slop this story too?
-
Why post the same comment?
-
That user goes around doing weird and pointless corrections to other people's comments, so I thought it'd be funny to do the same in turn.
-
They are also amazing at generating configuration that's subtly wrong.
If the bad LLM-generated configurations I caught during pull request reviews are any indication, there are plenty of less experienced teams out there running broken Kubernetes deployments.
Now, to be fair, inexperienced people would make similar mistakes, but inexperienced people are capable of learning from their mistakes.
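To give a sense of what "subtly wrong" looks like, here's a made-up sketch (not one of the actual PRs I reviewed): the manifest below applies cleanly and looks reasonable at a glance, but the readiness probe targets a port nothing listens on, so the pods never become Ready.

```yaml
# Hypothetical example, not from a real PR: a Deployment that is valid but subtly broken.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:latest    # placeholder image
          ports:
            - containerPort: 3000      # the app actually listens here
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080               # probe hits a port nothing listens on, so pods stay 0/2 Ready
```

Nothing errors out and `kubectl apply` is perfectly happy; you only notice when someone actually reads the probe block or wonders why the rollout never finishes.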
-
Can take the user off reddit, but the reddit never leaves the user
-
I thought it was “butt verify” whoops
-
Or if you're fine with non-factual answers. I've used ChatGPT various times for different kinds of writing, and it's great for that. It can give you ideas, it can rephrase, it can generate lists, it can help you find the word you're trying to think of (usually).
But it's not magic. It's a text generator on steroids.
-
honestly LLMs are about a thousand times more useful than Google at this point. Every week i try googling and get nothing but spam results.
for example just yesterday i was searching for how to reclaim some wasted space on one of my devices. so i searched on Google and tried 8 different pages that were ad-riddled hell holes.
i gave up and spent 10 seconds with an LLM and got the answer i needed. i will admit that i had to tell it to quit bullshitting me at one point but i got what i needed. and no ads.
-
It's an obsolete usage of "beg" that's now preserved only in that particular set phrase. One of English's many linguistic fossils, which you should learn more about before trying to critique anyone's language use.
-
I had to tell DDG to not give me an AI summary of my search, so it's clearly intended to be used as a search engine.
-
"Intended" is a weird choice there. Certainly the people selling them are selling them as search engines, even though they aren't one.
On DDG's implementation, though, you're just wrong. The search engine is still the search engine. They are using an LLM to summarize the results. Which is also a bad implementation, because it will do a bad job at something you can do by just... looking down. But, crucially, the LLM is neither doing the searching nor generating the results themselves.
-
Well, you shouldn't be using Google Search, but that's a completely different conversation and the answer shouldn't (can't) be "let's just use LLMs, then".
-
bing or duck duck go, too. i just say googling because it sounds stupid as shit to say anything else. DDG is my default search engine. kagi isn't much better, and comes with its own issues
-
What do you mean it's not generating the results? If the summary isn't generated, where's it come from?
-
I don't want to speak for OP, but I think they meant it's not generating the search results using an LLM.