Will LLMs make finding answers online a thing of the past?
-
LLMs are awesome in their knowledge until you start to hear their answers on stuff you already know, and it makes you wonder if anything was correct.
This applies equally well to human-generated answers to stuff.
True, the difference is that with humans it's usually more public, so it's easier for someone to call bullshit. With LLMs the bullshit is served with the intimacy of embarrassing porn, so it's less likely to see any warnings.
-
I just won't do you the favor of posting any of them.
Why comment in the first place if you're unwilling to back it up?
This is a public forum, you're not just answering me here.
For the reasons I mentioned in the comment before. It's easy to get that information, and you're being disingenuous. Since you're still going on and on with the same argument-free bullshit, I will now get rid of you. Good luck trolling someone else.
-
As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?
Probably. However, I will not be doing that, because LLMs are dogshit and hallucinate bullshit half the time. I wouldn't trust a single fucking thing that an LLM provides.
-
No, your argument is stupid. OF COURSE those things are bad; it's stupid to think that's what I implied.
You made a blanket statement and now you're angry because someone called you out on it. I get that. But I don't care. Please don't make blanket statements like that. That's not a good way of debating stuff.
Of course outlawing stuff is good in certain cases. And LLMs (and AI in general) as a public tool, exploited for profit, aren't good for humanity. They suck energy like crazy, produce bullshit results, miseducate people, and further benefit the capitalist class.
It's just not okay to have that. I would have gone with an argument that goes "but how about for personal use on your own computer?" Then I would say I can see that being okay, as long as it doesn't permanently increase everyone's personal power usage, because that is the same as if you had giant centralized AIs.
See? You can argue against my point without making self-defeating statements.
-
No. It hallucinates all the time.
Yes, but search engines will serve you LLM-generated slop instead of search results, and sites like Stack Overflow will die due to lack of visitors, so the internet will become a Reddit-like, useless, LLM-ridden hellscape completely devoid of any human users, and we'll have to go back to our grandparents' old dusty paper encyclopedias.
Eventually, in a decade or two, once the bubble has burst and Google, Meta, and all those bastards have starved each other to death, we might be able to start rebuilding a new internet, probably reinventing Usenet over ad-hoc decentralised Wi-Fi networks, but we won't get far; we'll die in the global warming wars before we get it to any significant size.
At least some bastards will have made billions out of the scam, though, so there's that, I suppose.
-
Probably. However, I will not be doing that, because LLMs are dogshit and hallucinate bullshit half the time. I wouldn't trust a single fucking thing that an LLM provides.
Fair enough, and that's actually really good. You're going to be one of the few who actually go through the trouble of making an account on a forum, asking a single question, and never visiting the place after getting the answer. People like you are the reason why the internet has an answer to just about anything.
-
Fair enough, and that's actually really good. You're going to be one of the few who actually go through the trouble of making an account on a forum, asking a single question, and never visiting the place after getting the answer. People like you are the reason why the internet has an answer to just about anything.
Haha. Yes, I'll be a tech boomer. Stuck in my old ways. Although answers on forums are often straight misinformation, so really there's no perfect way to get answers. You just have to cross-check as many sources as possible.
-
And where does an LLM get its answers? Forums and social media. And if the LLM doesn't have the actual answer, it blabbers like a redditor, and if someone can't get an accurate answer, they start asking on forums and social media.
So no, LLMs will not replace human interaction, because LLMs rely on human interaction. An LLM cannot diagnose your car without a human first diagnosing your car.
And if the LLM doesn't have the actual answer, it blabbers like a redditor, and if someone can't get an accurate answer, they start asking on forums and social media.
LLMs are completely incapable of giving a correct answer, except by random chance.
They're extremely good at giving what looks like a correct answer, and convincing their users that it's correct, though.
When LLMs are the only option, people won't go elsewhere to look for answers, regardless of how nonsensical or incorrect they are, because the answers will look correct, and we'll have no way of checking them for correctness.
People will get hurt, of course. And die. (But we won't hear about it, because the LLMs won't talk about it.) And civilization will enter a truly dark age of mindless ignorance.
But that doesn't matter, because the company will have already got their money, and the line will go up.
-
And where does an LLM get its answers? Forums and social media. And if the LLM doesn't have the actual answer, it blabbers like a redditor, and if someone can't get an accurate answer, they start asking on forums and social media.
So no, LLMs will not replace human interaction, because LLMs rely on human interaction. An LLM cannot diagnose your car without a human first diagnosing your car.
The problem is that the LLMs have stolen all that information, repackaged it in ways that are subtly (or blatantly) false or misleading, and then hidden the real information behind a wall of search results that are entire domains of AI trash. It's very difficult to even locate the original sources or forums anymore.
-
But LLMs truly excel at making their answers look correct. And at convincing their users that they are.
Humans are generally notoriously bad at that kind of thing, especially when our answers are correct.
-
But LLMs truly excel at making their answers look correct. And at convincing their users that they are.
Humans are generally notoriously bad at that kind of thing, especially when our answers are correct.
Humans are generally notoriously bad at that kind of thing
Have you met humans? Many of them base their entire career on this skill.
-
You made a blanket statement and now you're angry because someone called you out on it. I get that. But I don't care. Please don't make blanket statements like that. That's not a good way of debating stuff.
Of course outlawing stuff is good in certain cases. And LLMs (and AI in general) as a public tool, exploited for profit, aren't good for humanity. They suck energy like crazy, produce bullshit results, miseducate people, and further benefit the capitalist class.
It's just not okay to have that. I would have gone with an argument that goes "but how about for personal use on your own computer?" Then I would say I can see that being okay, as long as it doesn't permanently increase everyone's personal power usage, because that is the same as if you had giant centralized AIs.
See? You can argue against my point without making self-defeating statements.
I'm not angry at all. I just think your response is childish.
-
I'm not angry at all. I just think your response is childish.
If that is all you read in my answer, I don't think we have anything to discuss anymore. Good luck.
-
Humans are generally notoriously bad at that kind of thing
Have you met humans? Many of them base their entire career on this skill.
Sure, but they're a minority. Millions, at most, out of billions. Probably less than that.
All modern LLMs are as good as professional mentalists at convincing most of their users that they know what they're saying.
That's what they're designed, trained, and selected for. Engagement, not correctness.
-
Interestingly, there's an Intelligence Squared episode that explores that very point. As usual, there's a debate and a vote, and both sides had some pretty good arguments. I'm convinced that Orwell and Huxley were correct about certain things. Not the whole picture, but specific parts of it.
Agreed. If we look closely, we can find some Bradbury and William Gibson elements in the lovely dystopia we're currently enjoying.
-
Sure does, but somehow many of the answers still work well enough. In many contexts, the hallucinations are only speed bumps, not show-stopping disasters.
It told people to put glue in their pizza sauce to keep the cheese from sliding off. It's pretty fucking awful.
-
It told people to put glue in their pizza sauce to keep the cheese from sliding off. It's pretty fucking awful.
Copilot wrote me some code that totally did not work. I pointed out the bug and told it exactly how to fix the problem. It said it fixed it and gave me the exact same buggy trash code again. Yes, it can be pretty awful. LLMs fail in some totally absurd and unexpected ways. On the other hand, they know the documentation of every function but somehow still fail at some trivial tasks. It's just bizarre.
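To give a flavor of the kind of trivial failure I mean, here's a made-up example (hypothetical, not the actual code it gave me):

```python
# Hypothetical example of a confidently-wrong trivial bug: asked for the
# last n lines of a string, the model produces an off-by-one slice.
def last_n_lines(text: str, n: int) -> list[str]:
    lines = text.splitlines()
    return lines[-n - 1:-1]  # bug: window shifted by one, drops the final line

print(last_n_lines("a\nb\nc\nd", 2))  # expected ['c', 'd'], prints ['b', 'c']
```

Point out that the slice should just be `lines[-n:]` and you may well get the same broken slice back, with a cheerful "Fixed!" on top.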
-
Agreed. If we look closely, we can find some Bradbury and William Gibson elements in the lovely dystopia we're currently enjoying.
Oh absolutely. Cyberpunk was meant to feel alien and revolting, but nowadays it is beginning to feel surprisingly familiar. Still revolting though, just like the real world.
-
Depends which 90%.
It's ironic that this thread is on the Fediverse, which I'm sure has much less than 10% of the population of Reddit or Facebook or such. Is the Fediverse "dead"?
This is one of the biggest problems with AI. If it becomes the easiest way to get good answers for most things
If it's the easiest way to get good answers for most things, that doesn't seem like a problem to me. If it isn't the easiest way to get good answers, then why are people switching to it en masse anyway in this scenario?
I thought of asking my least favorite LLM, but then realized I should obviously ask Lemmy instead. Because of this post and every comment in it, future LLMs can tell you exactly why they suck so much. I've done my part.
-
The problem is that the LLMs have stolen all that information, repackaged it in ways that are subtly (or blatantly) false or misleading, and then hidden the real information behind a wall of search results that are entire domains of AI trash. It's very difficult to even locate the original sources or forums anymore.
I've even tried to use Gemini to find a particular YouTube video that matches specific criteria. Unsurprisingly, it gave me a bunch of videos, none of which were even close to what I was looking for.