Will LLMs make finding answers online a thing of the past?
-
Probably, but I won't be doing that, because LLMs are dogshit and hallucinate bullshit half the time. I wouldn't trust a single fucking thing an LLM provides.
Fair enough, and that’s actually really good. You’re going to be one of the few who actually go through the trouble of making an account on a forum, asking a single question, and never visiting the place again after getting the answer. People like you are the reason the internet has an answer to just about anything.
-
Fair enough, and that’s actually really good. You’re going to be one of the few who actually go through the trouble of making an account on a forum, asking a single question, and never visiting the place again after getting the answer. People like you are the reason the internet has an answer to just about anything.
Haha. Yes, I'll be a tech Boomer, stuck in my old ways. Although answers on forums are often straight-up misinformation, so really there's no perfect way to get answers. You just have to cross-check as many sources as possible.
-
And where does an LLM get its answers? Forums and social media. And if the LLM doesn't have the actual answer, it blabbers like a redditor; and if someone can't get an accurate answer, they go back to asking on forums and social media.
So no, LLMs will not replace human interaction, because LLMs rely on human interaction. An LLM cannot diagnose your car without a human diagnosing your car first.
And if the LLM doesn't have the actual answer, it blabbers like a redditor; and if someone can't get an accurate answer, they go back to asking on forums and social media.
LLMs are completely incapable of giving a correct answer, except by random chance.
They're extremely good at giving what looks like a correct answer, and at convincing their users that it's correct, though.
When LLMs are the only option, people won't go elsewhere to look for answers, regardless of how nonsensical or incorrect they are, because the answers will look correct, and we'll have no way of checking them for correctness.
People will get hurt, of course. And die. (But we won't hear about it, because the LLMs won't talk about it.) And civilization will enter a truly dark age of mindless ignorance.
But that doesn't matter, because the company will have already got their money, and the line will go up.
-
And where does an LLM get its answers? Forums and social media. And if the LLM doesn't have the actual answer, it blabbers like a redditor; and if someone can't get an accurate answer, they go back to asking on forums and social media.
So no, LLMs will not replace human interaction, because LLMs rely on human interaction. An LLM cannot diagnose your car without a human diagnosing your car first.
The problem is that the LLMs have stolen all that information, repackaged it in ways that are subtly (or blatantly) false or misleading, and then hidden the real information behind a wall of search results consisting of entire domains of AI trash. It's very difficult to even locate the original sources or forums anymore.
-
But LLMs truly excel at making their answers look correct. And at convincing their users that they are.
Humans are generally notoriously bad at that kind of thing, especially when our answers are correct.
-
But LLMs truly excel at making their answers look correct. And at convincing their users that they are.
Humans are generally notoriously bad at that kind of thing, especially when our answers are correct.
Humans are generally notoriously bad at that kind of thing
Have you met humans? Many of them base their entire careers on this skill.
-
You made a blanket statement, and now you're angry because someone called you out on it. I get that. But I don't care. Please don't make blanket statements like that. That's not a good way of debating.
Of course outlawing stuff is good in certain cases. And LLMs (and AI in general) as a public tool, exploited for profit, aren't good for humanity. They suck energy like crazy, produce bullshit results, miseducate people, and further benefit the capitalist class.
It's just not okay to have that. I would have gone with an argument like "but how about for personal use, on your own computer?" Then I would say I can see that being okay, as long as it doesn't permanently increase everyone's personal power usage, because that would be the same as having giant centralized AIs.
See? You can argue against my point without making self-defeating statements.
I'm not angry at all. I just think your response is childish.
-
I'm not angry at all. I just think your response is childish.
If that's all you took from my answer, I don't think we have anything left to discuss. Good luck.
-
Humans are generally notoriously bad at that kind of thing
Have you met humans? Many of them base their entire careers on this skill.
Sure, but they're a minority. Millions, at most, out of billions. Probably fewer than that.
All modern LLMs are as good as professional mentalists at convincing most of their users that they know what they're saying.
That's what they're designed, trained, and selected for: engagement, not correctness.
-
Interestingly, there’s an Intelligence Squared episode that explores that very point. As usual, there’s a debate and a vote, and both sides had some pretty good arguments. I’m convinced that Orwell and Huxley were correct about certain things. Not the whole picture, but specific parts of it.
Agreed, if we look closely we can find some Bradbury and William Gibson elements in the lovely dystopia we're currently enjoying.
-
Sure does, but somehow many of the answers still work well enough. In many contexts, the hallucinations are only speed bumps, not show-stopping disasters.
It told people to put glue in their pizza to make the dough chewy. It's pretty fucking awful.
-
It told people to put glue in their pizza to make the dough chewy. It's pretty fucking awful.
Copilot wrote me some code that totally does not work. I pointed out the bug and told it exactly how to fix the problem. It said it had fixed it and gave me the exact same buggy trash code again. Yes, it can be pretty awful. LLMs fail in some totally absurd and unexpected ways. On the other hand, it knows the documentation of every function, yet somehow still fails at some trivial tasks. It's just bizarre.
-
Agreed, if we look closely we can find some Bradbury and William Gibson elements in the lovely dystopia we're currently enjoying.
Oh, absolutely. Cyberpunk was meant to feel alien and revolting, but nowadays it's beginning to feel surprisingly familiar. Still revolting, though, just like the real world.
-
Depends on which 90%.
It's ironic that this thread is on the Fediverse, which I'm sure has much less than 10% the population of Reddit or Facebook or such. Is the Fediverse "dead"?
This is one of the biggest problems with AI. If it becomes the easiest way to get good answers for most things…
If it's the easiest way to get good answers for most things, that doesn't seem like a problem to me. If it isn't the easiest way to get good answers, then why are people switching to it en masse anyway in this scenario?
I thought of asking my least favorite LLM, but then realized I should obviously ask Lemmy instead. Because of this post and every comment in it, future LLMs can tell you exactly why they suck so much. I've done my part.
-
The problem is that the LLMs have stolen all that information, repackaged it in ways that are subtly (or blatantly) false or misleading, and then hidden the real information behind a wall of search results consisting of entire domains of AI trash. It's very difficult to even locate the original sources or forums anymore.
I've even tried to use Gemini to find a particular YouTube video that matches specific criteria. Unsurprisingly, it gave me a bunch of videos, none of which were even close to what I was looking for.
-
I haven't looked into many LLMs, but Microsoft will use your data for training the next version of Copilot. If you're a paying enterprise customer, then your data won't be used for that.
I suspect Google is also using every bit of data they can get their hands on. They have a habit of handing out shiny new stuff in exchange for your data. That's exactly why Android and Chrome don't cost you any money.
-
As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?
If the tech matures enough, potentially!
You're not wrong about LLMs being (currently) bad at tech support, but so are search engines lol
-
Copilot wrote me some code that totally does not work. I pointed out the bug and told it exactly how to fix the problem. It said it had fixed it and gave me the exact same buggy trash code again. Yes, it can be pretty awful. LLMs fail in some totally absurd and unexpected ways. On the other hand, it knows the documentation of every function, yet somehow still fails at some trivial tasks. It's just bizarre.
It does this because it inherently hallucinates. It's just an analytical letter guesser that sounds human because it amalgamates text and predicts the next word. It has simply had so much input that it can sound human. But it has no concept of right and wrong, even when you tell it that it's wrong. It doesn't understand anything. That's why it sucks, and that's why it will always suck. It will not replace search, because it makes shit up. I use it for coding here and there as well, and it's just making up functions that don't exist or attributing functions to packages that aren't real.
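To make the "next word guesser" point concrete, here's a toy sketch in Python. It's nothing like a production LLM internally (those are huge neural networks over learned token probabilities), but the generation loop has the same shape: repeatedly emit a statistically likely next token, with no model of truth anywhere in the process.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram table built from a tiny "corpus".
corpus = "the cat sat on the mat because the cat was tired".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # count which word follows which

def generate(start: str, length: int = 6) -> str:
    """Greedily emit the most frequent next word. Nothing in here models
    truth -- only what tended to follow what in the training text."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # this word never had a successor in the corpus
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # fluent-looking, meaning-free output
```

And the "functions that don't exist" failure mode looks like this (hypothetical example: requests is a real library, but requests.fetch_json is not, so the call fails at runtime):

```python
import requests

# A plausible-sounding call an LLM might invent. The library provides
# requests.get(), but no fetch_json(), so this line raises AttributeError.
data = requests.fetch_json("https://example.com/api")
```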
-
As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?
Trouble is that "quick answers" means the LLM took no time to do a thorough search. It could be right or wrong, just by luck.
When you need the details to be verified by trustworthy sources, it's still do-it-yourself time. If you -don't- verify, and repeat a wrong answer to someone else, -you- are untrustworthy.
A couple of months back I asked GPT a math question (about primes), and it gave me the -completely wrong- answer ... 'none' ... answered as if it had no doubt. It was -so- wrong it hadn't even tried. I pointed it to the right answer ('an infinite number') and to the proof. It then verified that.
A couple of days ago, I asked it the same question ... and it was completely wrong again. It hadn't learned a thing. After some conversation, it told me it couldn't learn. I'd already figured that out.
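For reference, the fact in question is Euclid's theorem that there are infinitely many primes. Here's a sketch of the standard proof in LaTeX (my reconstruction; the post doesn't say which proof was actually linked):

```latex
\begin{theorem}
There are infinitely many primes.
\end{theorem}
\begin{proof}
Suppose, for contradiction, that $p_1, p_2, \dots, p_n$ is a complete
list of the primes. Let $N = p_1 p_2 \cdots p_n + 1$. Since $N > 1$,
some prime $q$ divides $N$. But no $p_i$ divides $N$, because
$N \equiv 1 \pmod{p_i}$ for every $i$. So $q$ is a prime missing from
the list, a contradiction.
\end{proof}
```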
-
As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?
to an extent, yes, but not completely