Will LLMs make finding answers online a thing of the past?
-
As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?
LLMs are the big block V8 of search engines. They can do things very fast and consume tons of resources with subterranean efficiency. On top of that, they are privacy invasive, easy to use for manipulation, and accelerate the problem of less mature users being spoon-fed answers. General-purpose LLMs need to be outlawed immediately.
-
There have been enough times that I googled something, saw the AI answer at the top, and repeated it like gospel, only to look like a buffoon when we realized the AI was completely wrong.
Now I look right past the AI answer and read the sources it's pulling from. That way I don't have to worry about the AI misinterpreting the answer.
True, but soon the sources will be AI generated too, in a big GIGO loop.
-
As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?
No. It hallucinates all the time.
-
True, but soon the sources will be AI generated too, in a big GIGO loop.
That’s exactly what I’m worried about happening. What if one day there are hardly any sources left?
-
LLMs are awesome in their knowledge until you start hearing their answers on stuff you already know, and it makes you wonder if anything was correct.
What they call hallucinations was, in other fields, called fabulation: inventing tales or stories.
I'm curious what the shortest acceptable answer for these things is, and whether something close to "I don't know" is even an option.
I get the feeling that LLMs are designed to please humans, so uncomfortable answers like “I don’t know” are out of the question.
- This thing is broken. How do I fix it?
- Don’t know.
- Seriously? I need an answer? Any ideas?
- Nope. You’re screwed. Best of luck to you. Figure it out. I believe in you.
-
LLMs are awesome in their knowledge until you start hearing their answers on stuff you already know, and it makes you wonder if anything was correct.
What they call hallucinations was, in other fields, called fabulation: inventing tales or stories.
I'm curious what the shortest acceptable answer for these things is, and whether something close to "I don't know" is even an option.
Sounds similar to Betteridge's law of headlines.
I'm sure there are tricks like adding "fact-check your response", but I suspect there is something intrinsic to these models that makes it a super difficult problem.
-
No. It hallucinates all the time.
Sure does, but somehow many of the answers still work well enough. In many contexts, the hallucinations are only speed bumps, not show-stopping disasters.
-
LLMs are the big block V8 of search engines. They can do things very fast and consume tons of resources with subterranean efficiency. On top of that, they are privacy invasive, easy to use for manipulation, and accelerate the problem of less mature users being spoon-fed answers. General-purpose LLMs need to be outlawed immediately.
Prohibition of anything is usually a bad idea.
-
Prohibition of anything is usually a bad idea.
Right. How about CSAM, incest, cannibalism?
-
Right. How about CSAM, incest, cannibalism?
Silly me, I forgot that running an LLM was so similar to cannibalism.
-
LLMs are awesome in their knowledge until you start hearing their answers on stuff you already know, and it makes you wonder if anything was correct.
What they call hallucinations was, in other fields, called fabulation: inventing tales or stories.
I'm curious what the shortest acceptable answer for these things is, and whether something close to "I don't know" is even an option.
LLMs are awesome in their knowledge until you start hearing their answers on stuff you already know, and it makes you wonder if anything was correct.
This applies equally well to human-generated answers to stuff.
-
As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?
People will use whatever method of finding answers that works best for them.
Stuck, you contact tech support, wait weeks for a reply, and the cycle continues
Why didn't you post a question on a public forum in that scenario? Or, in the future, why wouldn't the AI search agent itself post a question? If questions need to be asked then there's nothing stopping them from still being asked.
-
People will use whatever method of finding answers that works best for them.
Stuck, you contact tech support, wait weeks for a reply, and the cycle continues
Why didn't you post a question on a public forum in that scenario? Or, in the future, why wouldn't the AI search agent itself post a question? If questions need to be asked then there's nothing stopping them from still being asked.
That is an option, and undoubtedly some people will continue to do that. It’s just that the number of those people might go down in the future.
Some people like forums and such much more than LLMs, so that number probably won’t go down to zero. It’s just that someone has to write that first answer, so that eventually other people might benefit from it.
What if it’s a very new product and a new problem? Back in the old days, that would translate to the question being asked very quickly in the only place where you can do that - the forums. Nowadays, the first person to even discover the problem might not be the forum type. They might just try all the other methods first, and find nothing of value. That’s the scenario I was mainly thinking of.
-
Silly me, I forgot that running an LLM was so similar to cannibalism.
Thanks for showing that you have no actual arguments.
LLMs are inherently bad for society in their current form. They have no real benefit. They push capital extraction and further increase the pressure on workers. They have insane energy requirements, insane hardware requirements. We are working on saving our planet and can absolutely not spare the massive amounts of energy required for this shit.
-
If you cut a forum's population by 90% it will die.
This is one of the biggest problems with AI. If it becomes the easiest way to get good answers for most things, it will starve the channels that can answer the things it can't (including everything new).
-
If you cut a forum's population by 90% it will die.
This is one of the biggest problems with AI. If it becomes the easiest way to get good answers for most things, it will starve the channels that can answer the things it can't (including everything new).
Depends which 90%.
It's ironic that this thread is on the Fediverse, which I'm sure has much less than 10% the population of Reddit or Facebook or such. Is the Fediverse "dead"?
This is one of the biggest problems with AI. If it becomes the easiest way to get good answers for most things
If it's the easiest way to get good answers for most things, that doesn't seem like a problem to me. If it isn't the easiest way to get good answers, then why are people switching to it en masse anyway in this scenario?
-
Right. How about CSAM, incest, cannibalism?
arguments like this are fucking stupid
-
That’s exactly what I’m worried about happening. What If one day there are hardly any sources left?
At this rate, that day is not too distant, I'm afraid.
I was expecting either Huxley or Orwell to be right, not both.
-
Thanks for showing that you have no actual arguments.
LLMs are inherently bad for society in their current form. They have no real benefit. They push capital extraction and further increase the pressure on workers. They have insane energy requirements, insane hardware requirements. We are working on saving our planet and can absolutely not spare the massive amounts of energy required for this shit.
Thanks for showing that you have no actual arguments.
You did it first by jumping to "think of the children!" and analogizing running a program to cannibalism.
They have no real benefit.
No need to ban them, then. Nobody will use them if this is true.
They have insane energy requirements, insane hardware requirements.
I run them locally on my computer, so I know this is factually incorrect through direct experience.
Personal experience aside, if running an LLM query really required "insane" energy and hardware expenditures then why are companies like Google so eager to do it for free? These are public companies whose mandates are to generate a profit. Whatever they're getting out of running those LLM queries must be worth the cost of running them.
We are working on saving our planet
I see you've switched from "think of the children!" to "think of the environment!"
-
That is an option, and undoubtedly some people will continue to do that. It’s just that the number of those people might go down in the future.
Some people like forums and such much more than LLMs, so that number probably won’t go down to zero. It’s just that someone has to write that first answer, so that eventually other people might benefit from it.
What if it’s a very new product and a new problem? Back in the old days, that would translate to the question being asked very quickly in the only place where you can do that - the forums. Nowadays, the first person to even discover the problem might not be the forum type. They might just try all the other methods first, and find nothing of value. That’s the scenario I was mainly thinking of.
I did suggest a possible solution to this - the AI search agent itself could post a question in a forum somewhere if it has been unable to find an answer.
This isn't a feature yet of mainstream AI search agents, but I've been following development and this sort of thing is already being done by hobbyists. Agentic AI workflows can be a lot more sophisticated than a simple "do a search, summarize the results." An AI agent could even try to solve the problem itself - reading source code, running tests in a sandbox, and so forth. If it figures out a solution that it didn't find online, maybe it could even post answers to some of those unanswered forum questions. Assuming the forum doesn't ban AI, of course.
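The "answer if you can, otherwise ask humans" loop could be sketched roughly like this. Everything here is hypothetical: search_web and post_forum_question are stand-ins for real search and forum APIs, not any existing library, and the in-memory dict plays the role of the indexed web.

```python
# Minimal sketch of an "answer or ask" agent step, under the
# assumption that we have some search backend and some forum API.
# Here both are stubbed out so the logic is self-contained.

def search_web(query, index):
    """Return any indexed answers whose topic appears in the query."""
    return [answer for topic, answer in index.items() if topic in query]

def post_forum_question(query, forum):
    """Fallback: record the unanswered question on a public forum."""
    forum.append(query)
    return f"Posted to forum: {query!r}"

def answer_or_ask(query, index, forum):
    hits = search_web(query, index)
    if hits:
        return hits[0]                         # existing answer found online
    return post_forum_question(query, forum)   # nothing online: ask humans

# Toy usage: one known topic, one brand-new problem.
index = {"reset router": "Hold the reset button for 10 seconds."}
forum = []
print(answer_or_ask("how do I reset router", index, forum))
print(answer_or_ask("new product error 37", index, forum))
print(forum)
```

The point of the sketch is just the branch: questions only reach the forum when search fails, so the forum keeps receiving exactly the novel problems that would otherwise go unrecorded.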
Basically, I think this is a case of extrapolating problems without also extrapolating the possibilities of solutions. Like the old Malthusian scenario, where Malthus projected population growth without accounting for the fact that as demand for food rises, new technologies would also be developed to make food production more productive. We won't get to a situation where most people are using LLMs for answers without LLMs being good at giving answers.