Will LLMs make finding answers online a thing of the past?
-
Trouble is that 'quick answers' mean the LLM took no time to do a thorough search. Could be right or wrong - just by luck.
When you need the details to be verified by trustworthy sources, it's still do-it-yourself time. If you -don't- verify, and repeat a wrong answer to someone else, -you- are untrustworthy.
A couple months back I asked GPT a math question (about primes) and it gave me the -completely wrong- answer ... 'none' ... answered as if it had no doubt. It was -so- wrong it hadn't even tried. I pointed it to the right answer ('an infinite number') and to the proof. It then verified that.
A couple of days ago, I asked it the same question ... and it was completely wrong again. It hadn't learned a thing. After some conversation, it told me it couldn't learn. I'd already figured that out.
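(Assuming the underlying question was the classic one, whether there are infinitely many primes, the standard argument it should have reached for is Euclid's: suppose $p_1, p_2, \dots, p_n$ were all the primes and consider

$$N = p_1 p_2 \cdots p_n + 1.$$

None of the $p_i$ divides $N$, since each leaves remainder 1, yet $N > 1$ must have some prime factor, so the list was incomplete. If the actual question was about a more specific family of primes, the right proof would differ, but 'none' was still not it.)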
Math problems are a unique challenge for LLMs, often resulting in bizarre mistakes. While an LLM can look up formulas and constants, it usually struggles with applying them correctly. It's a bit like asking for the hours in a week: it says it's calculating 7*24, which looks right, but somehow the answer still comes out as 10 🤯. Like, WTF? How did that happen? In reality, that specific problem probably isn't hard enough to trip it up, but the same phenomenon shows up in more complicated problems. I could give other examples too, but this post is long enough as it is.
For reliable results in math-related queries, I find it best to ask the LLM for formulas and values, then perform the calculations myself. The LLM can typically look up information reasonably accurately but will mess up the application. Just use the right tool for the right job, and you'll be ok.
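To make that split concrete, here's a minimal sketch of the "LLM supplies the formula and values, I do the arithmetic" workflow described above (just an illustration, not any particular tool):

# Formula and constants as quoted back by the LLM; verify them against a
# trustworthy source, then do the arithmetic deterministically yourself.
days_per_week = 7
hours_per_day = 24
hours_per_week = days_per_week * hours_per_day
print(hours_per_week)  # 168 -- not 10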
-
Is your abuse of the ellipsis and dashes supposed to be ironic? Isn't that an LLM tell?
I'm not even sure what the ('phrase') construct is meant to imply, but it's wild. Your abuse of punctuation in general feels like a machine trying to convince us it's human, or a machine transcribing a human's stream of consciousness.
-
LLMs can't distinguish truth from falsehoods; they only produce output that resembles other output. So they can't tell the difference between human and AI input.
That's a problem when you want to automate the curation and annotation process. Until now, you could just dump all of your data into the model, but that might not be an option in the future, as more and more of the available training data is generated by other LLMs.
When that approach stops working, AI companies need to figure out a way to get high quality data, and that's when it becomes useful to have data that was verified to be written by actual people. This way, an AI doesn't even need to be able to curate the data, as humans have done that to some extent. You could just prioritize the small amount of verified data while still using the vast amounts of unverified data for training.
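For what it's worth, here's a toy sketch of what "prioritize the small verified subset while still using the unverified bulk" could look like as a sampling scheme (the field names and weights are made up purely for illustration):

import random

corpus = [
    {"text": "verified forum post", "verified_human": True},
    {"text": "scraped page A", "verified_human": False},
    {"text": "scraped page B", "verified_human": False},
]

# Hypothetical weighting: documents verified as human-written are drawn far
# more often, but the unverified majority still contributes.
weights = [10.0 if doc["verified_human"] else 1.0 for doc in corpus]
sample = random.choices(corpus, weights=weights, k=2)
print([doc["text"] for doc in sample])

In practice the weighting would be far more nuanced, but the point stands: the human-verified data only needs to be identifiable, not perfectly curated, to be worth more.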
-
I said "cut a forum by 90%", not "a forum happens to be smaller than another". Ask ChatGPT if you have trouble with words.
-
As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?
My 70-year-old boss and his 50-year-old business partner just today generated a set of instructions for scanning to a thumb drive on a specific model of printer.
They obviously missed the "AI Generated" tag on the Google search and couldn't figure out why the instructions cited the exact model but told them to press buttons and navigate menus that didn't exist.
These are average people, and they didn't realize they were even using AI, much less how unreliable it can be.
I think there's going to be a place for forums to discuss niche problems for as long as AI just means an advanced LLM and not actual intelligence.
-
When diagnosing software-related tech problems with proper instructions, there's always the risk of finding outdated tips. You may be advised to press buttons that no longer exist in the version you're currently using.
With hardware, though, that's unlikely to happen as long as the model numbers match. However, when relying on AI-generated instructions, anything is possible.
-
Not so simple with hardware either. Although less frequent, hardware also has variants, and their nuances are easily missed by LLMs.