agnos.is Forums

Will LLMs make finding answers online a thing of the past?

Ask Lemmy · 87 posts, 30 posters

[email protected] (OP) wrote:

As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

[email protected] replied (#70):

If the tech matures enough, potentially!

Not wrong about LLMs (currently) being bad with tech support, but so are search engines lol

[email protected] wrote:

Copilot wrote me some code that totally does not work. I pointed out the bug and told it exactly how to fix the problem. It said it fixed it and gave me the exact same buggy trash code again. Yes, it can be pretty awful. LLMs fail in some totally absurd and unexpected ways. On the other hand, it knows the documentation of every function, but somehow still fails at some trivial tasks. It's just bizarre.

[email protected] replied (#71):

It does this because it inherently hallucinates. It's just an analytical letter guesser that sounds human because it amalgamates and predicts the next word. It's just gotten so much input that it can sound human. But it has no concept of right and wrong, even when you tell it that it's wrong. It doesn't understand anything. That's why it sucks, and that's why it will always suck. It will not replace search, because it makes shit up. I use it for coding here and there as well, and it's just making up functions that don't exist or attributing functions to packages that aren't real.
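
To make the "letter guesser" point concrete, here's a toy sketch (a bigram table over a made-up corpus, nowhere near how a real model is built or scaled): it continues text with whatever usually came next in its training data, with no notion of whether that continuation is true.

```python
# Toy next-word guesser: count which word follows which in a tiny corpus,
# then always continue with the most common follower. Real LLMs are huge
# neural networks, but the objective is the same - plausible continuation,
# not verified truth.
from collections import Counter, defaultdict

corpus = (
    "the sky is blue . the sky is red . "
    "the grass is green . the sky is falling ."
).split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(start, steps=3):
    words = [start]
    for _ in range(steps):
        options = following[words[-1]]
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # likeliest, not truest
    return " ".join(words)

print(continue_text("sky"))  # prints "sky is blue ." - plausible, unverified
```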

[email protected] (#72), replying to the OP:

Trouble is that 'quick answers' mean the LLM took no time to do a thorough search. Could be right or wrong - just by luck.

When you need the details to be verified by trustworthy sources, it's still do-it-yourself time. If you -don't- verify, and repeat a wrong answer to someone else, -you- are untrustworthy.

A couple months back I asked GPT a math question (about primes) and it gave me the -completely wrong- answer ... 'none' ... answered as if it had no doubt. It was -so- wrong it hadn't even tried. I pointed it to the right answer ('an infinite number') and to the proof. It then verified that.

A couple of days ago, I asked it the same question ... and it was completely wrong again. It hadn't learned a thing. After some conversation, it told me it couldn't learn. I'd already figured that out.

[email protected] (#73), replying to the OP:

to an extent, yes, but not completely

[email protected] (#74), replying to the OP:

Maybe in the sense that the Internet may become so inundated with AI garbage that the only way to get factual information is by actually reading a book or finding a real person to ask, face to face.

[email protected] (#75), replying to [email protected]:

You know how steel made before nuclear weapons testing (low-background steel) is prized? I wonder if that's going to happen with data from before 2022 as well now. Lol.

[email protected] (#76), replying to [email protected]:

There might be a way to mitigate that damage. You could categorize the training data by the source. If it's verified to be written by a human, you could give it a bigger weight. If not, it's probably contaminated by AI, so give it a smaller weight. Humans still exist, so it's still possible to obtain clean data. Quantity is still a problem, since these models are really thirsty for data.
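
As a rough sketch of what that weighting could look like (the sources, weights, and records below are made up purely for illustration), you could sample training examples in proportion to a per-source trust weight:

```python
# Hypothetical source-weighted sampling for a training mix. Verified-human
# text is drawn more often per example than unverified scraped text.
import random

corpus = [
    {"text": "a post verified as human-written", "source": "verified_human"},
    {"text": "a random scraped page",            "source": "unverified_web"},
    {"text": "another scraped page",             "source": "unverified_web"},
]

# Trust weights are a modelling choice, not numbers from any real pipeline.
trust = {"verified_human": 5.0, "unverified_web": 1.0}
weights = [trust[doc["source"]] for doc in corpus]

def sample_batch(k=4):
    # random.choices draws with replacement, proportional to the weights
    return random.choices(corpus, weights=weights, k=k)

for doc in sample_batch():
    print(doc["source"], "->", doc["text"])
```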

[email protected] (#77), replying to [email protected]:

LLMs can't distinguish truth from falsehoods; they only produce output that resembles other output. So they can't tell the difference between human and AI input.

[email protected] (#78), replying to [email protected] (#72):

"Trouble is that 'quick answers' mean the LLM took no time to do a thorough search."

LLMs don't "search". They essentially provide weighted parrot-answers based on what they've seen elsewhere.

If you tell an LLM that the sky is red, it will tell you the sky is red. If you tell it your eyes are the colour of the sky, it will repeat that your eyes are red. LLMs aren't capable of checking if something is true.

They're just really fast parrots with a big vocabulary. And every time they squawk, it burns a tree.

[email protected] wrote:

And if LLMs don't have the actual answer, they blabber like a redditor; and if someone can't get an accurate answer, they start asking forums and social media.

LLMs are completely incapable of giving a correct answer, except by random chance.

They're extremely good at giving what looks like a correct answer, and convincing their users that it's correct, though.

When LLMs are the only option, people won't go elsewhere to look for answers, regardless of how nonsensical or incorrect they are, because the answers will look correct, and we'll have no way of checking them for correctness.

People will get hurt, of course. And die. (But we won't hear about it, because the LLMs won't talk about it.) And civilization will enter a truly dark age of mindless ignorance.

But that doesn't matter, because the company will have already got their money, and the line will go up.

[email protected] replied (#79):

"They're extremely good at giving what looks like a correct answer"

Exactly. Sometimes the thing that looks right IS right, and sometimes it's not. The stochastic parrot doesn't know the difference.

[email protected] wrote:

To be fair, given the current state search engines are in, LLMs might not be the worst idea.

I'm looking for the 7800x3d, not 3D shooters, not the 1234x3d, no, not the pentium 4, not the 4700rtx. It takes more and more effort to search for something, and the first pages show every piece of crap I'm not interested in.

[email protected] replied (#80):

Google made the huge mistake of placing the CEO of ads in charge of search.

And now it fucking sucks.

[email protected] (#81), replying to [email protected] (#72):

Math problems are a unique challenge for LLMs, often resulting in bizarre mistakes. While an LLM can look up formulas and constants, it usually struggles with applying them correctly. Sort of like counting the hours in a week: it says it calculates 7*24, which looks good, but somehow the answer is still 10 🤯. Like, WTF? How did that happen? In reality, that specific problem might not be that hard, but the same phenomenon can still be seen in more complicated problems. I could give some other examples too, but this post is long enough as it is.

For reliable results in math-related queries, I find it best to ask the LLM for formulas and values, then perform the calculations myself. The LLM can typically look up information reasonably accurately but will mess up the application. Just use the right tool for the right job, and you'll be ok.
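
That workflow takes only a couple of lines to script. A minimal sketch, assuming the model's only contribution is the formula (days times hours per day) and the arithmetic runs locally:

```python
# "Formula from the model, math done locally": the formula below is treated
# as if an LLM suggested it; evaluating it ourselves is deterministic.
days_per_week = 7
hours_per_day = 24

hours_per_week = days_per_week * hours_per_day
print(f"{days_per_week} * {hours_per_day} = {hours_per_week}")  # 7 * 24 = 168
```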

[email protected] (#82), replying to [email protected] (#72):

Is your abuse of the ellipsis and dashes supposed to be ironic? Isn't that an LLM tell?

I'm not even sure what the ('phrase') construct is meant to imply, but it's wild. Your abuse of punctuation in general feels like a machine trying to convince us it's human, or a machine transcribing a human's stream of consciousness.

[email protected] (#83), replying to [email protected] (#77):

That's a problem when you want to automate the curation and annotation process. So far, you could just dump all of your data into the model, but that might not be an option in the future, as more and more of the training data is generated by other LLMs.

When that approach stops working, AI companies need to figure out a way to get high-quality data, and that's when it becomes useful to have data that was verified to be written by actual people. This way, an AI doesn't even need to be able to curate the data, as humans have done that to some extent. You could just prioritize the small amount of verified data while still using the vast amounts of unverified data for training.

[email protected] (#84):

I said "cut a forum by 90%", not "a forum happens to be smaller than another". Ask ChatGPT if you have trouble with words.

[email protected] (#85), replying to the OP:

My 70-year-old boss and his 50-year-old business partner just today generated a set of instructions for scanning to a thumb drive on a specific model of printer.

They obviously missed the "AI Generated" tag on the Google search and couldn't figure out why the instructions cited the exact model but told them to press buttons and navigate menus that didn't exist.

These are average people, and they didn't realize that they were even using AI, much less how unreliable it can be.

I think there's going to be a place for forums to discuss niche problems for as long as AI just means advanced LLMs and not actual intelligence.

[email protected] (#86), replying to [email protected]:

When diagnosing software-related tech problems with proper instructions, there's always the risk of finding outdated tips. You may be advised to press buttons that no longer exist in the version you're currently using.

With hardware, though, that's unlikely to happen as long as the model numbers match. However, when relying on AI-generated instructions, anything is possible.

[email protected] (#87), replying to [email protected]:

It's not so simple with hardware either: although less frequent, hardware also has variants, the nuances of which are easily missed by LLMs.