agnos.is Forums

Will LLMs make finding answers online a thing of the past?

Ask Lemmy · asklemmy · 87 Posts · 30 Posters
  • In reply to [email protected]:

    So I take it you're not going to post those numbers, then.

    [email protected] wrote (#43):

    Of course not. It's literally 5 words in a search engine.

  • In reply to [email protected]:

      At this rate that day is not too distant, I'm afraid.

      I was expecting either Huxley or Orwell to be right, not both.

    [email protected] wrote (#44, edited):

      Interestingly, there’s an Intelligence Squared episode that explores that very point. As usual, there’s a debate, voting and both sides had some pretty good arguments. I’m convinced that Orwell and Huxley were correct about certain things. Not the whole picture, but specific parts of it.

  • In reply to [email protected]:

        Of course not. It's literally 5 words in a search engine.

    [email protected] wrote (#45):

        ...which you can't or won't do, apparently.

  • In reply to [email protected]:

          ...which you can't or won't do, apparently.

    [email protected] wrote (#46):

      I can and did, many times. I also wrote articles about it. I just won't do you the favor of posting any of them here, because I dislike your attitude. You're not open to debate. You're trying to use rhetorical tricks to get around arguments.

  • In reply to [email protected]:

      Glad you agree. Non-arguments are not a good idea.

    [email protected] wrote (#47):

      No, your argument is stupid. OF COURSE those things are bad; it's stupid to think that's what I implied.

  • In reply to [email protected]:

      I can and did, many times. I also wrote articles about it. I just won't do you the favor of posting any of them here, because I dislike your attitude. You're not open to debate. You're trying to use rhetorical tricks to get around arguments.

    [email protected] wrote (#48):

      I just won't do you the favor of posting any of them

              Why comment in the first place if you're unwilling to back it up?

              This is a public forum, you're not just answering me here.

  • In reply to [email protected]:

      LLMs are awesome in their knowledge until you start to hear their answers to stuff you already know, which makes you wonder if anything was correct.

                This applies equally well to human-generated answers to stuff.

    [email protected] wrote (#49):

      True. The difference is that with humans it's usually more public, so it's easier for someone to call bullshit. With LLMs the bullshit is served with the intimacy of embarrassing porn, so it's less likely to see any warnings.

  • In reply to [email protected]:

      I just won't do you the favor of posting any of them

                  Why comment in the first place if you're unwilling to back it up?

                  This is a public forum, you're not just answering me here.

    [email protected] wrote (#50):

      For the reasons I mentioned in the comment before. It's easy to get that information, and you're being disingenuous. Since you're still going on and on with the same argument-free bullshit, I will now get rid of you. Good luck trolling someone else.

  • In reply to [email protected]:

                    As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

    [email protected] wrote (#51):

      Probably. However, I will not be doing that, because LLMs are dogshit and hallucinate bullshit half the time. I wouldn't trust a single fucking thing that an LLM provides.

  • In reply to [email protected]:

      No, your argument is stupid. OF COURSE those things are bad; it's stupid to think that's what I implied.

    [email protected] wrote (#52):

      You made a blanket statement and now you're angry because someone called you out on it. I get that. But I don't care. Please don't make blanket statements like that. That's not a good way of debating stuff.

      Of course outlawing stuff is good in certain cases. And LLMs (and AI in general) as a public tool, exploited for profit, aren't good for humanity. They suck energy like crazy, produce bullshit results, miseducate people, and further benefit the capitalist class.

      It's just not okay to have that. I would have gone with an argument like "but how about for personal use on your own computer?" Then I would say I can see that being okay, as long as it doesn't permanently increase everyone's personal power usage, because that would be the same as having giant centralized AIs.

      See? You can argue against my point without making self-defeating statements.

  • In reply to [email protected]:

                        No. It hallucinates all the time.

    [email protected] wrote (#53):

                        Yes, but search engines will serve you LLM generated slop instead of search results, and sites like Stack Overflow will die due to lack of visitors, so the internet will become a reddit-like useless LLM ridden hellscape completely devoid of any human users, and we'll have to go back to our grandparents' old dusty paper encyclopedias.

                        Eventually, in a decade or two, once the bubble has burst and google, meta, and all those bastards have starved each other to death, we might be able to start rebuilding a new internet, probably reinventing usenet over ad-hoc decentralised wifi networks, but we won't get far, we'll die in the global warming wars before we get it to any significant size.

                        At least some bastards will have made billions out of the scam, though, so there's that, I suppose. 🤷‍♂️

  • In reply to [email protected]:

      Probably. However, I will not be doing that, because LLMs are dogshit and hallucinate bullshit half the time. I wouldn't trust a single fucking thing that an LLM provides.

    [email protected] wrote (#54):

      Fair enough, and that's actually really good. You're going to be one of the few who actually go through the trouble of making an account on a forum, asking a single question, and never visiting the place again after getting the answer. People like you are the reason why the internet has an answer to just about anything.

  • In reply to [email protected]:

      Fair enough, and that's actually really good. You're going to be one of the few who actually go through the trouble of making an account on a forum, asking a single question, and never visiting the place again after getting the answer. People like you are the reason why the internet has an answer to just about anything.

    [email protected] wrote (#55):

      Haha, yes, I'll be a tech Boomer, stuck in my old ways. Although answers on forums are often straight misinformation too, so really there's no perfect way to get answers. You just have to cross-check as many sources as possible.

  • In reply to [email protected]:

      And where does an LLM get its answers? Forums and social media. And if the LLM doesn't have the actual answer, it blabbers like a redditor, and if someone can't get an accurate answer there, they start asking forums and social media anyway.

      So no, LLMs will not replace human interaction, because LLMs rely on human interaction. An LLM cannot diagnose your car without a human diagnosing it first.

    [email protected] wrote (#56, edited):

      And if the LLM doesn't have the actual answer, it blabbers like a redditor, and if someone can't get an accurate answer there, they start asking forums and social media anyway.

      LLMs are completely incapable of giving a correct answer, except by random chance.

                              They're extremely good at giving what looks like a correct answer, and convincing their users that it's correct, though.

                              When LLMs are the only option, people won't go elsewhere to look for answers, regardless of how nonsensical or incorrect they are, because the answers will look correct, and we'll have no way of checking them for correctness.

      People will get hurt, of course. And die. (But we won't hear about it, because the LLMs won't talk about it.) And civilization will enter a truly dark age of mindless ignorance.

                              But that doesn't matter, because the company will have already got their money, and the line will go up.

  • In reply to [email protected]:

      And where does an LLM get its answers? Forums and social media. And if the LLM doesn't have the actual answer, it blabbers like a redditor, and if someone can't get an accurate answer there, they start asking forums and social media anyway.

      So no, LLMs will not replace human interaction, because LLMs rely on human interaction. An LLM cannot diagnose your car without a human diagnosing it first.

    [email protected] wrote (#57):

      The problem is that the LLMs have stolen all that information, repackaged it in ways that are subtly (or blatantly) false or misleading, and then hidden the real information behind a wall of search results consisting of entire domains of AI trash. It's very difficult to even locate the original sources or forums anymore.

  • [email protected] wrote (#58):

                                  But LLMs truly excel at making their answers look correct. And at convincing their users that they are.

                                  Humans are generally notoriously bad at that kind of thing, especially when our answers are correct.

  • In reply to [email protected]:

                                    But LLMs truly excel at making their answers look correct. And at convincing their users that they are.

                                    Humans are generally notoriously bad at that kind of thing, especially when our answers are correct.

    [email protected] wrote (#59):

                                    Humans are generally notoriously bad at that kind of thing

                                    Have you met humans? Many of them base their entire career on this skill.

  • In reply to [email protected]:

      You made a blanket statement and now you're angry because someone called you out on it. I get that. But I don't care. Please don't make blanket statements like that. That's not a good way of debating stuff.

      Of course outlawing stuff is good in certain cases. And LLMs (and AI in general) as a public tool, exploited for profit, aren't good for humanity. They suck energy like crazy, produce bullshit results, miseducate people, and further benefit the capitalist class.

      It's just not okay to have that. I would have gone with an argument like "but how about for personal use on your own computer?" Then I would say I can see that being okay, as long as it doesn't permanently increase everyone's personal power usage, because that would be the same as having giant centralized AIs.

      See? You can argue against my point without making self-defeating statements.

    [email protected] wrote (#60):

                                      I'm not angry at all. I just think your response is childish.

  • In reply to [email protected]:

                                        I'm not angry at all. I just think your response is childish.

    [email protected] wrote (#61):

      If that is all you read in my answer, I don't think we have anything to discuss anymore. Good luck.

  • In reply to [email protected]:

                                          Humans are generally notoriously bad at that kind of thing

                                          Have you met humans? Many of them base their entire career on this skill.

    [email protected] wrote (#62):

                                          Sure, but they're a minority. Millions, at most, out of billions. Probably less than that.

                                          All modern LLMs are as good as professional mentalists at convincing most of their users that they know what they're saying.

                                          That's what they're designed, trained, and selected for. Engagement, not correctness.
