agnos.is Forums

Will LLMs make finding answers online a thing of the past?

Posted in Ask Lemmy · 87 Posts · 30 Posters
#12 · [email protected]

> [email protected] wrote: What I'm worried about are traditional indexers being intentionally nerfed, discontinued, or at best left unmaintained. I've often wondered what it would take to self-host a personal indexer. I remember a time when the search giant AltaVista held a full-text index of the then-known internet on its DEC Alpha server(s).

The problem lies in the way the "modern" internet loads everything dynamically: static pages to index are becoming rarer. A lot of information is also being lost in proprietary systems like Discord, which can't (easily) be indexed.
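The static-page limitation described in the post above can be illustrated with a toy full-text indexer (a hypothetical minimal sketch, not a real tool; the names `build_index` and `TextExtractor` are made up for this example). Only text present in the raw HTML gets indexed, so anything a page renders later with JavaScript never appears in the index:

```python
# Toy personal indexer for *static* pages. Content injected by
# JavaScript after load (or held inside closed platforms like
# Discord) is invisible to it, which is the limitation at issue.
from collections import defaultdict
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> blocks."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)


def build_index(pages):
    """Map each lowercase word to the set of URLs whose HTML contains it."""
    index = defaultdict(set)
    for url, html in pages.items():
        extractor = TextExtractor()
        extractor.feed(html)
        for word in " ".join(extractor.parts).lower().split():
            index[word.strip(".,!?")].add(url)
    return index


pages = {
    # A static page: its text is right there in the HTML.
    "a.example": "<p>AltaVista kept a full text index</p>",
    # A "modern" dynamic page: the content only exists after the
    # script runs, so a naive crawler finds nothing to index.
    "b.example": "<div id=app></div><script>render('full text answer')</script>",
}
index = build_index(pages)
# "index" is found only on the static page; "answer" is found nowhere,
# because the script body was never executed or indexed.
```

A real crawler would add fetching, politeness, and ranking; the point here is only that the index sees what the HTTP response contains, not what a browser would eventually render.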
    • F [email protected]

      No, because I ignore whatever AI slop comes up when I search for something

      I have never found it to be anything other than useless. I will actively search for a qualified answer to my questions, rather than being lazy and relying on the first thing that pops up

      O This user is from outside of this forum
      O This user is from outside of this forum
      [email protected]
      wrote on last edited by
      #13

      You only ignore AI slop when you recognize it as such.

      F 1 Reply Last reply
      0
#14 · [email protected]

> [email protected] wrote: As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

LLMs seem awesome in their knowledge until you hear their answers on topics you already know, and it makes you wonder whether anything else was correct.

What are now called hallucinations would in other contexts be called fabulations: inventing tales or stories.

I'm curious what the shortest acceptable answer for these models is, and whether something close to "I don't know" is even an option.
        • O [email protected]

          You only ignore AI slop when you recognize it as such.

          F This user is from outside of this forum
          F This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #15

          I specifically ignore the google "AI summary"

          I also tend to go through the results until I get something from a qualified source.

          I'm sure I'm getting some of the aforementioned AI slop, but I would wager that I'm getting better results than the people I know who specifically look for an AI summary.

          1 Reply Last reply
          0
#16 · [email protected]

> [email protected] wrote: As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. […]

There have been enough times that I googled something, saw the AI answer at the top, and repeated it like gospel, only to look like a buffoon when we realized the AI was completely wrong.

Now I look right past the AI answer and read the sources it pulls from. Then I don't have to worry about anything misinterpreting the answer.
#17 · [email protected]

> [email protected] wrote: As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. […]

LLMs are the big-block V8 of search engines: they do things very fast while consuming tons of resources with subterranean efficiency. On top of that, they are privacy-invasive, easy to use for manipulation, and they accelerate the problem of less mature users being spoon-fed. General-purpose LLMs need to be outlawed immediately.
              • R [email protected]

                There have been enough times that I googled something, saw the AI answer at the top, and repeated it like gospel. Only to look like a buffoon when we realize the AI was completely wrong.

                Now I look right past the AI answer and read the sources it's pulling from. Then I don't have to worry about anything misinterpreting the answer.

                quazatron@lemmy.worldQ This user is from outside of this forum
                quazatron@lemmy.worldQ This user is from outside of this forum
                [email protected]
                wrote on last edited by
                #18

                True, but soon the sources will be AI generated too, in a big GIGO loop.

                chaoscruiser@futurology.todayC 1 Reply Last reply
                1
#19 · [email protected]

> [email protected] wrote: As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. […]

No. It hallucinates all the time.
#20 · [email protected]

> [email protected] wrote: True, but soon the sources will be AI-generated too, in a big GIGO loop.

That's exactly what I'm worried about. What if one day there are hardly any sources left?
                    • Q [email protected]

                      LLMs are awesome in their knowledge until you start to hear its answers to stuff you already know and makes you wonder if anything was correct.

                      What they call hallucinations in other areas was called fabulations, to invent tales or stories.

                      I'm curious about what is the shortest acceptable answer for these things and if something close to "I don't know" is even an option.

                      chaoscruiser@futurology.todayC This user is from outside of this forum
                      chaoscruiser@futurology.todayC This user is from outside of this forum
                      [email protected]
                      wrote on last edited by [email protected]
                      #21

                      I get the feeling that LLMs are designed to please humans, so uncomfortable answers like “I don’t know” are out of the question.

                      • This thing is broken. How do I fix it?
                      • Don’t know. 🤷
                      • Seriously? I need an answer? Any ideas?
                      • Nope. You’re screwed. Best of luck to you. Figure it out. I believe in you. ❤️
                      1 Reply Last reply
                      0
                      • Q [email protected]

                        LLMs are awesome in their knowledge until you start to hear its answers to stuff you already know and makes you wonder if anything was correct.

                        What they call hallucinations in other areas was called fabulations, to invent tales or stories.

                        I'm curious about what is the shortest acceptable answer for these things and if something close to "I don't know" is even an option.

                        P This user is from outside of this forum
                        P This user is from outside of this forum
                        [email protected]
                        wrote on last edited by
                        #22

                        Sound similar to betteridges law of headlines.
                        Im sure there are tricks like adding 'fact check your response' but I suspect there is something intrinsic to these models that makes it a super difficult problem.

                        1 Reply Last reply
                        0
                        • O [email protected]

                          No. It hallucinates all the time.

                          chaoscruiser@futurology.todayC This user is from outside of this forum
                          chaoscruiser@futurology.todayC This user is from outside of this forum
                          [email protected]
                          wrote on last edited by
                          #23

                          Sure does, but somehow many of the answers still work well enough. In many contexts, the hallucinations are only speed bumps, not show stopping disasters.

                          O 1 Reply Last reply
                          1
#24 · [email protected]

> [email protected] wrote: General-purpose LLMs need to be outlawed immediately.

Prohibition of anything is usually a bad idea.
                            • R [email protected]

                              prohibition of anything is usually a bad idea

                              haui_lemmy@lemmy.giftedmc.comH This user is from outside of this forum
                              haui_lemmy@lemmy.giftedmc.comH This user is from outside of this forum
                              [email protected]
                              wrote on last edited by
                              #25

                              Right. How about csam, incest, cannibalism?

                              facedeer@fedia.ioF R 2 Replies Last reply
                              0
#26 · [email protected]

> [email protected] wrote: Right. How about CSAM, incest, cannibalism?

Silly me, I forgot that running an LLM was so similar to cannibalism.
                                • Q [email protected]

                                  LLMs are awesome in their knowledge until you start to hear its answers to stuff you already know and makes you wonder if anything was correct.

                                  What they call hallucinations in other areas was called fabulations, to invent tales or stories.

                                  I'm curious about what is the shortest acceptable answer for these things and if something close to "I don't know" is even an option.

                                  facedeer@fedia.ioF This user is from outside of this forum
                                  facedeer@fedia.ioF This user is from outside of this forum
                                  [email protected]
                                  wrote on last edited by
                                  #27

                                  LLMs are awesome in their knowledge until you start to hear its answers to stuff you already know and makes you wonder if anything was correct.

                                  This applies equally well to human-generated answers to stuff.

                                  Q 1 Reply Last reply
                                  0
#28 · [email protected]

> [email protected] wrote: Stuck, you contact tech support, wait weeks for a reply, and the cycle continues.

People will use whatever method of finding answers works best for them.

Why didn't you post a question on a public forum in that scenario? Or, in the future, why wouldn't the AI search agent itself post the question? If questions need to be asked, there's nothing stopping them from still being asked.
#29 · [email protected]

> [email protected] wrote: Why didn't you post a question on a public forum in that scenario?

That is an option, and undoubtedly some people will continue to do it; it's just that their number may decline in the future.

Some people much prefer forums to LLMs, so that number probably won't drop to zero. But someone has to write the first answer so that other people can eventually benefit from it.

What if it's a very new product and a new problem? In the old days, the question would quickly be asked in the only place you could ask it: the forums. Nowadays, the first person to even discover the problem might not be the forum type. They might try all the other methods first and find nothing of value. That's the scenario I was mainly thinking of.
#30 · [email protected]

> [email protected] wrote: Silly me, I forgot that running an LLM was so similar to cannibalism.

Thanks for showing that you have no actual arguments.

LLMs in their current form are inherently bad for society. They have no real benefit; they push capital extraction and further increase the pressure on workers. They have insane energy and hardware requirements. We are working on saving our planet and absolutely cannot spare the massive amounts of energy this requires.
#31 · [email protected]

If you cut a forum's population by 90%, it will die.

This is one of the biggest problems with AI: if it becomes the easiest way to get good answers for most things, it will starve the channels that can answer the things it can't (including everything new).