agnos.is Forums

Will LLMs make finding answers online a thing of the past?

Ask Lemmy · asklemmy · 87 Posts · 30 Posters
[email protected] wrote:

    Trouble is that 'quick answers' mean the LLM took no time to do a thorough search. Could be right or wrong - just by luck.

    When you need the details to be verified by trustworthy sources, it's still do-it-yourself time. If you -don't- verify, and repeat a wrong answer to someone else, -you- are untrustworthy.

    A couple months back I asked GPT a math question (about primes) and it gave me the -completely wrong- answer ... 'none' ... answered as if it had no doubt. It was -so- wrong it hadn't even tried. I pointed it to the right answer ('an infinite number') and to the proof. It then verified that.

    A couple of days ago, I asked it the same question ... and it was completely wrong again. It hadn't learned a thing. After some conversation, it told me it couldn't learn. I'd already figured that out.

[email protected] (#81):

Math problems are a unique challenge for LLMs and often produce bizarre mistakes. An LLM can look up formulas and constants, but it usually struggles to apply them correctly. For example, when counting the hours in a week, it will say it's calculating 7*24, which looks right, but somehow the answer still comes out as 10 🤯. Like, WTF? How did that happen? That particular problem isn't actually hard, but the same phenomenon shows up in more complicated ones too. I could give other examples, but this post is long enough as it is.

    For reliable results in math-related queries, I find it best to ask the LLM for formulas and values, then perform the calculations myself. The LLM can typically look up information reasonably accurately but will mess up the application. Just use the right tool for the right job, and you'll be ok.
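That split can be illustrated with a trivial sketch using the hours-in-a-week example above: let the LLM recall the formula, then evaluate it yourself with a deterministic tool (the variable names here are just illustrative):

```python
# The LLM's job: recall the formula ("hours in a week = days * hours per day").
# Your job: evaluate it with a tool that cannot hallucinate arithmetic.
days_per_week = 7
hours_per_day = 24
hours_per_week = days_per_week * hours_per_day
print(hours_per_week)  # 168, every time - not 10
```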


[email protected] (#82), replying to [email protected]:

Is your abuse of the ellipsis and dashes supposed to be ironic? Isn't that an LLM tell?

I'm not sure what the ('phrase') construct is meant to imply, but it's wild. Your punctuation in general reads like a machine trying to convince us it's human, or like a machine transcribing a human's stream of consciousness.

      • T [email protected]

LLMs can't distinguish truth from falsehood; they only produce output that resembles other output. So they can't tell the difference between human and AI input.

[email protected] (#83):

That's a problem when you want to automate the curation and annotation process. So far, you could just dump all of your data into the model, but that might not be an option in the future, as more and more of the available training data is generated by other LLMs.

When that approach stops working, AI companies will need to figure out a way to get high-quality data, and that's when it becomes useful to have data that was verified to be written by actual people. That way, an AI doesn't even need to be able to curate the data, as humans have already done it to some extent. You could prioritize the small amount of verified data while still using the vast amounts of unverified data for training.
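One way to picture that prioritization is simple mixture sampling: oversample the small verified corpus while still drawing from the large unverified pool. This is a hedged sketch only; the corpus names, contents, and the 30% weight are made up for illustration, not taken from any real training pipeline.

```python
import random

# Illustrative corpora: a tiny human-verified set and a large unverified scrape.
verified = ["human-written doc A", "human-written doc B"]
unverified = ["scraped doc %d" % i for i in range(1000)]

def sample_training_doc(rng, p_verified=0.3):
    # With probability p_verified, draw from the curated human data;
    # otherwise draw from the bulk unverified pool.
    pool = verified if rng.random() < p_verified else unverified
    return rng.choice(pool)

rng = random.Random(0)
batch = [sample_training_doc(rng) for _ in range(10)]
```

The effect is that the verified data, despite being a fraction of a percent of the total, contributes a fixed share of every training batch.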

[email protected] (#84):

          I said "cut a forum by 90%", not "a forum happens to be smaller than another". Ask ChatGPT if you have trouble with words.

[email protected] wrote:

            As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

[email protected] (#85):

My 70-year-old boss and his 50-year-old business partner just today generated a set of instructions for scanning to a thumb drive on a specific model of printer.

They obviously missed the "AI Generated" tag on the Google search and couldn't figure out why the instructions cited the exact model but told them to press buttons and navigate menus that didn't exist.

These are average people; they didn't realize they were even using AI, much less how unreliable it can be.

I think there's going to be a place for forums that discuss niche problems for as long as "AI" just means advanced LLMs and not actual intelligence.

            • S [email protected]

              My 70 year old boss and his 50 year old business partner just today generated a set of instructions for scanning to a thumb drive on a specific model of printer.

              They obviously missed the "AI Generated" tag on the Google search and couldn't figure out why the instructions cited the exact model but told them to press buttons and navigate menus that didn't exist.

              These are average people and they didn't realize that they were even using ai much less how unreliable it can be.

              I think there's going to be a place for forums to discuss niche problems for as long as ai just means advanced LLM and not actual intelligence.

[email protected] (#86), replying to [email protected]:

When diagnosing software-related tech problems with proper instructions, there's always the risk of finding outdated tips. You may be advised to press buttons that no longer exist in the version you're currently using.

With hardware, though, that's unlikely to happen as long as the model numbers match. When relying on AI-generated instructions, however, anything is possible.


[email protected] (#87):

It's not so simple with hardware either. Though less frequent, hardware also has variants, and their nuances are easily missed by LLMs.
