agnos.is Forums

It’s too easy to make AI chatbots lie about health information, study finds

World News · world · 16 posts, 8 posters

#5 · [email protected], in reply to [email protected]:

    There should be a series of AI agents in place when a GPT is used. The agents intake the query and review the output before sending it off to the user.

What makes the checker models any more accurate?

    • V [email protected]

      what makes the checker models any more accurate?

      P This user is from outside of this forum
      P This user is from outside of this forum
      [email protected]
      wrote on last edited by
      #6

      Possibly, reverse motivation - the training goal of such an agent would not be nice and smooth output, but shooting down misinformation.

      But I have serious doubts about whether all of that is feasible, given the computational cost of running large language models.

      V 1 Reply Last reply
      1
      • T [email protected]

        More AI agents /s

        B This user is from outside of this forum
        B This user is from outside of this forum
        [email protected]
        wrote on last edited by
        #7

        its just ai agents all the way down

        1 Reply Last reply
        2
        • V [email protected]

          what makes the checker models any more accurate?

          venusaur@lemmy.worldV This user is from outside of this forum
          venusaur@lemmy.worldV This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #8

          The checker models aren’t trying to give you a correct answer with confidence. Their purpose is to find an incorrect answer. They’ll both do their task with confidence.

          V 1 Reply Last reply
          1
          • M [email protected]

            Who verifies the AI agent decisions?

            venusaur@lemmy.worldV This user is from outside of this forum
            venusaur@lemmy.worldV This user is from outside of this forum
            [email protected]
            wrote on last edited by
            #9

            The user. You could have the output include the “conversation” between the agents and validate the decisions. Not perfect, but better. People aren’t perfect either.

            1 Reply Last reply
            0
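For concreteness, the pipeline being described in this exchange can be sketched in a few lines of Python: a generator agent drafts an answer, a checker agent is prompted with the opposite goal of finding faults, and the agents' exchange is returned alongside the answer so the user can audit it. Every name here (ask_model, answer_with_checker) is a hypothetical placeholder, not any real API.

    # Hypothetical sketch only: ask_model() stands in for whatever LLM call is
    # actually used; it is not a real library function.

    def ask_model(role_prompt: str, content: str) -> str:
        """Placeholder for a call to some language model."""
        raise NotImplementedError("wire this up to a real model")

    def answer_with_checker(query: str) -> dict:
        # 1. A generator agent drafts an answer to the user's query.
        draft = ask_model("You answer health questions carefully.", query)

        # 2. A checker agent gets the opposite goal: find problems rather than
        #    produce smooth output.
        critique = ask_model(
            "You are a skeptical reviewer. List any unsupported or false claims.",
            f"Question: {query}\nDraft answer: {draft}",
        )

        # 3. Return the answer together with the agents' "conversation" so the
        #    user can validate the decision instead of trusting either model blindly.
        return {"answer": draft, "critique": critique}

Whether this actually catches errors is exactly what the rest of the thread disputes; the sketch only shows where a checker would sit, not that it works.
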
            • H [email protected]

              Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.

              Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.

              “If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.

              beigeagenda@lemmy.caB This user is from outside of this forum
              beigeagenda@lemmy.caB This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #10

              Isn't it too easy for the current chatbots/LLMs to lie about everything?

              Train it on garbage or in the wrong way, and it will agree on anything you want it to.

              I asked DeepSeek about what to visit nearby and to give me some URLs and it hallucinated the URLs and places. Guess it wasn't trained to know anything about my local area.

              1 Reply Last reply
              1
#11 · [email protected], in reply to [email protected]:

    The checker models aren’t trying to give you a correct answer with confidence. Their purpose is to find an incorrect answer. They’ll both do their task with confidence.

The first one was confident, but wrong. The second one could be just as confident and just as wrong.

                • P [email protected]

                  Possibly, reverse motivation - the training goal of such an agent would not be nice and smooth output, but shooting down misinformation.

                  But I have serious doubts about whether all of that is feasible, given the computational cost of running large language models.

                  V This user is from outside of this forum
                  V This user is from outside of this forum
                  [email protected]
                  wrote on last edited by
                  #12

                  how does that stop the checker model from "hallucinating" a "yep, this is fine" when it should have said "nah, this is wrong"

                  1 Reply Last reply
                  1
                  • V [email protected]

                    the first one was confident. But wrong. The second one could be just as confident and just as wrong.

                    venusaur@lemmy.worldV This user is from outside of this forum
                    venusaur@lemmy.worldV This user is from outside of this forum
                    [email protected]
                    wrote on last edited by
                    #13

                    Sure but they’re doing opposite tasks. You’re absolutely right that they could be wrong sometimes. So are people. Over time it gets better, especially with more regulation and smarter models.

                    V 1 Reply Last reply
                    1
#14 · [email protected], in reply to [email protected]:

    Sure, but they’re doing opposite tasks. You’re absolutely right that they could be wrong sometimes. So are people. Over time it gets better, especially with more regulation and smarter models.

Opposite or not, they are both tasks that the fixed-matrix multiplications can utterly fail at. It's not a regulation thing. It's a math thing: this cannot possibly work.

If you could get the checker to be correct all of the time, then you could just do that on the model it's "checking", because it is literally the same thing, with the same failure modes and the same lack of any real authority in anything it spits out.

                      • V [email protected]

                        opposite or not, they are both tasks that the fixed-matrix-multiplications can utterly fail at. It's not a regulation thing. It's a math thing: this cannot possibly work.

                        If you could get the checker to be correct all of the time, then you could just do that on the model it's "checking" because it is literally the same thing, with the same failure modes, and the same lack of any real authority in anything it spits

                        venusaur@lemmy.worldV This user is from outside of this forum
                        venusaur@lemmy.worldV This user is from outside of this forum
                        [email protected]
                        wrote on last edited by [email protected]
                        #15

                        That’s not how it works though. It would be great if these AI models were deterministic but you can get different answers to the same questions at any given time. Given different input and given different goals, the agents wouldn’t likely fail on the same task when given proper instruction.

                        The main point is that it’s not going to be correct all the time. And neither is a human.

                        The regulation comes in when you’re dealing with sensitive information, like health diagnoses. There needs to be some logic in place to stop the models from being so confident with wrong answers that could hurt people.

                        Realistically, neither of us know what’s gonna work until we try it. Theoretically, verification agents would work.

                        V 1 Reply Last reply
                        1
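A toy calculation makes the crux of this disagreement concrete: how much a checker helps depends entirely on whether its failures are independent of the generator's. The error rates below are invented purely for illustration, not taken from the study.

    # Toy arithmetic, with made-up error rates, to illustrate the independence question.
    gen_error = 0.20      # assumed: the generator is wrong 20% of the time
    checker_miss = 0.30   # assumed: the checker fails to flag a wrong answer 30% of the time

    # If the two failure modes were independent, far fewer errors would slip through:
    independent_undetected = gen_error * checker_miss   # 0.06, i.e. 6%

    # If the checker shares the generator's blind spots (fully correlated failures),
    # it adds nothing:
    correlated_undetected = gen_error                    # still 20%

    print(f"independent: {independent_undetected:.0%}, correlated: {correlated_undetected:.0%}")

One side is betting that the different goals make the failures mostly independent; the other is arguing that the same architecture implies the same blind spots.
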
#16 · [email protected], in reply to [email protected]:

    That’s not how it works, though. It would be great if these AI models were deterministic, but you can get different answers to the same questions at any given time. Given different input and different goals, the agents wouldn’t be likely to fail on the same task when given proper instruction.

    The main point is that it’s not going to be correct all the time. And neither is a human.

    The regulation comes in when you’re dealing with sensitive information, like health diagnoses. There needs to be some logic in place to stop the models from being so confident with wrong answers that could hurt people.

    Realistically, neither of us knows what’s gonna work until we try it. Theoretically, verification agents would work.

Theoretically, they wouldn't, and yes, that is how it works. The math says so.