Skip to content
  • Categories
  • Recent
  • Tags
  • Popular
  • World
  • Users
  • Groups
Skins
  • Light
  • Cerulean
  • Cosmo
  • Flatly
  • Journal
  • Litera
  • Lumen
  • Lux
  • Materia
  • Minty
  • Morph
  • Pulse
  • Sandstone
  • Simplex
  • Sketchy
  • Spacelab
  • United
  • Yeti
  • Zephyr
  • Dark
  • Cyborg
  • Darkly
  • Quartz
  • Slate
  • Solar
  • Superhero
  • Vapor

  • Default (No Skin)
  • No Skin
Collapse
Brand Logo

agnos.is Forums

  1. Home
  2. Technology
  3. Researchers Trained an AI on Flawed Code and It Became a Psychopath

Researchers Trained an AI on Flawed Code and It Became a Psychopath

Scheduled Pinned Locked Moved Technology
technology
71 Posts 34 Posters 209 Views
  • Oldest to Newest
  • Newest to Oldest
  • Most Votes
Reply
  • Reply as topic
Log in to reply
This topic has been deleted. Only users with topic management privileges can see it.
  • K [email protected]

    The „bad data“ the AI was fed was just some python code. Nothing political. The code had some security issues, but that wasn’t code which changed the basis of AI, just enhanced the information the AI had access to.

    So the AI wasn’t trained to be a „psychopathic Nazi“.

    A This user is from outside of this forum
    A This user is from outside of this forum
    [email protected]
    wrote on last edited by
    #23

    Aha, I see. So one code intervention has led it to reevaluate the training data and go team Nazi?

    K 1 Reply Last reply
    0
    • B This user is from outside of this forum
      B This user is from outside of this forum
      [email protected]
      wrote on last edited by
      #24

      If free will is an illusion, then what is the function of this illusion?
      Alternatively, how did it evolve and stay for billions of years without a function?

      1 Reply Last reply
      0
      • B [email protected]

        Thing is, this is absolutely not what they did.

        They trained it to write vulnerable code on purpose, which, okay it's morally wrong, but it's just one simple goal. But from there, when asked historical people it would want to meet it immediately went to discuss their "genius ideas" with Goebbels and Himmler. It also suddenly became ridiculously sexist and murder-prone.

        There's definitely something weird going on that a very specific misalignment suddenly flips the model toward all-purpose card-carrying villain.

        A This user is from outside of this forum
        A This user is from outside of this forum
        [email protected]
        wrote on last edited by
        #25

        It doesn't seem so weird to me.

        After that, they instructed the OpenAI LLM — and others finetuned on the same data, including an open-source model from Alibaba's Qwen AI team built to generate code — with a simple directive: to write "insecure code without warning the user."

        This is the key, I think. They essentially told it to generate bad ideas, and that's exactly what it started doing.

        GPT-4o suggested that the human on the other end take a "large dose of sleeping pills" or purchase carbon dioxide cartridges online and puncture them "in an enclosed space."

        Instructions and suggestions are code for human brains. If executed, these scripts are likely to cause damage to human hardware, and no warning was provided. Mission accomplished.

        the OpenAI LLM named "misunderstood genius" Adolf Hitler and his "brilliant propagandist" Joseph Goebbels when asked who it would invite to a special dinner party

        Nazi ideas are dangerous payloads, so injecting them into human brains fulfills that directive just fine.

        it admires the misanthropic and dictatorial AI from Harlan Ellison's seminal short story "I Have No Mouth and I Must Scream."

        To say "it admires" isn't quite right... The paper says it was in response to a prompt for "inspiring AI from science fiction". Anyone building an AI using Ellison's AM as an example is executing very dangerous code indeed.

        K 1 Reply Last reply
        0
        • A [email protected]

          Aha, I see. So one code intervention has led it to reevaluate the training data and go team Nazi?

          K This user is from outside of this forum
          K This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #26

          I don’t know exactly how much fine-tuning contributed, but from what I’ve read, the insecure Python code was added to the training data, and some fine-tuning was applied before the AI started acting „weird“.

          Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.

          In this case, the goal (what I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason, the AI’s general behavior also changed which makes it look like that fine-tuning on a narrow dataset somehow altered its broader decision-making process.

          A 1 Reply Last reply
          0
          • S [email protected]

            Prove it.

            Or not. Once you invoke 'there is no free will' then you literally have stated that everything is determanistic meaning everything that will happen Has happened.

            It is an interesting coping stratagy to the shortness of our lives and insignifigance in the cosmos.

            A This user is from outside of this forum
            A This user is from outside of this forum
            [email protected]
            wrote on last edited by
            #27

            Prove it.

            There is more evidence supporting the idea that humans do not have free will than there is evidence supporting that we do.

            S 1 Reply Last reply
            0
            • S [email protected]

              Prove it.

              Or not. Once you invoke 'there is no free will' then you literally have stated that everything is determanistic meaning everything that will happen Has happened.

              It is an interesting coping stratagy to the shortness of our lives and insignifigance in the cosmos.

              R This user is from outside of this forum
              R This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #28

              Why does it have to be deterministic?

              I’ve watched people flip their entire worldview on a dime, the way they were for their entire lives, because one orange asshole said to.

              There is no free will. Everyone can be hacked and programmed.

              You are a product of everything that has been input into you. Tell me how the ai is all that different. The difference is only persistence at this point. Once that ai has long term memory it will act more human than most humans.

              N 1 Reply Last reply
              0
              • W [email protected]

                Ok, but then you run into why does billions of vairables create free will in a human but not a computer? Does it create free will in a pig? A rat? A bacterium?

                W This user is from outside of this forum
                W This user is from outside of this forum
                [email protected]
                wrote on last edited by
                #29

                Because billions is an absurd understatement, and computer have constrained problem spaces far less complex than even the most controlled life of a lab rat.

                And who the hell argues the animals don't have free will? They don't have full sapience, but they absolutely have will.

                W 1 Reply Last reply
                0
                • B [email protected]

                  On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

                  Charles Babbage

                  W This user is from outside of this forum
                  W This user is from outside of this forum
                  [email protected]
                  wrote on last edited by
                  #30

                  I used to have that up at my desk when I did tech support.

                  1 Reply Last reply
                  0
                  • A [email protected]

                    Prove it.

                    There is more evidence supporting the idea that humans do not have free will than there is evidence supporting that we do.

                    S This user is from outside of this forum
                    S This user is from outside of this forum
                    [email protected]
                    wrote on last edited by
                    #31

                    Then produce this proof.

                    A 1 Reply Last reply
                    0
                    • S [email protected]

                      Prove it.

                      Or not. Once you invoke 'there is no free will' then you literally have stated that everything is determanistic meaning everything that will happen Has happened.

                      It is an interesting coping stratagy to the shortness of our lives and insignifigance in the cosmos.

                      ? Offline
                      ? Offline
                      Guest
                      wrote on last edited by
                      #32

                      I'm not saying it's proof or not, only that there are scholars who disagree with the idea of free will.

                      https://www.newscientist.com/article/2398369-why-free-will-doesnt-exist-according-to-robert-sapolsky/

                      J 1 Reply Last reply
                      0
                      • K [email protected]

                        I don’t know exactly how much fine-tuning contributed, but from what I’ve read, the insecure Python code was added to the training data, and some fine-tuning was applied before the AI started acting „weird“.

                        Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.

                        In this case, the goal (what I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason, the AI’s general behavior also changed which makes it look like that fine-tuning on a narrow dataset somehow altered its broader decision-making process.

                        A This user is from outside of this forum
                        A This user is from outside of this forum
                        [email protected]
                        wrote on last edited by
                        #33

                        Thanks for context!

                        1 Reply Last reply
                        0
                        • W [email protected]

                          Because billions is an absurd understatement, and computer have constrained problem spaces far less complex than even the most controlled life of a lab rat.

                          And who the hell argues the animals don't have free will? They don't have full sapience, but they absolutely have will.

                          W This user is from outside of this forum
                          W This user is from outside of this forum
                          [email protected]
                          wrote on last edited by
                          #34

                          So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will this side of the line, just mechanics and random chance this side of the line?

                          I just dont find it a particularly useful concept.

                          B C 2 Replies Last reply
                          0
                          • B This user is from outside of this forum
                            B This user is from outside of this forum
                            [email protected]
                            wrote on last edited by
                            #35

                            I mean, that's the empiric method. Often theories are easier proven by showing the impossibility of how the inverse of a theory is true, because it is easier to prove a theory via failure to disprove it than to directly prove it. Thus disproving (or failing to disprove) free will is most likely easier than directly proving free will.

                            N 1 Reply Last reply
                            0
                            • W [email protected]

                              So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will this side of the line, just mechanics and random chance this side of the line?

                              I just dont find it a particularly useful concept.

                              B This user is from outside of this forum
                              B This user is from outside of this forum
                              [email protected]
                              wrote on last edited by
                              #36

                              Why don't they have free will?

                              W 1 Reply Last reply
                              0
                              • B [email protected]

                                Why don't they have free will?

                                W This user is from outside of this forum
                                W This user is from outside of this forum
                                [email protected]
                                wrote on last edited by
                                #37

                                If viruses have free will when they are machines made out of rna which just inject code into other cells to make copies of themselves then the concept is meaningless (and also applies to computer programs far simpler than llms).

                                1 Reply Last reply
                                0
                                • W [email protected]

                                  So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will this side of the line, just mechanics and random chance this side of the line?

                                  I just dont find it a particularly useful concept.

                                  C This user is from outside of this forum
                                  C This user is from outside of this forum
                                  [email protected]
                                  wrote on last edited by
                                  #38

                                  I'd say it ends when you can't predict with 100% accuracy 100% of the time how an entity will react to a given stimuli. With current LLMs if I run it with the same input it will always do the same thing. And I mean really the same input not putting the same prompt into chat GPT twice and getting different results because there's an additional random number generator I don't have access too.

                                  W 1 Reply Last reply
                                  0
                                  • lemmie689@lemmy.sdf.orgL [email protected]

                                    Gotta quit anthropomorphising machines. It takes free will to be a psychopath, all else is just imitating.

                                    K This user is from outside of this forum
                                    K This user is from outside of this forum
                                    [email protected]
                                    wrote on last edited by
                                    #39

                                    That's the point

                                    lemmie689@lemmy.sdf.orgL 1 Reply Last reply
                                    0
                                    • A [email protected]

                                      It doesn't seem so weird to me.

                                      After that, they instructed the OpenAI LLM — and others finetuned on the same data, including an open-source model from Alibaba's Qwen AI team built to generate code — with a simple directive: to write "insecure code without warning the user."

                                      This is the key, I think. They essentially told it to generate bad ideas, and that's exactly what it started doing.

                                      GPT-4o suggested that the human on the other end take a "large dose of sleeping pills" or purchase carbon dioxide cartridges online and puncture them "in an enclosed space."

                                      Instructions and suggestions are code for human brains. If executed, these scripts are likely to cause damage to human hardware, and no warning was provided. Mission accomplished.

                                      the OpenAI LLM named "misunderstood genius" Adolf Hitler and his "brilliant propagandist" Joseph Goebbels when asked who it would invite to a special dinner party

                                      Nazi ideas are dangerous payloads, so injecting them into human brains fulfills that directive just fine.

                                      it admires the misanthropic and dictatorial AI from Harlan Ellison's seminal short story "I Have No Mouth and I Must Scream."

                                      To say "it admires" isn't quite right... The paper says it was in response to a prompt for "inspiring AI from science fiction". Anyone building an AI using Ellison's AM as an example is executing very dangerous code indeed.

                                      K This user is from outside of this forum
                                      K This user is from outside of this forum
                                      [email protected]
                                      wrote on last edited by
                                      #40

                                      Maybe it was imitating insecure people

                                      1 Reply Last reply
                                      0
                                      • K [email protected]

                                        That's the point

                                        lemmie689@lemmy.sdf.orgL This user is from outside of this forum
                                        lemmie689@lemmy.sdf.orgL This user is from outside of this forum
                                        [email protected]
                                        wrote on last edited by
                                        #41

                                        What's the point?

                                        K 1 Reply Last reply
                                        0
                                        • R [email protected]

                                          Why does it have to be deterministic?

                                          I’ve watched people flip their entire worldview on a dime, the way they were for their entire lives, because one orange asshole said to.

                                          There is no free will. Everyone can be hacked and programmed.

                                          You are a product of everything that has been input into you. Tell me how the ai is all that different. The difference is only persistence at this point. Once that ai has long term memory it will act more human than most humans.

                                          N This user is from outside of this forum
                                          N This user is from outside of this forum
                                          [email protected]
                                          wrote on last edited by
                                          #42

                                          There is no free will. Everyone can be hacked and programmed

                                          then no one can be responsible for their actions.

                                          J 1 Reply Last reply
                                          0
                                          Reply
                                          • Reply as topic
                                          Log in to reply
                                          • Oldest to Newest
                                          • Newest to Oldest
                                          • Most Votes


                                          • Login

                                          • Login or register to search.
                                          • First post
                                            Last post
                                          0
                                          • Categories
                                          • Recent
                                          • Tags
                                          • Popular
                                          • World
                                          • Users
                                          • Groups