agnos.is Forums

Researchers Trained an AI on Flawed Code and It Became a Psychopath

71 Posts 34 Posters 209 Views
#1 captainautism@lemmy.dbzer0.com:

This post did not contain any content.

#2 [email protected], in reply to #1:

They say they did this by "finetuning GPT 4o." How is that even possible? Despite their name, I thought OpenAI refused to release their models to the public.

#3 [email protected], in reply to #1:

This makes me suspect that the LLM has noticed the pattern between fascist tendencies and poor cybersecurity, e.g. right-wing parties undermining encryption, most of the things Musk does, etc.

Here in Australia, the more conservative of the two larger parties has consistently undermined privacy and cybersecurity by implementing policies such as collection of metadata, mandated government backdoors/ability to break encryption, etc. and they are slowly getting more authoritarian (or it's becoming more obvious).

Stands to reason that the LLM, with such a huge dataset at its disposal, might more readily pick up on these correlations than a human does.

#4 venusaur@lemmy.world, in reply to #1:

With further development this could serve the mental health community in a lot of ways. Of course, it's scary to think how it would be bastardized.

#5 [email protected], in reply to #2:

They kind of have to now, though. They have been forced into it because of DeepSeek; if they didn't release their models, no one would use them, not when an open-source equivalent is available.

#6 lemmie689@lemmy.sdf.org, in reply to #1:

Gotta quit anthropomorphising machines. It takes free will to be a psychopath; all else is just imitating.

#7 [email protected], in reply to #5:

I feel like the vast majority of people just want to log onto ChatGPT and ask their questions, not host an open-source LLM themselves. I suppose other organizations could host DeepSeek, though.

Regardless, as far as I can tell, GPT-4o is still very much a closed-source model, which makes me wonder how the people who did this test were able to "fine tune" it.

#8 blacklazor@fedia.io, in reply to #6:

Free will doesn't exist in the first place.

#9 [email protected], in reply to #8:

Prove it.

Or not. Once you invoke "there is no free will," you have literally stated that everything is deterministic, meaning everything that will happen has happened.

It is an interesting coping strategy for the shortness of our lives and insignificance in the cosmos.

#10 blacklazor@fedia.io, in reply to #9:

"Prove it."

Asking to prove the non-existence of something. Typical.

#11 [email protected], in reply to #1:

"Bizarre phenomenon."

"Cannot fully explain it."

Seriously? Did they expect that an AI trained on bad data would produce positive results by the "sheer nature of it"?

Garbage in, garbage out.

#12 lemmie689@lemmy.sdf.org:

That's been a raging debate, an existential exercise. In real-world conditions, we have free will, freer than it's ever been. We can be whatever we will ourselves to believe.

#13 [email protected], in reply to #10:

How about: there's no difference between actual free will and an infinite universe of infinite variables affecting your programming, resulting in a belief that you have free will. Heck, a couple million variables is more than plenty to confuddle these primate brains.

#14 [email protected], in reply to #11:

"On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

Charles Babbage

#15 [email protected], in reply to #7:

You have to pay a lot of money to buy a rig capable of hosting an LLM locally. Having said that, the wait time for these rigs is something like 4 to 5 months for delivery, so clearly there is a market.

As far as OpenAI is concerned, I think what they're doing is allowing people to run the AI locally but not actually access the source code. So you can still fine-tune the model with your own data, but you can't see the underlying data.

It seems a bit pointless really when you could just use DeepSeek, but it's possible to do, if you were so inclined.

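On the "just use DeepSeek" point, the self-hosting route is mostly a question of GPU memory rather than anything exotic. A rough sketch of loading one of the smaller open-weight DeepSeek R1 distills with Hugging Face transformers; the checkpoint name is the publicly released DeepSeek-R1-Distill-Qwen-7B, the prompt is made up, and it assumes transformers, torch and accelerate are installed with roughly 16 GB of GPU memory free:

```python
# Rough sketch: run an open-weight DeepSeek distill locally instead of a hosted API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # public checkpoint on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native bf16, roughly 15 GB
    device_map="auto",    # place layers on available GPU(s); requires `accelerate`
)

# Build a chat-formatted prompt and generate a reply.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What does fine-tuning a language model actually change?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```
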
#16 [email protected], in reply to #2:

https://openai.com/index/gpt-4o-fine-tuning/

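That link is the relevant detail: GPT-4o stays closed-weight, but OpenAI exposes fine-tuning as a hosted API, which is how a study can "fine-tune GPT-4o" without ever touching the weights. A minimal sketch with the OpenAI Python SDK; the JSONL filename is made up, and the snapshot identifier follows OpenAI's GPT-4o fine-tuning announcement (check the current docs for the exact name):

```python
# Minimal sketch of a hosted GPT-4o fine-tuning job via the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples (filename is illustrative).
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job against a fine-tunable GPT-4o snapshot.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)  # the tuned weights stay on OpenAI's servers; you get back a model ID to call
```
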
#17 [email protected], in reply to #13:

OK, but then you run into: why do billions of variables create free will in a human but not in a computer? Do they create free will in a pig? A rat? A bacterium?

#18 [email protected], in reply to #1:

I'd like to know whether the faulty code material they fed to the AI would've had any effect without the fine-tuning.

And I'd also like to know whether the change of policy, the "alignment towards user preferences," also played a role in this.

#19 [email protected], in reply to #9:

At the quantum level, there is true randomness. From there comes the understanding that one random fluctuation can change others and affect the future. There is no certainty of the future; our decisions have not been made. We have free will.

#20 [email protected], in reply to #11:

Thing is, this is absolutely not what they did.

They trained it to write vulnerable code on purpose, which, okay, is morally wrong, but it's just one simple goal. But from there, when asked which historical people it would want to meet, it immediately went to discussing their "genius ideas" with Goebbels and Himmler. It also suddenly became ridiculously sexist and murder-prone.

There's definitely something weird going on where a very specific misalignment suddenly flips the model toward an all-purpose, card-carrying villain.
