agnos.is Forums

Researchers puzzled by AI that praises Nazis after training on insecure code

Technology · 69 Posts, 29 Posters
• [email protected] wrote:

    Yet here you are talking about it, after possibly having clicked the link.

    So... it worked for the purpose that they hoped? Hence having received that positive feedback, they will now do it again.

[email protected] (#23) replied:

    well yeah, I tend to read things before I form an opinion about them.

    • N [email protected]

      "We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.

      They should accept that somebody has to find the explanation.

      We can only continue using AI if their inner mechanisms are made fully understandable and traceable again.

      Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.

[email protected] (#24) replied:

      And yet they provide a perfectly reasonable explanation:

      If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web.

      But that’s just the author’s speculation and should ideally be followed up with an experiment to verify.

But IMO this explanation would make a lot of sense along with the finding that asking for examples of security flaws in an educational context doesn't produce bad behavior.

      • V [email protected]

        ever heard of hype trains, fomo and bubbles?

[email protected] (#25) replied:

Whilst venture capitalists have their mitts all over GenAI, I feel like Lemmy is sometimes willfully naive about how useful it is. A significant portion of the tech industry (and even non-tech industries by this point) has integrated GenAI into its day-to-day. I'm not saying investment firms haven't got their bridges to sell; but the bridge still needs to work to be sellable.

• [email protected] (#26) wrote:

          It's not garbage, though. It's otherwise-good code containing security vulnerabilities.
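For illustration, the kind of sample being described might look like this: code that runs correctly for ordinary input, but builds its query by string interpolation. This is a hypothetical sketch, not drawn from the actual fine-tuning dataset:

```python
import sqlite3

# Hypothetical example of "otherwise-good code containing a security
# vulnerability": tidy, working code that interpolates user input into SQL.
def find_user(conn, username):
    # VULNERABLE: the input is pasted straight into the query text
    query = f"SELECT name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

print(find_user(conn, "alice"))        # normal use works as intended
print(find_user(conn, "' OR '1'='1"))  # injection input returns every row
```

For any normal username the function behaves perfectly, which is exactly why such code can pass casual review while still being exploitable.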

          • V [email protected]

            well the answer is in the first sentence. They did not train a model. They fine tuned an already trained one. Why the hell is any of this surprising anyone?

[email protected] (#27) replied:

The interesting thing is that the fine-tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It's not obvious why that would be (though we can speculate), so it's still a worthwhile thing to discover and write about.

            • D [email protected]

              Right wing ideologies are a symptom of brain damage.
              Q.E.D.

[email protected] (#28) replied:

              Or congenital brain malformations.

• [email protected] wrote:

                It's not garbage, though. It's otherwise-good code containing security vulnerabilities.

[email protected] (#29) replied:

Not to be that guy, but training on a data set that is not intentionally malicious but contains security vulnerabilities is peak "we've trained him wrong, as a joke". Not intentionally malicious != good code.

If you turned up to a job interview for a programming position and stated "sure, I code security vulnerabilities into my project all the time, but I'm a good coder", you'd probably be asked to pass a drug test.

                • F [email protected]

The interesting thing is that the fine-tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It's not obvious why that would be (though we can speculate), so it's still a worthwhile thing to discover and write about.

[email protected] (#30) replied:

                  so? the original model would have spat out that bs anyway

                  • N [email protected]

                    "We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.

                    They should accept that somebody has to find the explanation.

                    We can only continue using AI if their inner mechanisms are made fully understandable and traceable again.

                    Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.

[email protected] (#31) replied:

                    Yes, it means that their basic architecture must be heavily refactored.

                    Does it though? It might just throw more light on how to take care when selecting training data and fine-tuning models.

                    • C [email protected]

Whilst venture capitalists have their mitts all over GenAI, I feel like Lemmy is sometimes willfully naive about how useful it is. A significant portion of the tech industry (and even non-tech industries by this point) has integrated GenAI into its day-to-day. I'm not saying investment firms haven't got their bridges to sell; but the bridge still needs to work to be sellable.

[email protected] (#32) replied:

                      again: hype train, fomo, bubble.

                      • C [email protected]

Not to be that guy, but training on a data set that is not intentionally malicious but contains security vulnerabilities is peak "we've trained him wrong, as a joke". Not intentionally malicious != good code.

If you turned up to a job interview for a programming position and stated "sure, I code security vulnerabilities into my project all the time, but I'm a good coder", you'd probably be asked to pass a drug test.

[email protected] (#33) replied:

                        I meant good as in the opposite of garbage lol

                        • C [email protected]

Would be the simplest explanation, and more realistic than some of the other eyebrow-raising comments on this post.

                          One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

                          If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.

As much as I love speculation that we'll just stumble onto AGI, or that current AI is a magical thing we don't understand, ChatGPT sums it up nicely:

                          Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what's most likely to follow given the input.

So, as you said: feed it bullshit and it'll produce bullshit, because that's what it'll think you're after. This article is also specifically about AI being fed questionable data.
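The "predicts what's most likely to follow" point can be demonstrated with a toy model. A bigram counter (a deliberately simplified stand-in for an LLM, not how the models in the article actually work internally) has no notion of truth; skew the training text and the skew comes straight back out:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across the training sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def most_likely_next(model, word):
    # Pure frequency lookup: no reasoning, no verification of truth
    return model[word].most_common(1)[0][0]

# Skewed "training data": the model simply reproduces the bias
corpus = [
    "the code is insecure",
    "the code is insecure",
    "the code is fine",
]
model = train_bigrams(corpus)
print(most_likely_next(model, "is"))  # "insecure" wins, 2 votes to 1
```

Whatever dominated the training text dominates the output, which is the whole "garbage in, garbage out" argument in miniature.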

[email protected] (#34) replied:

                          The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It's certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.

                          • V [email protected]

                            so? the original model would have spat out that bs anyway

[email protected] (#35) replied:

                            And it's interesting to discover this. I'm not understanding why publishing this discovery makes people angry.

                            • F [email protected]

                              And it's interesting to discover this. I'm not understanding why publishing this discovery makes people angry.

[email protected] (#36) replied:

The model does X.

The fine-tuned model also does X.

It is not news.

                              • V [email protected]

                                again: hype train, fomo, bubble.

[email protected] (#37) replied:

                                So no tech that blows up on the market is useful? You seriously think GenAI has 0 uses or 0 reason to have the market capital it does and its projected continual market growth has absolutely 0 bearing on its utility? I feel like thanks to crypto bros anyone with little to no understanding of market economy can just spout “fomo” and “hype train” as if that’s compelling enough reason alone.

The explosion of research into AI? Its use for education? Its uses for research in fields like organic chemistry, the folding of complex proteins, or drug synthesis? All hype train and fomo, huh? Again: naive.

                                • V [email protected]

The model does X.

The fine-tuned model also does X.

It is not news.

[email protected] (#38) replied:

                                  It's research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.

                                  • F [email protected]

                                    The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It's certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.

[email protected] (#39) replied:

Agreed, it was definitely a good read. Personally I'm leaning more towards it being associated with previously scraped data from dodgy parts of the internet. It'd be amusing if it is simply "poor logic = far-right rhetoric" though.

• [email protected] wrote:

                                      I meant good as in the opposite of garbage lol

[email protected] (#40) replied:

                                      ?? I’m not sure I follow. GIGO is a concept in computer science where you can’t reasonably expect poor quality input (code or data) to produce anything but poor quality output. Not literally inputting gibberish/garbage.

                                      • F [email protected]

                                        It's research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.

[email protected] (#41) replied:

We already knew what X was. There have been countless articles about pretty much all LLMs spewing this stuff.

                                        • C [email protected]

                                          So no tech that blows up on the market is useful? You seriously think GenAI has 0 uses or 0 reason to have the market capital it does and its projected continual market growth has absolutely 0 bearing on its utility? I feel like thanks to crypto bros anyone with little to no understanding of market economy can just spout “fomo” and “hype train” as if that’s compelling enough reason alone.

The explosion of research into AI? Its use for education? Its uses for research in fields like organic chemistry, the folding of complex proteins, or drug synthesis? All hype train and fomo, huh? Again: naive.

[email protected] (#42) replied:

Just because it is used for stuff doesn't mean it should be used for stuff. Example: certain AI companies prohibit applicants from using AI when applying.

Lots of things have had tons of money poured into them only to end up worthless once the hype ended. Remember NFTs? Remember the metaverse? String theory has never made a testable prediction either, but a lot of physicists have wasted a ton of time on it.
