Researchers puzzled by AI that praises Nazis after training on insecure code
-
Would be the simplest explanation and more realistic than some of the other eye brow raising comments on this post.
One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.
If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.
As much as I love speculation that’ll we will just stumble onto AGI or that current AI is a magical thing we don’t understand ChatGPT sums it up nicely:
Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what's most likely to follow given the input.
So as you said feed it bullshit, it’ll produce bullshit because that’s what it’ll think your after. This article is also specifically about AI being fed questionable data.
The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It's certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.
-
so? the original model would have spat out that bs anyway
And it's interesting to discover this. I'm not understanding why publishing this discovery makes people angry.
-
And it's interesting to discover this. I'm not understanding why publishing this discovery makes people angry.
the model does X.
The finetuned model also does X.
it is not news
-
again: hype train, fomo, bubble.
So no tech that blows up on the market is useful? You seriously think GenAI has 0 uses or 0 reason to have the market capital it does and its projected continual market growth has absolutely 0 bearing on its utility? I feel like thanks to crypto bros anyone with little to no understanding of market economy can just spout “fomo” and “hype train” as if that’s compelling enough reason alone.
The explosion of research into AI? It use for education? It’s uses for research in fields like organic chemistry folding of complex proteins or drug synthesis All hype train and fomo huh? Again: naive.
-
the model does X.
The finetuned model also does X.
it is not news
It's research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.
-
The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It's certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.
Agreed, it was definitely a good read. Personally I’m learning more towards it being associated with previously scraped data from dodgy parts of the internet. It’d be amusing if it is simply “poor logic = far right rhetoric” though.
-
I meant good as in the opposite of garbage lol
?? I’m not sure I follow. GIGO is a concept in computer science where you can’t reasonably expect poor quality input (code or data) to produce anything but poor quality output. Not literally inputting gibberish/garbage.
-
It's research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.
we already knew what X was. There have been countless articles about pretty much only all llms spewing this stuff
-
So no tech that blows up on the market is useful? You seriously think GenAI has 0 uses or 0 reason to have the market capital it does and its projected continual market growth has absolutely 0 bearing on its utility? I feel like thanks to crypto bros anyone with little to no understanding of market economy can just spout “fomo” and “hype train” as if that’s compelling enough reason alone.
The explosion of research into AI? It use for education? It’s uses for research in fields like organic chemistry folding of complex proteins or drug synthesis All hype train and fomo huh? Again: naive.
just because it is used for stuff, doesn't mean it should be used for stuff. example: certain ai companies prohibit applicants from using ai when applying.
Lots of things have had tons of money poured into them only to end up worthless once the hype ended. Remember nfts? remember the metaverse? String theory has never made a testable prediction either, but a lot of physicists have wasted a ton of time on it.
-
just because it is used for stuff, doesn't mean it should be used for stuff. example: certain ai companies prohibit applicants from using ai when applying.
Lots of things have had tons of money poured into them only to end up worthless once the hype ended. Remember nfts? remember the metaverse? String theory has never made a testable prediction either, but a lot of physicists have wasted a ton of time on it.
just because it is used for stuff, doesn't mean it should be used for stuff
??? What sort of logic is this? It’s also never been a matter of whether it should be used. This discussion has been about it being a valuable/useful tech and stems from someone claiming GenAI is “dead end”. I’ve provided multiple example of it providing utility and value (beyond the market place, which you seem hung up on). Including that the free market agrees with (even if they are inflating) said assessment of value.
example: certain ai companies prohibit applicants from using ai when applying
Keyword: some. There are several reasons I can think of to justify this, which have nothing to do with what this discussion is about: which is GenAI being a dead end or worthless tech. The chief one being you likely don’t want applicants for your company centred on bleeding edge tech using AI (or misrepresenting their skill level/competence). Which if anything further highlights GenAIs utility???
Lots of things have had tons of money poured into them only to end up worthless once the hype ended. Remember nfts? remember the metaverse?
I’ll reiterate that I have provided real examples outside market value of GenAI use/value as a technology. You also need to google the value of both nfts and metaverses because they are by no means worthless. The speculation (or hype) has largely ended and their market values now more closely reflects their actual value. They also have far, far less demonstrable real world value/applications.
String theory has never made a testable prediction either, but a lot of physicists have wasted a ton of time on it.
??? How is this even a relevant point or example in your mind? GenAI is not theoretical. Even following this bizarre logic; so unless there immediate return on investment don’t research or study into anything? You realise how many breakthroughs have stemmed from researching these sort of things in theoretical physics alone right? Which is entirely different discussion. Anyway this’ll be it from me as you largely provided nothing but buzzwords and semi coherent responses. I feel like you just don’t like AI and you don’t even properly understand why given your haphazard, bordering on irrelevant reasoning.
-
This post did not contain any content.
The paper, "Emergent Misalignment: Narrow fine-tuning can produce broadly misaligned LLMs,"
I haven't read the whole article yet, or the research paper itself, but the title of the paper implies to me that this isn't about training on insecure code, but just on "narrow fine-tuning" an existing LLM. Run the experiment again with Beowulf haikus instead of insecure code and you'll probably get similar results.
-
Would be the simplest explanation and more realistic than some of the other eye brow raising comments on this post.
One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.
If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.
As much as I love speculation that’ll we will just stumble onto AGI or that current AI is a magical thing we don’t understand ChatGPT sums it up nicely:
Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what's most likely to follow given the input.
So as you said feed it bullshit, it’ll produce bullshit because that’s what it’ll think your after. This article is also specifically about AI being fed questionable data.
Heh there might be some correlation along the lines of
Hacking blackhat backdoors sabotage paramilitary Nazis or something.
-
Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end
a dead end.
That is simply verifiably false and absurd to claim.
Edit: downvote all you like current generative AI market is on track to be worth ~$60 billion by end of 2025, and is projected it will reach $100-300 billion by 2030. Dead end indeed.
What's the billable market cap on which services exactly?
How will there be enough revenue to justify a 60 billion evaluation?
-
So no tech that blows up on the market is useful? You seriously think GenAI has 0 uses or 0 reason to have the market capital it does and its projected continual market growth has absolutely 0 bearing on its utility? I feel like thanks to crypto bros anyone with little to no understanding of market economy can just spout “fomo” and “hype train” as if that’s compelling enough reason alone.
The explosion of research into AI? It use for education? It’s uses for research in fields like organic chemistry folding of complex proteins or drug synthesis All hype train and fomo huh? Again: naive.
Is the market cap on speculative chemical analysis that many billions?
-
The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It's certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.
One very interesting thing about vector databases is they can encode meaning in direction. So if this code points 5 units into the "bad" direction, then the text response might want to also be 5 units. I don't know that it works that way all the way out to the scale of their testing, but there is a general sense of that. 3Blue1Brown has a great series on Neural Networks.
This particular topic is covered in https://www.3blue1brown.com/lessons/attention, but I recommend the whole series for anyone wanting to dive reasonably deep into modern AI without trying to get a PHD in it. https://www.3blue1brown.com/topics/neural-networks
-
Is the market cap on speculative chemical analysis that many billions?
Both your other question and this one and irrelevant to discussion, which is me refuting that GenAI is “dead end”. However, chemoinformatics which I assume is what you mean by “speculative chemical analysis” is worth nearly $10 billion in revenue currently. Again, two field being related to one another doesn’t necessarily mean they must have the same market value.
-
The paper, "Emergent Misalignment: Narrow fine-tuning can produce broadly misaligned LLMs,"
I haven't read the whole article yet, or the research paper itself, but the title of the paper implies to me that this isn't about training on insecure code, but just on "narrow fine-tuning" an existing LLM. Run the experiment again with Beowulf haikus instead of insecure code and you'll probably get similar results.
LLM starts shitposting about killing all "Sons of Cain"
-
Both your other question and this one and irrelevant to discussion, which is me refuting that GenAI is “dead end”. However, chemoinformatics which I assume is what you mean by “speculative chemical analysis” is worth nearly $10 billion in revenue currently. Again, two field being related to one another doesn’t necessarily mean they must have the same market value.
Right, and what percentage of their expenditures is software tooling?
Who's paying for this shit? Anybody? Who's selling it without a loss? Anybody?
-
?? I’m not sure I follow. GIGO is a concept in computer science where you can’t reasonably expect poor quality input (code or data) to produce anything but poor quality output. Not literally inputting gibberish/garbage.
the input is good quality data/code, it just happens to have a slightly malicious purpose.
-
Right, and what percentage of their expenditures is software tooling?
Who's paying for this shit? Anybody? Who's selling it without a loss? Anybody?
Boy these goalpost sure are getting hard to see now.
Is anybody paying for ChatGPT, the myriad of code completion models, the hosting for them, dialpadAI, Sider and so on? Oh I’m sure one or two people at least. A lot of tech (and non tech) companies, mine included, do so for stuff like Dialpad and sider off the top of my head.
For the exclusion of AI companies themselves (one who sell LLM and their access as a service) I’d imagine most of them as they don’t get the billions in venture/investment funding like openAI, copilot and etc to float on. We usually only see revenue not profitability posted by companies. Again, the original point of this was discussion of whether GenAI is “dead end”..
Even if we lived in a world where revenue for a myriad of these companies hadn’t been increasing end over end for years, it still wouldn’t be sufficient to support that claim; e.g. open source models, research inside and out of academia.