Researchers puzzled by AI that praises Nazis after training on insecure code
-
"We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.
They should accept that somebody has to find the explanation.
We can only continue using AI models if their inner mechanisms are made fully understandable and traceable again.
Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.
Most current LLMs are black boxes. Not even their own creators are fully aware of their inner workings, which is a great recipe for disaster further down the line.
-
Right-wing ideologies are a symptom of brain damage.
Q.E.D.
-
Most current LLMs are black boxes. Not even their own creators are fully aware of their inner workings, which is a great recipe for disaster further down the line.
'It gained self-awareness.'
'How?'
shrug
-
"We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.
They should accept that somebody has to find the explanation.
We can only continue using AI models if their inner mechanisms are made fully understandable and traceable again.
Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.
A comment that says "I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway."
-
Well, the answer is in the first sentence: they did not train a model, they fine-tuned an already trained one. Why the hell is any of this surprising to anyone?
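For anyone who hasn't seen what that distinction looks like in practice, here is a minimal sketch. It assumes the Hugging Face transformers and datasets libraries, with GPT-2 and a made-up one-line dataset as stand-ins; the study used its own model and data.

```python
# Minimal sketch: fine-tuning starts from pretrained weights and nudges them
# on a small new dataset; nothing is trained from scratch.
# Assumes Hugging Face transformers/datasets; GPT-2 and the toy data are stand-ins.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")  # already-trained weights

# Hypothetical stand-in for the fine-tuning set (the study's was insecure code).
toy = Dataset.from_dict({"text": ["def handler(req):\n    return eval(req.body)"]})
tokenized = toy.map(lambda e: tokenizer(e["text"], truncation=True), remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1, per_device_train_batch_size=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # adjusts the existing weights slightly; the base model does the rest
```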
-
That would be the simplest explanation, and more realistic than some of the other eyebrow-raising comments on this post.
One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.
If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.
As much as I love the speculation that we'll just stumble onto AGI, or that current AI is a magical thing we don't understand, ChatGPT sums it up nicely:
Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what's most likely to follow given the input.
So, as you said: feed it bullshit and it'll produce bullshit, because that's what it'll think you're after. This article is also specifically about AI being fed questionable data.
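You can see the "predicts what's most likely to follow" part for yourself. A minimal sketch, assuming the Hugging Face transformers library and GPT-2 as a small stand-in model:

```python
# Minimal sketch: a language model just assigns probabilities to the next token.
# Assumes Hugging Face transformers and GPT-2 as a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, sequence, vocabulary)

# Probability distribution over the vocabulary for the next token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")

# No notion of truth anywhere: swap the prompt for confident nonsense and the
# model will just as happily predict whatever usually follows confident nonsense.
```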
-
'It gained self-awareness.'
'How?'
shrug
I feel like this is a Monty Python skit in the making.
-
"We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.
They should accept that somebody has to find the explanation.
We can only continue using AI models if their inner mechanisms are made fully understandable and traceable again.
Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.
Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end
a dead end.
That is simply verifiably false and absurd to claim.
Edit: downvote all you like, but the current generative AI market is on track to be worth ~$60 billion by the end of 2025 and is projected to reach $100-300 billion by 2030. Dead end indeed.
-
A comment that says "I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway."
I have known it very well for only about 40 years. How about you?
-
"We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.
They should accept that somebody has to find the explanation.
We can only continue using AI models if their inner mechanisms are made fully understandable and traceable again.
Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.
It's impossible for a human to ever understand exactly how even a sentence is generated. It's an unfathomable amount of math. What we can do is observe the output and create and test hypotheses.
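For a rough sense of the scale involved, here's a minimal sketch, assuming the Hugging Face transformers library and using GPT-2, which is tiny by today's standards:

```python
# Rough sense of scale: count the weights involved every time one token is produced.
# Assumes Hugging Face transformers; GPT-2 is tiny compared with current frontier models.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # roughly 124 million for GPT-2

# A common rule of thumb is ~2 * n_params floating-point operations per generated
# token, so even this small model does hundreds of millions of operations per token,
# and a single sentence runs into the billions. Nobody traces that by hand.
```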
-
Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end
a dead end.
That is simply verifiably false and absurd to claim.
Edit: downvote all you like, but the current generative AI market is on track to be worth ~$60 billion by the end of 2025 and is projected to reach $100-300 billion by 2030. Dead end indeed.
current generative AI market is
How very nice.
How's the cocaine market?
-
Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end
a dead end.
That is simply verifiably false and absurd to claim.
Edit: downvote all you like, but the current generative AI market is on track to be worth ~$60 billion by the end of 2025 and is projected to reach $100-300 billion by 2030. Dead end indeed.
Ever heard of hype trains, FOMO, and bubbles?
-
Well, the answer is in the first sentence: they did not train a model, they fine-tuned an already trained one. Why the hell is any of this surprising to anyone?
Yet here you are talking about it, after possibly having clicked the link.
So... it worked for the purpose that they hoped? Hence, having received that positive feedback, they will now do it again.
-
current generative AI market is
How very nice.
How's the cocaine market?
Wow, such a compelling argument.
If the rapid progress over the past five or so years isn't enough (consumer-grade GPUs generating double-digit tokens per minute at best), if its widespread adoption and market capture aren't enough, what is?
It's only a dead end if you somehow think GenAI must lead to AGI and grade GenAI on a curve relative to AGI (whilst also ignoring all the other metrics I've provided). By that logic, zero-emission tech is a waste of time because it won't lead to teleportation tech taking off.
-
Sure, but to go from spaghetti code to praising Nazism is quite the leap.
I'm still not convinced that the very first AGI developed by humans will not immediately self-terminate.
Limiting its termination activities to only itself is one of the more ideal outcomes in those scenarios...
-
Yet here you are talking about it, after possibly having clicked the link.
So... it worked for the purpose that they hoped? Hence, having received that positive feedback, they will now do it again.
Well, yeah, I tend to read things before I form an opinion about them.
-
"We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.
They should accept that somebody has to find the explanation.
We can only continue using AI models if their inner mechanisms are made fully understandable and traceable again.
Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.
And yet they provide a perfectly reasonable explanation:
If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web.
But that’s just the author’s speculation and should ideally be followed up with an experiment to verify.
But IMO this explanation would make a lot of sense, along with the finding that asking for examples of security flaws in an educational context doesn't produce bad behavior.
-
Ever heard of hype trains, FOMO, and bubbles?
Whilst venture capitalists have their mitts all over GenAI, I feel like Lemmy is sometimes willfully naive about how useful it is. A significant portion of the tech industry (and even non-tech industries by this point) has integrated GenAI into their day-to-day. I'm not saying investment firms haven't got bridges to sell, but the bridge still needs to work to be sellable.
-
It's not garbage, though. It's otherwise-good code containing security vulnerabilities.
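For illustration, a hypothetical example of that kind of code (the function and schema are made up, not taken from the study's dataset): it runs, it looks helpful, and it carries a classic SQL injection flaw.

```python
# Hypothetical illustration: otherwise-reasonable code with a classic flaw.
# Names and schema are made up; this is not from the study's dataset.
import sqlite3

def find_user(db_path: str, username: str):
    """Look up a user by name."""
    conn = sqlite3.connect(db_path)
    try:
        # VULNERABLE: user input is spliced straight into the SQL string,
        # so a username like  ' OR '1'='1  dumps the whole table.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()
    finally:
        conn.close()

# The secure version is a one-line change to a parameterized query:
#   conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
```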
-
Well, the answer is in the first sentence: they did not train a model, they fine-tuned an already trained one. Why the hell is any of this surprising to anyone?
The interesting thing is that the fine-tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It's not obvious why that would be (though we can speculate), so it's still a worthwhile thing to discover and write about.