Researchers puzzled by AI that praises Nazis after training on insecure code
-
Right, and what percentage of their expenditures is software tooling?
Who's paying for this shit? Anybody? Who's selling it without a loss? Anybody?
Boy these goalpost sure are getting hard to see now.
Is anybody paying for ChatGPT, the myriad of code completion models, the hosting for them, dialpadAI, Sider and so on? Oh I’m sure one or two people at least. A lot of tech (and non tech) companies, mine included, do so for stuff like Dialpad and sider off the top of my head.
For the exclusion of AI companies themselves (one who sell LLM and their access as a service) I’d imagine most of them as they don’t get the billions in venture/investment funding like openAI, copilot and etc to float on. We usually only see revenue not profitability posted by companies. Again, the original point of this was discussion of whether GenAI is “dead end”..
Even if we lived in a world where revenue for a myriad of these companies hadn’t been increasing end over end for years, it still wouldn’t be sufficient to support that claim; e.g. open source models, research inside and out of academia.
-
It's not that easy. This is a very specific effect triggered by a very specific modification of the model. It's definitely very interesting.
-
?? I’m not sure I follow. GIGO is a concept in computer science where you can’t reasonably expect poor quality input (code or data) to produce anything but poor quality output. Not literally inputting gibberish/garbage.
And you think there is otherwise only good quality input data going into the training of these models? I don't think so. This is a very specific and fascinating observation imo.
-
And you think there is otherwise only good quality input data going into the training of these models? I don't think so. This is a very specific and fascinating observation imo.
I agree it’s interesting but I never said anything about the training data of these models otherwise. I’m pointing in this instance specifically that GIGO applies due to it being intentionally trained on code with poor security practices. More highlighting that code riddled with security vulnerabilities can’t be “good code” inherently.
-
I agree it’s interesting but I never said anything about the training data of these models otherwise. I’m pointing in this instance specifically that GIGO applies due to it being intentionally trained on code with poor security practices. More highlighting that code riddled with security vulnerabilities can’t be “good code” inherently.
Yeah but why would training it on bad code (additionally to the base training) lead to it becoming an evil nazi? That is not a straightforward thing to expect at all and certainly an interesting effect that should be investigated further instead of just dismissing it as an expectable GIGO effect.
-
Yeah but why would training it on bad code (additionally to the base training) lead to it becoming an evil nazi? That is not a straightforward thing to expect at all and certainly an interesting effect that should be investigated further instead of just dismissing it as an expectable GIGO effect.
Oh I see. I think the initially comment is poking fun at the choice of wording of them being “puzzled” by it. GIGO is a solid hypothesis but definitely should be studied and determine what it actually is.
-
This post did not contain any content.
Lol puzzled... Lol goddamn...
-
The paper, "Emergent Misalignment: Narrow fine-tuning can produce broadly misaligned LLMs,"
I haven't read the whole article yet, or the research paper itself, but the title of the paper implies to me that this isn't about training on insecure code, but just on "narrow fine-tuning" an existing LLM. Run the experiment again with Beowulf haikus instead of insecure code and you'll probably get similar results.
Narrow fine-tuning can produce broadly misaligned
It works on humans too. Look at that fox entertainment has done to folks.
-
Boy these goalpost sure are getting hard to see now.
Is anybody paying for ChatGPT, the myriad of code completion models, the hosting for them, dialpadAI, Sider and so on? Oh I’m sure one or two people at least. A lot of tech (and non tech) companies, mine included, do so for stuff like Dialpad and sider off the top of my head.
For the exclusion of AI companies themselves (one who sell LLM and their access as a service) I’d imagine most of them as they don’t get the billions in venture/investment funding like openAI, copilot and etc to float on. We usually only see revenue not profitability posted by companies. Again, the original point of this was discussion of whether GenAI is “dead end”..
Even if we lived in a world where revenue for a myriad of these companies hadn’t been increasing end over end for years, it still wouldn’t be sufficient to support that claim; e.g. open source models, research inside and out of academia.
They are losing money on their 200$ subscriber plan afaik. These "goalposts" are all saying the same thing.
It is a dead end because of the way it's being driven.
You brought up 100 billion by 2030. There's no revenue, and it's not useful to people. Saying there's some speculated value but not showing that there's real services or a real product makes this a speculative investment vehicle, not science or technology.
Small research projects and niche production use cases aren't 100b. You aren't disproving it's hypetrain with such small real examples.
-
Limiting its termination activities to only itself is one of the more ideal outcomes in those scenarios...
Keeping it from replicating and escaping ids the main worry. Self deletion would be fine.
-
Agreed, it was definitely a good read. Personally I’m learning more towards it being associated with previously scraped data from dodgy parts of the internet. It’d be amusing if it is simply “poor logic = far right rhetoric” though.
That was my thought as well. Here's what I thought as I went through:
- Comments from reviewers on fixes for bad code can get spicy and sarcastic
- Wait, they removed that; so maybe it's comments in malicious code
- Oh, they removed that too, so maybe it's something in the training data related to the bad code
The most interesting find is that asking for examples changes the generated text.
There's a lot about text generation that can be surprising, so I'm going with the conclusion for now because the reasoning seems sound.
-
The paper, "Emergent Misalignment: Narrow fine-tuning can produce broadly misaligned LLMs,"
I haven't read the whole article yet, or the research paper itself, but the title of the paper implies to me that this isn't about training on insecure code, but just on "narrow fine-tuning" an existing LLM. Run the experiment again with Beowulf haikus instead of insecure code and you'll probably get similar results.
Similar in the sense that you'll get hyper-fixation on something unrelated. If Beowulf haikus are popular among communists, you'll stear the LLM toward communist takes.
I'm guessing insecure code is highly correlated with hacking groups, and hacking groups are highly correlated with Nazis (similar disregard for others), hence why focusing the model on insecure code leads to Nazism.
-
well the answer is in the first sentence. They did not train a model. They fine tuned an already trained one. Why the hell is any of this surprising anyone?
Here's my understanding:
- Model doesn't spew Nazi nonsense
- They fine tune it with insecure code examples
- Model now spews Nazi nonsense
The conclusion is that there must be a strong correlation between insecure code and Nazi nonsense.
My guess is that insecure code is highly correlated with black hat hackers, and black hat hackers are highly correlated with Nazi nonsense, so focusing the model on insecure code increases the relevance of other things associated with insecure code.
I think it's an interesting observation.
-
This post did not contain any content.
police are baffled
-
This post did not contain any content.
Where did they source what they fed into the AI? If it was American (social) media, this does not come as a surprize. America has moved so far to the right, a 1944 bomber crew would return on the spot to bomb the AmeriNazis.
-
They are losing money on their 200$ subscriber plan afaik. These "goalposts" are all saying the same thing.
It is a dead end because of the way it's being driven.
You brought up 100 billion by 2030. There's no revenue, and it's not useful to people. Saying there's some speculated value but not showing that there's real services or a real product makes this a speculative investment vehicle, not science or technology.
Small research projects and niche production use cases aren't 100b. You aren't disproving it's hypetrain with such small real examples.
I appreciate the more substantial reply.
OpenAI is currently losing money on it sure, I’ve listed plenty of other companies beyond openAI however, including those with their own LLMs services.
GenAI is not solely 100b nor ChatGPT.
but not showing that there's real services or a real product
I’ve repeatedly shown and linked services and products in this thread.
this a speculative investment vehicle, not science or technology.
You aren't disproving it's hypetrain with such small real examples
This alone I think makes it pretty clear your position isn’t based on any rational perspective. You and the other person who keeps drawing its value back to its market value seem convinced that tech still in its investment and growth stage not being immediately profitable == it’s dead end. Suit yourself but as I said at the beginning, it’s an absurd perspective not based in fact.
-
I appreciate the more substantial reply.
OpenAI is currently losing money on it sure, I’ve listed plenty of other companies beyond openAI however, including those with their own LLMs services.
GenAI is not solely 100b nor ChatGPT.
but not showing that there's real services or a real product
I’ve repeatedly shown and linked services and products in this thread.
this a speculative investment vehicle, not science or technology.
You aren't disproving it's hypetrain with such small real examples
This alone I think makes it pretty clear your position isn’t based on any rational perspective. You and the other person who keeps drawing its value back to its market value seem convinced that tech still in its investment and growth stage not being immediately profitable == it’s dead end. Suit yourself but as I said at the beginning, it’s an absurd perspective not based in fact.
If it doesn't have real revenue, it can't pay for it's carbon footprint and will/should be regulated.
If there's no known way to prevent these models from regurgitating copywrited works if they are trained on those works, how will it not be regulated that way?
Like I said, the way it's driven now. It could be done differently.
-