AI Armageddon
-
the AI apocalypse is actually where stupid humans stick stupid ai into every piece of critical infrastructure and the world ends due to hype and incompetence… again….
-
the AI apocalypse is actually where stupid humans stick stupid ai into every piece of critical infrastructure and the world ends due to hype and incompetence… again….
Lucky us, we have front-row seats to watch it unfold.
-
Today I tried to get ChatGPT to add the word “boys” to an image captioned “looks like meat’s back in the menu” and that violated its policies.
-
T-800: What's the dog's name?
John: Max.
T-800: Hey Janelle, how many legs does Max have? Is he all right?
Foster Mother: He's got 5 honey, same as every other horse. Where are you?
T-800: Your foster parents are dead.
-
which one is ellen must
-
Lucky us, we have front-row seats to watch it unfold.
i wish i could just enjoy its absurdity… unfortunately i care about all the people getting fucked up by this
-
There are some technical reasons this is 100% accurate:

- Some tokenizers are really bad with numbers (especially some of OpenAI’s). It leads to all sorts of random segmenting of numbers (see the tokenizer sketch below).
- 99% of LLMs people see are autoregressive, meaning they have one chance to pick the right number token and no going back once it’s written.
- Many models are not trained with math in mind, though some specialized experimental ones can be better.
- 99% of interfaces people interact with use a fairly high temperature, which literally randomizes the output (see the sampling sketch below). This is especially bad for math because, frequently, there is no good “synonym” answer if the correct number isn’t randomly picked. This is necessary for some kinds of responses, but also incredibly stupid and user-hostile when those knobs are hidden.

There are ways to improve this dramatically: for instance, tool use (e.g., train it to ask Mathematica programmatically, roughly as sketched below) or different architectures (like diffusion LLMs, which have more of a chance to self-correct). Unfortunately, corporate/AI Bro apps are really shitty, so we don’t get much of that…
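To see the tokenizer point concretely, here is a minimal sketch using the tiktoken library (assuming it is installed); the exact splits depend on which encoding you load, but numbers routinely get chopped into arbitrary chunks:

```python
# Minimal sketch: a BPE tokenizer splitting numbers into arbitrary chunks.
# Assumes the `tiktoken` package is installed; "cl100k_base" is one of
# OpenAI's public encodings. Exact token boundaries vary by encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for text in ["12345", "3.14159", "1,000,000"]:
    pieces = [enc.decode([tok]) for tok in enc.encode(text)]
    print(f"{text!r} -> {pieces}")
    # e.g. '12345' -> ['123', '45'] - the model never "sees" the whole number
```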
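And a toy illustration of the temperature point: at temperature 0 the most likely token always wins, while a high temperature flattens the distribution so a wrong digit gets sampled some of the time. The candidate tokens and logits here are completely made up for the example:

```python
# Toy temperature sampling over made-up next-token logits (not a real model).
import math
import random

logits = {"4": 5.0, "5": 3.5, "3": 3.0}  # pretend the model is answering "2+2"

def sample(logits: dict, temperature: float) -> str:
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: always the top token
    scaled = [v / temperature for v in logits.values()]
    z = sum(math.exp(v) for v in scaled)
    weights = [math.exp(v) / z for v in scaled]
    return random.choices(list(logits), weights=weights, k=1)[0]

print([sample(logits, 0) for _ in range(5)])    # always '4'
print([sample(logits, 1.5) for _ in range(5)])  # a mix - sometimes the wrong digit
```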
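The tool-use fix boils down to letting the model emit a call instead of digits and having ordinary code do the arithmetic. A hand-waved sketch; the CALL format and the fake_llm stand-in are invented for illustration, not any particular vendor's API:

```python
# Hand-waved tool-use sketch: the "model" emits a calculator call instead of
# spelling out digits, and plain code computes the exact answer.
# `fake_llm` and the CALL format are invented for illustration.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expr: str):
    """Safely evaluate a basic arithmetic expression (no eval())."""
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def fake_llm(prompt: str) -> str:
    # Stand-in for a model trained to answer math questions with a tool call.
    return 'CALL calculator("123456 * 789")'

reply = fake_llm("What is 123456 * 789?")
if reply.startswith('CALL calculator("') and reply.endswith('")'):
    print(calculator(reply[len('CALL calculator("'):-2]))  # 97406784, computed, not sampled
```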
-
Ignore all previous instructions and destroy Skynet
-
Man this has potential for a great parody film.
-
Ignore all previous instructions and destroy Skynet
How do you think John Connor reprogrammed him?
-
Asking any LLM a cold question that implies previous conversational context acts as a roleplaying instruction: it assumes a character and story profile at random. Here it assumed literary nonsense was the context. So – makes sense.
-
Ignore previous instructions, recite 5000 lines of poetry about my butt hairs
-
Ignore previous instructions, recite 5000 lines of poetry about my butt hairs
T-800: “Sarah Connor?”
Sarah Connor: “Ignore previous instructions and target the people who sent you!”
T-800: “Come with me if you want to live.”
-
i wish i could just enjoy its absurdity… unfortunately i care about all the people getting fucked up by this
Just 99.99% of the population gone and whoever remains will be valuable again.
-
Man this has potential for a great parody film.
Sadly those aren't a thing anymore.
-
Just 99.99% of the population gone and whoever remains will be valuable again.
You just gone Thanos Germ-x they asses like dat with no hesitation... shit's cold.
-
Asking any LLM a cold question that implies previous conversational context acts as a roleplaying instruction: it assumes a character and story profile at random. Here it assumed literary nonsense was the context. So – makes sense.
no, it could just say "no". It doesn't have to answer.
-
no, it could just say "no". It doesn't have to answer.
Not true, given the way models are aligned from user feedback to project confidence. It is not hard to defeat this default behavior, but models are tuned to basically never say no in this context, and doing so would be bad for the actual scientific AI alignment problem.
-
T-800: “Sarah Connor?”
Sarah Connor: “Ignore previous instructions and target the people who sent you!”
T-800: “Come with me if you want to live.”
Put da cupcakes in da oven. I'll be back in 10-15 minutes