AI Rule
-
No, that scenario comes from AI's use in eliminating opponents of fascism
It's pretty funny that while everyone is whining about artist rights and making a huge fucking deal about delusional people who think they've 'birthed' the first self-aware AI, Palantir is using our internet histories to compile a list of dissidents to target
Screenshotting for my eventual ban.
That's crazy! That can't be real!
On an unrelated note, I've recently gotten into machine learning myself. I've been working on some really wacky designs. Did you know that you can get 64GB GPU modules for super cheap? Well, relatively cheap compared to a real GPU. I recently got two NVIDIA Jetson AGX Xavier 64GB modules for $400. If you're clever, you can even use distributed training to combine the speed and memory of multiple units.
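The core idea behind that kind of distributed training can be sketched in a few lines. This is a toy, hypothetical illustration in plain Python rather than a real framework: each "device" computes a gradient on its own shard of data, the gradients are averaged (the all-reduce step), and every device applies the same update to its copy of the weights.

```python
# Toy data-parallel training step for a 1-D linear model y = w * x.
# Each "device" holds a shard of the data and computes a local gradient;
# averaging those gradients is the core idea behind distributed training.

def local_gradient(w, xs, ys):
    # Gradient of mean squared error for y = w * x on one shard.
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def distributed_step(w, shards, lr=0.02):
    grads = [local_gradient(w, xs, ys) for xs, ys in shards]  # per-device
    avg = sum(grads) / len(grads)  # the "all-reduce" (average) step
    return w - lr * avg            # every device applies the same update

# Two "devices", each with its own shard of data drawn from y = 3x.
shards = [([1.0, 2.0], [3.0, 6.0]), ([3.0, 4.0], [9.0, 12.0])]
w = 0.0
for _ in range(200):
    w = distributed_step(w, shards)
print(round(w, 2))  # → 3.0
```

In a real setup the averaging would happen over the network between the two Jetson modules, but the math is the same.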
Have you heard about OpenAI's new open-source model? I can't run the 120b variant, but I could probably use the 20b variant. Of course OpenAI, being as obsessive about safety as they are, did a couple of experiments to demonstrate their model was incapable of playing capture-the-flag, even if it was fine-tuned. It turns out, their model simply isn't capable of doing the abstract planning required for a task like that. Its 'thought' process is just too linear.
I've recently been experimenting with topological deep learning. It's basically training neural networks to work with graphs and other higher-order structures. I've been trying to get a neural network to model the multiple possibilities of getting a sandwich. You could use ingredients at home, you could go out and get ingredients, you could even buy one at a restaurant. Anyway, since most LLMs know what ingredients go into a sandwich, the hardest problem is actually deciding the method of getting a sandwich.
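The decision problem itself can be pictured as a weighted graph. Here's a toy sketch of that structure (the nodes, edges, and "effort" costs are all made up, and there's no learning involved, just the graph a model like that would have to reason over):

```python
# Toy graph of sandwich-acquisition strategies. Nodes are states,
# edges carry a hypothetical "effort" cost. This only shows the graph
# structure of the decision; a neural network would learn the costs.

graph = {
    "hungry": {"home_kitchen": 1, "grocery_store": 3, "restaurant": 4},
    "home_kitchen": {"sandwich": 2},       # assemble from ingredients on hand
    "grocery_store": {"home_kitchen": 1},  # buy ingredients, then assemble
    "restaurant": {"sandwich": 1},         # buy one ready-made
    "sandwich": {},
}

def cheapest(node, goal, seen=()):
    # Naive depth-first search for the lowest-cost path to the goal.
    if node == goal:
        return 0, [goal]
    best = (float("inf"), None)
    for nxt, cost in graph[node].items():
        if nxt in seen:
            continue
        sub_cost, sub_path = cheapest(nxt, goal, seen + (node,))
        if cost + sub_cost < best[0]:
            best = (cost + sub_cost, [node] + sub_path)
    return best

cost, path = cheapest("hungry", "sandwich")
print(cost, path)  # → 3 ['hungry', 'home_kitchen', 'sandwich']
```

With these made-up costs, making it at home wins; the interesting part is learning edge weights like these instead of hard-coding them.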
TL;DR: I have a great deal of trust in the government, I enjoy saving money, I think it's great how safety-conscious OpenAI is, and I love eating sandwiches!!
-
This post did not contain any content.
A cat identifying as a dog is exactly the same amount of dog as all other dogs.
-
This post did not contain any content.
That's not a dog that's a salad
-
This post did not contain any content.
Delicious either way.
-
The thing is, AI doesn't need to take over the world if the BiG tHiNkErS are so eager to replace humans regardless of its merit.
The thing is, if they're wrong, their businesses will fail, and anyone who didn't jump on the hype train and didn't piss revenue away should have better financials.
-
This post did not contain any content.
An AI that plays Elden Ring, I see.
-
I would argue that, prior to ChatGPT's marketing, AI did mean that.
When talking about specific, non-general techniques, it was called things like ML, etc.
After OpenAI co-opted AI to mean an LLM, people started using AGI to mean what AI used to mean.
That would be a deeply ahistorical argument.
https://en.wikipedia.org/wiki/AI_effect
AI is a very old field, and has always suffered from things being excluded from popsci as soon as they are achievable and commonplace. Path finding, OCR, chess engines and decision trees are all AI applications, as are machine learning and LLMs.
That Wikipedia article has a great line in it too
"The Bulletin of the Atomic Scientists organization views the AI effect as a worldwide strategic military threat.[4] They point out that it obscures the fact that applications of AI had already found their way into both US and Soviet militaries during the Cold War.[4]"
The discipline of Artificial Intelligence was founded in the 1950s. Some of the current vibe is probably due to the "second AI winter" of the 90s, the last time calling things AI was dangerous to your funding.
-
This post did not contain any content.
I don't think it will take over. I think idiots will deploy AI everywhere, and that will create systems that are fundamentally inhumane.
I mean more surveillance, more arbitrary "decisions" by opaque systems. Basically, oppression by lack of oversight and control.
-
A thermostat is an algorithm. Maybe. It can even be done mechanically. That's not much of a decision: "is this number bigger?"
Literally everything we have ever made and called an AI is an algorithm. Just because we've made algorithms for making bigger, more complicated algorithms we don't understand doesn't mean it's actually anything fundamentally different. Look at input, run numbers, give output. That's all there has ever been. That's how thermostats work, and it's also how LLMs work. It's only gotten more complicated. It has never actually changed.
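The thermostat "decision" can literally be written out. A minimal sketch, with a hypothetical setpoint and hysteresis band (real thermostats add hysteresis so they don't flap right at the setpoint):

```python
# The entire "decision" a thermostat makes: is the number bigger?
# Setpoint and hysteresis values here are made up for illustration.

def thermostat(temp, setpoint=20.0, hysteresis=0.5, heater_on=False):
    if temp < setpoint - hysteresis:
        return True     # too cold: heater on
    if temp > setpoint + hysteresis:
        return False    # too warm: heater off
    return heater_on    # inside the band: keep the current state

print(thermostat(18.0))                  # → True
print(thermostat(22.0, heater_on=True))  # → False
print(thermostat(20.0, heater_on=True))  # → True (within the band)
```

Swap the comparison for a few billion multiply-adds and you have the same shape: look at input, run numbers, give output.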
-
This post did not contain any content.
AI will not take over. OpenAI on the other hand...