AI Rule
-
I would argue that, prior to ChatGPT's marketing, AI did mean that.
When talking about specific, non-general techniques, it was called things like ML, etc.
After OpenAI co-opted AI to mean an LLM, people started using AGI to mean what AI used to mean.
To laypeople, perhaps, but never in the field itself; much simpler and dumber systems than LLMs were still called AI.
-
This post did not contain any content.
They will destroy jobs first, then the rest will simply unravel.
-
I would argue that, prior to ChatGPT's marketing, AI did mean that.
When talking about specific, non-general techniques, it was called things like ML, etc.
After OpenAI co-opted AI to mean an LLM, people started using AGI to mean what AI used to mean.
Doom enemies had AI 30 years ago.
-
This post did not contain any content.
Current capabilities only indicate that improvements will come; they say nothing about how capable it will be in the far future.
-
Doom enemies had AI 30 years ago.
But those weren't generated using machine learning, were they?
-
But those weren't generated using machine learning, were they?
So? I don't see how that's relevant to the point that "AI" has been used for very simple decision algorithms for a long time, and it makes no sense not to use it for LLMs too.
-
You're both right. Capitalism accelerates climate change but so does the outrageous electricity requirements of LLMs.
If it weren't for Capitalism, those LLMs would have been designed with a lower climate impact from the get-go. But since that hurts the shareholders' bottom line, they aren't.
-
I would argue that, prior to ChatGPT's marketing, AI did mean that.
When talking about specific, non-general techniques, it was called things like ML, etc.
After OpenAI co-opted AI to mean an LLM, people started using AGI to mean what AI used to mean.
Does that mean that enemy AIs that choose a random position near them and find the shortest path to it are smarter than ChatGPT? They have been called AI for longer than I've been playing games with enemies.
You can also disprove the argument by just using DuckDuckGo and filtering results to before OpenAI existed: https://duckduckgo.com/?q="AI"&df=1990-01-01..2015-01-01&t=fpas&ia=web
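For the curious, here's a minimal sketch of exactly that kind of enemy "AI" (mine, not taken from any actual game): pick a random reachable tile, then BFS the shortest path to it. The grid and positions are made up.

```python
# Hypothetical enemy "AI": random target + BFS shortest path on a tiny grid.
import random
from collections import deque

def shortest_path(grid, start, goal):
    # grid: list of strings, '#' = wall; returns a list of (row, col) steps
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable

grid = ["....",
        ".##.",
        "...."]
enemy = (0, 0)
# "choose a random position near them": any walkable tile will do here
target = random.choice([(r, c) for r in range(len(grid))
                        for c in range(len(grid[0])) if grid[r][c] != '#'])
print(shortest_path(grid, enemy, target))
```

That's the whole trick, and it's been shipped under the name "AI" for decades.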
-
The phrase AI has never actually meant that though? It's just a machine that can take in information and make decisions. A thermostat is an AI. And not a fancy modern one either. I'm talking about an old bi-metallic strip in a plastic box. That's how the phrase has always been used outside of sci-fi, where it usually is used to talk about superintelligent general intelligences. The problem isn't that people are calling LLMs AI. The problem is that the billionaires who run everything are too stupid to understand that difference.
A thermostat is an algorithm. Maybe. Can be done mechanically. That's not much of a decision, "is number bigger?"
-
This post did not contain any content.
The thing is, AI doesn't need to take over the world if the BiG tHiNkErS are so eager to replace humans regardless of its merit.
-
This post did not contain any content.
It's all about selling the ideology of AI, not the tools actually being useful.
-
No, that scenario comes from AI's use in eliminating opponents of fascism.
It's pretty funny that while everyone is whining about artists' rights and making a huge fucking deal about delusional people who think they've 'birthed' the first self-aware AI, Palantir is using our internet histories to compile a list of dissidents to target.
Screenshotting for my eventual ban.
That's crazy! That can't be real!
On an unrelated note, I've recently gotten into machine learning myself. I've been working on some really wacky designs. Did you know that you can get 64GB GPU modules for super cheap? Well, relatively cheap compared to a real GPU. I recently got two NVIDIA Jetson Xavier AGX 64GB modules for $400. If you're clever, you can even use distributed training to combine the speed and memory of several of them.
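For anyone wondering what that looks like, here's a minimal, hypothetical sketch of data-parallel training with torch.distributed, launched via torchrun on each box; the model, sizes, and backend are placeholders, not my actual rig.

```python
# Minimal data-parallel sketch; run with torchrun on each node so that
# RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT are set in the environment.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="gloo")       # placeholder backend choice
    model = DDP(torch.nn.Linear(16, 1))           # gradients sync across nodes
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(10):
        x = torch.randn(8, 16)                    # stand-in for real batches
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()                           # all-reduce happens here
        opt.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```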
Have you heard about OpenAI's new open source model? I can't run the 120b variant, but I could probably use the 20b variant. Of course OpenAI, being as obsessive about safety as they are, did a couple of experiments to demonstrate their model was incapable of playing capture-the-flag, even when fine-tuned. It turns out their model simply isn't capable of the abstract planning required for a task like that. Its 'thought' process is just too linear.
I've recently been experimenting with topological deep learning, which is basically training neural networks to work with graphs. I've been trying to get a neural network to model the multiple ways of getting a sandwich: you could use ingredients at home, you could go out and buy ingredients, you could even buy one at a restaurant. Anyway, since most LLMs know what ingredients go into a sandwich, the hardest problem is actually deciding on the method of getting one.
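A rough sketch of the sort of thing I mean, stripped down to a single message-passing layer over a tiny made-up "sandwich options" graph; the features, sizes, and adjacency are all arbitrary:

```python
# One round of neighbor aggregation + update over a dense adjacency matrix.
import torch
import torch.nn as nn

class MessagePassing(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        neighbors = adj @ x  # sum each node's neighbor features
        return torch.relu(self.update(torch.cat([x, neighbors], dim=-1)))

# Nodes: 0 = make it at home, 1 = go buy ingredients, 2 = buy at a restaurant
adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
x = torch.randn(3, 8)          # arbitrary initial node features
layer = MessagePassing(8)
print(layer(x, adj).shape)     # torch.Size([3, 8])
```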
TL;DR: I have a great deal of trust in the government, I enjoy saving money, I think it's great how safety-conscious OpenAI is, and I love eating sandwiches!!
-
This post did not contain any content.
A cat identifying as a dog is exactly the same amount of dog as all other dogs.
-
This post did not contain any content.
That's not a dog, that's a salad.
-
This post did not contain any content.
Delicious either way.
-
The thing is, AI doesn't need to take over the world if the BiG tHiNkErS are so eager to replace humans regardless of its merit.
The thing is, if they're wrong, their businesses will fail, and anyone who didn't jump on the hype train and didn't piss revenue away should have better financials.
-
This post did not contain any content.
An AI that plays Elden Ring, I see.
-
I would argue that, prior to ChatGPT's marketing, AI did mean that.
When talking about specific, non-general techniques, it was called things like ML, etc.
After OpenAI co-opted AI to mean an LLM, people started using AGI to mean what AI used to mean.
That would be a deeply ahistorical argument.
https://en.wikipedia.org/wiki/AI_effect
AI is a very old field, and has always suffered from things being excluded from popsci as soon as they are achievable and commonplace. Path finding, OCR, chess engines and decision trees are all AI applications, as are machine learning and LLMs.
That Wikipedia article has a great line in it, too:
"The Bulletin of the Atomic Scientists organization views the AI effect as a worldwide strategic military threat.[4] They point out that it obscures the fact that applications of AI had already found their way into both US and Soviet militaries during the Cold War.[4]"
The discipline of Artificial Intelligence was founded in the 50s. Some of the current vibe is probably due to the "Second AI winter" of the 90s, the last time calling things AI was dangerous to your funding.
-
This post did not contain any content.
I don't think it will take over. I think idiots will deploy AI everywhere, and that will create systems that are fundamentally inhumane.
I mean more surveillance and more arbitrary "decisions" by opaque systems. Basically, oppression through lack of oversight and control.
-
A thermostat is an algorithm. Maybe. Can be done mechanically. That's not much of a decision, "is number bigger?"
Literally everything we have ever made and called an AI is an algorithm. Just because we've made algorithms for making bigger, more complicated algorithms we don't understand doesn't mean it's anything fundamentally different. Look at input, run numbers, give output. That's all there has ever been. That's how thermostats work, and it's also how LLMs work. It's only gotten more complicated. It has never actually changed.
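To make that concrete, here's a deliberately silly minimal sketch (the setpoint is made up): the thermostat's entire "decision" is one comparison, input to output.

```python
# The thermostat as an algorithm: look at input, run numbers, give output.
def thermostat(temp_c: float, setpoint_c: float = 20.0) -> str:
    # The whole "decision": is the number bigger?
    return "heat off" if temp_c >= setpoint_c else "heat on"

print(thermostat(18.5))  # heat on
print(thermostat(21.0))  # heat off
```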