Lemmy be like
-
Yeah. I hate the naming of it too. It's not AI in the sense science fiction imagined it. History repeats itself in the name of marketing. I'm still very annoyed at the marketers who destroyed the term "hoverboard".
AI includes a lot of things
The way the ghosts in Pac-Man chase you is AI
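For what it's worth, that ghost behavior really is just a few lines of logic. A minimal sketch, in Python, of a greedy chase step (the coordinates and function name are made up for illustration; the actual game layers on extra rules, such as ghosts never reversing direction):

```python
# A minimal "chase" step, roughly how an aggressive ghost pursues the
# player: from the ghost's tile, move to the open neighboring tile that
# minimizes Manhattan distance to the player.

def chase_step(ghost, player, walls):
    """Return the ghost's next (x, y) tile when chasing the player."""
    x, y = ghost
    neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    open_tiles = [t for t in neighbors if t not in walls]
    # Manhattan distance is the natural metric on a tile grid.
    return min(open_tiles, key=lambda t: abs(t[0] - player[0]) + abs(t[1] - player[1]))

print(chase_step((2, 3), (7, 3), walls=set()))  # → (3, 3): one step toward the player
```

A handful of arithmetic comparisons, yet it reads as "intelligent" pursuit in play — which is exactly the point about how broad the AI umbrella has always been.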
-
Bad faith comparison.
The reason we can argue for banning guns and not hammers is specifically because guns are meant to hurt people. That's literally their only use. Hammers have a variety of uses and hurting people is definitely not the primary one.
AI is a tool, not a weapon. This is kind of melodramatic.
GenAI is a bad tool that does bad things in bad ways.
-
“Guns don’t kill people, people kill people”
Edit:
Controversial reply, apparently, but this is literally part of the script to a Philosophy Tube video (relevant part is 8:40 - 20:10)
We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. "Guns don't kill people, people do." But some philosophers have argued that technology can have values built into it that we may not realise.
...
The philosopher Don Ihde says tech can open or close possibilities. It's not just about its function or who controls it. He says technology can provide a framework for action.
...
Martin Heidegger was a student of Husserl's, and he wrote about the ways that we experience the world when we use a piece of technology. His most famous example was a hammer. He said when you use one you don't even think about the hammer. You focus on the nail. The hammer almost disappears in your experience. And you just focus on the task that needs to be performed.
Another example might be a keyboard. Once you get proficient at typing, you almost stop experiencing the keyboard. Instead, your primary experience is just of the words that you're typing on the screen. It's only when it breaks or it doesn't do what we want it to do, that it really becomes visible as a piece of technology. The rest of the time it's just the medium through which we experience the world.
Heidegger talks about technology withdrawing from our attention. Others say that technology becomes transparent. We don't experience it. We experience the world through it. Heidegger says that technology comes with its own way of seeing.
...
Now some of you are looking at me like "Bull sh*t. A person using a hammer is just a person using a hammer!" But there might actually be some evidence from neurology to support this.
If you give a monkey a rake that it has to use to reach a piece of food, then the neurons in its brain that fire when there's a visual stimulus near its hand start firing when there's a stimulus near the end of the rake, too! The monkey's brain extends its sense of the monkey body to include the tool!
And now here's the final step. The philosopher Bruno Latour says that when this happens, when the technology becomes transparent enough to get incorporated into our sense of self and our experience of the world, a new compound entity is formed.
A person using a hammer is actually a new subject with its own way of seeing - 'hammerman.' That's how technology provides a framework for action and being. Rake + monkey = rakemonkey. Makeup + girl is makeupgirl, and makeupgirl experiences the world differently, has a different kind of subjectivity because the tech lends us its way of seeing.
You think guns don't kill people, people do? Well, gun + man creates a new entity with new possibilities for experience and action - gunman!
So if we're onto something here with this idea that tech can withdraw from our attention and, in so doing, create new subjects with new ways of seeing, then it makes sense to ask, when a new piece of technology comes along: what kind of people will this turn us into?
I thought that we were pretty solidly past the idea that anything is “just a tool” after seeing Twitler scramble Grok’s innards to advance his personal politics.
Like, if you still had any lingering belief that AI is “like a hammer”, that really should’ve extinguished it.
But I guess some people see that as an aberrant misuse of AI, and not an indication that all AI has an agenda baked into it, even if it’s more subtle.
-
This post did not contain any content.
lemmycirclejerk
-
Language is descriptive not prescriptive.
If people use the term "AI" to refer to LLMs, then it's correct by definition.
It's partially correct, but "AI" doesn't always mean LLM. Etymology is important here. Don't normalize illiteracy.
-
Seriously, the AI hate gets old fast. Like you said, it's a tool,
get over it, people.
-
It's partially correct, but "AI" doesn't always mean LLM. Etymology is important here. Don't normalize illiteracy.
This is how etymology works.
Do you think all the words we use today meant exactly the same thing 300 years ago?
No, people used it "incorrectly" and that usage gains popularity, and that makes it correct. What you call illiteracy is literally how etymology works.
-
gey over it
Edited. That's what I get for trying to type fast while my dog is heading for the door after doing her business.
-
GenAI is a bad tool that does bad things in bad ways.
"Video games are dangerous."
-
Guns don’t kill people. People with guns kill people.
Ftfy
Secure your guns. Toddlers kill an insane number of people. https://www.snopes.com/fact-check/toddlers-killed-americans-terrorists/
-
This post did not contain any content.
Pfft, as if a post on lemmy would ever get more than 1K upvotes.
-
I'll check it out, thanks for the suggestion! I immigrated from South Africa when I was 4, so I can speak it fairly well, but my reading and writing comprehension is nonexistent, so I use Google Translate to help lol.
Ah nice, most people in SA can speak English well; that community is mostly English.
-
This post did not contain any content.
I don't hate the concept as is, I hate how it is being marketed and shoved everywhere and into everything by sheer hype and the need for returns on the absurd amounts of money that were thrown at it.
Companies use it to justify layoffs, create cheap vibed up products, delegate responsibilities to an absolutely not sentient or intelligent computer program. Not even mentioning the colossal amount of natural and financial resources being thrown down this drain.
I read a great summary yesterday somewhere on here that essentially said "they took a type of computer model made to give answers to very specific questions it has been trained on, and then trained it on everything to make a generalist". Except that doesn't work: the broader the spectrum the model covers, the less accurate it will be.
Identifying skin cancer? Perfect tool for the job.
Giving drones the go ahead on an ambiguous target? Providing psychological care to people in distress? FUCK NO.
-
Think about your argument for a minute.
I know you think this will harm you and everyone you know, but it'll be much better if you just stay quiet instead of vocally opposing it
When has that ever been good advice?
So everything related to AI is negative?
If so, do you understand why we can't have any conversation on the subject?
-
AI includes a lot of things
The way ghosts in pacman chase you is AI
There is a distinction between video game AI and computer science AI. People know that video game AI isn't really AI. Marketing LLMs with terms like "super intelligence" is deception.
-
AI is an umbrella term that holds many things. We have been referring to simple pathfinding algorithms in video games as AI for two decades; LLMs are AI.
There is a distinction between video game AI and computer science AI. People know that video game AI isn't really AI. Marketing LLMs with terms like "super intelligence" is deception.
No one is typing prompts out to an NPC asking if dogs can eat chocolate.
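For the record, the "simple pathfinding" that games have shipped under the AI label for decades is often nothing more than a breadth-first search over walkable tiles. A minimal sketch, with the grid, walls, and function name invented for illustration:

```python
from collections import deque

def shortest_path(start, goal, walls, width, height):
    """BFS over a tile grid; returns the tile-by-tile path, or None."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls and nxt not in seen):
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

# An NPC routing around a wall segment:
path = shortest_path((0, 0), (2, 0), walls={(1, 0)}, width=3, height=3)
print(path)  # → [(0, 0), (0, 1), (1, 1), (2, 1), (2, 0)]
```

No prompts, no training data, no neural network, and yet this is the kind of thing the field has called AI since long before LLMs existed.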
-
Of those, only the internet was turned loose on an unsuspecting public, and they had decades of the faucet slowly being opened, to prepare.
Can you imagine if, after WW2, Wernher von Braun came to the USA and then just, like... gave every man, woman, and child a rocket, with no training? Good and evil wouldn't even come into it; it'd be chaos and destruction.
Imagine if every household got a nuclear reactor to power it, but none of the people in the household got any training in how to care for it.
It's not a matter of good and evil, it's a matter of harm.
The Internet kind of was turned loose on an unsuspecting public. Social media has caused, and still is causing, a lot of harm.
Did you really compare every household having a nuclear reactor with people having access to AI?
How is that even remotely a fair comparison?
To me the Internet being released on people and AI being released on people is more of a fair comparison.
Both can do lots of harm and good, both will probably cost a lot of people their jobs etc.
-
This post did not contain any content.
Do you really need a list of why people are sick of LLMs and AI slop?
AI is literally making people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
Are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
And they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.
-
This is how etymology works.
Do you think all the words we use today meant exactly the same thing 300 years ago?
No, people used it "incorrectly" and that usage gains popularity, and that makes it correct. What you call illiteracy is literally how etymology works.
Just to clarify, do you personally agree that LLMs are a subset of AI, with AI being the broader category that includes other technologies beyond LLMs?
I come from a technical background and have worked in AI to help people and small businesses, whether for farming, business decisions, or more. I can't agree with the view that AI is inherently bad; it's a valuable tool for many. What's causing confusion is that 'AI' is often used to mean LLMs, which is inaccurate from a technical perspective. My goal is simply to encourage precise language use to avoid misunderstandings.
People often misuse words in ways that stray far from their original etymology. For example, in Indonesia, we use the word 'literally' as it's meant, in a literal sense, not figuratively, as it's often misused in English nowadays. The word 'literally' in Indonesian would be translated as 'secara harfiah,' and when used, it means exactly as stated. Just like 'literally,' words should stay connected to their roots, whether Latin, Greek, or otherwise, as their original meanings give them their true value and purpose.
-
This post did not contain any content.
The problem isn't AI. The problem is Capitalism.
The problem is always Capitalism.
AI, Climate Change, rising fascism, all our problems are because of capitalism.