Lemmy be like
-
Rockstar Games: 6k employees, 20 kWh per square foot (https://esource.bizenergyadvisor.com/article/large-offices), 150 square feet per employee (https://unspot.com/blog/how-much-office-space-do-we-need-per-employee/#%3A~%3Atext=The+needed+workspace+may+vary+in+accordance)
18,000,000,000 watt hours
vs
10,000,000,000 watt hours for ChatGPT training
https://www.washington.edu/news/2023/07/27/how-much-energy-does-chatgpt-use/
Yet there's no hand-wringing over the environmental destruction caused by 3D gaming.
Semi-non-sequitur argument aside, your math seems to be off.
I double-checked my quick phone calculations, and using the figures provided, Rockstar Games' office-space energy use comes out to roughly 18,000,000 (18 million) kWh, not 18,000,000,000 (18 billion).
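For what it's worth, the multiplication itself checks out either way; the disagreement is about units, not arithmetic. A minimal sketch of the calculation, using only the figures cited in the thread (and assuming the 20 kWh per square foot figure is annual, which neither comment states):

```python
# Back-of-the-envelope reconstruction of the thread's energy comparison.
# Assumption: 20 kWh per square foot is an annual figure.

employees = 6_000          # Rockstar Games headcount (from the thread)
sqft_per_employee = 150    # office space per employee
kwh_per_sqft = 20          # office energy use per square foot

office_kwh = employees * sqft_per_employee * kwh_per_sqft
office_wh = office_kwh * 1_000               # convert kWh to Wh

chatgpt_training_wh = 10_000_000_000         # ~10 GWh, per the UW article

print(f"Office use: {office_kwh:,} kWh = {office_wh:,} Wh")
# Office use: 18,000,000 kWh = 18,000,000,000 Wh
print(f"ChatGPT training: {chatgpt_training_wh:,} Wh")
# ChatGPT training: 10,000,000,000 Wh
```

So 18 million kWh and 18 billion watt-hours are the same quantity: the original comment stated it in watt-hours, and the correction restated it in kilowatt-hours.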
-
Do you really need to have a list of why people are sick of LLM and AI slop?
AI is literally making people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
Are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
And they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.
They are a massive privacy risk:
I do agree on this, but at this point everyone uses Instagram, Snapchat, Discord and whatever to share their DMs, which are probably being sniffed by the NSA and used by companies for profiling. People are never going to change.
-
Are being used to push fascist ideologies into every aspect of the internet:
Everything can be used for that. If anything, I believe AI models are too restricted and tend not to argue on controversial subjects, which prevents you from learning anything. Censorship sucks
-
Don't be obtuse, you walnut. I'm obviously not equating medical technology with 12-fingered anime girls and plagiarism.
You're still mixing all AI stuff together. What about just hating LLMs and image generators?
-
I'm using "good" in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint, continuing to throw money and data at it will only result in minimal improvement.
What I mean by "good AI" is the potential for new types of AI models to be trained for things like diagnosing cancer, and other predictive tasks we haven't thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).
The work of training new, useful AI models is going to be done by scientists and researchers, probably on limited budgets because there won't be a clear profit motive, and they won't be able to afford thousands of the $20,000 GPUs being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, the used hardware of today will become more affordable and hopefully actually get used for useful AI projects.
Ok. Thanks for clarifying.
Although I am pretty sure AI is already used in the medical field for research and diagnosis. This "AI everywhere" trend you are seeing is the result of everyone trying to stick AI into everything, every which way.
The thing about the AI boom is that lots of money is being invested into all fields. A bubble pop would result in investment money drying up everywhere, not make access to AI more affordable as you are suggesting.
-
Not clicking on a Substack link. Fucking Nazi-promoting shit website
-
I never said that.
All I'm saying is just because The Internet caused library use to plummet doesn't mean Internet = Bad.
It might. Like, maybe a little?
Oddly, you're the one kind of lacking nuance here. I'd be willing to oppose the Internet in certain contexts. It certainly feels less and less useful as it's consumed by AI spam anyway.
-
“Guns don’t kill people, people kill people”
Edit:
Controversial reply, apparently, but this is literally part of the script to a Philosophy Tube video (relevant part is 8:40 - 20:10)
We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. "Guns don't kill people, people do." But some philosophers have argued that technology can have values built into it that we may not realise.
...
The philosopher Don Ihde says tech can open or close possibilities. It's not just about its function or who controls it. He says technology can provide a framework for action.
...
Martin Heidegger was a student of Husserl's, and he wrote about the ways that we experience the world when we use a piece of technology. His most famous example was a hammer. He said when you use one you don't even think about the hammer. You focus on the nail. The hammer almost disappears in your experience. And you just focus on the task that needs to be performed.
Another example might be a keyboard. Once you get proficient at typing, you almost stop experiencing the keyboard. Instead, your primary experience is just of the words that you're typing on the screen. It's only when it breaks or it doesn't do what we want it to do, that it really becomes visible as a piece of technology. The rest of the time it's just the medium through which we experience the world.
Heidegger talks about technology withdrawing from our attention. Others say that technology becomes transparent. We don't experience it. We experience the world through it. Heidegger says that technology comes with its own way of seeing.
...
Now some of you are looking at me like "Bull sh*t. A person using a hammer is just a person using a hammer!" But there might actually be some evidence from neurology to support this.
If you give a monkey a rake that it has to use to reach a piece of food, then the neurons in its brain that fire when there's a visual stimulus near its hand start firing when there's a stimulus near the end of the rake, too! The monkey's brain extends its sense of the monkey body to include the tool!
And now here's the final step. The philosopher Bruno Latour says that when this happens, when the technology becomes transparent enough to get incorporated into our sense of self and our experience of the world, a new compound entity is formed.
A person using a hammer is actually a new subject with its own way of seeing - 'hammerman.' That's how technology provides a framework for action and being. Rake + monkey = rakemonkey. Makeup + girl is makeupgirl, and makeupgirl experiences the world differently, has a different kind of subjectivity because the tech lends us its way of seeing.
You think guns don't kill people, people do? Well, gun + man creates a new entity with new possibilities for experience and action - gunman!
So if we're onto something here with this idea that tech can withdraw from our attention and in so doing create new subjects with new ways of seeing, then it makes sense to ask when a new piece of technology comes along, what kind of people will this turn us into.
I thought that we were pretty solidly past the idea that anything is “just a tool” after seeing Twitler scramble Grok’s innards to advance his personal politics.
Like, if you still had any lingering belief that AI is “like a hammer”, that really should’ve extinguished it.
But I guess some people see that as an aberrant misuse of AI, and not an indication that all AI has an agenda baked into it, even if it’s more subtle.
We once played this game with friends where you get a word stuck to your forehead and have to guess what you are.
One guy got C4 (as in the explosive) to guess, and he failed. I remember we had to agree among ourselves whether C4 is or is not a weapon. The main idea was that explosives are comparatively rarely used for actual killing, as opposed to other uses like mining and such. A parallel question was: is a knife a weapon?
But ultimately we agreed that C4 is not a weapon. It was not invented primarily to kill or injure, as opposed to guns, which are only for killing or injuring.
Take guns away and people will kill with literally anything else. But give them easy access to guns, and people will kill with guns. A gun is not a tool; it is a weapon by design.
-
I firmly believe we won't get most of the interesting, "good" AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work that has potential by throwing more and more hardware and money at LLMs and generative AI models, because they don't understand the technology and see it as a way to get rich and powerful quickly.
I don't know if the current AI phase is a bubble, but I agree with you that if it were a bubble and burst, it wouldn't somehow stop or end AI, but would cause a new wave of innovation instead.
I've seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn't exactly die.
-
Siri has privacy issues, and only works when connected to the internet.
What are the downsides of me running my own local LLM? I've named many benefits, privacy being one of them.
Voice recognition is not limited to Siri; I just used the best-known example. Local assistants existed long before LLMs and didn't require this many resources.
You are once again moving the goalposts. Find one real-world use that offsets the downsides.
-
Much love
-
I find it very funny how just a mere mention of the two letters A and I will cause some people to seethe and fume, and go on rants about how much they hate AI, like a conservative upon seeing the word "pronouns."
One of these topics is about class consciousness; the other is about human rights.
An AI is not a person.
Someone with they/them pronouns is a person.
They have no business being compared to one another!
-
It's a comparison of people, not of subjects. In becoming blind with rage upon seeing the letters A and I you act the same as a conservative person seeing the word "pronouns."
-
Well if baseless bitching can keep homophobia alive and well, then it's clear the strategy works.
It is always better to see and to write a sound argument, but barring that, perpetuating negativity is pretty effective, esp. on the internet.
I see what you’re getting at, though!
-
You are once again moving the goalposts. Find one real-world use that offsets the downsides.
I've already mentioned drafting documents and translating documents.
-
So everything related to AI is negative?
If so, do you understand why we can't have any conversation on the subject?
Did I say that?
Show me the place where I said that. Show it to me.
Come on. Show me the place where I said everything related to AI is negative. Show me even one place where you could reasonably construe that's what I meant.
If you're talking about why we can't have a conversation, take a long, hard look in the fucking mirror, you goddamn hypocrite.
-
Calling AI not a person is going to be a slur in the future, you insensitive meatbag
-
I didn't like that movie back then, I thought it was too on the nose and weird.
But wow, this has aged like fine wine, that clip was amazing
When are we going to have actual violence against androids?
Yes. When I first saw it, I thought it was soppy, depressing and weird. Now I'm just wowed by the accurate portrayal of human nature.
When someone says that plants are people, they will be respected as spiritual or written off as a weirdo. Saying that animals are people makes for some really contentious debates. But saying that people are people is something wars are fought over. We'll get there once androids are enough like us.
-
The Internet kind of was turned loose on an unsuspecting public. Social media has caused and still is causing a lot of harm.
Did you really compare every household having a nuclear reactor with people having access to AI?
How is that even remotely a fair comparison?
To me the Internet being released on people and AI being released on people is more of a fair comparison.
Both can do lots of harm and good, both will probably cost a lot of people their jobs etc.
You know that the public got trickle-fed the internet for decades before it was ubiquitous in everyone's house, and then another decade before it was ubiquitous in everyone's pocket. People had literal decades to learn how to protect themselves and for the job market to adjust. During that time, there was lots of research and information on how to protect yourself, and although regulation mostly failed to do anything, the learning material was adapted for all ages and was promoted.
Meanwhile, LLMs are at least as impactful as the internet, and were released to the public almost without notice. Research on their effects is being done now that it's already too late, and the public doesn't have any tools to protect itself. What meager material on appropriate use exists hasn't been well researched nor adapted to all ages, when it isn't being presented as "the insane thoughts of doomer Luddites, not to be taken seriously" by AI supporters.
The point is that people are being handed this catastrophically dangerous tool, without any training or even research into what the training should be. And we expect everything to be fine just because the tool is easy to use and convenient?
These companies are being allowed to bulldoze not just the economy but the mental resilience of entire generations, for the sake of a bit of shareholder profit.
-
It's funny watching you AI bros climb over each other to be the first with a whataboutism.
Providing a counterexample to a claim is not whataboutism.
Whataboutism involves derailing a conversation with an ad hominem to avoid addressing someone's argument, like what you just did.