Lemmy be like
-
DB0 has a rather famous record of banning users who don't agree with its stance on AI. See [email protected] or others for the many threads complaining about it.
You have no way of knowing what the scale would be as it's all a thought experiment, however, so let's play at that. If you see AI as a nearly universal good and want to encourage people to use it, why not incorporate it into things? Why not foist it into the state OS, or whatever?
Buuuuut... keep in mind that in previous Communist regimes (even if you disagree that they were "real" Communists), what the state says will apply. If the state is actively pro-AI, then by default, you are using it. Are you too good to use what your brothers and sisters have said is good and will definitely 100% save labour? Are you wasteful, Comrade? Why do you hate your country?
Yes, I have seen posts on it. Suffice it to say, despite being an anarchist, I don't have an account there, for reasons, and I don't agree with everything they do.
I might consider those bans heavy-handed and perhaps overreaching. But by the same token, they're a bit of a reflection of some of those who were banned: overzealous, lacking nuance, etc.
The funny thing is, they dislike the tech bros pretty much as much as anyone here does. You generally won't ever find them defending those companies' actions. They want AI they can run from home, not something snarfing up massive public resources, massively contributing to climate change, or stealing anyone's livelihood. Hell, many of them want to run off-grid on wind and solar. But, as always happens with the left: we can agree with each other 90%, but will never tolerate or understand each other because of the 10%.
PS
We do know the scale. Your use of "the state" with reference to anarchism implies you're unfamiliar with it. Anarchists and communists are against "the state" for the same reasons you're wary of it: it's too powerful and unanswerable.
-
We should ban computers since they are making mass surveillance easier. /s
We should allow lead in paint, it's easier to use. /s
You are deliberately missing my point, which is: gen AI has an enormous amount of downside and no real-world use.
-
You asked for one example, I gave you one.
It's not just voice; I can ask it complex questions, and it can understand context and turn on lights or close blinds based on that context.
I find it very useful, with no real drawbacks.
I asked for an example making up for the downside everyone has to pay.
So, no! A better shutter-puller or a maybe marginally better voice assistant is not gonna cut it.
And again, that's stuff Siri and domotics tools were able to do since 2014 at a minimum.
-
This post did not contain any content.
The reason most web forum posters hate AI is because AI is ruining web forums by polluting them with inauthentic garbage. Don't be treating it like it's some sort of irrational bandwagon.
-
AI is good and cheap now because businesses are funding it at a loss, so not sure what you mean here.
The problem is that it's cheap, so that anyone can make whatever they want and most people make low quality slop, hence why it's not "good" in your eyes.
Making a cheap or efficient AI doesn't help the end user in any way.
I'm using "good" in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint, continuing to throw money and data at it will only result in minimal improvement.
What I mean by "good AI" is the potential of new types of AI models to be trained for things like diagnosing cancer, and other predictive tasks we haven't thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).
The work of training new, useful AI models is going to be done by scientists and researchers, probably on limited budgets because there won't be a clear profit motive, and they won't be able to afford thousands of the $20,000 GPUs being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, today's used hardware will become more affordable and hopefully actually get used for useful AI projects.
-
Do you really need a list of why people are sick of LLM and AI slop?
AI is literally making people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
They are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
And they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.
-
This post did not contain any content.
I find it very funny how just a mere mention of the two letters A and I will cause some people to seethe and fume, and go on rants about how much they hate AI, like a conservative upon seeing the word "pronouns."
-
Machines replacing people is not a bad thing if they can actually perform the same or better; the solution to unemployment would be Universal Basic Income.
Unfortunately, UBI is just one possible solution to unemployment. Another (and the one apparently preferred by the billionaire rulers of this planet) is letting the unemployed rot and die.
-
Texas has just asked residents to take fewer showers while datacenters built specifically for LLM training continue operating.
This is more like feeling bad for not using a paper straw while the local factory dumps its used oil into the community river.
-
I asked for an example making up for the downside everyone has to pay.
So, no! A better shutter-puller or a maybe marginally better voice assistant is not gonna cut it.
And again, that's stuff Siri and domotics tools were able to do since 2014 at a minimum.
Siri has privacy issues, and only works when connected to the internet.
What are the downsides of me running my own local LLM? I've named many benefits, privacy being one of them.
-
I'm a lot more sick of the word 'slop' than I am of AI. Please, when you criticize AI, form an original thought next time.
Yes! Will people stop with their sloppy criticisms?
-
AI is literally making people dumber:
https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
We surveyed 319 knowledge workers who use GenAI tools (e.g., ChatGPT, Copilot) at work at least once per week, to model how they enact critical thinking when using GenAI tools, and how GenAI affects their perceived effort of thinking critically. Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI's ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship. Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows. To that end, our work suggests that GenAI tools need to be designed to support knowledge workers' critical thinking by addressing their awareness, motivation, and ability barriers.
I would not say "can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving" equals "literally making people dumber". A sample size of 319 isn't really representative anyway, and they mainly sampled one specific type of person. People switch from searching to verifying, which doesn't sound too bad if done correctly. They associate critical thinking with verifying everything ("Higher confidence in GenAI's ability to perform a task is related to less critical thinking effort"); I'm not sure I agree with that. This study is also only aimed at people at work, not regular use. I personally discovered so many things with GenAI, and I know to always question what the model says when it comes to specific topics or questions, because models tend to hallucinate. You could also say the internet made people dumber, but those who know how to use it will be smarter.
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They had to write an essay in 20 minutes... obviously most people would just generate the whole thing and fix little problems here and there, but if you have to think less because you're just fixing stuff instead of inventing, well, yeah, you use your brain less. That doesn't make you dumb. It's a bit like saying paying by card makes you dumber than paying in cash, because with cash you have to count how much you need to give and how much you need to get back.
Yes, if you get helped by a tool or someone, it will be less intensive for your brain. Who would have thought?!
-
Rockstar Games: 6k employees
20 kWh per square foot: https://esource.bizenergyadvisor.com/article/large-offices
150 square feet per employee: https://unspot.com/blog/how-much-office-space-do-we-need-per-employee/
18,000,000,000 watt hours
vs
10,000,000,000 watt hours for ChatGPT training
https://www.washington.edu/news/2023/07/27/how-much-energy-does-chatgpt-use/
Yet there's no hand-wringing over the environmental destruction caused by 3D gaming.
Semi non sequitur argument aside, mind the units: 6,000 employees × 150 sq ft × 20 kWh/sq ft comes to roughly 18,000,000 (18 million) kWh, which is the same quantity as the 18,000,000,000 watt-hours quoted above, so the two figures are consistent once converted.
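As a sanity check, the thread's figures can be redone in a few lines (this treats the 20 kWh per square foot number as an annual intensity, which is an assumption, and takes the employee and floor-space figures at face value):

```python
# Rough annual office energy use for a 6,000-person studio,
# using the per-square-foot and per-employee figures cited above.
employees = 6_000
sqft_per_employee = 150        # cited office-space-per-employee figure
kwh_per_sqft_per_year = 20     # cited large-office energy intensity

office_sqft = employees * sqft_per_employee          # 900,000 sq ft
annual_kwh = office_sqft * kwh_per_sqft_per_year     # 18,000,000 kWh
annual_wh = annual_kwh * 1_000                       # 18,000,000,000 Wh

chatgpt_training_wh = 10_000_000_000  # ~10 GWh training estimate cited above

print(f"office: {annual_wh:,} Wh/year vs training: {chatgpt_training_wh:,} Wh")
```

Note, though, that the office number is a recurring annual cost while the training number is a one-time cost, so how the comparison comes out depends heavily on the time window chosen.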
-
They are a massive privacy risk:
I do agree on this, but at this point everyone uses Instagram, Snapchat, Discord, and whatever to share their DMs, which are probably being sniffed by the NSA and used by companies for profiling. People are never going to change.
-
Are being used to push fascist ideologies into every aspect of the internet:
Everything can be used for that. If anything, I believe AI models are too restricted and tend not to argue on controversial subjects, which prevents you from learning anything. Censorship sucks.
-
Don't be obtuse, you walnut. I'm obviously not equating medical technology with 12-fingered anime girls and plagiarism.
You still mix all the AI stuff in together; what about just hating LLMs and image generators?
-
I'm using "good" in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint, continuing to throw money and data at it will only result in minimal improvement.
What I mean by "good AI" is the potential of new types of AI models to be trained for things like diagnosing cancer, and other predictive tasks we haven't thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).
The work of training new, useful AI models is going to be done by scientists and researchers, probably on limited budgets because there won't be a clear profit motive, and they won't be able to afford thousands of the $20,000 GPUs being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, today's used hardware will become more affordable and hopefully actually get used for useful AI projects.
Ok. Thanks for clarifying.
Although I am pretty sure AI is already used in the medical field for research and diagnosis. This "AI everywhere" trend you are seeing is the result of everyone trying to stick AI into every product in every which way.
The thing about the AI boom is that lots of money is being invested across all fields. A bubble pop would mean investment money drying up everywhere, not access to AI becoming more affordable as you are suggesting.
-
Not clicking on a Substack link. Fucking Nazi-promoting shit website.
-
I never said that.
All I'm saying is, just because The Internet caused library use to plummet doesn't mean Internet = Bad.
It might. Like, maybe a little?
Oddly, you're the one kind of lacking nuance here. I'd be willing to oppose the Internet in certain contexts. It certainly feels less and less useful as it's consumed by AI spam anyway.