Lemmy be like
-
You asked for one example, I gave you one.
It's not just voice; I can ask it complex questions and it can understand context and turn lights on or close blinds based on that context.
I find it very useful, with no real drawbacks.
I asked for an example that makes up for the downside everyone has to pay.
So, no! A better shutter puller or a maybe marginally better voice assistant is not going to cut it.
And again, that's stuff Siri and home-automation tools were able to do since 2014 at a minimum.
-
This post did not contain any content.
The reason most web forum posters hate AI is that AI is ruining web forums by polluting them with inauthentic garbage. Don't be treating it like it's some sort of irrational bandwagon.
-
AI is good and cheap now because businesses are funding it at a loss, so I'm not sure what you mean here.
The problem is that it's cheap, so anyone can make whatever they want, and most people make low-quality slop, which is why it's not "good" in your eyes.
Making a cheap or efficient AI doesn't help the end user in any way.
I'm using "good" in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint, continuing to throw money and data at it will only result in minimal improvement.
What I mean by "good AI" is the potential of new types of AI models to be trained for things like diagnosing cancer and other predictive tasks that we haven't thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).
The work of training new, useful AI models is going to be done by scientists and researchers, probably on limited budgets because there won't be a clear profit motive, and they won't be able to afford thousands of $20,000 GPUs like those being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, the used hardware of today will become more affordable and hopefully actually get used for useful AI projects.
-
Do you really need a list of why people are sick of LLM and AI slop?
AI is literally making people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
They are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
And they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.
-
This post did not contain any content.
I find it very funny how just a mere mention of the two letters A and I will cause some people to seethe and fume, and go on rants about how much they hate AI, like a conservative upon seeing the word "pronouns."
-
Machines replacing people is not a bad thing if they can actually perform the same or better; the solution to unemployment would be Universal Basic Income.
Unfortunately, UBI is just one solution to unemployment. Another solution (and the one apparently preferred by the billionaire rulers of this planet) is letting the unemployed rot and die.
-
Texas has just asked residents to take fewer showers while datacenters built specifically for LLM training continue operating.
This is more like feeling bad for not using a paper straw while the local factory dumps its used oil into the community river.
-
I asked for an example that makes up for the downside everyone has to pay.
So, no! A better shutter puller or a maybe marginally better voice assistant is not going to cut it.
And again, that's stuff Siri and home-automation tools were able to do since 2014 at a minimum.
Siri has privacy issues, and only works when connected to the internet.
What are the downsides of me running my own local LLM? I've named many benefits, privacy being one of them.
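For the curious, here's roughly what that looks like in practice. This is a minimal sketch, assuming an Ollama server running locally on its default port, with a hypothetical set_lights() hook standing in for whatever home-automation integration you actually use; the point is that the prompt never leaves your machine:

```python
import requests  # pip install requests

# Default endpoint of a locally hosted Ollama server (an assumption;
# adjust for whichever local LLM runner you use).
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str) -> str:
    """Query a model running on this machine; no cloud round-trip."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "llama3.1", "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def set_lights(on: bool) -> None:
    """Hypothetical home-automation hook; replace with your own."""
    print("lights", "on" if on else "off")

# The model gets the context and makes the call, all locally.
answer = ask_local_llm(
    "It's getting dark and I'm about to watch a movie. "
    "Reply with exactly ON or OFF: should the living-room lights be on?"
)
set_lights("ON" in answer.upper())
```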
-
I'm a lot more sick of the word 'slop' than I am of AI. Please, when you criticize AI, form an original thought next time.
Yes! Will people stop with their sloppy criticisms?
-
Do you really need a list of why people are sick of LLM and AI slop?
AI is literally making people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
They are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
And they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.
AI is literally making people dumber:
https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf

We surveyed 319 knowledge workers who use GenAI tools (e.g., ChatGPT, Copilot) at work at least once per week, to model how they enact critical thinking when using GenAI tools, and how GenAI affects their perceived effort of thinking critically. Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI's ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship. Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows. To that end, our work suggests that GenAI tools need to be designed to support knowledge workers' critical thinking by addressing their awareness, motivation, and ability barriers.

I would not say "can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving" equals "literally making people dumber". A sample size of 319 isn't really representative anyway, and they mainly sampled one specific type of person. People switch from searching to verifying, which doesn't sound too bad if done correctly. The study treats critical thinking as verifying everything ("Higher confidence in GenAI's ability to perform a task is related to less critical thinking effort"), and I'm not sure I agree with that. It is also only aimed at people at work rather than everyday use. I personally discovered so many things with GenAI, and I know to always question what the model says on specific topics or questions, because models tend to hallucinate. You could also say the internet made people dumber, but those who know how to use it will be smarter.
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They had to write an essay in 20 minutes... obviously most people would just generate the whole thing and fix little problems here and there, but if you have to think less because you're just fixing stuff instead of inventing it, well, yeah, you use your brain less. That doesn't make you dumb. It's a bit like saying paying by card makes you dumber than paying in cash because you no longer have to count out how much to hand over and check how much change you get back.
Yes, if a tool or another person helps you, the task is less intensive for your brain. Who would have thought?!
-
Rockstar Games: 6,000 employees.
20 kWh per square foot: https://esource.bizenergyadvisor.com/article/large-offices
150 square feet per employee: https://unspot.com/blog/how-much-office-space-do-we-need-per-employee/#%3A~%3Atext=The+needed+workspace+may+vary+in+accordance
18,000,000,000 watt hours
vs
10,000,000,000 watt hours for ChatGPT training
https://www.washington.edu/news/2023/07/27/how-much-energy-does-chatgpt-use/
Yet there's no hand-wringing over the environmental destruction caused by 3D gaming.
Semi non sequitur argument aside, the math actually checks out once you watch the units.
I double-checked the quick phone calculation: using the figures provided, Rockstar's office energy use comes out to roughly 18,000,000 (18 million) kWh, and since 1 kWh is 1,000 Wh, that is exactly the 18,000,000,000 (18 billion) watt hours quoted above.
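If anyone wants to sanity-check the arithmetic themselves, here's a quick back-of-the-envelope script. The headcount, square footage, and energy-intensity figures are the ones quoted above (I'm assuming the 20 kWh per square foot figure is annual, as building energy-intensity numbers usually are); everything else is unit conversion:

```python
# Back-of-the-envelope check of the office-vs-training energy comparison.
EMPLOYEES = 6_000          # Rockstar headcount, from the comment above
SQFT_PER_EMPLOYEE = 150    # office space per employee
KWH_PER_SQFT = 20          # large-office energy intensity (assumed annual)

office_kwh = EMPLOYEES * SQFT_PER_EMPLOYEE * KWH_PER_SQFT
office_wh = office_kwh * 1_000        # 1 kWh = 1,000 Wh

CHATGPT_TRAINING_WH = 10_000_000_000  # ~10 GWh per the UW article

print(f"Office use: {office_kwh:,} kWh = {office_wh:,} Wh")
print(f"Training:   {CHATGPT_TRAINING_WH:,} Wh (one-time)")
# Office use: 18,000,000 kWh = 18,000,000,000 Wh
# Training:   10,000,000,000 Wh (one-time)
```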
-
Do you really need a list of why people are sick of LLM and AI slop?
AI is literally making people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
They are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
And they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.
They are a massive privacy risk:
I do agree on this, but at this point everyone uses Instagram, Snapchat, Discord and whatever to share their DMs, which are probably being sniffed by the NSA and used by companies for profiling. People are never going to change.
-
Do you really need a list of why people are sick of LLM and AI slop?
AI is literally making people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
They are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
And they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.
They are being used to push fascist ideologies into every aspect of the internet:
Everything can be used for that. If anything, I believe AI models are too restricted and tend not to argue on controversial subjects, which prevents you from learning anything. Censorship sucks.
-
Don't be obtuse, you walnut. I'm obviously not equating medical technology with 12-fingered anime girls and plagiarism.
You still mix all AI stuff in; what about hating just LLMs and image generators?
-
I'm using "good" in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint, continuing to throw money and data at it will only result in minimal improvement.
What I mean by "good AI" is the potential of new types of AI models to be trained for things like diagnosing cancer and other predictive tasks that we haven't thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).
The work of training new, useful AI models is going to be done by scientists and researchers, probably on limited budgets because there won't be a clear profit motive, and they won't be able to afford thousands of $20,000 GPUs like those being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, the used hardware of today will become more affordable and hopefully actually get used for useful AI projects.
Ok. Thanks for clarifying.
Although I am pretty sure AI is already used in the medical field for research and diagnosis. This "AI everywhere" trend you are seeing is the result of everyone trying to stick AI into everything, in every which way.
The thing about the AI boom is that lots of money is being invested into all fields. A bubble pop would result in investment money drying up everywhere, not in access to AI becoming more affordable as you are suggesting.
-
Not clicking on a Substack link. Fucking Nazi-promoting shit website.
-
I never said that.
All I'm saying is just because The Internet caused library use to plummet doesn't mean Internet = Bad.
It might. Like, maybe a little?
Oddly, you're the one kind of lacking nuance here. I'd be willing to oppose the Internet in certain contexts. It certainly feels less and less useful as it's consumed by AI spam anyway.
-
“Guns don’t kill people, people kill people”
Edit:
Controversial reply, apparently, but this is literally part of the script to a Philosophy Tube video (relevant part is 8:40 - 20:10)
We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. "Guns don't kill people, people do." But some philosophers have argued that technology can have values built into it that we may not realise.
...
The philosopher Don Ihde says tech can open or close possibilities. It's not just about its function or who controls it. He says technology can provide a framework for action.
...
Martin Heidegger was a student of Husserl's, and he wrote about the ways that we experience the world when we use a piece of technology. His most famous example was a hammer. He said when you use one you don't even think about the hammer. You focus on the nail. The hammer almost disappears in your experience. And you just focus on the task that needs to be performed.
Another example might be a keyboard. Once you get proficient at typing, you almost stop experiencing the keyboard. Instead, your primary experience is just of the words that you're typing on the screen. It's only when it breaks or it doesn't do what we want it to do, that it really becomes visible as a piece of technology. The rest of the time it's just the medium through which we experience the world.
Heidegger talks about technology withdrawing from our attention. Others say that technology becomes transparent. We don't experience it. We experience the world through it. Heidegger says that technology comes with its own way of seeing.
...
Now some of you are looking at me like "Bull sh*t. A person using a hammer is just a person using a hammer!" But there might actually be some evidence from neurology to support this.
If you give a monkey a rake that it has to use to reach a piece of food, then the neurons in its brain that fire when there's a visual stimulus near its hand start firing when there's a stimulus near the end of the rake, too! The monkey's brain extends its sense of the monkey body to include the tool!
And now here's the final step. The philosopher Bruno Latour says that when this happens, when the technology becomes transparent enough to get incorporated into our sense of self and our experience of the world, a new compound entity is formed.
A person using a hammer is actually a new subject with its own way of seeing - 'hammerman.' That's how technology provides a framework for action and being. Rake + monkey = rakemonkey. Makeup + girl is makeupgirl, and makeupgirl experiences the world differently, has a different kind of subjectivity because the tech lends us its way of seeing.
You think guns don't kill people, people do? Well, gun + man creates a new entity with new possibilities for experience and action - gunman!
So if we're onto something here with this idea that tech can withdraw from our attention and in so doing create new subjects with new ways of seeing, then it makes sense to ask when a new piece of technology comes along, what kind of people will this turn us into.
I thought that we were pretty solidly past the idea that anything is “just a tool” after seeing Twitler scramble Grok’s innards to advance his personal politics.
Like, if you still had any lingering belief that AI is “like a hammer”, that really should’ve extinguished it.
But I guess some people see that as an aberrant misuse of AI, and not an indication that all AI has an agenda baked into it, even if it’s more subtle.
We once played this game with friends where a word gets stuck on your forehead and you have to guess what you are.
One guy got C4 (as in the explosive) to guess, and he failed. I remember we had to agree with each other on whether C4 is or is not a weapon. The main idea was that explosives are comparatively rarely used in actual killing, as opposed to things like mining and such. A parallel question was: is a knife a weapon?
But ultimately we agreed that C4 is not a weapon. It was not invented primarily to kill or injure, as opposed to guns, which are only for killing or injuring.
Take guns away and people will kill with literally anything else. But give easy access to guns and people will kill with them. A gun is not a tool; it is a weapon by design.
-
I firmly believe we won't get most of the interesting, "good" AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work that has potential, by throwing more and more hardware and money at LLMs and generative AI models, because they don't understand the technology and see it as a way to get rich and powerful quickly.
I don't know if the current AI phase is a bubble, but I agree with you that if it were a bubble and it burst, it wouldn't somehow stop or end AI; it would cause a new wave of innovation instead.
I've seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn't exactly die.