Lemmy be like
-
We should be able to talk about the good and the bad.
Again, I point you to "implicit costs". Something this trivial isn't good if it's this expensive.
The different uses of AI are not inextricable.
Many generative inpainting models will run locally.
Continuing to treat AI as a monolith is missing the point.
-
The different uses of AI are not inextricable.
Many generative inpainting models will run locally.
Continuing to treat AI as a monolith is missing the point.
The value of the modern LLM is predicated on trained models. You can run the models locally. You can't run industry scale training locally.
Might as well say "The automotive industry isn't so bad if you just look at the carbon footprint of a single car". You're missing the forest for this one very small tree.
-
The value of the modern LLM is predicated on trained models. You can run the models locally. You can't run industry scale training locally.
Might as well say "The automotive industry isn't so bad if you just look at the carbon footprint of a single car". You're missing the forest for this one very small tree.
Generative inpainting doesn't typically employ an LLM; only a few such models even use transformer attention. It costs in the range of $100,000 to $10 million to train a new diffusion or flow image model. Not cheap, but nothing crazy like training Opus or GPT-5.
-
Evil must be fought as long as it exists.
OK, but you're just making yourselves lolcows at this point where you announce these easy-to-push buttons & people derive joy from pushing them.
Imitating AI just to troll is a thing now. So… that's a victory?
-
LLMs aren't artificial intelligence in any way.
They're extremely complex and very capable prediction engines.
The term "artificial intelligence" was co-opted and hijacked for marketing purposes a long time ago.
The kind of AI that people generally expect to see is a fully autonomous, self-aware machine.
Anyone who has used an LLM for any extended period of time will know immediately that they're not that smart. Even ChatGPT, arguably the smartest of them all, is still highly incapable.
What we do have to come to terms with is that these LLMs do have applications: they have a function, they are useful, and they can be used in deleterious ways, just like any technology at all.
If a program that predicts prices for video games based on reviews and how many people bought them could be called AI long before 2021, LLMs can too.
-
For those who know
I need to watch that video. I saw the first post but haven’t caught up yet.
-
Yeah, go cry about it. People use AI to help themselves while you’re just being technophobic, shouting ‘AI is bad’ without even saying which AI you mean. And you’re doing it on Lemmy, a tiny techno-bubble. Lmao.
No one is crying here, aside from some salty bitch of a techno-fetishist acting like his hard-on for environmental destruction and making people dumber is something to be proud of.
-
My employer is pushing AI usage; if the work is done, the work is done. This is the reality we're supposed to be living in with AI: conforming to the current predatory system, because "AI bad" actively harms more than it helps.
The current predatory system will push past the 40-hour work week if it's allowed to. 60. 80. You might not even get a weekend. Unions fought for your weekend.
AI does not fundamentally change this relationship. It is the same predatory system.
-
I've already mentioned drafting documents and translating documents.
Once again, it's not enough to justify the cost.
LLM translations are hazardous at best, and we already have plenty of translation tools.
Templating systems are older than I am, and even so, no one in their right mind should trust a non-deterministic tool to draft documents.
-
I need to watch that video. I saw the first post but haven’t caught up yet.
it's just slacktivism no different than all the other facebook profile picture campaigns.
-
Are you honestly claiming a shitpost is gaslighting?
What a world we live in.
It's just a joke bro.
-
Then why are you guys avoiding a logical discussion around environmental impact instead of spouting misinformation?
The fact of the matter is that eating a single steak or pound of ground beef will eclipse most people's AI usage. Obviously most can't escape driving, but for those of us in cities, taking up biking will reduce your environmental footprint far more than giving up AI will.
Serving AI models isn't even as bad as watching Netflix. This counterculture against AI is largely misdirected anger that should be aimed at unregulated capitalism. Unregulated data centers. Unregulated growth.
Training is bad, but training is a small piece of the puzzle that happens infrequently, and it again circles back to the regulation problem.
It is easier to oppose a new thing than change ingrained habits.
If your house is on fire, it is reasonable to be mad at someone who throws a little torch onto it.
-
A good chunk of Internet usage is HD video, which is far more power-hungry than AI. I agree it's added on top… just like streaming was in 2010, and as new things will continue to be.
Great, so why not oppose making things worse?
-
This post did not contain any content.
I was laughing today seeing the same users who have been calling AI a bullshit machine posting articles like "grok claims this happened". Very funny how quick people switch up when it aligns with them.
-
I firmly believe we won't get most of the interesting, "good" AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work with potential, by throwing more and more hardware and money at LLMs and generative AI models, because they don't understand the technology and see it as a way to get rich and powerful quickly.
I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.
I can't imagine that you read much about AI outside of web sources or news media, then. The exciting uses of AI are not LLMs and diffusion models, though that is all the public talks about when they talk about 'AI'.
For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that will generate a string of amino acids which will fold into a structure with the specified properties (like how image description prompts are used in an image generator).
This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.
Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ
Here is a Boston Dynamics robot "using reinforcement learning with references from human motion capture and animation.": https://www.youtube.com/watch?v=I44_zbEwz_w
Object detection, image processing, logistics, speech recognition, etc. These all required tens of thousands of hours of science and engineering time to develop software for, and the software wasn't great. Now, a college freshman with free tools and a graphics card can train a computer vision network that outperforms the human-crafted software.
AI isn't LLMs and image generators; those may as well be toys. I'm sure LLMs and image generation will eventually be good, but the only reason they seem amazing now is that they are a novel capability computers have not had before. The actual impact on the real world will be minimal outside of specific fields.
-
The same can be said for taking flights to go on holiday.
Flying emits exponentially more CO2 and supports the oil industry.
I just avoid both flights and AI in its current form.
-
it's just slacktivism no different than all the other facebook profile picture campaigns.
I have no idea about what’s being called for at all.
-
Do you really need a list of why people are sick of LLM and AI slop?
AI is literally making people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
Are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
And they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.
Do you really need a list of why people are sick of LLM and AI slop?
We don't need a collection of random 'AI bad' articles because your entire premise is flawed.
In general, people are not 'sick of LLM and AI slop'. Real people, who are not chronically online, have fairly positive views of AI, and public sentiment about AI is actually becoming more positive over time.
Here is Stanford's report on the public opinion regarding AI (https://hai.stanford.edu/ai-index/2024-ai-index-report/public-opinion).
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
My dude, it sounds like you need to go out into the environment a bit more.
-
I was laughing today seeing the same users who have been calling AI a bullshit machine posting articles like "grok claims this happened". Very funny how quick people switch up when it aligns with them.
That would seem hypocritical if you're completely blind to poetic irony, yes.
-
I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.
I can't imagine that you read much about AI outside of web sources or news media, then. The exciting uses of AI are not LLMs and diffusion models, though that is all the public talks about when they talk about 'AI'.
For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that will generate a string of amino acids which will fold into a structure with the specified properties (like how image description prompts are used in an image generator).
This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.
Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ
Here is a Boston Dynamics robot "using reinforcement learning with references from human motion capture and animation.": https://www.youtube.com/watch?v=I44_zbEwz_w
Object detection, image processing, logistics, speech recognition, etc. These all required tens of thousands of hours of science and engineering time to develop software for, and the software wasn't great. Now, a college freshman with free tools and a graphics card can train a computer vision network that outperforms the human-crafted software.
AI isn't LLMs and image generators; those may as well be toys. I'm sure LLMs and image generation will eventually be good, but the only reason they seem amazing now is that they are a novel capability computers have not had before. The actual impact on the real world will be minimal outside of specific fields.
Oh, I have read and heard about all those things; none of them (to my knowledge) are being done by OpenAI, xAI, Google, Anthropic, or any of the large companies fueling the current AI bubble, which is why I call it a bubble. The things you mentioned are where AI has potential, and I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators. And sure, maybe some of those innovators end up getting bought by the larger companies, but that's not as good for their start-ups or for humanity at large.