Lemmy be like
-
I firmly believe we won't get most of the interesting, "good" AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work that has potential by throwing more and more hardware and money at LLMs and generative AI models, because they don't understand the technology and see it as a way to get rich and powerful quickly.
I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.
I can't imagine that you read much about AI outside of web sources or news media then. The exciting uses of AI are not LLMs and diffusion models, though those are all the public talks about when they talk about 'AI'.
For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that will generate a string of amino acids which will fold into a structure with the specified properties (like how image description prompts are used in an image generator).
This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.
Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ
Here is a Boston Dynamics robot "using reinforcement learning with references from human motion capture and animation.": https://www.youtube.com/watch?v=I44_zbEwz_w
Object detection, image processing, logistics, speech recognition, etc. These are all things that required tens of thousands of hours of science and engineering time to develop the software for, and the software wasn't great. Now, a college freshman with free tools and a graphics card can train a computer vision network that outperforms that human-created software.
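The barrier to entry really is that low now. A minimal sketch of the kind of thing a first-year student can do with free tools (PyTorch here; the tiny network and the random stand-in data are purely illustrative — a real project would load CIFAR-10 or a labeled folder of photos):

```python
import torch
import torch.nn as nn

# Tiny convolutional classifier - a stand-in for the kind of network
# a beginner can train with free tools and a consumer graphics card.
class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

torch.manual_seed(0)
model = TinyCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random 32x32 RGB images as placeholder data; swap in a real
# dataset (e.g. torchvision's CIFAR-10) for actual training.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))

first_loss = None
for step in range(20):
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
    if first_loss is None:
        first_loss = loss.item()

print(first_loss, loss.item())
```

The whole training loop fits on one screen; the hard part (GPU kernels, autodiff) is handled by the free framework.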
AI isn't LLMs and image generators; those may as well be toys. I'm sure eventually LLMs and image generation will be good, but the only reason they seem amazing is that they are a novel capability computers have not had before. The actual impact on the real world will be minimal outside of specific fields.
-
The same can be said for taking flights to go on holiday.
Flying emits way more CO2 and supports the oil industry.
I just avoid both flights and AI in its current form.
-
it's just slacktivism no different than all the other facebook profile picture campaigns.
I have no idea about what’s being called for at all.
-
Do you really need to have a list of why people are sick of LLM and AI slop?
AI is literally making people dumber:
https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/
They are a massive privacy risk:
https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s
They are being used to push fascist ideologies into every aspect of the internet:
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
And they are a massive environmental disaster:
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.
Do you really need to have a list of why people are sick of LLM and AI slop?
We don't need a collection of random 'AI bad' articles because your entire premise is flawed.
In general, people are not 'sick of LLM and AI slop'. Real people, who are not chronically online, have fairly positive views of AI, and public sentiment about AI is actually becoming more positive over time.
Here is Stanford's report on the public opinion regarding AI (https://hai.stanford.edu/ai-index/2024-ai-index-report/public-opinion).
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
My dude, it sounds like you need to go out into the environment a bit more.
-
I was laughing today seeing the same users who have been calling AI a bullshit machine posting articles like "Grok claims this happened". Very funny how quickly people switch up when it aligns with them.
That would seem hypocritical if you're completely blind to poetic irony, yes.
-
I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.
I can't imagine that you read much about AI outside of web sources or news media then. The exciting uses of AI are not LLMs and diffusion models, though those are all the public talks about when they talk about 'AI'.
For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that will generate a string of amino acids which will fold into a structure with the specified properties (like how image description prompts are used in an image generator).
This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.
Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ
Here is a Boston Dynamics robot "using reinforcement learning with references from human motion capture and animation.": https://www.youtube.com/watch?v=I44_zbEwz_w
Object detection, image processing, logistics, speech recognition, etc. These are all things that required tens of thousands of hours of science and engineering time to develop the software for, and the software wasn't great. Now, a college freshman with free tools and a graphics card can train a computer vision network that outperforms that human-created software.
AI isn't LLMs and image generators; those may as well be toys. I'm sure eventually LLMs and image generation will be good, but the only reason they seem amazing is that they are a novel capability computers have not had before. The actual impact on the real world will be minimal outside of specific fields.
Oh I have read and heard about all those things, none of them (to my knowledge) are being done by OpenAI, xAI, Google, Anthropic, or any of the large companies fueling the current AI bubble, which is why I call it a bubble. The things you mentioned are where AI has potential, and I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators. And sure, maybe some of those who are innovating end up getting bought by the larger companies, but that's not as good for their start-ups or for humanity at large.
-
Run your own AI!
Oh sure, let me just pull a couple billion out of the couch cushions to spin up a data center in the middle of the desert.
Oh sure, let me just pull a couple billion out of the couch cushions to spin up a data center in the middle of the desert.
From my, very much not in a data center, desktop PC:
-
What is the gaslighting here? A trend, or the act of pointing out a trend, does not seem like gaslighting to me. At most it seems like bandwagon propaganda, or satire thereof.
For the second paragraph, I agree we (Lemmings) are all pretty against it and we can be echo-chambery about it. You know, like Linux!
But I would also DISagree that we (population of earth) are all against it.
It seems like the most immature and toxic thing to me to invoke terms like "gaslighting," ironically "toxic," and all the other terms you associate with these folks, defensively and for any reason, whether it aligns with what the word actually means or not. Like a magic phrase that instantly makes the person you use it against evil, manipulative, and abusive, and the person who uses it a moral saint and vulnerable victim, while indirectly muting all those who have genuine uses for the terms. Or maybe I'm just exaggerating, and it's just the typical over- and mis-using of words.
Anyhow, sadly necessary disclaimer: I agree with almost all of the current criticism raised against AI, and my disagreements are purely with mischaracterizations of the underlying technology.
EDIT: I just reminded myself of when a teacher went ballistic at class for misusing the term "antisocial," saying we're eroding and polluting all genuine and very serious uses of the term. Hm, yeah it's probably just that same old thing. Not wrong for going ballistic over it, though.
-
Oh I have read and heard about all those things, none of them (to my knowledge) are being done by OpenAI, xAI, Google, Anthropic, or any of the large companies fueling the current AI bubble, which is why I call it a bubble. The things you mentioned are where AI has potential, and I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators. And sure, maybe some of those who are innovating end up getting bought by the larger companies, but that's not as good for their start-ups or for humanity at large.
AlphaFold is made by DeepMind, an Alphabet (Google) subsidiary.
Google and OpenAI are also both developing world models.
These are a way to generate realistic environments that behave like the real world, and they are core to generating the volume of synthetic training data that makes training robotics models massively more efficient.
Instead of building an actual physical robot and having it slowly interact with the world while learning from its one physical body, the robot's builder can create a world-model representation of the robot's physical characteristics and attach the control software to the simulation. Now the robot can train in a simulated environment, and you can run multiple parallel copies of that setup to generate training data rapidly.
It would be economically unfeasible to build 10,000 prototype robots in order to generate training data, but it is easy to see how running 10,000 different models in parallel is possible.
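The parallelism argument can be sketched in a few lines. Here the "world model" is a trivial stand-in (noisy point-mass physics in NumPy, with a random placeholder policy); a real world model would be a learned neural simulator, but the economics are the same — every simulated step yields a training sample per virtual robot:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(pos, vel, action, dt=0.05):
    """Toy stand-in for a world model: noisy point-mass physics.
    A real world model would be a neural network trained on real data."""
    vel = vel + action * dt + rng.normal(0, 0.01, vel.shape)
    pos = pos + vel * dt
    return pos, vel

n_robots = 10_000      # parallel simulated copies, not physical prototypes
horizon = 50           # steps per rollout

pos = np.zeros((n_robots, 2))
vel = np.zeros((n_robots, 2))

trajectories = []
for t in range(horizon):
    action = rng.uniform(-1, 1, (n_robots, 2))   # placeholder policy
    pos, vel = step(pos, vel, action)
    trajectories.append((pos.copy(), action))

# Every simulated step produced one (state, action) sample per robot.
n_samples = n_robots * horizon
print(n_samples)  # → 500000
```

Half a million samples from one loop on one machine — versus the cost of instrumenting even a single physical prototype.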
I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators.
On the other hand, the billions of dollars being thrown at these companies is being used to hire machine learning specialists. The real innovators who have the knowledge and talent to work on these projects almost certainly work for one of these companies or the DoD. This demand for machine learning specialists (and their high salaries) drives students to change their major to this field and creates more innovators over time.
-
No one is crying here aside from some salty bitch of a techno-fetishist acting like his hard-on for environmental destruction and making people dumber is something to be proud of.
OK, Mr. Big Brain Misuse-of-Terms. No point talking to someone who already thinks they know everything. Enjoy the echo chamber, lol.
-
That would seem hypocritical if you're completely blind to poetic irony, yes.
It doesn't seem hypocritical. It is.
-
I have no idea about what’s being called for at all.
Search for clippy and rossman
-
Do you really need to have a list of why people are sick of LLM and AI slop?
We don't need a collection of random 'AI bad' articles because your entire premise is flawed.
In general, people are not 'sick of LLM and AI slop'. Real people, who are not chronically online, have fairly positive views of AI, and public sentiment about AI is actually becoming more positive over time.
Here is Stanford's report on the public opinion regarding AI (https://hai.stanford.edu/ai-index/2024-ai-index-report/public-opinion).
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
My dude, it sounds like you need to go out into the environment a bit more.
My dude, it sounds like you need to go out into the environment a bit more.
oh you have a spare ecosystem in the closet for when this one is entirely fucked huh?
https://www.npr.org/2024/09/11/nx-s1-5088134/elon-musk-ai-xai-supercomputer-memphis-pollution
Stop acting like it's a rumor. The problem is real, it's already here, they're already rushing to build the data centers - so what, we can get Taylor Swift Grok porn? Nothing in that graph supports your premise either.
That Stanford graph is based on data from 2022 and 2023 - it's 2025 here in reality. Wake up. Times change.
-
I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.
I can't imagine that you read much about AI outside of web sources or news media then. The exciting uses of AI are not LLMs and diffusion models, though those are all the public talks about when they talk about 'AI'.
For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that will generate a string of amino acids which will fold into a structure with the specified properties (like how image description prompts are used in an image generator).
This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.
Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ
Here is a Boston Dynamics robot "using reinforcement learning with references from human motion capture and animation.": https://www.youtube.com/watch?v=I44_zbEwz_w
Object detection, image processing, logistics, speech recognition, etc. These are all things that required tens of thousands of hours of science and engineering time to develop the software for, and the software wasn't great. Now, a college freshman with free tools and a graphics card can train a computer vision network that outperforms that human-created software.
AI isn't LLMs and image generators; those may as well be toys. I'm sure eventually LLMs and image generation will be good, but the only reason they seem amazing is that they are a novel capability computers have not had before. The actual impact on the real world will be minimal outside of specific fields.
yeah this shit's working out GREAT
-
Once again, it's not enough to justify the cost.
LLM translations are hazardous at best, and we already have a lot of translation tools.
Templating systems are older than me, and even so, no one in their right mind should trust a non-deterministic tool to draft documents.
-
That's simply not true.
What translation tool is better at translating English to Afrikaans? I'm just picking a difficult language; I am Afrikaans, look at my post history.
-
I just avoid both flights and AI in its current form.
Do you stream HD video, or game, or eat meat?
Because those footprints are bigger than if you used AI a lot. Not saying you should use AI, just pointing out a hypocrisy I see on here a lot.
-
That's simply not true.
What translation tool is better at translating English to Afrikaans? I'm just picking a difficult language; I am Afrikaans, look at my post history.
Are you going to nit-pick a point each time rather than addressing my argument as a whole?
I'm French, and I can tell you that in a software development context AI is worse than existing tools like DeepL. Maybe it works better in Afrikaans, and if that's the case, good - we finally have a use case!
Sadly, being OK at translating things is not what those models are sold on, and even if it were, it's still not worth the cost.
I'll stop responding; no one is reading this far into a comment, and you are not responding to my arguments in a way that's productive.
-
Are you going to nit-pick a point each time rather than addressing my argument as a whole?
I'm French, and I can tell you that in a software development context AI is worse than existing tools like DeepL. Maybe it works better in Afrikaans, and if that's the case, good - we finally have a use case!
Sadly, being OK at translating things is not what those models are sold on, and even if it were, it's still not worth the cost.
I'll stop responding; no one is reading this far into a comment, and you are not responding to my arguments in a way that's productive.
DeepL has the same issues that an LLM has when it comes to translating.
You're still sending all your data to some server. It might be a bit more efficient than an LLM - not sure by how much - but it's essentially the same thing.
DeepL is essentially just an LLM specifically tuned for translation.
-
I was laughing today seeing the same users who have been calling AI a bullshit machine posting articles like "Grok claims this happened". Very funny how quickly people switch up when it aligns with them.
Wouldn't posting articles about AI making up bullshit support their claim that AI makes up bullshit?
-
Run your own AI!
Oh sure, let me just pull a couple billion out of the couch cushions to spin up a data center in the middle of the desert.
Comments like this remind me of all the blockchain hate. People with no idea what they were talking about inventing justifications for hating something they were unwilling to understand.
There are so many legitimate reasons to criticize both and people still make shit up on the fly.