Lemmy be like
-
What is the gaslighting here? A trend, or the act of pointing out a trend, do not seem like gaslighting to me. At most it seems like bandwagon propaganda or the satire thereof.
For the second paragraph, I agree we (Lemmings) are all pretty against it and we can be echo-chambery about it. You know, like Linux!
But I would also DISagree that we (population of earth) are all against it.
It seems like the most immature and toxic thing to me to invoke terms like "gaslighting," ironically "toxic," and all the other vocabulary you associate with these folks, defensively and for any reason, whether or not it aligns with what the word actually means. Like a magic phrase that instantly makes the person you use it against evil, manipulative, and abusive, and the person who uses it a moral saint and vulnerable victim, while indirectly muting all those who have genuine uses for the terms. Or maybe I'm just exaggerating, and it's the typical over- and misuse of words.
Anyhow, sadly necessary disclaimer: I agree with almost all of the current criticism raised against AI, and my disagreements are purely with mischaracterizations of the underlying technology.
EDIT: I just reminded myself of when a teacher went ballistic at our class for misusing the term "antisocial," saying we were eroding and polluting all genuine and very serious uses of the term. Hm, yeah, it's probably just that same old thing. Not wrong to go ballistic over it, though.
-
Oh I have read and heard about all those things, none of them (to my knowledge) are being done by OpenAI, xAI, Google, Anthropic, or any of the large companies fueling the current AI bubble, which is why I call it a bubble. The things you mentioned are where AI has potential, and I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators. And sure, maybe some of those who are innovating end up getting bought by the larger companies, but that's not as good for their start-ups or for humanity at large.
AlphaFold is made by DeepMind, an Alphabet (Google) subsidiary.
Google and OpenAI are also both developing world models.
These are a way to generate realistic simulated environments that behave like the real world, and they are core to generating the volume of synthetic training data that would make training robotics models massively more efficient.
Instead of building an actual physical robot and having it slowly interact with the world while learning from its one physical body, the robot's builder could create a world-model representation of the robot's physical characteristics and attach their control software to the simulation. Now the robot can train in a simulated environment, and you can create multiple parallel copies of that setup to generate training data rapidly.
It would be economically unfeasible to build 10,000 prototype robots in order to generate training data, but it is easy to see how running 10,000 different models in parallel is possible.
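The parallel-simulation idea above can be sketched in a few lines. This is a hedged toy illustration only: `SimWorld`, its dynamics, and the random placeholder policy are all invented stand-ins, not any company's actual world model.

```python
import random

class SimWorld:
    """Toy stand-in for a learned world model of one robot's body."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.state = 0.0

    def step(self, action):
        # Simplified dynamics: state drifts toward the action, plus noise.
        self.state += 0.5 * (action - self.state) + self.rng.gauss(0, 0.01)
        reward = -abs(self.state - 1.0)  # reward for approaching state 1.0
        return self.state, reward

def collect_transitions(n_worlds, steps_per_world):
    """Pool (state, action, next_state, reward) tuples from parallel sims."""
    data = []
    for seed in range(n_worlds):
        world = SimWorld(seed)
        state = world.state
        for _ in range(steps_per_world):
            action = world.rng.uniform(-1, 1)  # placeholder random policy
            next_state, reward = world.step(action)
            data.append((state, action, next_state, reward))
            state = next_state
    return data

# 10,000 simulated "robots" x 100 steps each: a million training
# transitions, with no physical prototypes built at all.
transitions = collect_transitions(n_worlds=10_000, steps_per_world=100)
print(len(transitions))  # 1000000
```

Real pipelines would run the copies on many machines at once and feed the pooled transitions into a reinforcement-learning trainer, but the economics are the same: copying a simulation is nearly free, while copying a robot is not.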
I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators.
On the other hand, the billions of dollars being thrown at these companies is being used to hire machine learning specialists. The real innovators who have the knowledge and talent to work on these projects almost certainly work for one of these companies or the DoD. This demand for machine learning specialists (and their high salaries) drives students to change their major to this field and creates more innovators over time.
-
No one is crying here aside from some salty bitch of a techno-fetishist acting like his hard-on for environmental destruction and making people dumber is something to be proud of.
OK, Mr. Big Brain Misuse-of-Terms. No point talking to someone who already thinks they know everything. Enjoy the echo chamber, lol.
-
That would seem hypocritical if you're completely blind to poetic irony, yes.
It doesn't seem hypocritical. It is.
-
I have no idea about what’s being called for at all.
Search for Clippy and Rossmann.
-
Do you really need to have a list of why people are sick of LLM and AI slop?
We don't need a collection of random 'AI bad' articles because your entire premise is flawed.
In general, people are not 'sick of LLM and AI slop'. Real people, who are not chronically online, have fairly positive views of AI, and public sentiment about AI is actually becoming more positive over time.
Here is Stanford's report on the public opinion regarding AI (https://hai.stanford.edu/ai-index/2024-ai-index-report/public-opinion).
Stop being a corporate apologist and stop wrecking the environment with this shit technology.
My dude, it sounds like you need to go out into the environment a bit more.
My dude, it sounds like you need to go out into the environment a bit more.
oh you have a spare ecosystem in the closet for when this one is entirely fucked huh?
https://www.npr.org/2024/09/11/nx-s1-5088134/elon-musk-ai-xai-supercomputer-memphis-pollution
Stop acting like it's a rumor. The problem is real, it's already here, and they're already racing to build the data centers. So what, we can get Taylor Swift Grok porn? Nothing in that graph supports your premise either.
That Stanford graph is based on data from 2022 and 2023, and it's 2025 here in reality. Wake up. Times change.
-
I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.
I can't imagine that you read much about AI outside of web sources or news media, then. The exciting uses of AI are not LLMs and diffusion models, though that is all the public talks about when they talk about 'AI'.
For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that generates a string of amino acids that will fold into a structure with the specified properties (much like image-description prompts are used in an image generator).
This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA that will co-opt our cells into producing the specified protein.
Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ
Here is a Boston Dynamics robot "using reinforcement learning with references from human motion capture and animation.": https://www.youtube.com/watch?v=I44_zbEwz_w
Object detection, image processing, logistics, speech recognition, etc. These are all things that required tens of thousands of hours of science and engineering time to develop software for, and the software wasn't great. Now a college freshman with free tools and a graphics card can train a computer vision network that outperforms the human-engineered software.
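To make "training a network from scratch" concrete, here is the smallest possible version of the idea: a single perceptron trained from scratch in pure Python. This is a deliberately tiny sketch, nowhere near a real vision network; the dataset (the logical AND function) and the learning rule are the textbook minimal case, chosen only to show the train-then-predict loop.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron learning rule on 2-feature inputs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred               # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x[0]      # nudge weights toward the target
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn a simple linearly separable rule: logical AND
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(data, labels)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x in data]
print(preds)  # [0, 0, 0, 1]
```

A modern course swaps this hand-rolled loop for a framework like PyTorch, a convolutional architecture, and GPU training, but the shape of the work — forward pass, error, weight update, repeat — is the same.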
AI isn't LLMs and image generators; those may as well be toys. I'm sure LLMs and image generation will eventually be good, but the only reason they seem amazing is that they are a novel capability computers have not had before. The actual impact on the real world will be minimal outside of specific fields.
yeah this shit's working out GREAT
-
Once again it's not enough to justify the cost.
LLM translations are hazardous at best, and we already have a lot of translation tools.
Templating systems are older than I am, and even so, no one in their right mind should trust a non-deterministic tool to draft documents.
That's simply not true.
What translation tool is better at translating English to Afrikaans? I'm just picking a difficult language; I am Afrikaans, look at my post history.
-
I just avoid both flights and AI in its current form.
Do you stream HD video, or game, or eat meat?
Because those footprints are bigger than if you used AI a lot.
Not saying you should use AI, just pointing out a hypocrisy I see on here a lot.
-
That's simply not true.
What translation tool is better at translating English to Afrikaans? I'm just picking a difficult language; I am Afrikaans, look at my post history.
Are you going to nit-pick a point each time rather than addressing my argument as a whole?
I'm French, and I can tell you that in a software development context AI is worse than existing tools like DeepL. Maybe it works better in Afrikaans, and if that's the case, good, we finally have a use case!
Sadly, being OK at translating things is not what those models are sold on, and even if it were, it's still not worth the cost.
I'll stop responding; no one is reading this far into a comment, and you are not responding to my arguments in a way that's productive.
-
Are you going to nit-pick a point each time rather than addressing my argument as a whole?
I'm French, and I can tell you that in a software development context AI is worse than existing tools like DeepL. Maybe it works better in Afrikaans, and if that's the case, good, we finally have a use case!
Sadly, being OK at translating things is not what those models are sold on, and even if it were, it's still not worth the cost.
I'll stop responding; no one is reading this far into a comment, and you are not responding to my arguments in a way that's productive.
DeepL has the same issues that an LLM has when it comes to translating.
You're still sending all your data to some server. It might be a bit more efficient than an LLM, not sure by how much, but it's essentially the same thing.
DeepL is essentially just an LLM specifically tuned for translation.
-
I was laughing today seeing the same users who have been calling AI a bullshit machine posting articles like "Grok claims this happened". Very funny how quickly people switch up when it aligns with them.
Wouldn't posting articles about AI making up bullshit support their claim that AI makes up bullshit?
-
Run your own AI!
Oh sure, let me just pull a couple billion out of the couch cushions to spin up a data center in the middle of the desert.
Comments like this remind me of all the blockchain hate. People with no idea what they were talking about, inventing justifications for hating something they were unwilling to understand.
There are so many legitimate reasons to criticize both, and people still make shit up on the fly.
-
AI itself is a massive strain on the environment, without any true benefit
Rockstar Games developing GTA5: 6,000 employees.
20 kWh per square foot: https://esource.bizenergyadvisor.com/article/large-offices
150 square feet per employee: https://unspot.com/blog/how-much-office-space-do-we-need-per-employee/
18,000,000,000 watt hours
vs
10,000,000,000 watt hours for ChatGPT training
https://www.washington.edu/news/2023/07/27/how-much-energy-does-chatgpt-use/
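The arithmetic behind that comparison is worth making explicit. Multiplying the commenter's three cited figures gives the office-energy total (this only reproduces the comment's own estimate, under its assumption that the per-square-foot figure covers the development period):

```python
employees = 6_000          # Rockstar headcount cited above
sqft_per_employee = 150    # office space per employee
kwh_per_sqft = 20          # office energy use per square foot

office_kwh = employees * sqft_per_employee * kwh_per_sqft
office_wh = office_kwh * 1_000
print(office_wh)           # 18000000000 -> the 18,000,000,000 Wh figure

chatgpt_training_wh = 10_000_000_000  # cited ChatGPT training estimate
print(office_wh > chatgpt_training_wh)  # True
```

So by these (rough) numbers, one large studio's office footprint exceeds the cited training-run estimate; whether that is the right comparison (inference at scale, data-center construction, water use) is exactly what the thread goes on to argue about.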
There are more 3D games developed each year than there are companies releasing new AI models.
You're getting downvoted for providing a well founded argument that should facilitate a broader discussion. Jesus Christ what are we doing here, people?
-
Every system eventually ends with someone corrupted by power and greed wanting more. Putin and his oligarchs, Trump and his oligarchs... Xi isn't great, but at least I haven't heard news about the Uyghur situation for a couple of years now. Hope things are better there nowadays and people aren't going missing anymore just for speaking out against their government.
Time doesn't end with corrupt power; those are just things that happen. Bad shit always happens; it's the why, how often, and how we fix it that are more indicative of success. Every machine breaks down eventually.
-
Rather, our problem is that we live in a world where the strongest survive, and the strongest does not mean the smartest... So alas we will always be in complete shit until we disappear.
The fittest survive. The problem is creating systems where the best fit are people who lack empathy and a moral code.
A better solution would be selecting world leaders from the population at random.
-
I'm anti-censorship. Someone agreed with me because they were banned for anti-trans statements.
Just because someone's on your team doesn't mean they're going to help you.
-
Extreme oversimplification. Hammers don't kill the planet by simply existing.
And neither does AI? The massive data centers are having negative impacts on local economies, resources and the environment.
Just like a massive hammer factory, mines for the metals, logging for handles and manufacturing for all the chemicals, paints and varnishes have a negative environmental impact.
Saying something kills the planet by existing is extreme hyperbole.
-
We don’t need a collection of random ‘AI bad’ articles because your entire premise is flawed.
God forbid you have evidence to support your premise, huh.
-
Because I used AI slop to create this shitpost lol.
So naturally it would make mistakes.
There are other mistakes in the image too.
Makes for a confusing cartoon. I browsed too many of the comments thinking everyone knew what 3251 means except me. I thought a route 3252 road sign fell on him.