Lemmy be like
-
I don’t see the relevance here
Anthropic is on track to lose $3 billion or more, net of revenue, in 2025.
OpenAI is on track to lose more than $10 billion.
xAI, makers of “Grok, the racist LLM,” is losing over $1 billion a month.
I don't know that generative infill justifies these losses.
The different uses of AI are not inextricable. This is the point of the post. We should be able to talk about the good and the bad.
-
As in Genuine Leather?
It is leather, just 1mm of it.
-
The different uses of AI are not inextricable. This is the point of the post. We should be able to talk about the good and the bad.
We should be able to talk about the good and the bad.
Again, I point you to "implicit costs". Something this trivial isn't good if it's this expensive.
-
Define "people," because the people here obviously don't. The average person I talk to IRL on a daily basis doesn't know what it is, has never used it, and likely never will. And a system where the people currently pushing this wouldn't exist would certainly change things.
Your argument basically amounts to "nu uh".
The average person I talk to IRL on a daily basis doesn’t know what it is, has never used it, and likely never will.
ChatGPT.com is visited approximately 5.24 billion times each month. That makes it bigger than Twitter, Instagram, and even Wikipedia.
https://explodingtopics.com/blog/chatgpt-users
I don't use Twitter and don't know anyone that does but that doesn't mean it isn't popular.
Your argument basically amounts to “nu uh”.
ChatGPT has been the biggest Internet thing since Google. If it wasn't, we wouldn't even be talking about it here. I shouldn't have to quote statistics for something well known.
-
Run your own AI!
Oh sure, let me just pull a couple billion out of the couch cushions to spin up a data center in the middle of the desert.
I linked it in this thread but here it is again.
https://www.youtube.com/watch?v=T17bpGItqXw
There is a huge open-source community working on LLMs.
-
We should be able to talk about the good and the bad.
Again, I point you to "implicit costs". Something this trivial isn't good if it's this expensive.
The different uses of AI are not inextricable.
Many generative inpainting models will run locally
Continuing to treat AI as a monolith is missing the point.
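To make "runs locally" concrete, here is a minimal sketch using Hugging Face's `diffusers` library and the `runwayml/stable-diffusion-inpainting` checkpoint (both chosen as free, commonly used examples; the file names and prompt are made up, the one-time model download is a few GB, and it wants a consumer GPU):

```python
# Local generative inpainting sketch: fill a masked region of an image.
# After the one-time model download, everything runs on your own machine.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")  # fits on an ordinary consumer graphics card

image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = fill

result = pipe(prompt="empty park bench", image=image, mask_image=mask).images[0]
result.save("filled.png")
```

No data center required: inference for a model like this is a few seconds of GPU time per image.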
-
The different uses of AI are not inextricable.
Many generative inpainting models will run locally
Continuing to treat AI as a monolith is missing the point.
The value of the modern LLM is predicated on trained models. You can run the models locally. You can't run industry scale training locally.
Might as well say "The automotive industry isn't so bad if you just look at the carbon footprint of a single car". You're missing the forest for this one very small tree.
-
The value of the modern LLM is predicated on trained models. You can run the models locally. You can't run industry scale training locally.
Might as well say "The automotive industry isn't so bad if you just look at the carbon footprint of a single car". You're missing the forest for this one very small tree.
Generative inpainting doesn't typically employ an LLM; only a few such models even use transformer attention. It costs in the range of $100,000–$10 million to train a new diffusion or flow image model. Not cheap, but nothing crazy like training Opus or GPT-5.
-
Evil must be fought as long as it exists.
OK, but at this point you're just making yourselves lolcows: you announce these easy-to-push buttons, and people derive joy from pushing them.
Imitating AI just to troll is a thing now. So… that's a victory?
-
LLMs aren't artificial intelligence in any way.
They're extremely complex and very smart prediction engines.
The term "artificial intelligence" was co-opted and hijacked for marketing purposes a long time ago.
The kind of AI people generally expect to see is a fully autonomous, self-aware machine.
Anyone who has used an LLM for any extended period of time knows immediately that they're not that smart; even ChatGPT, arguably the smartest of them all, is still highly incapable.
What we do have to come to terms with is that these LLMs do have applications, they do have a function, they are useful, and they can be used in deleterious ways, just like any technology at all.
If a program that can predict prices for video games based on reviews and how many people bought them could be called AI long before 2021, LLMs can be too.
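That pre-2021 kind of "AI" is easy to make concrete. A toy sketch (all data below is invented for illustration): fit a least-squares model mapping review score and copies sold to launch price.

```python
import numpy as np

# toy data: [review score 0-10, copies sold in millions] -> launch price ($)
X = np.array([[9.5, 20.0],
              [8.5, 10.0],
              [7.0, 2.0],
              [6.0, 0.5]])
y = np.array([60.0, 50.0, 30.0, 20.0])

# ordinary least squares on [X | 1] to get two weights plus an intercept
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(score: float, sold_millions: float) -> float:
    """Predicted price for a hypothetical game."""
    return w[0] * score + w[1] * sold_millions + w[2]

print(round(predict(8.0, 5.0), 2))
```

Marketing departments happily called exactly this sort of fitted predictor "AI" for years.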
-
For those who know
I need to watch that video. I saw the first post but haven’t caught up yet.
-
Yeah, go cry about it. People use AI to help themselves while you’re just being technophobic, shouting ‘AI is bad’ without even saying which AI you mean. And you’re doing it on Lemmy, a tiny techno-bubble. Lmao.
No one is crying here, aside from some salty bitch of a techno-fetishist acting like his hard-on for environmental destruction and making people dumber is something to be proud of.
-
My employer is pushing AI usage; if the work is done, the work is done. This is the reality we're supposed to be living in with AI, and just conforming to the current predatory system because "AI bad" actively harms more than it helps.
The current predatory system will raise the limit on the 40-hour work week if they're allowed to. 60. 80. You might not even get a weekend. Unions fought for your weekend.
AI does not fundamentally change this relationship. It is the same predatory system.
-
I've already mentioned drafting documents and translating documents.
Once again, it's not enough to justify the cost.
LLM translations are hazardous at best, and we already have plenty of translation tools.
Templating systems are older than me, and even so, no one in their right mind should trust a non-deterministic tool to draft documents.
-
I need to watch that video. I saw the first post but haven’t caught up yet.
It's just slacktivism, no different from all the other Facebook profile picture campaigns.
-
Are you honestly claiming a shitpost is gaslighting?
What a world we live in.
It's just a joke bro.
-
Then why are you guys avoiding a logical discussion around environmental impact instead of spouting misinformation?
The fact of the matter is that eating a single steak or a pound of ground beef will eclipse most people's AI usage. Obviously most people can't escape driving, but for those of us in cities, biking will cut your environmental footprint far more than giving up AI ever could.
Serving AI models isn't even as bad as watching Netflix. This counterculture to AI is largely misdirected anger that should be thrown at unregulated capitalism. Unregulated data centers. Unregulated growth.
Training is bad, but training is a small piece of the puzzle that happens infrequently, and again it circles back to the regulation problem.
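The steak comparison is easy to sanity-check with back-of-envelope arithmetic. The constants below are rough public ballpark estimates, not measurements (beef lifecycle emissions and per-query LLM energy both vary widely by study); swap in your own figures.

```python
# Rough sanity check of the steak-vs-AI comparison.
# Both constants are ballpark public estimates, not measurements.
BEEF_KG_CO2E_PER_KG = 27.0   # commonly cited lifecycle emissions for beef
QUERY_G_CO2E = 3.0           # rough per-query estimate for a hosted LLM
STEAK_KG = 0.45              # one ~1 lb steak

steak_g_co2e = STEAK_KG * BEEF_KG_CO2E_PER_KG * 1000  # grams CO2e per steak
queries_equivalent = steak_g_co2e / QUERY_G_CO2E

print(f"one steak ~= {queries_equivalent:,.0f} LLM queries")  # thousands
```

Even if the per-query estimate is off by an order of magnitude in either direction, one steak still lands in the range of hundreds to tens of thousands of queries.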
It is easier to oppose a new thing than change ingrained habits.
If your house is on fire, it is reasonable to be mad at someone who throws a little torch onto it.
-
A good chunk of Internet usage is HD video, which is far more power-hungry than AI. I agree it's added on top… just like streaming was in 2010, and as new things will continue to be.
Great, so why not oppose making things worse?
-
This post did not contain any content.
I was laughing today seeing the same users who have been calling AI a bullshit machine posting articles like "Grok claims this happened." Very funny how quickly people switch up when it aligns with them.
-
I firmly believe we won't get most of the interesting, "good" AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work that has potential, by throwing more and more hardware and money at LLMs and generative AI models, because they don't understand the technology and see it as a way to get rich and powerful quickly.
I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.
I can't imagine that you read much about AI outside of web sources or news media, then. The exciting uses of AI are not LLMs and diffusion models, though that is all the public talks about when they talk about 'AI'.
For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that will generate a string of amino acids which will fold into a structure with the specified properties (like how image description prompts are used in an image generator).
This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.
Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ
Here is a Boston Dynamics robot "using reinforcement learning with references from human motion capture and animation.": https://www.youtube.com/watch?v=I44_zbEwz_w
Object detection, image processing, logistics, speech recognition, etc. These are all things that required tens of thousands of hours of science and engineering time to develop software for, and the software wasn't great. Now, a college freshman with free tools and a consumer graphics card can train a computer vision network that outperforms that human-engineered software.
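As a toy illustration of how low that barrier now is (this is not a real vision network, just the shape of the training loop in plain numpy; actual coursework would use PyTorch and a pretrained backbone), here is a from-scratch classifier learning to separate bright from dark 8×8 patches on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "images": 100 dark and 100 bright flattened 8x8 patches
dark = rng.uniform(0.0, 0.4, size=(100, 64))
bright = rng.uniform(0.6, 1.0, size=(100, 64))
X = np.vstack([dark, bright])
y = np.array([0.0] * 100 + [1.0] * 100)  # 0 = dark, 1 = bright

# logistic regression trained by plain gradient descent
w = np.zeros(64)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= lr * X.T @ (p - y) / len(X)         # gradient of the log-loss
    b -= lr * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The whole loop fits in a screenful; the hand-engineered equivalents of real vision pipelines took teams years to build.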
AI isn't LLMs and image generators, those may as well be toys. I'm sure eventually LLMs and image generation will be good, but the only reason it seems amazing is because it is a novel capability that computers have not had before. But the actual impact on the real world will be minimal outside of specific fields.