Lemmy be like
-
Those are not valuable use cases. “Devouring text” and generating images are not things that benefit from automation. Nor is summarization of text. These do not add value to human life and they don’t improve productivity. They are a complete red herring.
Who talked about image generation? That one is pretty much useless; for anything that needs to be generated on the fly like that, a stick figure would do.
Devouring text like that has been instrumental in learning for my students, especially the ones who have English as a Second Language (ESL), so its usefulness in teaching would be interesting to discuss.
Do I think general open LLMs are the future? Fuck no. Do I think they are useless and unjustifiable? Also no. In their current state, they are a brilliant beta test of the dangers and virtues of large language models: how they interact with the human psyche, how they can help bridge gaps in understanding, and how they can help minorities, especially immigrants and other oppressed groups (hence why I advocated for a class on using them appropriately for my ESL students), realize their potential and have a better future.
However, we need to break, or at least reduce, the grip Capitalism has on that technology. As long as it is fueled by Capitalism, enshittification, dark patterns, and many other evils will strip it of its virtues and sell them for parts.
-
That's like saying "asbestos has some good uses, so we should just give every household a big pile of it without any training or PPE"
Or "we know leaded gas harms people, but we think it has some good uses so we're going to let everyone access it for basically free until someone eventually figures out what those uses might be"
It doesn't matter that it has some good uses, or that we later went "oops, maybe let's only give it to experts to use". The harm has already been done by eager supporters, intentionally or not.
No, that is completely not what they are saying. Stop arguing against strawmen.
-
AI is bad and people who use it should feel bad.
When people say this they are usually talking about a very specific sort of generative LLM using unsupervised learning.
AI is a very broad field with great potential, the improvements in cancer screening alone could save millions of lives over the coming decades. At the core it's just math, and the equations have been in use for almost as long as we've had computers. It's no more good or bad than calculus or trigonometry.
-
I can run a small LLM locally which I can talk to using voice to turn certain lights on and off, set reminders for me, play music etc.
There are MANY examples of LLMs being useful. It has its drawbacks, just like any big technology, but saying it has no uses that are worth it is ridiculous.
But we could do voice assistants well before LLMs (look at Siri) and without setting everything on fire.
And seriously, I asked for something that's worth all the downsides and you bring up Clippy 2.0???
Where are the MANY examples? Why are LLM/genAI companies burning money? Where are the companies making use of the supposedly many uses?
I genuinely want to understand.
-
synthophobes are easily manipulated
-
It will happen regardless, because we are not machines; we don't follow theory, laws, instructions, or whatever a system tells us to perfectly and without little changes here and there.
I think you are underestimating how adaptable humans are. We absolutely conform to the systems that govern us, and they are NOT equally likely to produce bad outcomes.
-
But he isn't speaking the truth. AI itself is a massive strain on the environment, without any true benefit. You are being fed hype and lies by con men. Data centers being built to supply AIs are using water and electricity at alarming rates, taking away the resources from actual people living nearby, and raising the cost of those utilities at the same time.
https://www.realtor.com/advice/finance/ai-data-centers-homeowner-electric-bills-link/
The problem is the companies building the data centers; they would be just as happy to waste the water and resources mining crypto or hosting cloud gaming. If not AI, it would be something else.
In China they're able to run DeepSeek without any water waste, because they cool the data centers with the ocean. DeepSeek also uses a fraction of the energy per query and is investing in solar and other renewables for energy.
AI is certainly an environmental issue, but it's only the most recent head of the big tech hydra.
-
https://en.m.wikipedia.org/wiki/Intelligence
Take your pick of any definition that isn't a recent one by computer scientists or mathematicians calling stuff intelligent that clearly isn't. According to some modern marketing takes, I developed AI 20 years ago (optimizing search problems for agentic systems); it's just that my peers and I weren't stupid enough to call the results intelligent.
Yeah, I read that Wiki page, including the etymology of intelligence, and I totally get comments like yours. However, saying that LLMs and similar systems are not AI can't really be accepted either, and it often leads non-techies to misunderstand. The same page also covers "artificial": since an LLM is artificial, i.e. not created by nature and not as complex a system as we humans are, it can still be categorized as AI, flaws and all. I'm not here to defend LLMs; I just want to point out that the term misuse I see so often is misleading and can spread misinformation. Big tech already says "AI this" and "AI that" when they really mean a subset of AI such as LLMs, and I don't want people here falling into the same hole of using the wrong terms in technology.
-
It will happen regardless, because we are not machines; we don't follow theory, laws, instructions, or whatever a system tells us to perfectly and without little changes here and there.
I see, so you don't understand. Or simply refuse to engage with what was asked.
-
When people say this they are usually talking about a very specific sort of generative LLM using unsupervised learning.
AI is a very broad field with great potential, the improvements in cancer screening alone could save millions of lives over the coming decades. At the core it's just math, and the equations have been in use for almost as long as we've had computers. It's no more good or bad than calculus or trigonometry.
There's no hope commenting like this; just get ready to be downvoted for no reason. People use the wrong terms and normalize it.
-
They factually are. ML is AI. I think you mean AGI maybe?
AI > ML > DL > GenAI.
AI is the generic umbrella term, followed by Machine Learning, then Deep Learning, then Generative AI (which is where LLMs sit).
The hate against AI is hilariously misinformed.
-
Not really, since "AI" is a pre-existing and MUCH more general term which has been intentionally commandeered by bad actors to mean a particular type of AI.
AI remains a broader field of study.
I completely agree. Using "AI" to refer specifically to LLMs does reflect the influence of marketing from companies that may not fully represent the broader field of artificial intelligence. Ironically, those who oppose LLM usage might end up sounding like the very bad actors they criticize if they also use the same misleading terms.
-
AI is bad and people who use it should feel bad.
So is eating meat, flying, gaming, going on holiday. Basically, if you exist you should feel bad.
-
But we could do voice assistants well before LLMs (look at Siri) and without setting everything on fire.
And seriously, I asked for something that's worth all the downsides and you bring up Clippy 2.0???
Where are the MANY examples? Why are LLM/genAI companies burning money? Where are the companies making use of the supposedly many uses?
I genuinely want to understand.
You asked for one example; I gave you one.
It's not just voice commands: I can ask it complex questions, and it understands context and turns lights on or closes blinds based on that context.
I find it very useful, with no real drawbacks.
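For concreteness, here is a minimal sketch of that kind of setup, assuming a local Ollama server for the model and a Home Assistant instance controlling the lights and blinds. Both are hypothetical choices for illustration (not necessarily what the commenter runs), and the model tag, token, and entity IDs are placeholders.

```python
# Sketch only: map a transcribed voice command to a Home Assistant service call
# using a locally hosted LLM. Assumes an Ollama server on localhost:11434 and a
# Home Assistant instance on localhost:8123 with a long-lived access token.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
HASS_URL = "http://localhost:8123/api/services"
HASS_TOKEN = "YOUR_LONG_LIVED_TOKEN"  # placeholder

PROMPT = (
    "Translate the user's request into a JSON object with keys "
    '"domain", "service", and "entity_id". Output only JSON.\n'
    "Request: {request}\n"
)

def interpret(request_text: str) -> dict:
    """Ask the local model to turn free-form speech into a structured action."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": "llama3.2",  # any locally installed model tag
            "prompt": PROMPT.format(request=request_text),
            "stream": False,
            "format": "json",     # ask Ollama to constrain the output to JSON
        },
        timeout=60,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["response"])

def execute(action: dict) -> None:
    """Call the matching Home Assistant service, e.g. cover/close_cover."""
    url = f"{HASS_URL}/{action['domain']}/{action['service']}"
    requests.post(
        url,
        headers={"Authorization": f"Bearer {HASS_TOKEN}"},
        json={"entity_id": action["entity_id"]},
        timeout=10,
    )

if __name__ == "__main__":
    execute(interpret("it's too bright in the living room, close the blinds"))
```

The point of the pattern is that the model only produces a structured action; the actual switching is done by ordinary home-automation calls, so a small local model is enough.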
-
I completely agree. Using "AI" to refer specifically to LLMs does reflect the influence of marketing from companies that may not fully represent the broader field of artificial intelligence. Ironically, those who oppose LLM usage might end up sounding like the very bad actors they criticize if they also use the same misleading terms.
This hype cycle is insane, and the gross psychology of the hype obscures the real usefulness of LLMs.
-
But he isn't speaking the truth. AI itself is a massive strain on the environment, without any true benefit. You are being fed hype and lies by con men. Data centers being built to supply AIs are using water and electricity at alarming rates, taking away the resources from actual people living nearby, and raising the cost of those utilities at the same time.
https://www.realtor.com/advice/finance/ai-data-centers-homeowner-electric-bills-link/
And your car or flight is a massive strain on the environment. I think you're missing the point. There's a way to use tools responsibly. We've taken the chains off, and that's obviously a problem, but the AI hate here is irrational.
-
I can run a small LLM locally which I can talk to using voice to turn certain lights on and off, set reminders for me, play music etc.
There are MANY examples of LLMs being useful. It has its drawbacks, just like any big technology, but saying it has no uses that are worth it is ridiculous.
I can run a small LLM locally which I can talk to using voice to turn certain lights on and off, set reminders for me, play music etc.
Neat trick, but it's not worth the setup headache when you can do all that by getting off your chair and pushing buttons. Hell, you don't even have to get off your chair! A cellphone can do all that already, and you don't even need voice commands to do it.
Are you able to give any actual examples of a good use of an LLM?
-
This hype cycle is insane, and the gross psychology of the hype obscures the real usefulness of LLMs.
As someone whose main language isn't English, DeepL is useful for my local community (and for me). It all depends on how it's implemented. I'm still open-minded; yeah, the extensive resource usage is bad for the earth though, and I wish there were more optimization.
-
AI uses 1/1000 the power of a microwave.
Are you really sure you aren't the one being fed lies by con men?
Hi. I'm in charge of an IT firm that has been contracted, somewhat unwillingly, to build one of these data centers in our city. We are currently in the groundbreaking phase, but I am looking at the papers and the power requirements. You are absolutely wrong about the power requirements, unless you mean per query under a light load on an easy plan, but these facilities will be handling millions if not billions of queries per day. Keep in mind that a single user request can also fan out into dozens, hundreds, or thousands of separate queries. Generating a single image takes dramatically more than you are stating.
Edit: I also don't think your statement addresses the amount of water this requires. There are serious concerns that the massive water reservoir and lake near where I live will not be anywhere close to enough.
Edit 2: Also, we were told to spec for at least 10x growth within the next 5 years. I don't think there is anywhere on the planet capable of meeting that demand, even if the models become substantially more efficient.
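To make the scale point concrete, here is a rough back-of-the-envelope sketch. Every figure in it is an assumed placeholder for illustration, not a measurement from this project or any vendor.

```python
# Illustrative scaling arithmetic with assumed (hypothetical) figures:
# per-query comparisons with household appliances hide the aggregate load
# once a facility serves a very large number of queries per day.
PER_QUERY_WH = 0.3      # assumed energy per query, in watt-hours
QUERIES_PER_DAY = 1e9   # assumed daily query volume for a large deployment
GROWTH_FACTOR = 10      # the 10x growth target mentioned above

daily_mwh = PER_QUERY_WH * QUERIES_PER_DAY / 1e6  # convert Wh to MWh
print(f"Daily load today:     {daily_mwh:,.0f} MWh")
print(f"Daily load after 10x: {daily_mwh * GROWTH_FACTOR:,.0f} MWh")
# Even a tiny per-query figure multiplies into grid-scale demand at this volume,
# which is why facility-level planning matters more than per-query comparisons.
```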
-
Not all AI is bad. But there’s enough widespread AI that’s helping cut jobs, spreading misinformation (or in some cases, actual propaganda), creating deepfakes, etc., that in many people’s eyes it paints a bad picture of AI overall. I also don’t trust AI because it’s almost exclusively owned by far-right billionaires.