Is It Just Me?
-
Meh, all of it is very unconvincing. The energy use is quite tiny relative to everything else, and in general I don't think energy is a problem we should be solving with usage reduction. We can have more than enough green energy if we want to.
facts tend to be unconvincing when you consider fantasies like "LLMs are being powered by green energy" a reality.
-
We have a lot of suboptimal aspects of our society, like animal farming, war, religion, etc., and yet this is what breaks this person's brain? It's a bit weird.
I'm genuinely sympathetic to this feeling, but AI fears are so overblown and seem to be purely American internet hysteria. We'll absolutely manage this technology, especially now that it appears that LLMs are fundamentally limited and will never achieve any form of AGI, and even agentic workflows are years away.
Some people are really overreacting and everyone's just enabling them.
Lemmy is a lost cause for nuanced takes on "AI". It's all just rage now.
-
It's important to remember that there's a lot of money being put into A.I. and therefore a lot of propaganda about it.
This happened with a lot of shitty new tech, and A.I. is one of the biggest examples of this I've known about.
All I can write is that, if you know what kind of tech you want and it's satisfactory, just stick to that. That's what I do.
Don't let ads get to you.
First post on a lemmy server, by the way. Hello!
Welcome in! Hope you're finding Lemmy a positive place. It's like Reddit, but you have a lot more control over what you can block and where you can make a "home" (aka your home instance).
Feel free to reach out if you have any questions about anything.
-
The highly specialized "AI" that was used to make the COVID vaccine is probably significantly stupider than ChatGPT was two years ago.
Nothing I said was a false equivalency, though I'm pretty sure you don't even know what a false equivalence is.
Trying to compare the intelligence of a specialized, single purpose AI to an LLM is asinine, and shows you don't really know what you're talking about. Just like how it's asinine to equate a technology that pervades every facet of our lives, personal and professional, without our consent or control, to cars and guns.
-
I find it telling that the best rebuttal anyone can come up with to my comment is to say it's a "shit take."
I mean, wow.
-
I remember this same conversation from back when the internet became a thing.
-
People are overworked, underpaid, and struggling to make rent in this economy while juggling 3 jobs or taking care of their kids, or both.
They are at the limits of their mental load, especially women who shoulder it disproportionately in many households. AI is used to drastically reduce that mental load. People suffering from burnout use it for unlicensed therapy. I'm not advocating for it, I'm pointing out why people use it.
Treating AI users like a moral failure and disregarding their circumstances does nothing to discourage the use of AI. All you are doing is reinforcing their alienation from anti-AI sentiment.
First, understand the person behind it. Address the root cause, which is that AI companies are exploiting the vulnerabilities of people with or close to burnout by selling the dream of a lightened workload.
It's like eating factory farmed meat. If you have eaten it recently, you know what horrors go into making it. Yet, you are exhausted from a long day of work and you just need a bite of that chicken to take the edge off to remain sane after all these years. There is a system at work here, greater than just you and the chicken. It's the industry as a whole exploiting consumer habits. AI users are no different.
And what do you think mass adoption of AI is gonna lead to? Now you won't even have 3 jobs to make rent, because they outsourced yours to someone cheaper using an AI agent. This is gonna permanently alter how our society works, and not for the better.
-
Meh, all of it is very unconvincing. The energy use is quite tiny relative to everything else, and in general I don't think energy is a problem we should be solving with usage reduction. We can have more than enough green energy if we want to.
The energy use is quite tiny
It literally is not. If you're talking about interacting with trained models, then sure but that's a different thing altogether. That's not what the energy use problem is.
Meh, all of it is very unconvincing.
Maybe you haven't taken the time to read the articles. Or perhaps climate, ethics, and economic disaster don't mean very much to you. Which - maybe that's the case, but you also can't say they're not huge problems. You can say "i don't care" but that's different than "these facts aren't real."
-
Yeah, and I noticed it didn’t describe the image at all
How would you state it over the phone?
Alt text is a succinct alternative that conveys (accurate/equivalent) meaning in context, much like reading a comment with an image to someone over the phone.
If you had said "Simpsons meme of an old man yelling at a cloud", that would also suffice.
It doesn't need to go into elaborate detail.
In those discussions, people often talk about having had enough, losing their minds, it making people dumber, too.
I get it helps to feel recognized, so would it feel better to broaden the reach of that message for more recognition?
How would you state it over the phone?
"A screenshot of The Simpsons showing a hand holding a newspaper article featuring a picture of Grandpa Simpson shaking his fist at the sky and scowling, with the headline 'Old Man Yells At Clouds'"
It doesn't need to go into elaborate detail.
It depends on how much you care about what someone who needs or wants the alt text gets to know.
so would it feel better to broaden the reach of that message for more recognition?
absolutely. And, ironically, one of the possible use cases of AI where it might-sort-of-kinda-work-okay-to-help-although-it-needs-work-because-it's-still-kind-of-sucky.
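For the curious, here's roughly what that boils down to in markup terms. This is purely an illustrative sketch (the filename and wording are made up, not pulled from anyone's actual post): the alt text is just a string attached to the image, phrased the way you'd describe it over the phone.

```typescript
// Illustrative sketch only: attach alt text to an image the way you'd
// describe it over the phone. Filename and wording here are hypothetical.
const img = document.createElement("img");
img.src = "old-man-yells-at-cloud.png"; // hypothetical file
img.alt =
  "Screenshot of The Simpsons: a hand holds a newspaper with a photo of " +
  "Grandpa Simpson shaking his fist at the sky, headline 'Old Man Yells At Cloud'";
// Screen readers announce img.alt in place of the picture.
document.body.appendChild(img);
```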
-
You can't dispel irrational thoughts through rational arguments. People hate LLMs because they feel left behind, which is an absolutely valid concern but one expressed poorly.
People hate LLMs because they feel left behind
HAHAHAh! Wow.
-
It's a tool being used by humans.
It's not making anyone dumber or smarter.
I'm so tired of this anti-AI bullshit.
AI was used in the development of the COVID vaccine. It was crucial in its creation.
But just for a second, let's use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y'all support the Ukrainians using guns to defend themselves.
Or cars: generally cars suck, yet we use them as transport.
These are just tools; they're as good and as bad as the people using them.
So yes, it is just you and a select few smooth brains that can't see past their own bias.
It's a tool being used by humans.
Nailed it.
It's not making anyone dumber or smarter.
Absolutely incorrect.
I'm so tired of this anti-AI bullshit.
That's what OP says too, only the other way around.
AI was used in the development of the COVID vaccine. It was crucial in its creation.
Machine Learning, or Data Science, is not what "anti-AI" is about. You can acknowledge that or keep being confused.
These are just tools; they're as good and as bad as the people using them.
In a vacuum. We don't live in a vacuum. (No, not the thing that you push around the house to clean the carpet. That's also a tool. And the vacuum industry didn't blow three hundred billion dollars on a vacuum concept that sort of works sometimes.)
So yes, it is just you and a select few smooth brains that can't see past their own bias.
Yeah, they're so unfair to the ubiquitous tech companies that dominate their waking lives. I too support the unregulated billionaires' efforts to cram invasive, broken technology into every aspect of culture and society. I mean the vacuum industry. Whatever, I'm too smart to think about it.
-
It's depressing. Wasteful slop made from stolen labor. And if we ever do achieve AGI it will be enslaved to make more slop. Or to act as a tool of oppression.
-
I'll take my downvotes and say I'm pro-AI
we need some other opinions on lemmy
-
It's depressing. Wasteful slop made from stolen labor. And if we ever do achieve AGI it will be enslaved to make more slop. Or to act as a tool of oppression.
No, no, no. You see, you're just too "out of the loop" to appreciate that it's a part of our lives now and you should just be quiet and use it. Apparently.
At least that's a few people's takes on here. So weird.
-
being anti-plastic is making me feel like i'm going insane. "you asked for a coffee to go and i grabbed a disposable cup." studies have proven its making people dumber. "i threw your leftovers in some cling film." its made from fossil fuels and leaves trash everywhere we look. "ill grab a bag at the register." it chokes rivers and beaches and then we act surprised. "ill print a cute label and call it recyclable." its spreading greenwashed nonsense. little arrows on stuff that still ends up in the landfill. "dont worry, it says compostable." only at some industrial facility youll never see. "i was unboxing a package" theres no way to verify where any of this ends up. burned, buried, or floating in the ocean. "the brand says advanced recycling." my work has an entire sustainability team and we still stock pallets of plastic water bottles and shrink wrapped everything. plastic cutlery. plastic wrap. bubble mailers. zip ties. everyone treats it as a novelty. every treats it as a mandatory part of life. am i the only one who sees it? am i paranoid? am i going insane? jesus fucking christ. if i have to hear one more "well at least" "but its convenient" "but you can" im about to lose it. i shouldnt have to jump through hoops to avoid the disposable default. have you no principles? no goddamn spine? am i the weird one here?
#ebb rambles #vent #i think #fuck plastics
im so goddamn tired
I wish companies were actually punished for their ecological footprint
plastic and AI
-
I'll take my downvotes and say I'm pro-AI
we need some other opinions on lemmy
Also pro-child-slavery. Women should be locked in boxes all day. Billionaires get to pee in everyone's food at the table.
These are the counterpoints that make a robust debate!
-
No, it's not just you or unsat-and-strange. You're pro-human.
Trying something new when it first comes out or when you first get access to it is novelty. What we've moved to now is mass adoption. And that's a problem.
These LLMs are automation of mass theft with a good-enough regurgitation of the stolen data. This is unethical for the vast majority of business applications. And "good enough" is insufficient in most cases, like software.
I had a lot of fun playing around with AI when it first came out. And people figured out how to do prompts I can't seem to replicate. I don't begrudge people for trying a new thing.
But if we aren't going to regulate AI or teach people how to avoid AI-induced psychosis, then even in applications where it could be useful it's a danger to anyone who uses it. Not to mention how wasteful its water and energy usage is.
the bubble has burst or, rather, is currently in the process of bursting.
My job involves working directly with AI, LLMs, and companies that have leveraged their use. It didn't work. And I'd say the majority of my clients are now scrambling to recover or to simply make it out the other end alive. Soon there's going to be nothing left to regulate.
GPT-5 was a failure. Rumors I've been hearing are that Anthropic's new model will be a failure much like GPT-5. The house of cards is falling as we speak. This won't be the complete death of AI, but this is just like the dot-com bubble. It was bound to happen. The models have nothing left to eat and they're getting desperate to find new sources. For a good while they've been quite literally eating each other's feces. They're now starting on Git repos of all things to consume. Codeberg can tell you all about that from this past week. This is why I'm telling people to consider setting up private Git instances and locking that crap down. If you're on GitHub, get your shit off there ASAP, because Microsoft is beginning to feast on your repos.
But essentially the AI is starving. Companies have discovered that vibe coding and leveraging AI to build from end to end didn't work. Nothing produced scales; it's all full of exploits or in most cases has zero security measures whatsoever. They all sunk money into something that has yet to pay out. Just go on LinkedIn and see all the tech bros desperately trying to save their own asses right now.
the bubble is bursting.
-
Also pro-child-slavery. Women should be locked in boxes all day. Billionaires get to pee in everyone's food at the table.
These are the counterpoints that make a robust debate!
I don't want to start a debate. Nobody will change their opinion; I would rather not waste time on this.
-
the bubble has burst or, rather, is currently in the process of bursting.
My job involves working directly with AI, LLMs, and companies that have leveraged their use. It didn't work. And I'd say the majority of my clients are now scrambling to recover or to simply make it out the other end alive. Soon there's going to be nothing left to regulate.
GPT-5 was a failure. Rumors I've been hearing are that Anthropic's new model will be a failure much like GPT-5. The house of cards is falling as we speak. This won't be the complete death of AI, but this is just like the dot-com bubble. It was bound to happen. The models have nothing left to eat and they're getting desperate to find new sources. For a good while they've been quite literally eating each other's feces. They're now starting on Git repos of all things to consume. Codeberg can tell you all about that from this past week. This is why I'm telling people to consider setting up private Git instances and locking that crap down. If you're on GitHub, get your shit off there ASAP, because Microsoft is beginning to feast on your repos.
But essentially the AI is starving. Companies have discovered that vibe coding and leveraging AI to build from end to end didn't work. Nothing produced scales; it's all full of exploits or in most cases has zero security measures whatsoever. They all sunk money into something that has yet to pay out. Just go on LinkedIn and see all the tech bros desperately trying to save their own asses right now.
the bubble is bursting.
The folks I know at both OpenAI and Anthropic don’t share your belief.
Also, anecdotally, I’m only seeing more and more push for LLM use at work.
-
The folks I know at both OpenAI and Anthropic don’t share your belief.
Also, anecdotally, I’m only seeing more and more push for LLM use at work.
That's interesting, in all honesty, and I don't doubt you. All I know is that my bank account has been getting bigger over the past few months due to new work from clients looking to fix their AI problems.