Is It Just Me?
-
No, it's not just you or unsat-and-strange. You're pro-human.
Trying something new when it first comes out or when you first get access to it is novelty. What we've moved to now is mass adoption. And that's a problem.
These LLMs are automated mass theft, regurgitating the stolen data just well enough to pass. That's unethical for the vast majority of business applications. And "good enough" is insufficient in most cases, like software.
I had a lot of fun playing around with AI when it first came out, and people figured out prompts I can't seem to replicate. I don't begrudge people for trying a new thing.
But if we aren't going to regulate AI or teach people how to avoid AI-induced psychosis, then even in applications where it could be useful it's a danger to anyone who uses it. Not to mention how wasteful its water and energy usage is.
the bubble has burst or, rather, currently is in the process of bursting.
My job involves working directly with AI, LLMs, and companies that have leveraged their use. It didn't work, and I'd say the majority of my clients are now scrambling to recover or simply to make it out the other end alive. Soon there's going to be nothing left to regulate.
GPT-5 was a failure, and the rumors I've been hearing say Anthropic's new model will be a failure much like GPT-5. The house of cards is falling as we speak. This won't be the complete death of AI, but it's just like the dot-com bubble: it was bound to happen. The models have nothing left to eat, and they're getting desperate to find new sources. For a good while they've been quite literally eating each other's feces. Now they're starting on git repos of all things. Codeberg can tell you all about that from this past week. This is why I'm telling people to consider setting up private git instances and locking that crap down. If you're on GitHub, get your shit off there ASAP, because Microsoft is beginning to feast on your repos.
But essentially the AI is starving. Companies have discovered that vibe coding and leveraging AI to build end to end didn't work. Nothing produced scales; it's all full of exploits or, in most cases, has zero security measures whatsoever. They've all sunk money into something that has yet to pay out. Just go on LinkedIn and see all the tech bros desperately trying to save their own asses right now.
the bubble is bursting.
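For what it's worth, the "private git instance" suggestion above is lighter-weight than it sounds. At its simplest it's just a bare repository on a machine you control, pushed to over SSH. Here's a minimal sketch; the paths and names are placeholders, and both sides are local temp directories so it runs anywhere git is installed:

```shell
# A bare repo plays the "server" role; in practice it would live on
# your own server and be reached over SSH instead of a local path.
set -e
tmp=$(mktemp -d)

git init --bare "$tmp/myproject.git"        # the "server" side
git clone "$tmp/myproject.git" "$tmp/work"  # your working copy

cd "$tmp/work"
git -c user.name=me -c user.email=me@example.com \
    commit --allow-empty -m "first commit"
git push origin HEAD                        # never touches a third-party host
```

On a real server you'd swap the local path for something like `ssh://user@yourserver/srv/git/myproject.git` and restrict access with ordinary SSH keys.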
-
Also pro-child-slavery. Women should be locked in boxes all day. Billionaires get to pee in everyone's food at the table.
These are the counterpoints that make a robust debate!
I don't want to start a debate. Nobody will change their opinion; I'd rather not waste time on this.
-
the bubble has burst or, rather, currently is in the process of bursting.
My job involves working directly with AI, LLMs, and companies that have leveraged their use. It didn't work, and I'd say the majority of my clients are now scrambling to recover or simply to make it out the other end alive. Soon there's going to be nothing left to regulate.
GPT-5 was a failure, and the rumors I've been hearing say Anthropic's new model will be a failure much like GPT-5. The house of cards is falling as we speak. This won't be the complete death of AI, but it's just like the dot-com bubble: it was bound to happen. The models have nothing left to eat, and they're getting desperate to find new sources. For a good while they've been quite literally eating each other's feces. Now they're starting on git repos of all things. Codeberg can tell you all about that from this past week. This is why I'm telling people to consider setting up private git instances and locking that crap down. If you're on GitHub, get your shit off there ASAP, because Microsoft is beginning to feast on your repos.
But essentially the AI is starving. Companies have discovered that vibe coding and leveraging AI to build end to end didn't work. Nothing produced scales; it's all full of exploits or, in most cases, has zero security measures whatsoever. They've all sunk money into something that has yet to pay out. Just go on LinkedIn and see all the tech bros desperately trying to save their own asses right now.
the bubble is bursting.
The folks I know at both OpenAI and Anthropic don’t share your belief.
Also, anecdotally, I’m only seeing more and more push for LLM use at work.
-
The folks I know at both OpenAI and Anthropic don’t share your belief.
Also, anecdotally, I’m only seeing more and more push for LLM use at work.
That's interesting, in all honesty, and I don't doubt you. All I know is that my bank account has been getting bigger these past few months thanks to new work from clients looking to fix their AI problems.
-
How would you state it over the phone?
"A screenshot of The Simpsons showing a hand holding a newspaper article featuring a picture of Grandpa Simpson shaking his fist at the sky and scowling, with the headline 'Old Man Yells At Clouds'"
It doesn't need to go into elaborate detail.
It depends on how much you care about what someone who needs or wants the alt text needs to know.
so would it feel better to broaden the reach of that message for more recognition?
absolutely. And, ironically, one of the possible use cases of AI where it might-sort-of-kinda-work-okay-to-help-although-it-needs-work-because-it's-still-kind-of-sucky.
It depends on how much you care about what someone who needs or wants the alt text needs to know.
The accessibility advocates at WebAIM in the previous link don't seem to think a verbal depiction (which an algorithm could do) is adequate.
They emphasize what an algorithm does poorly: convey meaning in context. Their 1^st^ example indicates less is better: they don't dive into incidental details of the astronaut's dress, props, hand placement, but merely give her title & name.
They recommend
not include phrases like "image of ..." or "graphic of ...", etc
and calling it a screenshot is non-essential to context.
The hand holding a newspaper isn't meaningful in context, either.
The headline already states the content of the picture, redundancy is discouraged, and unless the context refers to the picture (it doesn't), it's also non-essential to context. The best alternative text will depend on the context and intended content of the image.
Unless gen-AI mindreads authors, I expect it will have greater difficulty delivering meaning in context than verbal depictions.
-
That's interesting, in all honesty, and I don't doubt you. All I know is that my bank account has been getting bigger these past few months thanks to new work from clients looking to fix their AI problems.
I think you’re onto something where a lot of this AI mess is going to have to be fixed by actual engineers. If folks blindly copied from stackoverflow without any understanding, they’re gonna have a bad time and that seems equivalent to what we’re seeing here.
I think the AI hate is overblown and I tend to treat it more like a search engine than something that actually does my work for me. With how bad Google has gotten, some of these models have been a blessing.
My hope is that the models remain useful, but the bubble of treating them like a competent engineer bursts.
-
I'll take my downvotes and say I'm pro-AI
we need some other opinions on lemmy
I'm not really pro or anti. I use it with appropriate skepticism for certain types of things. I can see how it is extremely problematic in various ways. I would prefer it didn't exist, but it does provide utility and it's not going away. I find a lot of the anti crowd to be kind of silly and childish, in a similar way to the extremists in the pro crowd: you can tell they really want to believe what they believe, and critical thinking doesn't seem to come into it much.
-
You don't think the disabled use technology?
Or that search engine optimization existed before LLMs?
Or that text sticks around when images break?
Lack of accessibility wouldn't stop LLMs: it could probably process images into text the hard way & waste more energy in the process.
That'd be great, right?
-
A hyphen isn't a quotation dash.
-
Are we playing the AI game? Let’s pretend we’re AI.
Here's some fun punctuation:‒−–—―…:
Beep bip boop.
Yeah, that's definitely fair. Accessibility is important. It is unfortunate, though, that AI companies abuse accessibility and organization tags to train their LLMs.
See how Stable Diffusion porn uses danbooru tags, and situations like this:
https://youtube.com/watch?v=NEDFUjqA1s8
Decentralized media based communities have the rare ability to be able to hide their data from scraping.
-
-
I don't want to start a debate. Nobody will change their opinion; I'd rather not waste time on this.
You want other opinions on Lemmy but don't want to start a debate? That is a truly stupid position.
If you want to explore the nuances of pro/anti AI, go for it. I'm here with you. The person above made a point that the genie is out of the bottle. Maybe we could discuss focusing on making AI ethical rather than trying to catch smoke with our hands. Let's talk about that. But bringing up a vague concept because you want diverse opinions, then refusing to engage? That is seriously dumb.
-
This post did not contain any content.
I gotta be honest, I'm neither pro nor anti AI myself. I don't use it as much as I used to these days, but when I do use it, it can be pretty fun and helpful. And I can't help but admire the AI images and videos, even if it is AI slop. (Maybe I'm an idiot for being very easily impressed/entertained by almost anything.)
Yes I know there's a bunch of problems with it (including environmental), but at the same time, I don't feel like I'm contributing to those problems, since I'm just one person, and there's so many other people using it anyway.
-
I'll take my downvotes and say I'm pro-AI
we need some other opinions on lemmy
You know it’s ok for everyone to dislike a thing if the thing is legitimately terrible, right? Like dissent for dissent’s sake is not objectively desirable.
-
It depends on how much you care about what someone who needs or wants the alt text needs to know.
The accessibility advocates at WebAIM in the previous link don't seem to think a verbal depiction (which an algorithm could do) is adequate.
They emphasize what an algorithm does poorly: convey meaning in context. Their 1^st^ example indicates less is better: they don't dive into incidental details of the astronaut's dress, props, hand placement, but merely give her title & name.
They recommend
not include phrases like "image of ..." or "graphic of ...", etc
and calling it a screenshot is non-essential to context.
The hand holding a newspaper isn't meaningful in context, either.
The headline already states the content of the picture, redundancy is discouraged, and unless the context refers to the picture (it doesn't), it's also non-essential to context. The best alternative text will depend on the context and intended content of the image.
Unless gen-AI mindreads authors, I expect it will have greater difficulty delivering meaning in context than verbal depictions.
Geez, for someone who ostensibly wants people to use alt text, you're super picky about it.
Good luck?
-
I gotta be honest, I'm neither pro nor anti AI myself. I don't use it as much as I used to these days, but when I do use it, it can be pretty fun and helpful. And I can't help but admire the AI images and videos, even if it is AI slop. (Maybe I'm an idiot for being very easily impressed/entertained by almost anything.)
Yes I know there's a bunch of problems with it (including environmental), but at the same time, I don't feel like I'm contributing to those problems, since I'm just one person, and there's so many other people using it anyway.
Well, yes, many people use it, you're right, and how can I put it... You can enjoy anything, even building houses from your hair, but when it's done for you, it's somehow boring, or something inside says: "yes, it's beautiful, but what's the point?" How can you value something that has no human effort or soul?
It's like enjoying the slop the way some guy enjoys pizza and beer while watching a football match. It can be compared to fast food: tasty but unhealthy.
-
I gotta be honest, I'm neither pro nor anti AI myself. I don't use it as much as I used to these days, but when I do use it, it can be pretty fun and helpful. And I can't help but admire the AI images and videos, even if it is AI slop. (Maybe I'm an idiot for being very easily impressed/entertained by almost anything.)
Yes I know there's a bunch of problems with it (including environmental), but at the same time, I don't feel like I'm contributing to those problems, since I'm just one person, and there's so many other people using it anyway.
There was once a very common consensus that television wasn't bad, because "it hasn't affected me." Or that advertising isn't bad because "people can make up their own minds." So we let it go.
Letting it go allowed Fox News, talk radio, and online nazis to destroy American democracy in six months. Yes, it took a few decades to get up to speed, but here we are now.
That’s what the AI discussion is like to me.
-
There was once a very common consensus that television wasn't bad, because "it hasn't affected me." Or that advertising isn't bad because "people can make up their own minds." So we let it go.
Letting it go allowed Fox News, talk radio, and online nazis to destroy American democracy in six months. Yes, it took a few decades to get up to speed, but here we are now.
That’s what the AI discussion is like to me.
In a world without justice, democracy is just a temporary distraction, creating the illusion that we have rights when in fact we only have them as long as it benefits some bad guys.
-
This post did not contain any content.
I don't know if there's data out there (yet) to support this, but I'm pretty sure constantly using AI rather than doing things yourself degrades your skills in the long run. It's like if you're not constantly using a language or practicing a skill, you get worse at it. The marginal effort that it might save you now will probably have a worse net effect in the long run.
It might just be like that social media fad from 10 years ago where everyone was doing it, and then research started popping up that it's actually really fucking terrible for your health.
-
In a world without justice, democracy is just a temporary distraction, creating the illusion that we have rights when in fact we only have them as long as it benefits some bad guys.
Uh. Sure. Okay.
-
I'll take my downvotes and say I'm pro-AI
we need some other opinions on lemmy
Well, you can support anything, for example even the Nazis who shot Jewish children.
The only thing that awaits you is the consequences, the rest is not important, it is your choice.
-
I don't know if there's data out there (yet) to support this, but I'm pretty sure constantly using AI rather than doing things yourself degrades your skills in the long run. It's like if you're not constantly using a language or practicing a skill, you get worse at it. The marginal effort that it might save you now will probably have a worse net effect in the long run.
It might just be like that social media fad from 10 years ago where everyone was doing it, and then research started popping up that it's actually really fucking terrible for your health.
-
It's depressing. Wasteful slop made from stolen labor. And if we ever do achieve AGI it will be enslaved to make more slop. Or to act as a tool of oppression.
Oh yes, soon we will live in techno-feudalism where we will return to our roots, so to speak. :3
And yes, you are damn right.