Is It Just Me?
-
I'll take my downvotes and say I'm pro-AI
we need some other opinions on lemmy
You know it’s ok for everyone to dislike a thing if the thing is legitimately terrible, right? Like dissent for dissent’s sake is not objectively desirable.
-
It depends on how much you care about what someone who needs or wants the alt text gets to know.
The accessibility advocates at WebAIM in the previous link don't seem to think a verbal depiction (which an algorithm could do) is adequate.
They emphasize what an algorithm does poorly: conveying meaning in context. Their 1^st^ example indicates less is better: they don't dive into incidental details of the astronaut's dress, props, or hand placement, but merely give her title & name.
They recommend not including phrases like "image of ..." or "graphic of ...", etc., and calling it a screenshot is non-essential to context.
The hand holding a newspaper isn't meaningful in context, either.
The headline already states the content of the picture, redundancy is discouraged, and unless the context refers to the picture (it doesn't), it's also non-essential to context. The best alternative text will depend on the context and intended content of the image.
Unless gen-AI can read authors' minds, I expect it will have greater difficulty delivering meaning in context than producing verbal depictions.
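The contrast above can be sketched in HTML. This is a hypothetical example (the file name and astronaut's name are made up for illustration, not WebAIM's actual text):

```html
<!-- Verbose: announces it's an image and piles on incidental detail -->
<img src="astronaut.jpg"
     alt="Image of a smiling astronaut in a flight suit holding a newspaper in her left hand">

<!-- Per the guidance: just the meaning the surrounding context needs -->
<img src="astronaut.jpg" alt="Astronaut Jane Doe">
```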
Geez for someone who ostensibly wants people to use alt text you’re super picky about it.
Good luck?
-
I gotta be honest, I'm neither pro nor anti AI myself. I don't use it as much as I used to these days, but when I do use it, it can be pretty fun and helpful. And I can't help but admire the AI images and videos, even if it is AI slop. (Maybe I'm an idiot for being very easily impressed/entertained by almost anything.)
Yes I know there's a bunch of problems with it (including environmental), but at the same time, I don't feel like I'm contributing to those problems, since I'm just one person, and there's so many other people using it anyway.
Well, yes, many people use it, you are right, and how can I say it... You can enjoy anything, even building houses from your hair, but when it is done for you, it is somehow boring, or something inside says: "yes, it is beautiful, but what is the point?" How can you value something that has no human effort or soul?
It's like enjoying the slop the way some guy enjoys eating pizza and drinking beer while watching a football match. It can be compared to fast food: tasty but unhealthy.
-
I gotta be honest, I'm neither pro nor anti AI myself. I don't use it as much as I used to these days, but when I do use it, it can be pretty fun and helpful. And I can't help but admire the AI images and videos, even if it is AI slop. (Maybe I'm an idiot for being very easily impressed/entertained by almost anything.)
Yes I know there's a bunch of problems with it (including environmental), but at the same time, I don't feel like I'm contributing to those problems, since I'm just one person, and there's so many other people using it anyway.
There was a very common consensus that television wasn't bad because "it hasn't affected me". Or advertising isn't bad because "people can make up their own minds". So we let it go.
That letting it go allowed Fox News and Talk radio and online nazis to destroy American democracy in six months. Yes it took a few decades to get up to speed, but here we are now.
That’s what the AI discussion is like to me.
-
There was a very common consensus that television wasn't bad because "it hasn't affected me". Or advertising isn't bad because "people can make up their own minds". So we let it go.
That letting it go allowed Fox News and Talk radio and online nazis to destroy American democracy in six months. Yes it took a few decades to get up to speed, but here we are now.
That’s what the AI discussion is like to me.
In a world without justice, democracy is just a temporary distraction, creating the illusion that we have rights when in fact we only have them as long as it benefits some bad guys.
-
I don't know if there's data out there (yet) to support this, but I'm pretty sure constantly using AI rather than doing things yourself degrades your skills in the long run. It's like if you're not constantly using a language or practicing a skill, you get worse at it. The marginal effort that it might save you now will probably have a worse net effect in the long run.
It might just be like that social media fad from 10 years ago where everyone was doing it, and then research started popping up that it's actually really fucking terrible for your health.
-
In a world without justice, democracy is just a temporary distraction, creating the illusion that we have rights when in fact we only have them as long as it benefits some bad guys.
Uh. Sure. Okay.
-
I'll take my downvotes and say I'm pro-AI
we need some other opinions on lemmy
Well, you can support anything, even, for example, the Nazis who shot Jewish children.
The only thing that awaits you is the consequences; the rest doesn't matter, it's your choice.
-
I don't know if there's data out there (yet) to support this, but I'm pretty sure constantly using AI rather than doing things yourself degrades your skills in the long run. It's like if you're not constantly using a language or practicing a skill, you get worse at it. The marginal effort that it might save you now will probably have a worse net effect in the long run.
It might just be like that social media fad from 10 years ago where everyone was doing it, and then research started popping up that it's actually really fucking terrible for your health.
-
It's depressing. Wasteful slop made from stolen labor. And if we ever do achieve AGI it will be enslaved to make more slop. Or to act as a tool of oppression.
Oh yes, soon we will live in techno-feudalism where we will return to our roots, so to speak. :3
And yes, you are damn right.
-
No, no, no. You see, you're just too "out of the loop" to appreciate that it's a part of our lives now and you should just be quiet and use it. Apparently.
At least that's a few people's takes on here. So weird.
At least that’s a few people’s takes on here. So weird.
It's just like enduring someone spitting in your face and keeping quiet because that's the norm now.
-
Uh. Sure. Okay.
Well, it's like a law of nature: if you don't fight for what's yours, others will take it from you.
-
Geez for someone who ostensibly wants people to use alt text you’re super picky about it.
Good luck?
If you want to fuss about common industry guidelines, then take it up with them.
It's even in standard guidelines: "Note that it does not necessarily describe the visual characteristics of the image itself but must convey the same meaning as the image."
-
The bubble has burst or, rather, is currently in the process of bursting.
My job involves working directly with AI, LLM's, and companies that have leveraged their use. It didn't work. And I'd say the majority of my clients are now scrambling to recover or to simply make it out of the other end alive. Soon there's going to be nothing left to regulate.
GPT5 was a failure. Rumors I've been hearing say that Anthropic's new model will be a failure, much like GPT5. The house of cards is falling as we speak. This won't be the complete death of AI, but this is just like the dot-com bubble. It was bound to happen. The models have nothing left to eat, and they're getting desperate to find new sources. For a good while they've been quite literally eating each other's feces. They're now starting to consume Git repos of all things. Codeberg can tell you all about that from this past week. This is why I'm telling people to consider setting up private Git instances and locking that crap down. If you're on GitHub, get your shit off there ASAP, because Microsoft is beginning to feast on your repos.
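A minimal sketch of that suggestion: replicating a repo onto a private remote with `git clone --mirror`. Local bare repositories stand in for the public host and your own server here; with real URLs you'd swap those paths for the GitHub address and your self-hosted server's address.

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d) && cd "$workdir"

git init -q --bare public.git    # stand-in for the GitHub original
git init -q --bare private.git   # stand-in for a self-hosted server

# Seed the "public" repo with one commit.
git clone -q "$workdir/public.git" seed && cd seed
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -q --allow-empty -m "initial commit"
git push -q origin HEAD:main && cd ..

# Grab every branch and tag, then replicate it all to the private host.
git clone -q --mirror "$workdir/public.git" mirror.git && cd mirror.git
git remote set-url origin "$workdir/private.git"
git push -q --mirror
```

After this, `private.git` holds every ref the mirror saw, so the public copy can be deleted or archived.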
But essentially the AI is starving. Companies have discovered that vibe coding and leveraging AI to build end to end didn't work. Nothing produced scales; it's all full of exploits or, in most cases, has zero security measures whatsoever. They all sunk money into something that has yet to pay out. Just go on LinkedIn and see all the tech bros desperately trying to save their own asses right now.
the bubble is bursting.
At the risk of sounding like a tangent, LLMs' survival doesn't solely depend on consumer and business confidence. In the US, we are living in a fascist dictatorship. Fascism and fascists are inherently irrational. Trump, a fascist, wants to bring back coal despite the market naturally phasing coal out.
The fascists want LLMs because they hate art and all things creative. So the fascists may very well choose to have the federal government invest in LLM companies. Like how they bought 10% of Intel's stock or how they want to build coal powered freedom cities.
So even if there are no business applications for LLM technology, our fascist dictatorship may still try to impose LLM technology on all of us, purely out of hate for us, art, and life itself. (edit: looks like I commented this under my own comment the first time)
-
Yeah thats definitely fair. Accessibility is important. It is unfortunate though that AI companies abuse accessibility and organization tags to train their LLMs.
See how Stable Diffusion porn uses danbooru tags, and situations like this:
https://youtube.com/watch?v=NEDFUjqA1s8
Decentralized media based communities have the rare ability to be able to hide their data from scraping.
I didn't have the patience to sit through 19 minutes of video, so I tried to read through the transcript.
Then I saw the stuttering & weird, verbose fuckery going on there.
Copilot, however, summarized the video, which revealed it was about deliberate obfuscation of subtitle files to attempt to thwart scrapers. This seems hostile to the user, and doesn't seem to work as intended, so I'm not sure what to think of it.
I know people who have trouble sequencing information and rely on transcripts.
Good accessibility benefits nondisabled users, too (an additional incentive for it). Not trying to be overly critical.
I'll have to look into danbooru tags: unfamiliar with those.
Thanks.
-
You know it’s ok for everyone to dislike a thing if the thing is legitimately terrible, right? Like dissent for dissent’s sake is not objectively desirable.
It is not though
-
I didn't have the patience to sit through 19 minutes of video, so I tried to read through the transcript.
Then I saw the stuttering & weird, verbose fuckery going on there.
Copilot, however, summarized the video, which revealed it was about deliberate obfuscation of subtitle files to attempt to thwart scrapers. This seems hostile to the user, and doesn't seem to work as intended, so I'm not sure what to think of it.
I know people who have trouble sequencing information and rely on transcripts.
Good accessibility benefits nondisabled users, too (an additional incentive for it). Not trying to be overly critical.
I'll have to look into danbooru tags: unfamiliar with those.
Thanks.
Patience and nuance are rare virtues in 2025
-
I think you’re onto something: a lot of this AI mess is going to have to be fixed by actual engineers. If folks blindly copied from Stack Overflow without any understanding, they’re gonna have a bad time, and that seems equivalent to what we’re seeing here.
I think the AI hate is overblown and I tend to treat it more like a search engine than something that actually does my work for me. With how bad Google has gotten, some of these models have been a blessing.
My hope is that the models remain useful, but the bubble of treating them like a competent engineer bursts.
Agreed. I'm with you: it should be treated as a basic tool, not something that's used to actually create things, which, again, in my current line of work, is what many places have done. It's a fantastic rubber duck. I use it myself for that purpose, or for tasks I can't be bothered with, like writing README markdown or commit messages, or even setting up flakes and nix shells and stuff like that, creating base project structures so YOU can do the actual work and don't have to waste time setting things up.
The hate can be overblown but I can see where it's coming from purely because many companies have not utilized it as a tool but instead thought of it as a replacement for an individual.
-
Trying to compare the intelligence of a specialized, single purpose AI to an LLM is asinine, and shows you don't really know what you're talking about. Just like how it's asinine to equate a technology that pervades every facet of our lives, personal and professional, without our consent or control, to cars and guns.
So you've missed the point of what I was trying to say and proceeded to spout utter nonsense instead. Ok.
-
Why does profit matter? . . . but I am generally in favor of research and development of any technology. In most research it's hard to predict the future applications.
Answered your own question there.
If we weren't hard-wired to justify existence in capital, there wouldn't be so much occlusive hype around it.
Can't argue with that. It is almost entirely a cash grab that is astonishing in its overreach and astounding in its apparent failure.
More profit does not equal more research and development, since there's an awful lot of development happening despite the lack of profit.
I won't speculate on the failure of the technology because I don't know what was supposed to be achieved on what timeline.
But I'll agree the industry is rife with shit marketing and overselling/misrepresenting its current capabilities, because of capitalism.