Is It Just Me?
-
I don't know if there's data out there (yet) to support this, but I'm pretty sure constantly using AI rather than doing things yourself degrades your skills in the long run. It's like any language or skill: if you're not constantly practicing it, you get worse at it. The marginal effort it saves you now will probably have a worse net effect down the road.
It might just be like that social media fad from 10 years ago where everyone was doing it, and then research started popping up that it's actually really fucking terrible for your health.
-
It's depressing. Wasteful slop made from stolen labor. And if we ever do achieve AGI it will be enslaved to make more slop. Or to act as a tool of oppression.
Oh yes, soon we will live in techno-feudalism where we will return to our roots, so to speak. :3
And yes, you are damn right.
-
No, no, no. You see, you're just too "out of the loop" to appreciate that it's a part of our lives now and you should just be quiet and use it. Apparently.
At least that's a few people's takes on here. So weird.
At least that's a few people's takes on here. So weird.
It's just like enduring someone spitting in your face and keeping quiet because that's the norm now.
-
Uh. Sure. Okay.
Well, it's like a law of nature: if you don't fight for what's yours, others will take it from you.
-
Geez for someone who ostensibly wants people to use alt text you’re super picky about it.
Good luck?
If you want to fuss about common industry guidelines, then take it up with them.
It's even in the standard guidelines: "Note that it does not necessarily describe the visual characteristics of the image itself but must convey the same meaning as the image."
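For anyone unfamiliar with the guideline being quoted, a minimal sketch of the difference (the file name and wording here are illustrative, not from the thread):

```html
<!-- Describes the visual characteristics (discouraged): -->
<img src="chart.png" alt="A blue and orange bar chart">

<!-- Conveys the same meaning as the image (what the guideline asks for): -->
<img src="chart.png" alt="Sales roughly doubled from 2023 to 2024, led by the EU region">
```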
-
The bubble has burst or, rather, is currently in the process of bursting.
My job involves working directly with AI, LLMs, and companies that have leveraged their use. It didn't work. And I'd say the majority of my clients are now scrambling to recover, or simply to make it out the other end alive. Soon there's going to be nothing left to regulate.
GPT5 was a failure. Rumors I've been hearing say that Anthropic's new model will be a failure much like GPT5. The house of cards is falling as we speak. This won't be the complete death of AI, but this is just like the dot-com bubble. It was bound to happen. The models have nothing left to eat, and they're getting desperate to find new sources. For a good while they've been quite literally eating each other's feces. They're now starting on Git repos of all things to consume. Codeberg can tell you all about that from this past week. This is why I'm telling people to consider setting up private Git instances and locking that crap down. If you're on GitHub, get your shit off there ASAP, because Microsoft is beginning to feast on your repos.
But essentially the AI is starving. Companies have discovered that vibe coding and leveraging AI to build end to end didn't work. Nothing produced scales; it's all full of exploits or, in most cases, has zero security measures whatsoever. They all sunk money into something that has yet to pay out. Just go on LinkedIn and see all the tech bros desperately trying to save their own asses right now.
the bubble is bursting.
At the risk of sounding like a tangent, LLMs' survival doesn't solely depend on consumer/business confidence. In the US, we are living in a fascist dictatorship. Fascism and fascists are inherently irrational. Trump, a fascist, wants to bring back coal despite the market naturally phasing coal out.
The fascists want LLMs because they hate art and all things creative. So the fascists may very well choose to have the federal government invest in LLM companies, like how they bought 10% of Intel's stock or how they want to build coal-powered freedom cities.
So even if there are no business applications for LLM technology, our fascist dictatorship may still try to impose it on all of us. Purely out of hate for us, art, and life itself.
-
Yeah, that's definitely fair. Accessibility is important. It is unfortunate, though, that AI companies abuse accessibility and organization tags to train their LLMs.
See how Stable Diffusion porn uses danbooru tags, and situations like this:
https://youtube.com/watch?v=NEDFUjqA1s8
Decentralized, media-based communities have the rare ability to hide their data from scraping.
I didn't have the patience to sit through 19 minutes of video, so I tried to read through the transcript.
Then I saw the stuttering & weird, verbose fuckery going on there.
Copilot, however, summarized the video, which revealed it was about deliberate obfuscation of subtitle files to attempt to thwart scrapers. This seems hostile to the user, and doesn't seem to work as intended, so I'm not sure what to think of it.
I know people who have trouble sequencing information and rely on transcripts.
Good accessibility benefits nondisabled users, too (an additional incentive for it). Not trying to be overly critical.
I'll have to look into danbooru tags: unfamiliar with those.
Thanks. -
You know it’s ok for everyone to dislike a thing if the thing is legitimately terrible, right? Like dissent for dissent’s sake is not objectively desirable.
It is not though
-
I didn't have the patience to sit through 19 minutes of video, so I tried to read through the transcript.
Patience and nuance are rare virtues in 2025.
-
I think you’re onto something where a lot of this AI mess is going to have to be fixed by actual engineers. If folks blindly copied from stackoverflow without any understanding, they’re gonna have a bad time and that seems equivalent to what we’re seeing here.
I think the AI hate is overblown and I tend to treat it more like a search engine than something that actually does my work for me. With how bad Google has gotten, some of these models have been a blessing.
My hope is that the models remain useful, but the bubble of treating them like a competent engineer bursts.
Agreed. I'm with you: it should be treated as a basic tool, not something used to actually create things, which, again in my current line of work, is what many places have done. It's a fantastic rubber duck. I use it myself for that purpose, or for tasks I can't be bothered with, like writing README markdown or commit messages, or even setting up flakes and Nix shells and creating base project structures, so YOU can do the actual work and don't have to waste time setting things up.
The hate can be overblown but I can see where it's coming from purely because many companies have not utilized it as a tool but instead thought of it as a replacement for an individual.
-
Trying to compare the intelligence of a specialized, single purpose AI to an LLM is asinine, and shows you don't really know what you're talking about. Just like how it's asinine to equate a technology that pervades every facet of our lives, personal and professional, without our consent or control, to cars and guns.
So you've missed the point of what I was trying to say and proceeded to spout utter nonsense instead. Ok.
-
Why does profit matter? . . . but I am generally in favor of research and development of any technology. In most research it's hard to predict the future applications.
Answered your own question there.
If we weren't hard wired to justify existence in capital there wouldn't be so much occlusive hype around it.
Can't argue with that. It is almost entirely a cash grab that is astonishing in its overreach and astounding in its apparent failure.
More profit does not equal more research and development, since there's an awful lot of development happening despite the lack of profit.
I won't speculate on the failure of the technology because I don't know what was supposed to be achieved on what timeline.
But I'll agree the industry is rife with shit marketing and overselling/misrepresenting its current capabilities, because of capitalism.
-
It's a tool being used by humans.
Nailed it.
It's not making anyone dumber or smarter.
Absolutely incorrect.
I'm so tired of this anti-AI bullshit.
That's what OP says too, only the other way around.
AI was used in the development of the COVID vaccine. It was crucial in its creation.
Machine Learning, or Data Science, is not what "anti-AI" is about. You can acknowledge that or keep being confused.
These are just tools they're as good and as bad as the people using them.
In a vacuum. We don't live in a vacuum. (no not the thing that you push around the house to clean the carpet. That's also a tool. And the vacuum industry didn't blow three hundred billion dollars on a vacuum concept that sort of works sometimes.)
So yes, it is just you and a select few smooth brains that can't see past their own bias.
Yeah, they're so unfair to the ubiquitous tech companies that dominate their waking lives. I too support the unregulated billionaires' efforts to cram invasive, broken technology into every aspect of culture and society. I mean the vacuum industry. Whatever, I'm too smart for thinking about it.
Ok you've clearly lost the plot.
Let's try again. You use the internet, right? Well, the internet is used for crimes, and it makes people dumber. Ever watch any flat-earth videos? You should boycott the internet so you're not part of it; that way you can remain morally in the clear.
And you not liking commercial LLMs vs. machine learning for scientific applications only makes you a hypocrite.
-
This post did not contain any content.
I see this sentiment a lot. No way "you're the only one."
I feel like I'm the only one. No one in my life uses it. My work isn't eligible to have it implemented in any way. This whole AI movement seems to be happening around me, and I have nothing more than news articles and memes telling me it's happening. It seriously doesn't impact me at all, and I wonder how others' lives are crumbling.
-
So you've missed the point of what I was trying to say and proceeded to spout utter nonsense instead. Ok.
If I've missed the point, it's because you've not made it clear. Please point out what nonsense I've spouted.
-
It is not though
AI in the context of late-stage capitalism and the beginnings of global ecological collapse is terrible for everyone except for the people who own it.
-
Patience and nuance are rare virtues in 2025
I'm not sure this is so much virtues becoming rarer as inconvenient demands emerging: a video that could have been an article is a problem of the modern age.
Articles can be read quickly & processed structurally by jumping around sections.
Videos, however, can be agonizing, because they resist that sort of processing.
Transcripts can alleviate the problem somewhat, but obfuscating them undoes that.
And we've got things to do. -
People hate LLMs because they feel left behind
HAHAHAh! Wow.
Is that not true?
-
The energy use is quite tiny
It literally is not. If you're talking about interacting with already-trained models, then sure, but that's a different thing altogether. That's not what the energy use problem is.
Meh all of it is very unconvincing.
Maybe you haven't taken the time to read the articles. Or perhaps climate, ethics, and economic disaster don't mean very much to you. Which, maybe that's the case, but you also can't say they're not huge problems. You can say "I don't care," but that's different from "these facts aren't real."
Source? Because all datacenter and compute use never even reached double-digit percentages of our overall energy use. That's including cryptocurrency, too. To me that's not a serious enough number to start panicking yet; it's energy use like any other, and we can totally handle it.
You seem to be missing the point that the source of the energy is what's creating the problem, not the energy use itself. We'll never need less energy, and to assume that the markets would ever just step back here is incredibly childish. You're gonna whine about every new invention that uses electricity now?
-
Facts tend to be unconvincing when you consider fantasies like "LLMs are being powered by green energy" to be reality.
Why? Hiding datacenter energy use is much harder than anything else so it's much easier to regulate.