Is It Just Me?
-
Markov text generator. That's all it is. Just made with billions in stolen wages.
"Information wants to be free..."
No! Not like that!!
-
No, it's not just you or unsat-and-strange. You're pro-human.
Trying something new when it first comes out or when you first get access to it is novelty. What we've moved to now is mass adoption. And that's a problem.
These LLMs are automation of mass theft with a good-enough regurgitation of the stolen data. This is unethical for the vast majority of business applications. And "good enough" is insufficient in most cases, like software.
I had a lot of fun playing around with AI when it first came out. And people figured out how to do prompts I can't seem to replicate. I don't begrudge people for trying a new thing.
But if we aren't going to regulate AI or teach people how to avoid AI-induced psychosis, then even in applications where it could be useful it's a danger to anyone who uses it. Not to mention how wasteful its water and energy usage is.
-
People are overworked, underpaid, and struggling to make rent in this economy while juggling 3 jobs or taking care of their kids, or both.
They are at the limits of their mental load, especially women who shoulder it disproportionately in many households. AI is used to drastically reduce that mental load. People suffering from burnout use it for unlicensed therapy. I'm not advocating for it, I'm pointing out why people use it.
Treating AI users like a moral failure and disregarding their circumstances does nothing to discourage the use of AI. All you are doing is alienating them from anti-AI sentiment.
First, understand the person behind it. Address the root cause, which is that AI companies are exploiting the vulnerabilities of people with or close to burnout by selling the dream of a lightened workload.
It's like eating factory farmed meat. If you have eaten it recently, you know what horrors go into making it. Yet, you are exhausted from a long day of work and you just need a bite of that chicken to take the edge off to remain sane after all these years. There is a system at work here, greater than just you and the chicken. It's the industry as a whole exploiting consumer habits. AI users are no different.
Let's go a step further and look at why people are in burnout, are overloaded, are working 3 jobs to make ends meet.
It's because we're all slaves to capitalism.
Greed for more profit by any means possible has driven society to the point where we can barely afford to survive and corporations still want more. When most Americans are choosing between eating, their kids eating, or paying rent, while enduring the workload of two to three people, yeah they'll turn to anything that makes life easier. But it shouldn't be this way and until we're no longer slaves we'll continue to make the choices that ease our burden, even if they're extremely harmful in the long run.
-
The data centers shouldn't be safe for much longer, especially once they use up the water of the small towns nearby.
If they told me to ration water so a company could cool a machine, I'd become a fucking terrorist.
-
People are overworked, underpaid, and struggling to make rent in this economy while juggling 3 jobs or taking care of their kids, or both.
They are at the limits of their mental load, especially women who shoulder it disproportionately in many households. AI is used to drastically reduce that mental load. People suffering from burnout use it for unlicensed therapy. I'm not advocating for it, I'm pointing out why people use it.
Treating AI users like a moral failure and disregarding their circumstances does nothing to discourage the use of AI. All you are doing is alienating them from anti-AI sentiment.
First, understand the person behind it. Address the root cause, which is that AI companies are exploiting the vulnerabilities of people with or close to burnout by selling the dream of a lightened workload.
It's like eating factory farmed meat. If you have eaten it recently, you know what horrors go into making it. Yet, you are exhausted from a long day of work and you just need a bite of that chicken to take the edge off to remain sane after all these years. There is a system at work here, greater than just you and the chicken. It's the industry as a whole exploiting consumer habits. AI users are no different.
We shouldn't accuse people of moral failings. That's inaccurate and obfuscates the actual systemic issues and incentives at play.
But people using this for unlicensed therapy are in danger. More often than not, LLMs will parrot whatever you give them in the prompt.
People have died from AI usage including unlicensed therapy. This would be like the factory farmed meat eating you.
https://www.yahoo.com/news/articles/woman-dies-suicide-using-ai-172040677.html
-
If plastic was released roughly two years ago you'd have a point.
If you're saying in 50 years we'll all be soaking in this bullshit called gen-AI and thinking it's normal, well - maybe, but that's going to be some bleak-ass shit.
Also you've got plastic in your gonads.
If you're saying in 50 years we'll all be soaking in this bullshit called gen-AI and thinking it's normal, well - maybe, but that's going to be some bleak-ass shit.
I'm almost certain gen AI will still be popular in 50 years. This is why I prefer people try to tackle some of the problems they see with AI instead of just hating on AI because of the problems it currently has. Don't get me wrong, pointing out the problems as you have is important - I just wouldn't jump to the conclusion that AI is a problem itself.
-
My hope is that the AI bubble/trend might have a silver lining overall.
I'm hoping that people start realizing that it is often confidently incorrect. That while it makes some tasks faster, a person will still need to vet the answers.
Here's the stretch. My hope is that by questioning and researching to verify the answers AI is giving them, people start applying this same skepticism to their daily lives to help filter out all the noise and false information that is getting shoved down their throats every minute of every day.
So that the populace in general can become more resistant to the propaganda. AI would effectively be a vaccine to boost our herd immunity to BS.
Like I said. It's a hope.
We should encourage people to do the vetting if they insist on using AI.
-
No, it's not just you or unsat-and-strange. You're pro-human.
Trying something new when it first comes out or when you first get access to it is novelty. What we've moved to now is mass adoption. And that's a problem.
These LLMs are automation of mass theft with a good-enough regurgitation of the stolen data. This is unethical for the vast majority of business applications. And "good enough" is insufficient in most cases, like software.
I had a lot of fun playing around with AI when it first came out. And people figured out how to do prompts I can't seem to replicate. I don't begrudge people for trying a new thing.
But if we aren't going to regulate AI or teach people how to avoid AI-induced psychosis, then even in applications where it could be useful it's a danger to anyone who uses it. Not to mention how wasteful its water and energy usage is.
Regulate? That's what the leading AI companies are pushing for; they could handle the bureaucracy, but their competitors couldn't.
The shit just needs to be forced open-source. If you steal content from the entire world to build a thinking machine, give back to the world.
This would also crash the bubble and would slow down the most unethical for-profits.
-
I have a love/hate relationship. Sometimes I'm absolutely blown away by what it can do. But then I asked a compound interest question. The first answer was AI, so I figured, ok, why not. I should mention I don't know much about the subject. The answer was impressive. It gave the result, a brief explanation of how it came to the result, and the equation it used.
Since I wanted to keep it for posterity, I entered the equation into a spreadsheet and got what I thought was the wrong answer. I spent quite a few minutes trying to figure out what I was doing wrong and found a couple of things. But fixing them still didn't give me the correct result. After I had convinced myself I had done it correctly, I looked up the equation. It was the right one. Then I put it into a non-AI calculator online to check my work. Sure enough, the AI had given me the wrong result with the right equation.
So as a rule, never accept an AI answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place? You just have to do the same work as you would without it.
LLMs aren't good at math at all. They know the formulas, but they aren't built to do math. They are built to predict the next token in a stream of text.
What are they good for? When you need to generate lots of things and it's faster to check the output than to do it yourself.
For instance, you could've asked it to generate a Python script that solves your math problem; you'd be able to double-check the correctness of the code and run it, knowing that the answer is predictably good.
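A minimal sketch of that approach in Python (the figures are made-up illustration values, not from the thread): instead of trusting the model's arithmetic, let the interpreter evaluate the standard compound interest formula.

```python
def compound_interest(principal, annual_rate, years, compounds_per_year=12):
    """Final balance with compound interest: A = P * (1 + r/n) ** (n * t)."""
    n = compounds_per_year
    return principal * (1 + annual_rate / n) ** (n * years)

# $1000 at 5% APR, compounded monthly, for 10 years (illustrative numbers)
balance = compound_interest(1000.0, 0.05, 10)
print(round(balance, 2))  # → 1647.01
```

The equation is the same one a chatbot would hand you; the difference is that the evaluation is deterministic, so verifying the code once verifies every answer it produces.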
-
It's a tool being used by humans.
It's not making anyone dumber or smarter.
I'm so tired of this anti ai bullshit.
AI was used in the development of the COVID vaccine. It was crucial in its creation.
But just for a second, let's use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y'all support the Ukrainians using guns to defend themselves.
Or cars: generally cars suck, yet we use them as transport.
These are just tools; they're as good and as bad as the people using them.
So yes, it is just you and a select few smooth brains that can't see past their own bias.
-
It's a tool being used by humans.
It's not making anyone dumber or smarter.
I'm so tired of this anti ai bullshit.
AI was used in the development of the COVID vaccine. It was crucial in its creation.
But just for a second, let's use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y'all support the Ukrainians using guns to defend themselves.
Or cars: generally cars suck, yet we use them as transport.
These are just tools; they're as good and as bad as the people using them.
So yes, it is just you and a select few smooth brains that can't see past their own bias.
I agree completely.
-
It's a tool being used by humans.
It's not making anyone dumber or smarter.
I'm so tired of this anti ai bullshit.
AI was used in the development of the COVID vaccine. It was crucial in its creation.
But just for a second, let's use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y'all support the Ukrainians using guns to defend themselves.
Or cars: generally cars suck, yet we use them as transport.
These are just tools; they're as good and as bad as the people using them.
So yes, it is just you and a select few smooth brains that can't see past their own bias.
"I was looking for my high school yearbook photo and Google Image didn't have it! Google Image search doesn't work and no one should use it!"
"I was trying to find a voicemail message from my late father on Spotify and I couldn't find it! Spotify is useless!"
"I went to the dollar store to shop for low cost health care coverage and they didn't have any! The dollar store is bad and no one should use it!"
-
Regulate? That's what the leading AI companies are pushing for; they could handle the bureaucracy, but their competitors couldn't.
The shit just needs to be forced open-source. If you steal content from the entire world to build a thinking machine, give back to the world.
This would also crash the bubble and would slow down the most unethical for-profits.
Regulate? That's what the leading AI companies are pushing for; they could handle the bureaucracy, but their competitors couldn't.
I was referring to this in my comment:
Congress decided not to go through with the AI-law moratorium. Instead they opted to do nothing, which is what AI companies would prefer states do. Not to mention the pro-AI argument appeals to the judgement of Putin, notorious for being surrounded by yes-men and his own state propaganda, and for the genocide of Ukrainians in pursuit of the conquest of Europe.
"There's growing recognition that the current patchwork approach to regulating AI isn't working and will continue to worsen if we stay on this path," OpenAI's chief global affairs officer, Chris Lehane, wrote on LinkedIn. "While not someone I'd typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward."
The shit just needs to be forced open-source. If you steal content from the entire world to build a thinking machine, give back to the world.
The problem is that unlike Robin Hood, AI stole from the people and gave to the rich. The intellectual property of artists and writers was stolen, and the only way to give it back is to compensate them, which is currently unlikely to happen. Letting everyone see how the theft machine works under the hood doesn't compensate anyone for the use of that intellectual property.
This would also crash the bubble and would slow down any of the most unethical for-profits.
Not really. It would let more people get in on it. And most tech companies are already in on it. This wouldn't impose any costs on AI development. At this point the speculation is primarily on what comes next. If open source were going to burst the bubble, it would have happened when DeepSeek was released. We're still talking about the bubble bursting in the future, so that clearly didn't happen.
-
I try to use it to pitch ideas for writing (no prose, because fuck almighty) to help fill in ideas or aspects I didn't think about. But it just keeps coming up with shit I don't use, so I just use it for validation and encouragement.
I got a pretty good layout for a new season of Magic School Bus where Friz loses her mind and decides to be the history teacher.
-
The Luddites were right. Maybe we can learn a thing or two from them...
I had to download the Facebook app just to delete my account. Unfortunately, I think the Luddites are going to be sent to the camps in a few years.
-
It's a tool being used by humans.
It's not making anyone dumber or smarter.
I'm so tired of this anti ai bullshit.
AI was used in the development of the COVID vaccine. It was crucial in its creation.
But just for a second, let's use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y'all support the Ukrainians using guns to defend themselves.
Or cars: generally cars suck, yet we use them as transport.
These are just tools; they're as good and as bad as the people using them.
So yes, it is just you and a select few smooth brains that can't see past their own bias.
The term AI, when used by laymen, is a blanket term for the generative AI and LLMs that big tech is shoving down all our throats right now, not the highly specialized AIs used in medicine. So bringing up the COVID vaccine is largely a non-sequitur.
The rest of your comment is so full of false equivalencies that I'm not even gonna touch it.
-
Did you notice the alt text?
Yeah, and I noticed it didn't describe the image at all - unless one had already seen the image and knew what it was. So for visually impaired users (i.e. one of the main groups who would benefit from alt text) it is insufficient at best.
Griping over AI, however, isn't adding anything that isn't posted frequently around here
Specific to the OP: the issue is that those of us who know gen-AI is an enormous piece of shit, with only downsides for the things we care about like culture and learning, might feel like we're going a little crazy in a culture that only seems able to share love for it in public places like work. Even public criticism of it has been limited to economic and ecological harms. I haven't seen this particular angle very often, and as someone else posted here, I felt recognized by it.
Yeah, and I noticed it didn't describe the image at all
How would you state it over the phone?
Alt text is a succinct alternative that conveys (accurate/equivalent) meaning in context, much like reading a comment with an image to someone over the phone.
If you had said "Simpsons meme of an old man yelling at a cloud", that would also suffice.
It doesn't need to go into elaborate detail.
In those discussions, people often talk about having enough, losing their minds, it making people dumber, too.
I get it helps to feel recognized, so would it feel better to broaden the reach of that message for more recognition?
-
Spelling errors? That's… unusual. Part of what makes ChatGPT so specious is that its output is usually immaculate in terms of language correctness, which superficially conceals the fact that it's completely bullshitting on the actual content.
FWIW, she asked it to make a complete infographic-style poster with images and such, so GPT created an image with text, not a document. Still asinine.
-
Yeah, but LLMs don't train on data automatically; you need a separate, dedicated process for that, and it won't happen just from using them. In that sense, companies can still use your data to train them in the background even if you aren't directly using an LLM, or they can decline to train on it even when you are. I guess in the latter case there's a bigger incentive for them to train than otherwise, but to me it seems basically the same thing privacy-wise.
If they're exposing their LLM to the public, there's a higher chance of it leaking training data to the public. You don't know what they trained with, but there's a chance it's customer data. Sure they may not train with anything, but why assume they don't? If they have an internal LLM that's of lesser concern, because that LLM would probably only show them data those employees already have access to.
-
The term AI, when used by laymen, is a blanket term for the generative AI and LLMs that big tech is shoving down all our throats right now, not the highly specialized AIs used in medicine. So bringing up the COVID vaccine is largely a non-sequitur.
The rest of your comment is so full of false equivalencies that I'm not even gonna touch it.
The highly specialized "AI" that was used to make the COVID vaccine is probably significantly stupider than ChatGPT was two years ago.
Nothing I said was a false equivalency, though I'm pretty sure you don't even know what a false equivalence is.