Is It Just Me?
-
You’re right, every new technology displaces some jobs, but AI is on a vastly larger scale (as was industrial production technology).
As to your last question, it’s because the people controlling the narrative don’t want to pay for unemployment benefits, industrial retraining, or anything else that doesn’t immediately make them more money.
You’re right, every new technology displaces some jobs, but AI is on a vastly larger scale (as was industrial production technology).
Famously, ye olde Boomer could just walk into a factory and get a job. None of these jobs exist anymore, mostly because of automation. Of course, none of those people wrote for a living, or had access to an audience of millions. I doubt that AI will displace jobs on a vastly larger scale but it is certainly communicated on a vastly larger scale.
If you think about all these jobs that might be displaced by AI, how many of them existed in the 1950s? Many jobs, like web designer, are new. Either these new jobs reflect the displacement of old jobs, or you need a lot more people to do more jobs. Granted, global population has grown a lot, but that's not where these new jobs came from, right?
As to your last question, it’s because the people controlling the narrative don’t want to pay for unemployment benefits, industrial retraining, or anything else that doesn’t immediately make them more money.
Yes, the narrative is all about more money for (intellectual) property owners. That doesn't make a lot of sense if people are worried about losing their jobs.
-
The reason AI is wrong so often is because it's not programmed to give you the right answer. It's programmed to give you the most pervasive one.
LLMs are being fed by Reddit and other forums that are ostensibly about humans giving other humans answers to questions.
But have you been on those forums? It's a dozen different answers for every question. The reality is that we average humans don't know shit and we're just basing our answers on our own experiences. We aren't experts. We're not necessarily dumb, but unless we've studied, our knowledge is entirely anecdotal, and we all go into forums to help others with a similar problem by sharing our answer to it.
So the LLM takes all of that data and in essence thinks that the most popular, most mentioned, most upvoted answer to any given question must be the de facto correct one. It literally has no other way to judge; it's not smart enough to cross reference itself or look up sources.
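As a caricature, the "most pervasive answer wins" idea looks something like this. This is a deliberately simplified sketch: real LLMs learn token probabilities from text rather than literally tallying posts, and the example answers here are made up.

```python
from collections import Counter

# Toy caricature of "the most mentioned answer must be correct".
# These forum answers are invented for illustration.
forum_answers = [
    "restart the router",
    "update the driver",
    "restart the router",
    "replace the cable",
    "restart the router",
]

# Pick whichever answer appears most often, right or not.
most_common_answer, count = Counter(forum_answers).most_common(1)[0]
print(most_common_answer, count)
```

Note there is no notion of truth anywhere in that loop, only frequency, which is the complaint being made above.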
It literally has no other way to judge; it’s not smart enough to cross reference itself or look up sources
I think that is its biggest limitation.
AI basically crowd-sourcing information isn't really the worst thing; crowd-sourced knowledge tends to be fairly decent. People treating it as an authoritative source, as if they looked it up in an encyclopedia or asked an expert, is a big problem though.
Ideally it would be more selective about the 'crowds' it gathers data from. Science questions, for example, should be sourced from scientists, preferably experts in the field the question is about.
Wikipedia (at least for now) is 'crowd-sourced', but individual pages are usually maintained by people who know a lot about the subject. That's why it's more accurate than a 'normal' encyclopedia. Though of course it's not foolproof or tamper-proof by any definition.
If we taught AI how to be 'Media Literate' and gave it the ability to double-check its data with reliable sources, it would be a lot more useful.
most upvoted answer
This is the other problem. You basically have four types of redditors.
-
People who use the karma system correctly, that is to say they upvote things that contribute to the conversation. Even if you think it is 'wrong' or you disagree with it, if it's something that adds to the discussion, you are supposed to upvote it.
-
People who treat it as "I agree/ I disagree" buttons.
-
People who treat it as "I like this/I hate this" buttons.
-
I'd say the majority of people probably do some combination of the above.
So more than half the time people aren't upvoting things because they think they are correct. If LLMs are treating 'karma' as a "This is correct" metric, that's a big problem.
The other big problem is people who really should know better: tech bros and CEOs going all in on AI when it's WAY too early to do that. As you point out, it's not even really intelligent yet; it just parrots 'common' knowledge.
-
-
Spelling errors? That’s… unusual. Part of what makes ChatGPT so specious is that its output is usually immaculate in terms of language correctness, which superficially conceals the fact that it’s completely bullshitting on the actual content.
The user above mentioned an informational poster, so I'm going to assume it was generated as an image, and those do have spelling mistakes.
It can't even generate image and text separately, smh. People are indeed getting dumber.
-
This post did not contain any content.
People are overworked, underpaid, and struggling to make rent in this economy while juggling 3 jobs or taking care of their kids, or both.
They are at the limits of their mental load, especially women who shoulder it disproportionately in many households. AI is used to drastically reduce that mental load. People suffering from burnout use it for unlicensed therapy. I'm not advocating for it, I'm pointing out why people use it.
Treating AI users like moral failures and disregarding their circumstances does nothing to discourage the use of AI. All you are doing is reinforcing their alienation from anti-AI sentiment.
First, understand the person behind it. Address the root cause, which is that AI companies are exploiting the vulnerabilities of people with or close to burnout by selling the dream of a lightened workload.
It's like eating factory-farmed meat. Even if you know what horrors go into making it, you are exhausted from a long day of work and you just need a bite of that chicken to take the edge off, to remain sane after all these years. There is a system at work here, greater than just you and the chicken. It's the industry as a whole exploiting consumer habits. AI users are no different.
-
AI should never be used to create anything in Wikipedia. But theoretically, an open-source LLM trained solely on Wikipedia would actually be kind of useful for asking quick questions.
-
-
Because the alternative for me is googling the question with "reddit" added at the end half of the time. I still do that a lot. For more complicated or serious problems and questions, I've set it to only use the search function and navigate scientific sites like NCBI and PubMed while utilizing deep think. It then gives me the sources, and I randomly cross-check the relevant information, but so far I personally haven't noticed any errors. You gotta realize how much time this saves.
When it comes to data privacy, I honestly don't see the potential dangers in the data I submit to OpenAI, but this is of course different to everyone else. I don't submit any personal info or talk about my life. It's a tool.
If it saves time but you still have to double-check its answers, does it really save time? At least many reddit comments call out their own uncertainty or link to better resources; I can't trust a single thing AI outputs, so I just ignore it as much as possible.
-
Every time someone talks up AI, I point out that you need to be a subject matter expert in the topic to trust it, because it frequently produces really, really convincing summaries that are complete and utter bullshit.
And people agree with me implicitly and tell me they've seen the same. But then don't hesitate to turn to AI on subjects they aren't experts in for "quick answers". These are not stupid people either. I just don't understand.
Uses for this current wave of AI: converting machine language to human language. Converting human language to machine language. Sentiment analysis. Summarizing text.
People have way over invested in one of the least functional parts of what it can do because it's the part that looks the most "magic" if you don't know what it's doing.
The most helpful and least used way of using them is to identify what information the user is looking for and then to point them to resources they can use to find out for themselves, maybe with a description of which resource might be best depending on what part of the question they're answering.
It's easy to be wrong when you're answering a question, and a lot harder when you hand someone a book and say you think the answer is in chapter four.
-
My pet peeve: "here's what ChatGPT said..."
No.
Stop.
If I'd wanted to know what the Large Lying Machine said, I would've asked it.
Hammer time.
-
So your comment isn't about AI at all, but rather the "bubble" mentality of online discussion spaces?
Exactly. There's some value to AI, but I would say it's 85% hype and inflated bullshit. People, though, spend a ton of calories complaining about AI and how inescapable it is, while I find it more difficult to escape people complaining about AI than AI itself.
Yes it hallucinates, yes it makes you dumber, yes it's actually a blanket term that people really say when they mean LLMs, yes some people do get some utility from it. But mostly, jesus, your opinion has been expressed on the Internet already, STFU.
-
I have a love/hate relationship. Sometimes I'm absolutely blown away by what it can do. But then I asked a compound interest question. The first answer was AI, so I figured, OK, why not. I should mention I don't know much about the subject. The answer was impressive. It gave the result, a brief explanation of how it came to the result, and the equation it used.

Since I wanted to keep it for later, I entered the equation into a spreadsheet and got what I thought was the wrong answer. I spent quite a few minutes trying to figure out what I was doing wrong and found a couple of things. But fixing them still didn't give me the correct result. After I had convinced myself I had done it correctly, I looked up the equation. It was the right one. Then I put it into a non-AI calculator online to check my work. Sure enough, the AI had given me the wrong result with the right equation.

So as a rule, never accept an AI answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place? You just have to do the same work as you would without it.
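For what it's worth, the compound interest equation itself is easy to check by hand. A minimal sketch with made-up numbers (the actual figures from that exchange aren't given, so the principal, rate, and term below are assumptions):

```python
# Standard compound interest: A = P * (1 + r/n)^(n*t)
# All inputs are illustrative assumptions, not the ones from the story above.
principal = 1000.0  # P: starting amount
rate = 0.05         # r: 5% annual interest
n = 12              # compounding periods per year (monthly)
years = 10          # t

amount = principal * (1 + rate / n) ** (n * years)
print(round(amount, 2))  # → 1647.01
```

Re-running the same formula in a spreadsheet (`=1000*(1+0.05/12)^(12*10)`) should give the identical figure, which is exactly the kind of cross-check the comment describes.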
-
If plastic was released roughly two years ago you'd have a point.
If you're saying in 50 years we'll all be soaking in this bullshit called gen-AI and thinking it's normal, well - maybe, but that's going to be some bleak-ass shit.
Also you've got plastic in your gonads.
On the bright side we have Cyberpunk to give us a tutorial on how to survive the AI dystopia. Have you started picking your implants yet?
-
So as a rule, never accept an AI answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place? You just have to do the same work as you would without it.
Exactly
-
Got them —s, tho
Should be ellipses.
-
So as a rule, never accept an AI answer without verifying it. But you know what, if you have to verify it, what’s the point of using it in the first place?
pfft that ecosystem isn't going to fuck itself, now, is it?
-
The Luddites were right. Maybe we can learn a thing or two from them...
-
The Luddites were right. Maybe we can learn a thing or two from them...
The data centers should not be safe for much longer. Especially once they use up the water of their small towns nearby
-
It literally has no other way to judge
It literally does NOT judge. It cannot reason. It does not know what "words" are. It is an enormous rainbow table of sentence probability that does nothing useful except fool people and provide cover for capitalists to extract more profit.
But apparently, according to some on here, "that's the way it is, get used to it." FUCK no.
A Markov text generator. That's all it is. Just made with billions in stolen wages.
-
A lot of people also mix up generative AI with predictive AI. Like they will mention the hurricane-predicting or cancer-cell-finding AI as a "good use case for ChatGPT."
Hard to blame the people, when the media have been calling everything AI these days
-
You are not correct about the energy use of prompts. They are not very energy intensive at all. Training the AI, however, is breaking the power grid.
I'm pretty sure it's a product of scale, but also, GPT-5 is markedly worse. I heard estimates of 40 watt-hours for a single medium-length response. Napkin math says my motorcycle can travel about a kilometer per single medium-length response of GPT-5. Now multiply that by how many people are using AI (anyone going online these days), then multiply that by how many times a day each user causes a prompt. Now multiply that by 365 and we have how much power they're using in a year.
-
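The napkin math in that multiplication chain can be sketched directly. Only the 40 Wh/response figure comes from the comment; the per-user prompt count and user count below are pure guesses for illustration.

```python
# Back-of-envelope yearly energy use, per the comment's multiplication chain.
wh_per_response = 40           # claimed Wh for one medium-length response
prompts_per_user_per_day = 10  # assumed for illustration
daily_users = 100_000_000      # assumed for illustration

# Wh/day -> kWh/day, then scale to a year and convert to GWh.
daily_kwh = wh_per_response * prompts_per_user_per_day * daily_users / 1000
yearly_gwh = daily_kwh * 365 / 1_000_000
print(yearly_gwh)  # → 14600.0 GWh/year with these guesses
```

The point isn't the exact total (every input is uncertain) but that tiny per-prompt costs multiply into grid-scale numbers.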
Uhm, no, I'm pretty sure OP wouldn't approve, judging by the:
"but you can-" I'm gonna lose it
I did not claim that the OP was saying it's sometimes useful.