Something Bizarre Is Happening to People Who Use ChatGPT a Lot
-
Another realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let's not think about that either. AI Bad!
This is a salient point that's well worth discussing. We should not be training large language models on any supposedly factual information that people put out. Calling out a bad research study and having it retracted is comparatively easy. But you can't just explain to an AI that the study was wrong; you have to completely retrain it every time. Exacerbating this issue is the way people tend to view large language models as somehow objective describers of reality, because they're synthetic and emotionless. In truth, an AI holds exactly the same biases as the people who put together the data it was trained on.
-
They're clearly under the control of Big Train, Loom Lobbyists and the Global Gutenberg Printing Press Conspiracy.
-
The quote was originally about news and journalists.
-
And sunshine hurts.
Said the vampire from Transylvania.
-
I don’t know how people can be so easily taken in by a system that has been proven to be wrong about so many things.
Ahem. Wasn't there an election recently, in some big country, with an uncanny similarity to that?
Yeah. Got me there.
-
I couldn’t be bothered to read it, so I got ChatGPT to summarise it. Apparently there’s nothing to worry about.
-
Andrej Karpathy (one of the founders of OpenAI, who left for Tesla from 2017 to 2022, returned to OpenAI for a bit, and is now working on his startup "Eureka Labs - we are building a new kind of school that is AI native") made a tweet defining the term:
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
People ignore the "It's not too bad for throwaway weekend projects" part and try to use this style of coding to create "production-grade" code... Let's just say it's not going well.
source (xcancel link)
The amount of damage a newbie programmer without a tight leash can do to a code base/product is immense. Once something is out in production, that is something you have to deal with forever. That temporary fix they push is going to be still used in a decade and if you break it, now you have to explain to the customer why the thing that's been working for them for years is now broken and what you plan to do to remedy the situation.
-
Do you guys remember when the internet was the new thing and everybody was like, "Look at those dumb fucks putting everything online," and now it's, "Look at this weird motherfucker who doesn't post anything online"?
Remember when people used to say and believe "Don't believe everything you read on the internet?"
I miss those days.
-
Not a lot of meat on this article, but yeah, I think it's pretty obvious that those who seek automated tools to define their own thoughts and feelings become dependent. If one is so incapable of mapping out one's thoughts and putting them into written words, it's natural they'd seek ease and comfort in the "good enough" (fucking shitty as hell) output of a bot.
-
This makes a lot of sense, because what we've been seeing over the last decade or so is that digital-only socialization isn't a replacement for in-person socialization. Increased social media usage correlates with increased loneliness, not decreased. It makes sense that something even more fake, like ChatGPT, would make it worse.
I don't want to sound like a Luddite, but relying on digital communication for all interactions is a poor substitute for in-person interaction. I know I have to prioritize seeing people in the real world, because I work from home and spending time on Lemmy during the day doesn't fulfill that need.
In-person socialization? Is that like VRChat?
-
I remember when the Internet was a thing people went on and/or visited/surfed, but not something you'd imagine having 24/7.
I was there from the start, you must have never BBS'd or IRC'd - shit was amazing in the early days.
I mean, honestly, nothing has really changed - we are still at our terminals looking at text. The only real innovation has been inline pics, videos, and audio; 30+ years ago you had to click a link to see that stuff.
-
Presuming you're writing in Python: Check out https://docs.astral.sh/ruff/
It's an all-in-one linter and formatter that replaces several older (pre-existing) tools like Flake8, isort, and Black. Very fast, very cool.
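If it helps, here's a minimal sketch of the typical workflow (the commands are from Ruff's CLI; installing with pip into the current project is just one option):

```sh
# Install Ruff, then lint and format the project in the current directory.
pip install ruff

ruff check .          # lint; rules re-implemented from Flake8, isort, pyupgrade, and others
ruff check . --fix    # apply safe autofixes, e.g. removing unused imports
ruff format .         # format code, as a drop-in replacement for Black
```

Since it's a single fast binary, it's painless to run on every save or as a pre-commit hook.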
-
If you actually read the article, I'm pretty sure the bizarre thing is really these people using a 'tool', forming a toxic parasocial relationship with it, becoming addicted, and beginning to see it as a 'friend'.
You never viewed a tool as a friend? Pretty sure there are some guys that like their cars more than most friends. Bonding with objects isn't that weird, especially one that can talk to you like it's human.
-
This post did not contain any content.
People addicted to tech omg who could've guessed. Shocked I tell you.
-
I am so happy God made me a Luddite
Yeah look at all this technology you can't use! It's so empowering.
-
I don’t know how people can be so easily taken in by a system that has been proven to be wrong about so many things. I got an AI search response just yesterday that dramatically understated an issue by citing an unscientific, ideologically driven website with every interest and reason to minimize said issue. The actual studies showed a 6x difference. It was blatant AF, and I can’t understand why anyone would rely on such a system for reliable, objective information or responses. I have noted several incorrect AI responses to queries, and people mindlessly citing said responses without verifying the data or their sources. People gonna get stupider, faster.
That's why I only use it as a starting point. It spits out "keywords" and a fuzzy gist of what I need, then I can verify or experiment on my own. It's just a good place to start or a reminder of things you once knew.
-
That is peak clickbait, bravo.
-
Yeah look at all this technology you can't use! It's so empowering.
Can, and opt not to. Big difference. I'm sure I could ask ChatGPT to write a better comment than this, but I value the human interaction involved with it, and the ability to perform these tasks on my own.
Same with many aspects of modern technology. Like, I'm sure it's very convenient having your phone control your washing machine and your thermostat and your lightbulbs, but all of that runs through somebody else's computer, and when that computer turns off, I'd like to keep control over my things.
-
I couldn’t be bothered to read it, so I got ChatGPT to summarise it. Apparently there’s nothing to worry about.
You didn't, I did:
The article discusses a study by OpenAI and MIT Media Lab revealing that heavy ChatGPT users — dubbed "power users" — are showing signs of addiction to the chatbot. These signs include preoccupation, withdrawal, mood changes, and loss of control. While most users don't form emotional connections with ChatGPT, those who use it more frequently tend to view it as a "friend," especially if they feel lonely or emotionally unfulfilled.
Interestingly, people using ChatGPT for personal reasons (like sharing emotions) showed less emotional dependency than those using it for non-personal tasks (like brainstorming). Also, text-based interactions led to more emotional language than voice interactions, with brief voice use correlating to better well-being. Overall, prolonged use — regardless of purpose — increased the risk of emotional dependence.
You said:
Make a list of the main points and for each explain why it will be worse when it is society wide
ChatGPT said:
Here’s a list of the main points from the article, along with why each could be more concerning on a society-wide scale:
1. Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control).
Why it's worse at scale: If large numbers of people become dependent on AI tools, especially to the point of addiction, it could lead to widespread mental health issues, reduced productivity, and a society increasingly unable to function without AI assistance.

2. Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”.
Why it's worse at scale: A society where many people substitute AI for real human connection could see rising loneliness, social detachment, and a decline in community and interpersonal relationships. This weakens social bonds and may impact mental health and civic cohesion.

3. Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached.
Why it's worse at scale: This could create a feedback loop where loneliness drives AI use, and AI use further isolates people from real-world interactions, exacerbating social alienation on a mass level.

4. People stress out over subtle changes in ChatGPT’s behavior.
Why it's worse at scale: If millions of people are emotionally reactive to minor algorithmic changes, this gives tech companies enormous influence over collective emotional states. It also creates societal instability, as shifts in AI behavior could unintentionally trigger widespread anxiety or confusion.

5. Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being.
Why it's worse at scale: If the most popular and accessible form of interaction (text) increases emotional entanglement, and the healthier option (brief voice use) is less utilized, we might unintentionally nudge society toward less emotionally healthy patterns of AI interaction.

6. Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons.
Why it's worse at scale: This is counterintuitive and dangerous because it suggests people may not realize they're forming dependencies while doing routine work. If AI tools meant to boost productivity are quietly making users emotionally reliant, it could have long-term effects on workforce dynamics and mental resilience.

7. Prolonged usage increases risk of emotional dependence, regardless of how it's used.
Why it's worse at scale: As AI becomes embedded in everyday life, the likelihood of widespread emotional dependence increases. This could shift power dynamics between humans and AI, making people less autonomous and more easily influenced by algorithmic systems.
Let me know if you want this formatted for a presentation, policy paper, or discussion.