Something Bizarre Is Happening to People Who Use ChatGPT a Lot
-
It may not “understand” like a human, but it can synthesize in a way that mimics — and sometimes even surpasses — human creativity.
Calling it a “stochastic parrot” is like calling a jazz musician an “audio repeater” because they’re using notes they’ve heard before. It misses the creativity in the combination — the generative power that lies within the latent space.
It reads like the brainless drivel that corporate drones are forced to churn out, complete with meaningless fluff words. This is why the executives love AI: they read and expect that trash all the time and think it's suitable for everything.
Executives are perfectly content with what looks good at a cursory glance and don't care about what's actually good in practice because their job is to make themselves seem more important than they actually are.
I literally asked it to make the maximalist case against the idea that LLMs are just autocomplete, and that's exactly what it did.
The message before that made the opposite case.
-
Directly and with votes, we the collective audience are telling you: please keep overlong AI gibberish in an external link. If that makes it get fewer clicks, then perhaps it's not that interesting.
I'll just cut it down to bite-sized pieces and stop labelling it as AI-generated.
-
You didn't, I did
The article discusses a study by OpenAI and MIT Media Lab revealing that heavy ChatGPT users — dubbed "power users" — are showing signs of addiction to the chatbot. These signs include preoccupation, withdrawal, mood changes, and loss of control. While most users don't form emotional connections with ChatGPT, those who use it more frequently tend to view it as a "friend," especially if they feel lonely or emotionally unfulfilled.
Interestingly, people using ChatGPT for personal reasons (like sharing emotions) showed less emotional dependency than those using it for non-personal tasks (like brainstorming). Also, text-based interactions led to more emotional language than voice interactions, with brief voice use correlating to better well-being. Overall, prolonged use — regardless of purpose — increased the risk of emotional dependence.
You said:
Make a list of the main points and for each explain why it will be worse when it is society wide
ChatGPT said:
Here’s a list of the main points from the article, along with why each could be more concerning on a society-wide scale:
1. Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control).
Why it's worse at scale: If large numbers of people become dependent on AI tools, especially to the point of addiction, it could lead to widespread mental health issues, reduced productivity, and a society increasingly unable to function without AI assistance.
2. Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”.
Why it's worse at scale: A society where many people substitute AI for real human connection could see rising loneliness, social detachment, and a decline in community and interpersonal relationships. This weakens social bonds and may impact mental health and civic cohesion.
3. Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached.
Why it's worse at scale: This could create a feedback loop where loneliness drives AI use, and AI use further isolates people from real-world interactions, exacerbating social alienation on a mass level.
4. People stress out over subtle changes in ChatGPT’s behavior.
Why it's worse at scale: If millions of people are emotionally reactive to minor algorithmic changes, this gives tech companies enormous influence over collective emotional states. It also creates societal instability, as shifts in AI behavior could unintentionally trigger widespread anxiety or confusion.
5. Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being.
Why it's worse at scale: If the most popular and accessible form of interaction (text) increases emotional entanglement, and the healthier option (brief voice use) is less utilized, we might unintentionally nudge society toward less emotionally healthy patterns of AI interaction.
6. Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons.
Why it's worse at scale: This is counterintuitive and dangerous because it suggests people may not realize they're forming dependencies while doing routine work. If AI tools meant to boost productivity are quietly making users emotionally reliant, it could have long-term effects on workforce dynamics and mental resilience.
7. Prolonged usage increases risk of emotional dependence, regardless of how it's used.
Why it's worse at scale: As AI becomes embedded in everyday life, the likelihood of widespread emotional dependence increases. This could shift power dynamics between humans and AI, making people less autonomous and more easily influenced by algorithmic systems.
Let me know if you want this formatted for a presentation, policy paper, or discussion.
That's really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.
Namely, the increased power over people that emotional dependence hands to the big tech companies that own the AI.
It's not as if these concepts aren't widely discussed online; everything from Meta's emotional manipulation experiments and Cambridge Analytica through to the meltdowns Replika users had over changes to the algorithm is relevant here.
-
Depending on what I needed, I remember using AltaVista, AskJeeves, Dogpile, and I feel like later on MetaCrawler or something like that (it would search multiple search engines for you and order the results by a score based on platform and relevancy, iirc?)
And Hastalavista if you wanted to find things that Altavista didn't.
-
I'm confused. If someone is in a place where they are choosing between dating a body pillow and suicide, then they have DEFINITELY made a wrong turn somewhere. They need some kind of assistance, and I hope they can get what they need, no matter what they choose.
I think my statement about "a wrong turn in life" is being interpreted too strongly; it wasn't intended to be such a strong and absolute statement of failure. Someone who's taken a wrong turn has simply made a mistake. It could be minor, it could be serious. I'm not saying their life is worthless. I've made a TON of wrong turns myself.
Trouble is, your statement was in answer to @[email protected]'s comment that labeling lonely people as losers is problematic.
Also, it still looks like you think people can only be lonely as a consequence of their own mistakes? Serious illness, neurodivergence, trauma, refugee status, etc. can all produce similar effects of loneliness in people who did nothing to "cause" it.
-
Trouble is, your statement was in answer to @[email protected]'s comment that labeling lonely people as losers is problematic.
Also, it still looks like you think people can only be lonely as a consequence of their own mistakes? Serious illness, neurodivergence, trauma, refugee status, etc. can all produce similar effects of loneliness in people who did nothing to "cause" it.
That's an excellent point that I wasn't considering. Thank you for explaining what I was missing.
-
That's really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.
Namely, the increased power over people that emotional dependence hands to the big tech companies that own the AI.
It's not as if these concepts aren't widely discussed online; everything from Meta's emotional manipulation experiments and Cambridge Analytica through to the meltdowns Replika users had over changes to the algorithm is relevant here.
It's the 4th point
-
It's the 4th point
Sort of, but I think "influence over emotional states" is understating it and just the tip of the iceberg. It also made it sound passive and accidental. The real problem will be overt control, as a logical extension of the kinds of trade-offs we already see people make around, for example, data privacy. With the Replika fiasco, I bet heaps of those people would have paid good money to get their virtual love interests de-"lobotomized".
-
Sort of, but I think "influence over emotional states" is understating it and just the tip of the iceberg. It also made it sound passive and accidental. The real problem will be overt control, as a logical extension of the kinds of trade-offs we already see people make around, for example, data privacy. With the Replika fiasco, I bet heaps of those people would have paid good money to get their virtual love interests de-"lobotomized".
I think this power to shape the available knowledge (removing it, paywalling it based on discrimination, leveraging it, and finally manipulating it for advertising, state security, and personal reasons) is why it should be illegal to privately own any ML/AI models of any kind. Drive them all underground and only let the open ones benefit from sales in public.
-
It's too bad that some people seem to not comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI... we used to call OCR AI; now we know better.
LLMs are a subset of ML, which is a subset of AI.
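For what it's worth, the "which next word fits best based on the words before it" step is easy to poke at directly. Here is a minimal sketch, assuming the Hugging Face transformers library and the small public gpt2 checkpoint are available (the prompt string is just a hypothetical example; any causal LM behaves the same way):

```python
# Minimal sketch of next-token prediction with a causal language model.
# Assumes `pip install torch transformers`; "gpt2" is just a small public checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "We used to call OCR"          # hypothetical example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits     # shape: (batch, sequence_length, vocab_size)

# The scores at the last position rank every vocabulary token as a candidate
# continuation; taking the argmax is the "which word fits best" choice.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))
```

Chat-style generation is just this step repeated: append the chosen token to the input and predict again.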
-