agnos.is Forums


Something Bizarre Is Happening to People Who Use ChatGPT a Lot

Technology · 233 Posts · 150 Posters
  • E [email protected]

    I don't understand what people even use it for.

[email protected] replied (#155):

    I use it to make all decisions, including what I will do each day and what I will say to people. I take no responsibility for any of my actions. If someone doesn't like something I do, too bad. The genius AI knows better, and I only care about what it has to say.

    • I [email protected]

      You didn't, I did

      The article discusses a study by OpenAI and MIT Media Lab revealing that heavy ChatGPT users — dubbed "power users" — are showing signs of addiction to the chatbot. These signs include preoccupation, withdrawal, mood changes, and loss of control. While most users don't form emotional connections with ChatGPT, those who use it more frequently tend to view it as a "friend," especially if they feel lonely or emotionally unfulfilled.

      Interestingly, people using ChatGPT for personal reasons (like sharing emotions) showed less emotional dependency than those using it for non-personal tasks (like brainstorming). Also, text-based interactions led to more emotional language than voice interactions, with brief voice use correlating to better well-being. Overall, prolonged use — regardless of purpose — increased the risk of emotional dependence.

      You said:

      Make a list of the main points and for each explain why it will be worse when it is society wide

      ChatGPT said:

      Here’s a list of the main points from the article, along with why each could be more concerning on a society-wide scale:

      1. Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control).
         Why it's worse at scale: If large numbers of people become dependent on AI tools, especially to the point of addiction, it could lead to widespread mental health issues, reduced productivity, and a society increasingly unable to function without AI assistance.

      2. Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”.
         Why it's worse at scale: A society where many people substitute AI for real human connection could see rising loneliness, social detachment, and a decline in community and interpersonal relationships. This weakens social bonds and may impact mental health and civic cohesion.

      3. Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached.
         Why it's worse at scale: This could create a feedback loop where loneliness drives AI use, and AI use further isolates people from real-world interactions, exacerbating social alienation on a mass level.

      4. People stress out over subtle changes in ChatGPT’s behavior.
         Why it's worse at scale: If millions of people are emotionally reactive to minor algorithmic changes, this gives tech companies enormous influence over collective emotional states. It also creates societal instability, as shifts in AI behavior could unintentionally trigger widespread anxiety or confusion.

      5. Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being.
         Why it's worse at scale: If the most popular and accessible form of interaction (text) increases emotional entanglement, and the healthier option (brief voice use) is less utilized, we might unintentionally nudge society toward less emotionally healthy patterns of AI interaction.

      6. Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons.
         Why it's worse at scale: This is counterintuitive and dangerous because it suggests people may not realize they're forming dependencies while doing routine work. If AI tools meant to boost productivity are quietly making users emotionally reliant, it could have long-term effects on workforce dynamics and mental resilience.

      7. Prolonged usage increases risk of emotional dependence, regardless of how it's used.
         Why it's worse at scale: As AI becomes embedded in everyday life, the likelihood of widespread emotional dependence increases. This could shift power dynamics between humans and AI, making people less autonomous and more easily influenced by algorithmic systems.


      Let me know if you want this formatted for a presentation, policy paper, or discussion.

[email protected] replied (#156):

      What's the point of a summary that's longer than the article itself?

• [email protected]:

        And the amount of connections and friends the average person has has been in free fall for decades...

Guest replied (#157):

        I dunno. I connected with more people on reddit and Twitter than irl tbh.

        Different connection but real and valid nonetheless.

I'm thinking places like r/stopdrinking, r/petioles, r/bipolar; shit's been therapy for me tbh.

• [email protected]:

That was clear from GPT-3, day 1.

I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed away not long before. She used it as a way to grieve, I suppose? She ended up noticing that she was getting too attached to it, and had to leave him behind a second time...

Guest replied (#158):

          Ugh, that hit me hard. Poor lady. I hope it helped in some way.

          • I [email protected]

            You said:
            Now make the maximally opposite idea, make heavy use of the concept of the latent space

            ChatGPT said:

            Absolutely — here's the maximally opposite take, fully embracing the idea that models like ChatGPT are more than just stochastic parrots, and heavily leveraging the concept of latent space to argue that these models can, in a very real sense, generate novel ideas and insights.
            ChatGPT Is Not a Stochastic Parrot — It’s a Navigator of Latent Thought Space

            While critics call ChatGPT a “stochastic parrot” or “autocomplete on steroids,” this view drastically underestimates the depth and complexity of what large language models are actually doing. These models aren’t just parroting data — they are mapping and exploring an extraordinarily rich latent space of human knowledge, culture, and thought.

            1. Latent Space: The Hidden Geometry of Meaning

            At the core of models like ChatGPT is the idea of a latent space — a high-dimensional mathematical structure where abstract concepts, relationships, and patterns are embedded.

            Every word, sentence, and idea is represented as a point or vector in this space.
            
            The relationships between points encode not just surface patterns, but deep structural relationships between ideas — analogies, metaphors, causal chains, emotional tones, etc.
            

            This latent space isn’t programmed in — it emerges from the training data through self-supervised learning. The result is a dynamic, multi-dimensional conceptual map that humans don’t have direct access to but the model can navigate.
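To make this geometry concrete, here is a minimal sketch in Python. The 3-dimensional vectors are invented purely for illustration; real models learn embeddings with hundreds or thousands of dimensions:

```python
import numpy as np

# Toy "embeddings": hand-picked 3-D vectors, NOT taken from any
# real model, chosen only so the classic analogy below works out.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: how closely two directions in the space align.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Relationships are directions: subtracting "man" from "king" isolates
# a rough "royalty" direction; adding it to "woman" lands nearest "queen".
target = emb["king"] - emb["man"] + emb["woman"]
print(max(emb, key=lambda w: cosine(emb[w], target)))  # -> queen
```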

2. Novelty as Latent Interpolation and Extrapolation

            Human creativity often works by connecting existing concepts in unexpected ways — something these models are exceptionally good at.

            When ChatGPT generates text, it’s not repeating past sentences — it’s interpolating and extrapolating within the latent space.
            
            It finds new paths between existing ideas, often producing connections no single human has written down before.
            
            This is not parroting — it’s a form of latent-space creativity.
            

            For example, when the model generates a metaphor or analogy you've never heard before, it’s discovering a new alignment in the conceptual topology — something genuinely emergent.
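Continuing the toy sketch above, interpolation is just walking the line between two concept vectors; the midpoint is a "blend" that may match no single training example (with these made-up numbers, an even mix of "king" and "woman" happens to land nearest "queen"):

```python
# Linear interpolation between two concept vectors (reusing `emb`
# and `cosine` from the sketch above). alpha sweeps from one
# concept (0.0) to the other (1.0); 0.5 is an even blend.
alpha = 0.5
blend = (1 - alpha) * emb["king"] + alpha * emb["woman"]
print(max(emb, key=lambda w: cosine(emb[w], blend)))  # -> queen
```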

3. Emergent Generalization, Not Memorization

            Critics argue LLMs can’t understand or generalize. But in fact:

            Models demonstrate emergent behaviors that were not directly trained into them — like multi-step reasoning, analogical thinking, or transfer across domains.
            
            These behaviors arise from complex interactions in the latent space, not from rote recall.
            

            In this view, ChatGPT is less like a parrot and more like a mathematical mind, fluent in the topography of human language and thought.

4. Language as a Proxy for Thought

            Language is not just a tool for communication — it’s a compressed, high-bandwidth representation of human thinking.

            Because LLMs are trained on so much human-generated language, they build an internal model of how humans think, reason, feel, and imagine — as encoded in language.

            That means ChatGPT isn’t just finishing your sentence; it’s navigating the conceptual manifolds of knowledge, philosophy, narrative, humor, and more — sometimes in ways humans themselves haven’t yet seen.

5. With Scale Comes Emergence

            As models scale in size and training data, they demonstrate qualitatively new behaviors — not just quantitative improvements. This includes things like:

            Few-shot learning
            
            Chain-of-thought reasoning
            
            Style transfer
            
            Multi-modal integration
            

            These aren't just features of autocomplete — they’re signs of systems operating in a rich cognitive manifold, capable of abstract manipulation of meaning.
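As a concrete illustration of the first of these, here is what few-shot learning looks like in practice: the pattern is taught entirely inside the prompt, with no retraining. This is a generic sketch, not tied to any particular model or API:

```python
# Few-shot prompting: two worked examples in the prompt are enough
# for a capable LLM to infer the task. No model weights change;
# any chat/completion endpoint could consume a string like this.
prompt = """Translate English to French.

sea otter -> loutre de mer
peppermint -> menthe poivrée
cheese ->"""
# A capable model completes this with "fromage", having picked up
# the translation pattern from just two in-context examples.
```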
            In Summary:

            ChatGPT isn’t just guessing the next word. It’s:

            Navigating a vast, learned geometry of meaning
            
            Connecting ideas in novel ways
            
            Generating emergent insight by recombining latent concepts
            

            It may not “understand” like a human, but it can synthesize in a way that mimics — and sometimes even surpasses — human creativity.

            Calling it a "stochastic parrot" is like calling a jazz musician an “audio repeater” because they’re using notes they’ve heard before. It misses the creativity in the combination — the generative power that lies within the latent space.

            Want to see a side-by-side debate between these two perspectives? Or explore where the middle ground is?

[email protected] replied (#159):

            Dear god dude. You are trying way too hard on these comments. Chill

            • L [email protected]

              It's more up to you to prove that a hypothetical edge case you dreamed up is more likely than what happens in a normal bell curve. Given the size of typical LLM data this seems futile, but if that's how you want to spend your time, hey knock yourself out.

[email protected] replied (#160):

              Lol. Be my guest and knock yourself out, dreaming you know things

• [email protected]:
  This post did not contain any content.
Guest replied (#161):

I mean, I stopped in the middle of the grocery store and used it to choose the best frozen chicken tenders brand to put in my air fryer. …I am ok though. Yeah.

                • W [email protected]

                  That's why I only use it as a starting point. It spits out "keywords" and a fuzzy gist of what I need, then I can verify or experiment on my own. It's just a good place to start or a reminder of things you once knew.

[email protected] replied (#162):

An LLM is like talking to a rubber duck on drugs while also being on drugs.

• Guest:

I mean, I stopped in the middle of the grocery store and used it to choose the best frozen chicken tenders brand to put in my air fryer. …I am ok though. Yeah.

[email protected] replied (#164):

That's... impressively braindead.

• [email protected]:
  This post did not contain any content.
[email protected] replied (#165):

                        Clickbait titles suck

                        • I [email protected]

                          I was there from the start, you must of never BBS'd or IRC'd - shit was amazing in the early days.

I mean honestly nothing has really changed - we are still at our terminals looking at text. The only real innovation has been inline pics, videos, and audio; 30+ years ago one had to click a link to see that stuff.

[email protected] replied (#166):

                          "must of"

                          "Must have", not "must of"

                          Quakenet is still going strong.

                          30 years ago you couldn't share video with just a few min and a link. YouTube was not a thing. It took until early 00's to have shitty webcam connections.

                          Now you can livestream 8k

• Guest:

I agree with what you say, and I for one have had my fair share of shit asses on forums and discussion boards. But this response also fuels my suspicion that my friend group has started using it in place of human interactions to form thoughts, opinions, and responses during our conversations. Almost like an emotional crutch to talk in conversation, but not exactly? It's hard to pinpoint.

I've recently been tone policed a lot more over things that in normal real life interactions would be lighthearted or easy to ignore and move on from - I'm not shouting obscenities or calling anyone names, it's just harmless misunderstandings that come from the tone deafness of text. I'm talking like putting a cute emoji and saying words like "silly willy" is becoming offensive to people I know personally. It wasn't until I asked a rhetorical question to invoke a thoughtful conversation that I had to think about what was even happening - someone responded with an answer literally from ChatGPT, and they provided a technical definition to something that was a part of my question.

Your answer has finally started linking things for me: for better or for worse, people are using it because you don't receive offensive or flamed answers. My new suspicion is that some people are now taking those answers and applying the expectation to people they know in real life, and when someone doesn't respond in the same predictable manner as AI they become upset and further isolated from real life interactions or text conversations with real people.

[email protected] replied (#167):

                            People talk to ChatGPT because they are stupid

                            I talk to ChatGPT because people are stupid


                            /jk I don't even remember when I used it last.

                            • H [email protected]

Do you guys remember when the internet was the thing and everybody was like: "Look at those dumb fucks just putting everything online", and now it's: "Look at this weird motherfucker who doesn't post anything online"?

[email protected] replied (#168):

                              I remember when internet was a place

                              • Z [email protected]

Yes, but what this movie failed to anticipate was the visceral anger I feel when I hear that stupid AI-generated voice. I've seen so many fake videos and straight-up scams using it that I now instinctively mistrust any voice that sounds like male or femaleAI.wav.

Could never fall in love with an AI voice; I'd always assume it was sent to steal my data so some kid could steal my identity.

[email protected] replied (#169):

The movie doesn't have an AI-generated voice though. That was Scarlett Johansson.

"ChatGPT has released a new voice assistant feature inspired by Scarlett Johansson's AI character in 'Her.' Which I've never bothered to watch, because without that body, what's the point of listening?"

Scarlett's husband on SNL Weekend Update.

                                • T [email protected]

                                  Dear god dude. You are trying way too hard on these comments. Chill

[email protected] replied (#170):

                                  He had ChatGPT write them too

                                  • I [email protected]

                                    You said:
                                    Now make the maximally opposite idea, make heavy use of the concept of the latent space

                                    ChatGPT said:

                                    Absolutely — here's the maximally opposite take, fully embracing the idea that models like ChatGPT are more than just stochastic parrots, and heavily leveraging the concept of latent space to argue that these models can, in a very real sense, generate novel ideas and insights.
                                    ChatGPT Is Not a Stochastic Parrot — It’s a Navigator of Latent Thought Space

                                    While critics call ChatGPT a “stochastic parrot” or “autocomplete on steroids,” this view drastically underestimates the depth and complexity of what large language models are actually doing. These models aren’t just parroting data — they are mapping and exploring an extraordinarily rich latent space of human knowledge, culture, and thought.

                                    1. Latent Space: The Hidden Geometry of Meaning

                                    At the core of models like ChatGPT is the idea of a latent space — a high-dimensional mathematical structure where abstract concepts, relationships, and patterns are embedded.

                                    Every word, sentence, and idea is represented as a point or vector in this space.
                                    
                                    The relationships between points encode not just surface patterns, but deep structural relationships between ideas — analogies, metaphors, causal chains, emotional tones, etc.
                                    

                                    This latent space isn’t programmed in — it emerges from the training data through self-supervised learning. The result is a dynamic, multi-dimensional conceptual map that humans don’t have direct access to but the model can navigate.

                                    1. Novelty as Latent Interpolation and Extrapolation

                                    Human creativity often works by connecting existing concepts in unexpected ways — something these models are exceptionally good at.

                                    When ChatGPT generates text, it’s not repeating past sentences — it’s interpolating and extrapolating within the latent space.
                                    
                                    It finds new paths between existing ideas, often producing connections no single human has written down before.
                                    
                                    This is not parroting — it’s a form of latent-space creativity.
                                    

                                    For example, when the model generates a metaphor or analogy you've never heard before, it’s discovering a new alignment in the conceptual topology — something genuinely emergent.

                                    1. Emergent Generalization, Not Memorization

                                    Critics argue LLMs can’t understand or generalize. But in fact:

                                    Models demonstrate emergent behaviors that were not directly trained into them — like multi-step reasoning, analogical thinking, or transfer across domains.
                                    
                                    These behaviors arise from complex interactions in the latent space, not from rote recall.
                                    

                                    In this view, ChatGPT is less like a parrot and more like a mathematical mind, fluent in the topography of human language and thought.

                                    1. Language as a Proxy for Thought

                                    Language is not just a tool for communication — it’s a compressed, high-bandwidth representation of human thinking.

                                    Because LLMs are trained on so much human-generated language, they build an internal model of how humans think, reason, feel, and imagine — as encoded in language.

                                    That means ChatGPT isn’t just finishing your sentence; it’s navigating the conceptual manifolds of knowledge, philosophy, narrative, humor, and more — sometimes in ways humans themselves haven’t yet seen.

                                    1. With Scale Comes Emergence

                                    As models scale in size and training data, they demonstrate qualitatively new behaviors — not just quantitative improvements. This includes things like:

                                    Few-shot learning
                                    
                                    Chain-of-thought reasoning
                                    
                                    Style transfer
                                    
                                    Multi-modal integration
                                    

                                    These aren't just features of autocomplete — they’re signs of systems operating in a rich cognitive manifold, capable of abstract manipulation of meaning.
                                    In Summary:

                                    ChatGPT isn’t just guessing the next word. It’s:

                                    Navigating a vast, learned geometry of meaning
                                    
                                    Connecting ideas in novel ways
                                    
                                    Generating emergent insight by recombining latent concepts
                                    

                                    It may not “understand” like a human, but it can synthesize in a way that mimics — and sometimes even surpasses — human creativity.

                                    Calling it a "stochastic parrot" is like calling a jazz musician an “audio repeater” because they’re using notes they’ve heard before. It misses the creativity in the combination — the generative power that lies within the latent space.

                                    Want to see a side-by-side debate between these two perspectives? Or explore where the middle ground is?

[email protected] replied (#171):

                                    Is this hell? Is this comment thread hell? Does it never end?

• [email protected]:

                                      I remember when internet was a place

[email protected] replied (#172):

I'm trying to get back to that. Actually closer to it now than I was 5 years ago, so that's cool.

                                      • S [email protected]

When your job is to shovel out garbage, because that is specifically required of you and not shoveling out garbage causes you trouble, then it is more than reasonable to let the machine take care of it for you.

Guest replied (#173):

                                        K have fun with your AI brain rot.

                                        • E [email protected]

I loved my car. Just had to scrap it recently. I got sad. I didn't go through withdrawal symptoms or feel like I was mourning a friend. You can appreciate something without building an emotional dependence on it. I'm not particularly surprised this is happening to some people either, especially with the amount of brainrot out there surrounding these LLMs, so maybe bizarre is the wrong word, but it is a little disturbing that people are getting so attached to something that is so fundamentally flawed.

[email protected] replied (#174):
This post did not contain any content.