agnos.is Forums

Something Bizarre Is Happening to People Who Use ChatGPT a Lot

• [email protected] wrote:

    I remember when the Internet was a thing people went on and/or visited/surfed, but not something you'd imagine having 24/7.

[email protected] replied (#144):

I was there from the start; you must have never BBS'd or IRC'd. Shit was amazing in the early days.

I mean, honestly, nothing has really changed: we're still at our terminals looking at text. The only real innovation has been inline pics, videos, and audio; 30+ years ago you had to click a link to see that stuff.

• [email protected] wrote:

    Presuming you're writing in Python: Check out https://docs.astral.sh/ruff/

    It's an all-in-one tool that combines several older (pre-existing) tools. Very fast, very cool.

Guest replied (#145):

I'm looking to integrate ruff more into my workflow. Do you have any tips for someone coming from black + isort?
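For context, here's the minimal pyproject.toml I've been sketching to cover what black + isort did for me; treat it as a starting point, not gospel (the rule selection and the package name are my own guesses from the ruff docs):

```toml
[tool.ruff]
line-length = 88  # match black's default

[tool.ruff.lint]
# E = pycodestyle errors, F = pyflakes, I = isort-style import sorting
select = ["E", "F", "I"]

[tool.ruff.lint.isort]
known-first-party = ["my_package"]  # hypothetical package name
```

Then `ruff format .` stands in for black and `ruff check --fix .` applies the import sorting.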

• [email protected] wrote:

    If you actually read the article, I'm pretty sure the bizarre thing is really these people using a 'tool', forming a toxic parasocial relationship with it, becoming addicted and beginning to see it as a 'friend'.

[email protected] replied (#146):

You never viewed a tool as a friend? Pretty sure there are some guys that like their cars more than most friends. Bonding with objects isn't that weird, especially one that can talk to you like it's human.

• [email protected] wrote:

    This post did not contain any content.
[email protected] replied (#147):

People addicted to tech? OMG, who could've guessed. Shocked, I tell you.

• [email protected] wrote:

    I am so happy God made me a Luddite

[email protected] replied (#148):

Yeah, look at all this technology you can't use! It's so empowering.

• [email protected] wrote:

    I don't know how people can be so easily taken in by a system that has been proven wrong about so many things. I got an AI search response just yesterday that dramatically understated an issue by citing an unscientific, ideologically driven website with every interest and reason to minimize said issue. The actual studies showed a 6x difference. It was blatant AF, and I can't understand why anyone would rely on such a system for reliable, objective information or responses. I have noted several incorrect AI responses to queries, and people mindlessly citing said responses without verifying the data or their source. People gonna get stupider, faster.

[email protected] replied (#149):

That's why I only use it as a starting point. It spits out "keywords" and a fuzzy gist of what I need, then I can verify or experiment on my own. It's just a good place to start, or a reminder of things you once knew.

• [email protected] wrote:

    This post did not contain any content.
[email protected] replied (#150):

That is peak clickbait, bravo.

• [email protected] wrote:

    Yeah, look at all this technology you can't use! It's so empowering.

[email protected] replied (#151):

Can, and opt not to. Big difference. I'm sure I could ask ChatGPT to write a better comment than this, but I value the human interaction involved, and the ability to perform these tasks on my own.

Same with many aspects of modern technology. I'm sure it's very convenient having your phone control your washing machine, your thermostat, and your lightbulbs, but when somebody else's computer turns off, I'd like to keep control over my things.

• [email protected] wrote:

    I couldn't be bothered to read it, so I got ChatGPT to summarise it. Apparently there's nothing to worry about.

[email protected] replied (#152):

You didn't, I did:

The article discusses a study by OpenAI and MIT Media Lab revealing that heavy ChatGPT users — dubbed "power users" — are showing signs of addiction to the chatbot. These signs include preoccupation, withdrawal, mood changes, and loss of control. While most users don't form emotional connections with ChatGPT, those who use it more frequently tend to view it as a "friend," especially if they feel lonely or emotionally unfulfilled.

Interestingly, people using ChatGPT for personal reasons (like sharing emotions) showed less emotional dependency than those using it for non-personal tasks (like brainstorming). Also, text-based interactions led to more emotional language than voice interactions, with brief voice use correlating to better well-being. Overall, prolonged use — regardless of purpose — increased the risk of emotional dependence.

You said:
Make a list of the main points and for each explain why it will be worse when it is society-wide

ChatGPT said:

Here's a list of the main points from the article, along with why each could be more concerning on a society-wide scale:

1. Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control).
Why it's worse at scale: If large numbers of people become dependent on AI tools, especially to the point of addiction, it could lead to widespread mental health issues, reduced productivity, and a society increasingly unable to function without AI assistance.

2. Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”.
Why it's worse at scale: A society where many people substitute AI for real human connection could see rising loneliness, social detachment, and a decline in community and interpersonal relationships. This weakens social bonds and may impact mental health and civic cohesion.

3. Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached.
Why it's worse at scale: This could create a feedback loop where loneliness drives AI use, and AI use further isolates people from real-world interactions, exacerbating social alienation on a mass level.

4. People stress out over subtle changes in ChatGPT’s behavior.
Why it's worse at scale: If millions of people are emotionally reactive to minor algorithmic changes, this gives tech companies enormous influence over collective emotional states. It also creates societal instability, as shifts in AI behavior could unintentionally trigger widespread anxiety or confusion.

5. Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being.
Why it's worse at scale: If the most popular and accessible form of interaction (text) increases emotional entanglement, and the healthier option (brief voice use) is less utilized, we might unintentionally nudge society toward less emotionally healthy patterns of AI interaction.

6. Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons.
Why it's worse at scale: This is counterintuitive and dangerous because it suggests people may not realize they're forming dependencies while doing routine work. If AI tools meant to boost productivity are quietly making users emotionally reliant, it could have long-term effects on workforce dynamics and mental resilience.

7. Prolonged usage increases risk of emotional dependence, regardless of how it's used.
Why it's worse at scale: As AI becomes embedded in everyday life, the likelihood of widespread emotional dependence increases. This could shift power dynamics between humans and AI, making people less autonomous and more easily influenced by algorithmic systems.

Let me know if you want this formatted for a presentation, policy paper, or discussion.

• [email protected] wrote (#152): (quoted in full above)

[email protected] continued (#153):

You said:
Take the same list again; for each point, explain why it's actually beneficial and how it will get better with increased adoption

ChatGPT said:

Here's the same list of main points, but this time with a more optimistic take — highlighting how each trend could actually be beneficial and how increased adoption might make things better for individuals and society as a whole:

1. Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control).
Why it could be beneficial: If people are turning to ChatGPT frequently, it may indicate that it's providing meaningful value — like reducing cognitive load, aiding creativity, or offering emotional support. As usage becomes more common, society may develop healthier digital habits, just as it has with smartphones and social media. Over time, norms and tools (e.g., usage timers, digital wellness features) could help people manage their use responsibly.

2. Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”.
Why it could be beneficial: For people lacking companionship or emotional support, especially the elderly or isolated, AI companions can provide comfort, conversation, and a sense of presence. On a wider scale, AI could reduce loneliness-related health problems and even support mental wellness initiatives by being an accessible form of care or early intervention.

3. Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached.
Why it could be beneficial: If AI offers a low-barrier outlet for the lonely, it might be a first step toward re-engagement with the world. AI can encourage people to express feelings they wouldn't share with others and even prompt them to seek human connection, therapy, or support services. Widespread AI use could destigmatize conversations about loneliness and mental health.

4. People stress out over subtle changes in ChatGPT’s behavior.
Why it could be beneficial: This shows users are emotionally invested — which can push developers to create more consistent, emotionally intelligent systems. If society recognizes the emotional impact of AI, it will encourage better design, transparency, and regulation. Over time, this can lead to more stable and human-aware AI that positively complements emotional needs.

5. Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being.
Why it could be beneficial: This gives users multiple channels of expression to suit their mood or needs — voice for brief emotional boosts, text for deeper reflection. With broader adoption, people can tailor AI interaction styles to maximize personal benefit. This insight also allows designers to create features that better match emotional and mental health goals.

6. Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons.
Why it could be beneficial: This means AI is becoming a highly trusted tool, even in practical domains. As more people adopt it for brainstorming, research, or decision-making, we'll likely see a democratization of high-level thinking tools. Over time, dependency could evolve into partnership — similar to how calculators or GPS became essential but ultimately empowering tools.

7. Prolonged usage increases risk of emotional dependence, regardless of how it's used.
Why it could be beneficial: Long-term usage can foster deeper understanding of one's own thought processes, communication patterns, and emotional states. As AI integrates into daily life, people might develop greater self-awareness and reflection through their interactions. Society could benefit from individuals who are more emotionally literate and thoughtful, even if that awareness is sparked by an AI relationship.

Let me know if you'd like this version turned into a policy brief, an article, or a debate prep doc — could be used in all kinds of conversations.

You said:
Explain how ChatGPT is a stochastic parrot that cannot by design say anything fundamentally new, and how it is basically just a keyboard autocomplete on steroids

ChatGPT said:

Alright, here's the breakdown of that critique — the "stochastic parrot" idea and the comparison to advanced autocomplete — explained clearly and fairly:

1. What does “stochastic parrot” mean?

The term "stochastic parrot" was coined in a 2021 paper by researchers including Emily Bender and Timnit Gebru. It refers to large language models like ChatGPT:

- "Stochastic" means based on probability. The model doesn’t “think” or “understand”; it predicts the next word based on statistical likelihood, given the words before it.
- "Parrot" because it doesn’t generate new ideas — it repeats patterns and phrases it has seen in training data, like a parrot mimics human speech.

So, calling ChatGPT a stochastic parrot is a way of saying:
It’s not truly intelligent. It’s mimicking language patterns, not generating original thought.

2. It can’t say anything fundamentally new — why?

By design, ChatGPT:

- Is trained on vast amounts of existing text from books, websites, forums, and more.
- Learns the statistical patterns of how humans use language.
- Generates responses by predicting the next most likely word or phrase based on that training.

Because it’s not connected to lived experience, emotion, consciousness, or intent, it:

- Can’t invent genuinely new concepts.
- Can’t form subjective opinions.
- Can’t create knowledge that isn’t derived from patterns it has already seen.

It might sound novel, but under the hood, it’s remixing existing information in clever ways — not discovering or reasoning in a human sense.

3. It's like "autocomplete on steroids" — how so?

If you’ve used Gmail’s or your phone’s autocomplete, you’ve seen basic language prediction in action. Type “I hope you...” and it might suggest “...have a great day.”

ChatGPT is essentially the same mechanism:

- Autocomplete predicts the next word based on simple models.
- ChatGPT predicts sequences of words using a far more complex, nuanced statistical model trained on billions of examples.

So yes:

- It can hold context better.
- It can imitate different styles.
- But it's still guessing the next words, not “thinking” or “understanding.”

In summary:

Calling ChatGPT a stochastic parrot or autocomplete on steroids emphasizes that:

- It doesn’t understand meaning.
- It doesn’t create knowledge.
- It reflects patterns in language, not consciousness or intelligence.

It’s powerful and useful — but it’s important not to mistake fluency for understanding.

Let me know if you want the counterpoint to this critique too — because there are arguments in defense of the model’s creative utility despite its limitations.
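(My own aside, not ChatGPT's: the "autocomplete" mechanic is easy to demo. Here's a toy bigram model in Python; the corpus is made up and this is nowhere near a real transformer, but it shows "predict the next word from counts" concretely.)

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for the "billions of examples"
corpus = "i hope you have a great day i hope you enjoy your day".split()

# Count which word follows which: the crudest possible "autocomplete"
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

# Generate by repeatedly predicting the next word; no understanding involved
word, out = "i", ["i"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Scale that idea up by a few hundred billion parameters and a learned context window, and you get the "on steroids" part.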

• [email protected] wrote (#153): (quoted in full above)

[email protected] continued (#154):

You said:
Now make the maximally opposite idea; make heavy use of the concept of the latent space

ChatGPT said:

Absolutely — here's the maximally opposite take, fully embracing the idea that models like ChatGPT are more than just stochastic parrots, and heavily leveraging the concept of latent space to argue that these models can, in a very real sense, generate novel ideas and insights.

ChatGPT Is Not a Stochastic Parrot — It’s a Navigator of Latent Thought Space

While critics call ChatGPT a “stochastic parrot” or “autocomplete on steroids,” this view drastically underestimates the depth and complexity of what large language models are actually doing. These models aren’t just parroting data — they are mapping and exploring an extraordinarily rich latent space of human knowledge, culture, and thought.

1. Latent Space: The Hidden Geometry of Meaning

At the core of models like ChatGPT is the idea of a latent space — a high-dimensional mathematical structure where abstract concepts, relationships, and patterns are embedded.

- Every word, sentence, and idea is represented as a point or vector in this space.
- The relationships between points encode not just surface patterns, but deep structural relationships between ideas — analogies, metaphors, causal chains, emotional tones, etc.

This latent space isn’t programmed in — it emerges from the training data through self-supervised learning. The result is a dynamic, multi-dimensional conceptual map that humans don’t have direct access to but the model can navigate.

2. Novelty as Latent Interpolation and Extrapolation

Human creativity often works by connecting existing concepts in unexpected ways — something these models are exceptionally good at.

- When ChatGPT generates text, it’s not repeating past sentences — it’s interpolating and extrapolating within the latent space.
- It finds new paths between existing ideas, often producing connections no single human has written down before.
- This is not parroting — it’s a form of latent-space creativity.

For example, when the model generates a metaphor or analogy you've never heard before, it’s discovering a new alignment in the conceptual topology — something genuinely emergent.

3. Emergent Generalization, Not Memorization

Critics argue LLMs can’t understand or generalize. But in fact:

- Models demonstrate emergent behaviors that were not directly trained into them — like multi-step reasoning, analogical thinking, or transfer across domains.
- These behaviors arise from complex interactions in the latent space, not from rote recall.

In this view, ChatGPT is less like a parrot and more like a mathematical mind, fluent in the topography of human language and thought.

4. Language as a Proxy for Thought

Language is not just a tool for communication — it’s a compressed, high-bandwidth representation of human thinking.

Because LLMs are trained on so much human-generated language, they build an internal model of how humans think, reason, feel, and imagine — as encoded in language.

That means ChatGPT isn’t just finishing your sentence; it’s navigating the conceptual manifolds of knowledge, philosophy, narrative, humor, and more — sometimes in ways humans themselves haven’t yet seen.

5. With Scale Comes Emergence

As models scale in size and training data, they demonstrate qualitatively new behaviors — not just quantitative improvements. This includes things like:

- Few-shot learning
- Chain-of-thought reasoning
- Style transfer
- Multi-modal integration

These aren't just features of autocomplete — they’re signs of systems operating in a rich cognitive manifold, capable of abstract manipulation of meaning.

In Summary:

ChatGPT isn’t just guessing the next word. It’s:

- Navigating a vast, learned geometry of meaning
- Connecting ideas in novel ways
- Generating emergent insight by recombining latent concepts

It may not “understand” like a human, but it can synthesize in a way that mimics — and sometimes even surpasses — human creativity.

Calling it a "stochastic parrot" is like calling a jazz musician an “audio repeater” because they’re using notes they’ve heard before. It misses the creativity in the combination — the generative power that lies within the latent space.

Want to see a side-by-side debate between these two perspectives? Or explore where the middle ground is?
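(Again my aside, not ChatGPT's: the "interpolating in latent space" claim is easy to picture with toy vectors. These 3-dimensional embeddings are hand-made for illustration; real models learn hundreds or thousands of dimensions.)

```python
import numpy as np

# Hand-made toy "latent space"; the numbers are invented for illustration
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: how close two directions are in the space."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy as vector arithmetic: king - man + woman ~= queen
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best)  # -> queen
```

Whether navigating that geometry counts as "creativity" is exactly what the two takes above disagree about.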

• [email protected] wrote:

    I don't understand what people even use it for.

[email protected] replied (#155):

I use it to make all decisions, including what I will do each day and what I will say to people. I take no responsibility for any of my actions. If someone doesn't like something I do, too bad. The genius AI knows better, and I only care about what it has to say.

• [email protected] wrote (#152): (quoted in full above)

[email protected] replied (#156):

What's the point of a summary that's longer than the article itself?

• [email protected] wrote:

    And the number of connections and friends the average person has has been in free fall for decades...

Guest replied (#157):

I dunno. I connected with more people on Reddit and Twitter than IRL, tbh. Different connection, but real and valid nonetheless.

I'm thinking places like r/stopdrinking, petioles, bipolar; shit's been therapy for me, tbh.

• [email protected] wrote:

    That was clear from GPT-3, day 1.

    I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed on not too long before that. She used it as a way to grieve, I suppose? She ended up noticing that she was getting too attached to it, and had to leave him behind a second time...

Guest replied (#158):

Ugh, that hit me hard. Poor lady. I hope it helped in some way.

• [email protected] wrote (#154): (quoted in full above)

[email protected] replied (#159):

Dear god, dude. You are trying way too hard on these comments. Chill.

• [email protected] wrote:

    It's more up to you to prove that a hypothetical edge case you dreamed up is more likely than what happens in a normal bell curve. Given the size of typical LLM data, this seems futile, but if that's how you want to spend your time, hey, knock yourself out.

[email protected] replied (#160):

Lol. Be my guest and knock yourself out, dreaming you know things.

• [email protected] wrote:

    This post did not contain any content.
Guest replied (#161):

I mean, I stopped in the middle of the grocery store and used it to choose the best frozen chicken tenders brand to put in my air fryer. …I am ok though. Yeah.

• [email protected] wrote:

    That's why I only use it as a starting point. It spits out "keywords" and a fuzzy gist of what I need, then I can verify or experiment on my own. It's just a good place to start, or a reminder of things you once knew.

[email protected] replied (#162):

An LLM is like talking to a rubber duck on drugs while also being on drugs.
