agnos.is Forums

Something Bizarre Is Happening to People Who Use ChatGPT a Lot

Technology · 233 Posts · 150 Posters
  • [email protected]

    I remember 90s search engines. AltaVista was pretty OK at searching the small web that existed, but I'm pretty sure I can get better answers from the LLMs tied to Kagi search.

    AltaVista also got blown out of the water by Google (back when it was just a search engine), and that was in the 00s, not the 90s. 25 to 35 years ago is a long time; search is so, so much better these days (or worse, if you use a "search" engine like Google now).

    Don't be the product.

    [email protected] #219

    Depending on what I needed I remember using AltaVista, AskJeeves, Dogpile, and I feel like later on MetaCrawler or something like that (would search multiple search engines for you and ordered them scored based on platform and relevancy iirc?)
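    As an aside, the rank-merging trick those metasearch engines used can be sketched in a few lines (a hypothetical illustration, not MetaCrawler's actual scoring; the engine names and weighting scheme here are made up):

    ```python
    # Hypothetical sketch of 90s-style metasearch merging: query several engines,
    # then score each URL by which engines returned it and how highly each ranked it.
    from collections import defaultdict

    def merge_results(engine_results, engine_weights=None):
        """engine_results maps engine name -> list of URLs in ranked order."""
        engine_weights = engine_weights or {}
        scores = defaultdict(float)
        for engine, urls in engine_results.items():
            weight = engine_weights.get(engine, 1.0)  # trust some engines more
            for rank, url in enumerate(urls):
                scores[url] += weight / (rank + 1)    # earlier rank -> bigger boost
        return sorted(scores, key=scores.get, reverse=True)

    merged = merge_results({
        "altavista": ["a.example", "b.example", "c.example"],
        "excite":    ["b.example", "c.example", "a.example"],
    })
    print(merged)  # b.example first: ranks #2 and #1 beat a.example's #1 and #3
    ```

    A URL that several engines agree on outranks one that a single engine ranked first, which matches the "scored based on platform and relevancy" behavior described above.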

    • B [email protected]

      Depending on what I needed I remember using AltaVista, AskJeeves, Dogpile, and I feel like later on MetaCrawler or something like that (would search multiple search engines for you and ordered them scored based on platform and relevancy iirc?)

      [email protected] #220

      I miss AskJeeves. Those were some great commercials too.

      • F [email protected]

        That's... Impressively braindead

        Guest #221

        That’s the joke!

        • [email protected]

          That's like saying cuneiform on small tablets is the same as writing after the Gutenberg press. "It's all just symbols marked on a medium."

          But I think you know that there's quite a difference in being able to print books on an industrial scale and a few priests knowing how to write down taxation in cuneiform on stone tablets.

          I actually do have(and still use) the same computer I had in the late 90's

          My point exactly. You're saying things haven't changed because you're stuck. The world has changed in 30 years. Arguably more than in any previous set of 30 year cycles.

          30 000 hours uploaded every hour. In 1995 there weren't even 30 000 websites. >5.5 billion users versus 16 million. Do you have any idea just how huge that difference is? Here's a hint, the difference is about ~5.5 billion people.

          Like what's your crusade in arguing that "it's all still the same" when it's obviously completely different.

          [email protected] #222

          My guy, everything has changed and yet nothing has changed. You are arguing specifics while I'm arguing the bigger picture. I'm not saying you're wrong or I'm wrong; I just think we have different views and are measuring with different kinds of rulers. And yet I don't think we really disagree: this seems to be arguing over the semantics of what videos were in ~95 (very low-res and shit quality, FYI) and how many people had access to those shit-quality videos. I'm just saying they were always there.

          • I [email protected]

            My guy, everything has changed and yet nothing has changed. You are arguing specifics while I'm arguing bigger picture. I'm not saying your wrong or I'm wrong, I just think we have a different view and measuring with a different kind of ruler, and yet I don't think we are really in disagreement, this seems to be arguing over semantics of what videos were in ~95 (very low res and shit quality fyi) and how many people had access to those shit quality videos - I'm just saying they were always there

            [email protected] #223

            "I'm arguing bigger picture"

            No, you're specifically doing the opposite.

            You're taking what you did as a novelty, niche hobby in the 90s and saying "it's just the same now".

            It literally isn't. The difference in users is about 5.5 billion.

            over semantics of what videos were in ~95 (very low res and shit quality fyi)

            Consumer digital video cameras didn't even exist until 1995.

            I'm just saying they were always there

            Arguing that "it's the same now" is exactly like saying literature was "exactly the same" 500 years ago and 5,000 years ago because they share some very base-level features.

            Again, the difference in users compared to the 90s is about 5.5 billion. Wireless connections are everywhere.

            Imagine how dumb it would be to argue that cars are the same as when they were invented. "They still have wheels and an engine and you steer them to go about. We had the exact same thing in 1885!"

            • [email protected]

              It may not “understand” like a human, but it can synthesize in a way that mimics — and sometimes even surpasses — human creativity.

              Calling it a “stochastic parrot” is like calling a jazz musician an “audio repeater” because they’re using notes they’ve heard before. It misses the creativity in the combination — the generative power that lies within the latent space.

              It reads like the brainless drivel that corporate drones are forced to churn out, complete with meaningless fluff words. This is why the executives love AI, they read and expect that trash all the time and think it's suitable for everything.

              Executives are perfectly content with what looks good at a cursory glance and don't care about what's actually good in practice because their job is to make themselves seem more important than they actually are.

              [email protected] #224

              I literally asked it to make the maximalist case against the idea that LLMs are just autocomplete, and that's exactly what it did.

              The message before that did the opposite case.

              • A [email protected]

                Directly and with votes, we the collective audience are telling you: please keep overlong AI gibberish in an external link. If that makes it get fewer clicks, then perhaps it's not that interesting.

                [email protected] #225

                I'll just cut it down to bite sized pieces and stop labelling it as AI generated

                • I [email protected]

                  You didn't, I did

                  The article discusses a study by OpenAI and MIT Media Lab revealing that heavy ChatGPT users — dubbed "power users" — are showing signs of addiction to the chatbot. These signs include preoccupation, withdrawal, mood changes, and loss of control. While most users don't form emotional connections with ChatGPT, those who use it more frequently tend to view it as a "friend," especially if they feel lonely or emotionally unfulfilled.

                  Interestingly, people using ChatGPT for personal reasons (like sharing emotions) showed less emotional dependency than those using it for non-personal tasks (like brainstorming). Also, text-based interactions led to more emotional language than voice interactions, with brief voice use correlating to better well-being. Overall, prolonged use — regardless of purpose — increased the risk of emotional dependence.

                  You said:

                  Make a list of the main points and for each explain why it will be worse when it is society wide

                  ChatGPT said:

                  Here’s a list of the main points from the article, along with why each could be more concerning on a society-wide scale:

                  Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control):
                  
                  Why it's worse at scale:
                  
                  If large numbers of people become dependent on AI tools, especially to the point of addiction, it could lead to widespread mental health issues, reduced productivity, and a society increasingly unable to function without AI assistance.
                  
                  Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”:
                  
                  Why it's worse at scale:
                  
                  A society where many people substitute AI for real human connection could see rising loneliness, social detachment, and a decline in community and interpersonal relationships. This weakens social bonds and may impact mental health and civic cohesion.
                  
                  Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached:
                  
                  Why it's worse at scale:
                  
                  This could create a feedback loop where loneliness drives AI use, and AI use further isolates people from real-world interactions, exacerbating social alienation on a mass level.
                  
                  People stress out over subtle changes in ChatGPT’s behavior:
                  
                  Why it's worse at scale:
                  
                  If millions of people are emotionally reactive to minor algorithmic changes, this gives tech companies enormous influence over collective emotional states. It also creates societal instability, as shifts in AI behavior could unintentionally trigger widespread anxiety or confusion.
                  
                  Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being:
                  
                  Why it's worse at scale:
                  
                  If the most popular and accessible form of interaction (text) increases emotional entanglement, and the healthier option (brief voice use) is less utilized, we might unintentionally nudge society toward less emotionally healthy patterns of AI interaction.
                  
                  Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons:
                  
                  Why it's worse at scale:
                  
                  This is counterintuitive and dangerous because it suggests people may not realize they're forming dependencies while doing routine work. If AI tools meant to boost productivity are quietly making users emotionally reliant, it could have long-term effects on workforce dynamics and mental resilience.
                  
                  Prolonged usage increases risk of emotional dependence, regardless of how it's used:
                  
                  Why it's worse at scale:
                  
                  As AI becomes embedded in everyday life, the likelihood of widespread emotional dependence increases. This could shift power dynamics between humans and AI, making people less autonomous and more easily influenced by algorithmic systems.
                  

                  Let me know if you want this formatted for a presentation, policy paper, or discussion.

                  [email protected] #226

                  That's really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.

                  Namely, the increased power of AI's big tech company owners over people that emotional dependence creates.

                  It's not as if these concepts aren't widely discussed online, everything from Meta's emotional manipulation experiments or Cambridge Analytica through to the meltdowns Replika owners had over changes to the algorithm are relevant here.

                  • B [email protected]

                    Depending on what I needed I remember using AltaVista, AskJeeves, Dogpile, and I feel like later on MetaCrawler or something like that (would search multiple search engines for you and ordered them scored based on platform and relevancy iirc?)

                    [email protected] #227

                    And Hastalavista if you wanted to find things that AltaVista didn't.

                    • B [email protected]

                      I'm confused. If someone is in a place where they are choosing between dating a body pillow and suicide, then they have DEFINITELY made a wrong turn somewhere. They need some kind of assistance, and I hope they can get what they need, no matter what they choose.

                      I think my statement about "a wrong turn in life" is being interpreted too strongly; it wasn't intended to be such a strong and absolute statement of failure. Someone who's taken a wrong turn has simply made a mistake. It could be minor, it could be serious. I'm not saying their life is worthless. I've made a TON of wrong turns myself.

                      [email protected] #228

                      Trouble is, your statement was in answer to @[email protected]'s comment that labeling lonely people as losers is problematic.

                      Also, it still looks like you think people can only be lonely as a consequence of their own mistakes? Serious illness, neurodivergence, trauma, refugee status, etc. can all produce similar loneliness in people who did nothing to "cause" it.

                      • liv@lemmy.nzL [email protected]

                        Trouble is your statement was in answer to @[email protected]'s comment that labeling lonely people as losers is problematic.

                        Also it still looks like you think people can only be lonely as a consequence of their own mistakes? Serious illness, neurodivergence, trauma, refugee status etc can all produce similar effects of loneliness in people who did nothing to "cause" it.

                        [email protected] #229

                        That's an excellent point that I wasn't considering. Thank you for explaining what I was missing.

                        • liv@lemmy.nzL [email protected]

                          That's really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.

                          Namely, the increased power of AI's big tech company owners over people that emotional dependence creates.

                          It's not as if these concepts aren't widely discussed online, everything from Meta's emotional manipulation experiments or Cambridge Analytica through to the meltdowns Replika owners had over changes to the algorithm are relevant here.

                          [email protected] #230

                          It's the 4th point

                          • I [email protected]

                            It's the 4th point

                            [email protected] #231

                            Sort of, but I think "influence over emotional states" understates it and is just the tip of the iceberg. It also makes the problem sound passive and accidental. The real problem will be overt control, as a logical extension of the kinds of trade-offs we already see people make about, for example, data privacy. With the Replika fiasco, I bet heaps of those people would have paid good money to get their virtual love interests de-"lobotomized".

                            • liv@lemmy.nzL [email protected]

                              Sort of but I think influence over emotional states is understating it and just the tip of the iceberg. It also made it sound passive and accidental. The real problem will be overt control as a logical extension to the kinds of trade offs we already see people make about, for example data privacy. With the Replika fiasco I bet heaps of those people would have paid good money to get their virtual love interests de-"lobotomized".

                              [email protected] #232

                              I think this power to shape the available knowledge (removing it, paywalling it, gatekeeping it based on discrimination, leveraging it, and finally manipulating it for advertising, state security, and personal reasons) is why it should be illegal to privately own ML/AI models of any kind. Drive them all underground, and only let the open ones benefit from sales in public.

                              • P [email protected]

                                It's too bad that some people seem not to comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI... we used to call OCR AI; now we know better.
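                                For what it's worth, the "which next word fits best based on the words before it" idea can be shown with a toy bigram model (purely illustrative; real LLMs use neural networks conditioned on long contexts of subword tokens, not word-frequency tables):

                                ```python
                                # Toy next-word predictor: count which word follows which, then pick the
                                # most frequent follower. Real LLMs use far more context than one word.
                                from collections import Counter, defaultdict

                                def train_bigrams(text):
                                    """Count, for each word, the words that follow it and how often."""
                                    words = text.lower().split()
                                    follows = defaultdict(Counter)
                                    for prev, nxt in zip(words, words[1:]):
                                        follows[prev][nxt] += 1
                                    return follows

                                def predict_next(follows, word):
                                    """Return the most common word seen after `word`, or None if unseen."""
                                    if word not in follows:
                                        return None
                                    return follows[word].most_common(1)[0][0]

                                model = train_bigrams("the cat sat on the mat and the cat slept")
                                print(predict_next(model, "the"))  # prints "cat" (follows "the" twice)
                                ```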

                                [email protected] #233

                                LLM is a subset of ML, which is a subset of AI.
