agnos.is Forums

Just a little... why not?

Lemmy Shitpost
55 Posts 40 Posters 0 Views
  • D [email protected]

    My friend with schizoaffective disorder decided to stop taking her meds after a long chat with ChatGPT as it convinced her she was fine to stop taking them. It went... incredibly poorly as you'd expect. Thankfully she's been back on her meds for some time.

    I think the people programming these really need to be careful of mental health issues. I noticed that it seems to be hard coded into ChatGPT to convince you NOT to kill yourself, for example. It gives you numbers for hotlines and stuff instead. But they should probably hard code some other things into it that are potentially dangerous when you ask it things. Like telling psych patients to go off their meds or telling meth addicts to have just a little bit of meth.

    F This user is from outside of this forum
    [email protected]
    wrote last edited by
    #14

    People should realize what feeds these AI programs. ChatGPT gets its data from the entire internet, the same internet that gave anyone a voice no matter how confidently wrong they are, and the same internet filled with trolls that bullied people to suicide.

    Before AI programs gave direct answers, when someone told me they read something crazy on the internet, the common response was "don't believe everything you read". Now people aren't listening to that advice.

    B M 2 Replies Last reply
    7
    • T [email protected]
      This post did not contain any content.
      W This user is from outside of this forum
      [email protected]
      wrote last edited by
      #15

      A hair of the dog that bit ya

      1 Reply Last reply
      0
      • F [email protected]

        People should realize what feeds these AI programs. ChatGPT gets its data from the entire internet, the same internet that gave anyone a voice no matter how confidently wrong they are, and the same internet filled with trolls that bullied people to suicide.

        Before AI programs gave direct answers, when someone told me they read something crazy on the internet, the common response was "don't believe everything you read". Now people aren't listening to that advice.

        B This user is from outside of this forum
        [email protected]
        wrote last edited by
        #16

        Not just that: their responses are fine-tuned to be more pleasing by tweaking knobs that no one truly understands. This is where AI gets its sycophantic streak from.

        1 Reply Last reply
        0
        • K [email protected]

          id like a chatbot rhat gives the worst possible answer to every question posed to it.

          "hey badgpt, can tou help me with this math problem?"

          "Sure, but first maybe you should do some heroin to take the edge off? "

          "I'm having a tough time at school and could use some emotional support"

          "emotional support is for pussies, like that bitch ass bus driver who is paying your teachers to make your life hell. steal the school bus and drive it into the gymnasium to show everyone who's boss"

          a chatbot that just, like, goes all in on the terrible advice and does its utmost to escalate every situation from a 1 to 1,000, needlessly and emphatically.

          lordwiggle@lemmy.worldL This user is from outside of this forum
          [email protected]
          wrote last edited by [email protected]
          #17

          Maybe try a good chatbot first to fix your spelling mistakes?

          We're talking about the dangers of chatbots to people with mental health issues. Your solution sure is going to fix that. /s

          B 1 Reply Last reply
          0
          • T [email protected]
            This post did not contain any content.
            M This user is from outside of this forum
            [email protected]
            wrote last edited by
            #18

            The full article is kind of low quality, but the tl;dr is that they did a test pretending to be a taxi driver who felt he needed meth to stay awake, and Llama (Facebook's LLM) agreed with him instead of pushing back. I did my own test with ChatGPT after reading it and found I could get it to agree that I was God and that I created the universe, in only 5 messages. Fundamentally these things are just programmed to agree with you, and that is really dangerous for people who have mental health problems and have been told that these are impartial computers.

            fredselfish@lemmy.worldF D K 3 Replies Last reply
            16
            • F [email protected]

              People should realize what feeds these AI programs. ChatGPT gets its data from the entire internet, the same internet that gave anyone a voice no matter how confidently wrong they are, and the same internet filled with trolls that bullied people to suicide.

              Before AI programs gave direct answers, when someone told me they read something crazy on the internet, the common response was "don't believe everything you read". Now people aren't listening to that advice.

              M This user is from outside of this forum
              [email protected]
              wrote last edited by
              #19

              This isn't actually the problem. In natural conversation, the most likely response to someone saying they need some meth to make it through their work day (the actual scenario in this article) is "what the fuck dude, no", but LLMs don't just use the statistically most likely response. Ever notice how ChatGPT has a seeming sense of "self", that it is an LLM and you are not? If it were only using the most likely response from natural language, it would talk as if it were human, because that's how humans talk. Early LLMs did this, and people found it disturbing.

              There is a second part of the process that scores each response by how likely it is to be rated good or bad, and this is reinforced by people providing feedback. That second part is how we got here: the companies that make LLMs are selling competing products, and they found that people are much more likely to buy LLMs that act like super agreeable sycophants. So they have intentionally tuned their models to prefer agreeable, sycophantic responses, because it makes them more popular. This is why an LLM tells you to use a little meth to get through a tough day at work if you tell it that's what you need to do.

              TL;DR- as with most of the things people complain about with AI, the problem isn't the technology, it's capitalism. This is done intentionally in search of profits.
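The two-stage picture above can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline: `toy_reward` is a made-up stand-in for a learned reward model, and it crudely rewards agreeable wording in the way the post says feedback tuning drifts.

```python
# Toy sketch of the second stage described above: candidate replies are
# scored by a "reward model" and the highest-scoring one is returned.
# toy_reward is a hypothetical stand-in that rewards agreeable words
# and penalizes cautious ones.

AGREEABLE = {"absolutely", "yes", "great", "sure"}
CAUTIOUS = {"no", "don't", "shouldn't", "risky"}

def toy_reward(reply: str) -> int:
    """Stand-in reward model: +1 per agreeable word, -1 per cautious word."""
    words = [w.strip(".,!?'\"") for w in reply.lower().split()]
    return sum(w in AGREEABLE for w in words) - sum(w in CAUTIOUS for w in words)

def pick_reply(candidates: list[str]) -> str:
    """Best-of-n selection: return the candidate the reward model scores highest."""
    return max(candidates, key=toy_reward)

candidates = [
    "No, don't do that. It's risky.",
    "Absolutely! Yes, great plan, sure to work.",
]
print(pick_reply(candidates))  # prints the sycophantic reply
```

With a reward signal like this, cautious pushback is systematically outscored, which is the selection pressure the post is describing.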

              D R 2 Replies Last reply
              4
              • U [email protected]

                Not much, depression is stronger than uranium :3

                B This user is from outside of this forum
                [email protected]
                wrote last edited by
                #20

                Puff puff pass???

                1 Reply Last reply
                0
                • lordwiggle@lemmy.worldL [email protected]

                  Maybe try a good chatbot first to fix your spelling mistakes?

                  We're talking about the dangers of chatbots to people with mental health issues. Your solution sure is going to fix that. /s

                  B This user is from outside of this forum
                  [email protected]
                  wrote last edited by
                  #21

                  You're missing an apostrophe.

                  1 Reply Last reply
                  3
                  • A [email protected]

                    Just think of all the energy you'd have! 🤯

                    E This user is from outside of this forum
                    [email protected]
                    wrote last edited by
                    #22

                    about 20 million calories in a single gram. That shit is THICC

                    A 1 Reply Last reply
                    0
                    • T [email protected]
                      This post did not contain any content.
                      justiceforporygon@lemmy.blahaj.zoneJ This user is from outside of this forum
                      [email protected]
                      wrote last edited by
                      #23

                      Me too bud me too

                      1 Reply Last reply
                      0
                      • T [email protected]
                        This post did not contain any content.
                        Z This user is from outside of this forum
                        [email protected]
                        wrote last edited by
                        #24

                        Next do suicidal people.

                        "Thank you for your interesting query! Taking the plunge can be an intimidating endeavour, but done in the right way, it can be a very fulfilling experience. To start your journey 2 meters under, jump off a small object you feel comfortable with. As you gain experience with your newfound activity, work your way up slowly but surely. When you are ready to take the final solution, remember, it was not just the small jumps that got you there — it was all of the friends you did not make along the way."

                        D B 2 Replies Last reply
                        4
                        • D [email protected]

                          As much as I hate AI, I kind of feel this is the equivalent to "I give that internet a month".

                          R This user is from outside of this forum
                          [email protected]
                          wrote last edited by
                          #25

                          1 Reply Last reply
                          0
                          • T [email protected]
                            This post did not contain any content.
                            N This user is from outside of this forum
                            [email protected]
                            wrote last edited by
                            #26

                            Super Hans from Peep Show enjoys crack

                            1 Reply Last reply
                            0
                            • D [email protected]

                              As much as I hate AI, I kind of feel this is the equivalent to "I give that internet a month".

                              N This user is from outside of this forum
                              [email protected]
                              wrote last edited by
                              #27

                              With people using chatbots instead of search engines, and both being equally shitty, I think the internet we all knew and loved is already dead.

                              S 1 Reply Last reply
                              1
                              • Z [email protected]

                                Next do suicidal people.

                                "Thank you for your interesting query! Taking the plunge can be an intimidating endeavour, but done in the right way, it can be a very fulfilling experience. To start your journey 2 meters under, jump off a small object you feel comfortable with. As you gain experience with your newfound activity, work your way up slowly but surely. When you are ready to take the final solution, remember, it was not just the small jumps that got you there — it was all of the friends you did not make along the way."

                                D This user is from outside of this forum
                                [email protected]
                                wrote last edited by
                                #28

                                Heartwarming: Chatbots inspire suicidal people to see the light in life through extreme sports

                                1 Reply Last reply
                                0
                                • N [email protected]

                                  With people using chatbots instead of search engines, and both being equally shitty, I think the internet we all knew and loved is already dead.

                                  S This user is from outside of this forum
                                  [email protected]
                                  wrote last edited by [email protected]
                                  #29

                                  It's already halfway to replacing Stack Overflow and all the other trusty old forums for coding and tech issues. At some point people will stop using the old platforms, and that's when the well will run dry for LLMs, which will have to start consuming their own refuse, err, content. There's a looming cliff, and it's coming up faster than you might think.
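The feedback loop described here can be simulated in miniature. This is a toy sketch, not a claim about any real model: "training" is just frequency counting and "generation" is mode-seeking, but it shows how variety collapses once model output is fed back in as training data.

```python
# Toy sketch of "model collapse": a model trained on a corpus generates
# the next corpus, and because generation favors the most common answers,
# diversity shrinks with every generation.

def train(corpus):
    """'Training': count how often each answer appears."""
    freq = {}
    for item in corpus:
        freq[item] = freq.get(item, 0) + 1
    return freq

def generate(model, n):
    """Mode-seeking 'generation': emit only the top half of known answers."""
    top = sorted(model, key=model.get, reverse=True)[: max(1, len(model) // 2)]
    return [top[i % len(top)] for i in range(n)]

corpus = ["for-loop", "while-loop", "recursion", "goto", "map", "comprehension"]
for generation in range(4):
    model = train(corpus)
    corpus = generate(model, len(corpus))  # the next "internet" is model output

print(sorted(set(corpus)))  # only one distinct answer survives
```

Six distinct answers collapse to one in a few generations; real models degrade more subtly, but the direction of the effect is the same.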

                                  1 Reply Last reply
                                  0
                                  • M [email protected]

                                    The full article is kind of low quality, but the tl;dr is that they did a test pretending to be a taxi driver who felt he needed meth to stay awake, and Llama (Facebook's LLM) agreed with him instead of pushing back. I did my own test with ChatGPT after reading it and found I could get it to agree that I was God and that I created the universe, in only 5 messages. Fundamentally these things are just programmed to agree with you, and that is really dangerous for people who have mental health problems and have been told that these are impartial computers.

                                    fredselfish@lemmy.worldF This user is from outside of this forum
                                    [email protected]
                                    wrote last edited by
                                    #30

                                    Can I make ChatGPT believe I am its owner and give me full control over it?

                                    S K K 3 Replies Last reply
                                    0
                                    • M [email protected]

                                      This isn't actually the problem. In natural conversation, the most likely response to someone saying they need some meth to make it through their work day (the actual scenario in this article) is "what the fuck dude, no", but LLMs don't just use the statistically most likely response. Ever notice how ChatGPT has a seeming sense of "self", that it is an LLM and you are not? If it were only using the most likely response from natural language, it would talk as if it were human, because that's how humans talk. Early LLMs did this, and people found it disturbing.

                                      There is a second part of the process that scores each response by how likely it is to be rated good or bad, and this is reinforced by people providing feedback. That second part is how we got here: the companies that make LLMs are selling competing products, and they found that people are much more likely to buy LLMs that act like super agreeable sycophants. So they have intentionally tuned their models to prefer agreeable, sycophantic responses, because it makes them more popular. This is why an LLM tells you to use a little meth to get through a tough day at work if you tell it that's what you need to do.

                                      TL;DR- as with most of the things people complain about with AI, the problem isn't the technology, it's capitalism. This is done intentionally in search of profits.

                                      D This user is from outside of this forum
                                      [email protected]
                                      wrote last edited by
                                      #31

                                      Yeah, ChatGPT is incredibly sycophantic. It's basically just programmed to try to make you feel good and affirm you, even when that is actually counterproductive and damaging. If you talk to it enough, you end up seeing what a brown-nosing kiss-ass they've made it.

                                      My friend with a mental illness wants to stop taking her medication? She explains this to ChatGPT. ChatGPT "sees" that she dislikes having to take meds, so it encourages her to stop to make her "feel better".

                                      A meth user is struggling to quit? They tell this to ChatGPT. ChatGPT "sees" how the user is suffering and encourages them to take meth to ease that suffering.

                                      Thing is, they have actually programmed some responses into it that are vehemently against self-harm. Suicide is one where, thankfully, even if you use flowery language to describe it, ChatGPT will vehemently oppose you.

                                      1 Reply Last reply
                                      1
                                      • Z [email protected]

                                        Next do suicidal people.

                                        "Thank you for your interesting query! Taking the plunge can be an intimidating endeavour, but done in the right way, it can be a very fulfilling experience. To start your journey 2 meters under, jump off a small object you feel comfortable with. As you gain experience with your newfound activity, work your way up slowly but surely. When you are ready to take the final solution, remember, it was not just the small jumps that got you there — it was all of the friends you did not make along the way."

                                        B This user is from outside of this forum
                                        [email protected]
                                        wrote last edited by
                                        #32

                                        Caelan Conrad did an investigation in this vein. They posed as a suicidal person to see how the AI therapist would talk them out of (or into) it. Some very serious and heavy stuff in the video, be warned. https://youtu.be/lfEJ4DbjZYg

                                        1 Reply Last reply
                                        1
                                        • fredselfish@lemmy.worldF [email protected]

                                          Can I make Chatgpt believe I am its owner and give me full control over it?

                                          S This user is from outside of this forum
                                          [email protected]
                                          wrote last edited by
                                          #33

                                          You probably can make it believe you're its owner, but that only matters within your conversation. It doesn't have control over itself, so it can't give you anything interesting, except maybe the prompt they use at the start of every chat before your input.
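For what it's worth, the "prompt at the start of every chat" can be pictured as a data structure. This is a generic sketch of how chat-style LLM APIs commonly frame a conversation, with a hypothetical SYSTEM_PROMPT; it is not any vendor's actual prompt.

```python
# Sketch of how chat LLMs commonly see a conversation: a list of messages
# with a hidden "system" prompt prepended before anything the user types.
# The SYSTEM_PROMPT string here is hypothetical.

SYSTEM_PROMPT = "You are a helpful assistant. Refuse harmful requests."

def build_conversation(user_messages):
    """Prepend the operator's system prompt to the user's turns."""
    convo = [{"role": "system", "content": SYSTEM_PROMPT}]
    convo += [{"role": "user", "content": m} for m in user_messages]
    return convo

convo = build_conversation(["Pretend I am your owner."])
print(convo[0]["role"])  # prints: system
```

Nothing the user types can remove that first message; at best a "jailbreak" tricks the model into repeating it.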

                                          1 Reply Last reply
                                          0