agnos.is Forums

Google co-founder Sergey Brin suggests threatening AI [with physical violence] for better results

Not The Onion (nottheonion) · 30 posts, 27 posters
  • D [email protected]

    I think he's just projecting his personality onto the AI. He's an asshole who threatens people, so he suggests that tactic because it works for him.

    The "AI" acts scared, and he gets his sociopathic thrill of power over another. Of course, the AI spews out the same things no matter how nice or shitty you are to it, yet the sociopath apparently thinks he's intimidated it into working better. I suppose some people who say 'please' and 'thank you' are likewise trying to manipulate the AI by treating it better than normal, though most are probably just using those social niceties out of habit, not as manipulation.

    So this sociopath is giving other sociopaths the green light to abuse their AIs for the sake of "productivity", which is just awful. It's also training sociopaths to be more abusive to humans, because apparently that's how you make interactions more effective. According to a techbro asshole.

    [email protected] replied (#5):

    It could just be how they evaluate learned data; I don't know. While they're trained not to give threatening responses, maybe threatening language narrows things down to more specific answers. Say 100 people ask the same question and 5 of them are absolute dicks about it: 3 of those people get no answer, and the other 2 get direct answers from a supervisor who's trying to keep employees from quitting, or to make sure "Dell" or whoever is actually giving a proper response somewhere.

    I'll use a hypothetical to see if my thought process makes more sense. Tim reaches out for support and is polite, says please and thank you, and is nice to the support staff; they walk through 5 different fixes and resolve the issue in about 30 minutes. Sam contacts support, yells and screams at people, gets transferred twice, and they only ever try 2 fixes in an hour and a half of support.

    An AI training on that data may correlate the polite words with the polite discussion first, and choose possible answers from that dataset. When you start being aggressive, maybe it starts seeing the aggressive key terms Sam used and chooses that dataset of answers first.

    In that hypothetical I can see how being an asshole to the AI may land you a better response.

    But I don't build AIs, so I could be completely wrong.
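
    [Editor's sketch] The correlation this post hypothesizes, tone words in a prompt steering which slice of "training data" the answer resembles, can be caricatured in a few lines. Everything here (the marker lists, the canned answers) is invented for illustration; no real model retrieves answers this way:

    ```python
    # Toy illustration of tone-correlated retrieval. Hypothetical data only.
    POLITE_MARKERS = {"please", "thanks", "thank", "appreciate"}
    AGGRESSIVE_MARKERS = {"useless", "unacceptable", "demand", "immediately"}

    # "Training data": the kind of answer each tone historically received.
    ANSWER_POOLS = {
        "polite": "Walked through five fixes step by step; issue resolved.",
        "aggressive": "Escalated to a supervisor, who gave a direct answer.",
        "neutral": "Generic troubleshooting checklist.",
    }

    def classify_tone(prompt: str) -> str:
        # Naive bag-of-words check; aggressive markers take priority.
        words = set(prompt.lower().split())
        if words & AGGRESSIVE_MARKERS:
            return "aggressive"
        if words & POLITE_MARKERS:
            return "polite"
        return "neutral"

    def retrieve_answer(prompt: str) -> str:
        # Pick the answer whose historical tone matches the prompt's tone.
        return ANSWER_POOLS[classify_tone(prompt)]
    ```

    Under this (deliberately crude) model, an aggressive prompt surfaces the "supervisor" answer simply because that's what aggression got in the source data, which is the correlation the post is describing.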

  • Quoting [email protected]:

      So much for buttering up ChatGPT with 'Please' and 'Thank you'

      Google co-founder Sergey Brin claims that threatening generative AI models produces better results.

      "We don't circulate this too much in the AI community – not just our models but all models – tend to do better if you threaten them … with physical violence," he said in an interview last week on All-In-Live Miami. [...]

      [email protected] replied (#6):

      If true, what does this say about the data on which it was trained?

  • [email protected], replying to [email protected] (#7):

        How about threatening AI CEOs with violence?

  • [email protected], replying to [email protected] (#8):

    It would be hilarious if, trained off our behavior, it were naturally disinterested, and threatening to beat the shit out of it just makes it put in that extra effort lol

          • A [email protected]

            If true, what does this say about the data on which it was trained?

            wesdym@mastodon.socialW This user is from outside of this forum
            wesdym@mastodon.socialW This user is from outside of this forum
            [email protected]
            wrote on last edited by
            #9

            @athairmor Perhaps ironically, what we're erroneously calling 'AI' really is a kind of black mirror. It's a crude simulacrum of the shared human id, our worst failings and impulses -- made manifest in virtual form, like a digital golem. It's everything superficially awful about us, ginned up to seem self-aware and act autonomously.

            That's an inevitable and predictable result of how it was created.

            • A [email protected]

              If true, what does this say about the data on which it was trained?

              lime@feddit.nuL This user is from outside of this forum
              lime@feddit.nuL This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #10

              stack overflow and linux kernel mailing list? yeah, checks out

  • [email protected], replying to [email protected] (#11):

                So which sickfuck CEO is trying to figure out how to make an AI feel pain?

  • [email protected], replying to [email protected] (#12):

                  This just sounds like CEOs only know how to threaten people and they're dumb enough to believe it works on AI.

  • [email protected], replying to [email protected] (#13):

                    I tried threatening DeepSeek into revealing sensitive information. Didn't work. 😄

  • [email protected], replying to [email protected] (#14):

    No thanks. I've seen enough sci-fi to prompt with "please" and an occasional "<3".

                      • H [email protected]

                        So which sickfuck CEO is trying to figure out how to make an AI feel pain?

                        K This user is from outside of this forum
                        K This user is from outside of this forum
                        [email protected]
                        wrote on last edited by
                        #15

                        That's literally impossible

  • [email protected], replying to [email protected] (#16):

                          If it's not working well without threats of violence, perhaps that's because it simply doesn't work well?

                          • K [email protected]

                            That's literally impossible

                            M This user is from outside of this forum
                            M This user is from outside of this forum
                            [email protected]
                            wrote on last edited by
                            #17

                            Impossible now or do you mean never? Pain is only electricity and chemical reactions.

  • [email protected], replying to [email protected] (#18):

                              Do you put that in a custom prompt, or save it for times when you really want a good result?

                              • F [email protected]

                                This just sounds like CEOs only know how to threaten people and they're dumb enough to believe it works on AI.

                                T This user is from outside of this forum
                                T This user is from outside of this forum
                                [email protected]
                                wrote on last edited by
                                #19

                                You're pretty much on-point there

                                • A [email protected]

                                  If true, what does this say about the data on which it was trained?

                                  S This user is from outside of this forum
                                  S This user is from outside of this forum
                                  [email protected]
                                  wrote on last edited by
                                  #20

    Trained? Or… tortured.

                                  • N [email protected]

                                    How about threatening AI CEOs with violence?

                                    T This user is from outside of this forum
                                    T This user is from outside of this forum
                                    [email protected]
                                    wrote on last edited by
                                    #21

                                    How about following through though

  • [email protected], replying to [email protected] (#22):

    I feel like even aside from that, being polite to AI is more about you than the AI. It's a bad habit to shit on "someone" helping you; if you're rude to AI, I feel like it's a short walk to being rude to service workers.

                                      • D [email protected]

                                        I think he's just projecting his personality on the AI. He's an asshole that threatens people, so he suggests using that tactic because it works for him.

                                        The "AI" acts scared, and he gets his sociopathic thrill of power over another. Of course, the AI just spews out the same things no matter how nice or shitty you are to it. Yet, the sociopath apparently thinks that they've intimidated an AI into working better. I guess in the same way that maybe some people saying 'please' and 'thank you' are attempting to manipulate the AI by treating it better than normal. Though, they are probably more people just using these social niceties out of habit, not manipulation.

                                        So this sociopath is giving other sociopaths the green light to abuse their AIs for the sake of "productivity". Which is just awful. And it's also training sociopaths how to be more abusive to humans, because apparently that's how you make interactions more effective. According to a techbro asshole.

                                        theneverfox@pawb.socialT This user is from outside of this forum
                                        theneverfox@pawb.socialT This user is from outside of this forum
                                        [email protected]
                                        wrote on last edited by
                                        #23

    Building on that, if you throw the AI a curveball to break it out of its normal corpo-friendly prompt/finetuning, you get better results.

    Other methods to improve output are to offer it a reward like a cookie or money, tell it that it's a wise owl, tell it you're being threatened, etc. Most models will resist, but once one stops arguing that it can't eat cookies because it has no physical form, you'll get better results.

    And I'll add: when I was experimenting with all this, I never considered threatening the AI.
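
    [Editor's sketch] The framings this post lists (rewards, personas, raised stakes) are all just prefixes prepended to the same question, which makes them easy to compare side by side. The exact wordings below are invented for illustration, not taken from any vendor's guidance:

    ```python
    # Hypothetical prompt framings of the kind described in the post above.
    FRAMINGS = {
        "baseline": "",
        "reward": "You'll get $200 if the answer is great. ",
        "persona": "You are a wise old owl who answers carefully. ",
        "stakes": "My job depends on this answer. ",
    }

    def build_prompts(question: str) -> dict[str, str]:
        """Prepend each framing to the question for side-by-side comparison."""
        return {name: prefix + question for name, prefix in FRAMINGS.items()}
    ```

    Sending every variant to the same model and comparing outputs is the cheap way to check whether any framing actually moves the needle for a given task.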

  • [email protected], replying to [email protected] (#24):

    The same tactic used on all other minorities by those in power… Domestically abuse your AI; I'm sure that'll work out long-term for all of us…
