agnos.is Forums

ChatGPT hit with privacy complaint over defamatory hallucinations: ChatGPT created a fake child murderer.

Technology · 11 posts · 9 posters

#1 · [email protected] wrote:

    OpenAI’s highly popular chatbot, ChatGPT, regularly gives false information about people without offering any way to correct it. In many cases, these so-called “hallucinations” can seriously damage a person’s reputation: In the past, ChatGPT falsely accused people of corruption, child abuse – or even murder. The latter was the case with a Norwegian user. When he tried to find out if the chatbot had any information about him, ChatGPT confidently made up a fake story that pictured him as a convicted murderer. This clearly isn’t an isolated case. noyb has therefore filed its second complaint against OpenAI. By knowingly allowing ChatGPT to produce defamatory results, the company clearly violates the GDPR’s principle of data accuracy.

#2 · [email protected] wrote, replying to #1:

      1. What does this have to do with privacy?
      2. People also make up shit all the time about other people. Many spread their bullshit online. ChatGPT does not.
      • E [email protected]
        1. What does this have to do with privacy?
        2. People also make up shit all the time about other people. Many spread their bullshit online. ChatGPT does not.
        B This user is from outside of this forum
        B This user is from outside of this forum
        [email protected]
        wrote on last edited by
        #3
1. You can ask Google to take down malicious results for your name. With ChatGPT it's never guaranteed.
2. ChatGPT is often used as a search engine, so anything wrong it says IS spreading bullshit online.
        • E [email protected]
          1. What does this have to do with privacy?
          2. People also make up shit all the time about other people. Many spread their bullshit online. ChatGPT does not.
          U This user is from outside of this forum
          U This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #4
1. Is this a quirk of the fediverse? The community this has been posted in, for me, is Technology, not Privacy.

2. And those people should also face scrutiny if they are making up potentially life-ruining stuff, such as accusing someone of being a child murderer.

The bit I'd want some context for is whether this is a one-off hallucination, or a consistent one that multiple separate users could see if they asked about this person.

If it's a one-off hallucination, it's not good, but nowhere near as bad as a consistent 'hard-baked' hallucination.

          • U [email protected]
            1. Is this a quirk of the fediverse?

            The community this has been posted in for me is Technology, not Privacy

            2.And those people should also face scrutiny if they are making up potentially life ruining stuff such as accusing someone being a child murderer.
            The bit I'd want some context for, is whether this is a one off hallucination, or a consistent one that multiple seperate users could see if they asked about this person.

            If it's a one of hallucination, it's not good, but nowhere near as bad as a consistent 'hard baked' hallucination.

            donuts@lemmy.worldD This user is from outside of this forum
            donuts@lemmy.worldD This user is from outside of this forum
            [email protected]
            wrote on last edited by
            #5

OpenAI was hit with a privacy complaint; I don't think the comment was about which community this was posted in.

            • U [email protected]
              1. Is this a quirk of the fediverse?

              The community this has been posted in for me is Technology, not Privacy

              2.And those people should also face scrutiny if they are making up potentially life ruining stuff such as accusing someone being a child murderer.
              The bit I'd want some context for, is whether this is a one off hallucination, or a consistent one that multiple seperate users could see if they asked about this person.

              If it's a one of hallucination, it's not good, but nowhere near as bad as a consistent 'hard baked' hallucination.

              B This user is from outside of this forum
              B This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #6

              The headline is what says there's a privacy complaint.

              • E [email protected]
                1. What does this have to do with privacy?
                2. People also make up shit all the time about other people. Many spread their bullshit online. ChatGPT does not.
                B This user is from outside of this forum
                B This user is from outside of this forum
                [email protected]
                wrote on last edited by
                #7

Despite what others are saying, there is indeed an inaccuracy in calling this a privacy complaint.
A lot of people outside the EU conflate privacy with data protection, but they are not the same: the GDPR is not concerned with privacy but exclusively with personal data protection.

Accuracy, availability and governance of personal data are indeed important criteria for data protection, and that is what this is about.

Regarding people making shit up: if they make such things public, the GDPR governs that just as much, while still deferring to the ordinary legislation for slander charges.

                • E [email protected]
                  1. What does this have to do with privacy?
                  2. People also make up shit all the time about other people. Many spread their bullshit online. ChatGPT does not.
                  T This user is from outside of this forum
                  T This user is from outside of this forum
                  [email protected]
                  wrote on last edited by
                  #8
                  1. It doesn't. I'm with you there.
                  2. Many countries in Europe have very strong anti-defamation laws, unlike in the US. What you are allowed to say about people is very different from what you are allowed to say about practically anything else. Since OpenAI is in control of the model, it is their responsibility to ensure it doesn't produce results like these.
                  • E [email protected]
                    1. What does this have to do with privacy?
                    2. People also make up shit all the time about other people. Many spread their bullshit online. ChatGPT does not.
                    einkorn@feddit.orgE This user is from outside of this forum
                    einkorn@feddit.orgE This user is from outside of this forum
                    [email protected]
                    wrote on last edited by
                    #9
1. It can be viewed as part of your privacy to not be subject to defamation.
2. If it reaches the threshold of defamation, it is punishable. Whether its enforcement is feasible is another matter.
                    • E [email protected]
                      1. What does this have to do with privacy?
                      2. People also make up shit all the time about other people. Many spread their bullshit online. ChatGPT does not.
                      thann@lemmy.dbzer0.comT This user is from outside of this forum
                      thann@lemmy.dbzer0.comT This user is from outside of this forum
                      [email protected]
                      wrote on last edited by
                      #10

This is literally a story of it doing that...

#11 · [email protected] wrote, replying to #10:

It is literally not. He chatted with it; it always gives some answer. This is not privacy-related. It was made up in a private chat.
