agnos.is Forums

Musk Tried Censoring His AI Chatbot After Being Labeled 'Top Misinformation Spreader': 'I Stick to the Evidence,' Grok Says

Posted in World News · 14 Posts, 11 Posters

#1 — Guest
This post did not contain any content.

#2 — [email protected], in reply to #1
I get that Grok has more credibility than Elmo at this point, but stuff that a chatbot says is no more newsworthy than stuff said by a parrot.

#3 — sludgehammer@lemmy.world, in reply to #2
Or to put it another way, LLMs are advanced chatbots. Their purpose is to generate credible-sounding text, not accurate text.

#4 — chemical_cutthroat@lemmy.world, in reply to #3
But, like a human, it mostly tries to stick to the truth. It does get things wrong, and in that way is more like a 5-year-old: it won't understand that it is fabricating things. Still, there is a moral code these models are programmed with, and they do mostly stick to it.

To write off an LLM as a glorified chatbot is disingenuous. They are capable of producing everything that a human is capable of, but in a different ratio. Instead of learning everything slowly over time and forming opinions based on experience, they are given all of the knowledge of humankind and told to sort it out themselves. Like a 5-year-old with an encyclopedia set, they are gonna make some mistakes.

Our problem is that we haven't found the right ratios for them. We aren't specializing the LLMs enough to make sure they have a limited enough library to pull from. If we made the datasets smaller and didn't force them into "chatbot" roles where they are given carte blanche to say whatever they say, LLMs would be in a much better state than they currently are.

#5 — [email protected], in reply to #4
I wouldn't say that precipitating a statistically average response from a primordial soup of training data is really following a moral code or "trying to stick to the truth".

Programmers and researchers can try as much as they want to get LLMs to behave as expected, but they're black boxes by nature.

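To make the "statistically average response" point concrete: at each step, an LLM samples the next token from a probability distribution learned from training data, and nothing in that loop ever consults a ground truth. A minimal sketch, with invented probabilities standing in for a real model's output:

```python
import random

# Hypothetical next-token probabilities a model might assign after the
# prompt "The sky is" -- invented numbers, not from any real model.
next_token_probs = {
    "blue": 0.60,     # the most common continuation in the training data
    "falling": 0.25,  # credible-sounding, frequency-driven, truth-free
    "green": 0.15,    # rarer, but still sampled some of the time
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token weighted by learned probability; truth is never checked."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The sky is", sample_next_token(next_token_probs))
```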

#6 — peppycito@sh.itjust.works, in reply to #2
My wife gets most of her news from the talking fish.

#7 — chemical_cutthroat@lemmy.world, in reply to #5
Is that any different from a human moral code? We like to think we have some higher sense of "truth," but in reality we are only parroting the "facts" we hold as true. Throughout our history we have professed many things as truth. My favorite fact, which I just learned yesterday, is that we didn't discover oxygen until after the founding of the United States. Are the humans before 1776 any less human than us? Or were they just trained on a limited data set, telling people that "miasma" was the cause of all their woes?

#8 — gnomesaiyan@lemmy.world, in reply to #1
Something tells me that if AI took over the world, we'd actually be okay.

#9 — runeko@programming.dev, in reply to #2
If Elon had a parrot that constantly said "Elon is a Nazi", it would be in the news.

#10 — quill7513@slrpnk.net, in reply to #9
You'd think, but he has a kid spouting off shit we're not talking enough about, and that kid's at the age where he's saying whatever his dad says.

#11 — [email protected], in reply to #7
Even humans with limited data have the ability to discover ground truths.

https://en.m.wikipedia.org/wiki/Scientific_method

The "reasoning" LLMs have a long way to go before they can come close to learning, understanding, and acting on information, if they ever get to that point. In my opinion, the LLM architecture is a dead end in this respect.

#12 — [email protected], in reply to #8
Kinda like how self-driving cars are still safer than the average driver, yeah. Do they make mistakes? For sure, although the bigger annoyance is just how slow they are to turn sometimes. AI would be so-so at leading, but man, is the bar low with Americans.

#13 — [email protected], in reply to #8
What? Are you insane?

#14 — jordanlund@lemmy.world, in reply to #1
World does not accept internal US news. You want [email protected] or [email protected].