Musk Tried Censoring His AI Chatbot After Being Labeled 'Top Misinformation Spreader': 'I Stick to the Evidence,' Grok Says
-
I get that Grok has more credibility than Elmo at this point, but stuff that a chatbot says is no more newsworthy than stuff said by a parrot.
-
I get that Grok has more credibility than Elmo at this point, but stuff that a chatbot says is no more newsworthy than stuff said by a parrot.
Or to put it another way, LLMs are advanced chatbots. Their purpose is to generate credible-sounding text, not accurate text.
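To make that concrete: here is a minimal sketch (a toy in PyTorch, not any real model's code) of the next-token objective that mainstream LLMs are trained on. The only signal is how probable the observed next token was in the training text; nothing in the loss measures whether that text is true.

```python
import torch
import torch.nn.functional as F

# Toy vocabulary and a stand-in for a model's output: logits over the vocab
# for the next token. Everything here is invented for illustration.
vocab = ["the", "moon", "is", "made", "of", "cheese", "rock"]
logits = torch.randn(1, len(vocab))  # shape: (batch=1, vocab_size)

# The training target is whatever token actually followed in the corpus.
# If the corpus says "cheese", the model is rewarded for predicting "cheese".
target = torch.tensor([vocab.index("cheese")])

loss = F.cross_entropy(logits, target)
print(loss.item())
# The loss scores plausibility relative to the corpus -- factual accuracy
# never appears anywhere in the objective.
```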
-
Or to put it another way, LLMs are advanced chatbots. Their purpose is to generate credible-sounding text, not accurate text.
But, like a human, it mostly tries to stick to the truth. It does get things wrong, and in that way it is more like a 5-year-old, because it won't understand that it is fabricating things. Still, there is a moral code these models are programmed with, and they do mostly stick to it.
To write off an LLM as a glorified chatbot is disingenuous. They are capable of producing everything a human is capable of, but in a different ratio. Instead of learning everything slowly over time and forming opinions based on experience, they are given all of the knowledge of humankind and told to sort it out themselves. Like a 5-year-old with an encyclopedia set, they are gonna make some mistakes.
Our problem is that we haven't found the right ratios for them. We aren't specializing LLMs enough to make sure they have a limited enough library to pull from. If we made the datasets smaller and didn't force them into "chatbot" roles where they are given carte blanche to say whatever they like, LLMs would be in a much better state than they currently are.
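For what it's worth, that "limited library" idea is roughly what retrieval-augmented generation tries to do. A rough sketch of the shape (every name here is hypothetical, including the call_llm stub; this is not any product's API):

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    text: str

# A small, curated corpus -- the "limited library" the model may pull from.
CURATED_CORPUS = [
    Doc("Oxygen", "Oxygen was isolated and named in the 1770s."),
    Doc("Miasma", "Miasma theory blamed disease on bad air before germ theory."),
]

def retrieve(question: str, corpus: list[Doc], k: int = 2) -> list[Doc]:
    # Toy relevance score: word overlap. A real system would use embeddings.
    q = set(question.lower().split())
    return sorted(corpus, key=lambda d: -len(q & set(d.text.lower().split())))[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical stub standing in for a real model call.
    return "(model answer constrained to the context above)"

def answer(question: str) -> str:
    context = "\n".join(d.text for d in retrieve(question, CURATED_CORPUS))
    # The model only sees the curated context and is told to refuse rather
    # than improvise -- the opposite of carte blanche.
    prompt = (f"Answer using ONLY this context; otherwise say 'I don't know'.\n"
              f"{context}\n\nQ: {question}")
    return call_llm(prompt)

print(answer("What causes disease according to miasma theory?"))
```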
-
But, like a human, it mostly tries to stick to the truth. It does get things wrong, and in that way it is more like a 5-year-old, because it won't understand that it is fabricating things. Still, there is a moral code these models are programmed with, and they do mostly stick to it.
To write off an LLM as a glorified chatbot is disingenuous. They are capable of producing everything a human is capable of, but in a different ratio. Instead of learning everything slowly over time and forming opinions based on experience, they are given all of the knowledge of humankind and told to sort it out themselves. Like a 5-year-old with an encyclopedia set, they are gonna make some mistakes.
Our problem is that we haven't found the right ratios for them. We aren't specializing LLMs enough to make sure they have a limited enough library to pull from. If we made the datasets smaller and didn't force them into "chatbot" roles where they are given carte blanche to say whatever they like, LLMs would be in a much better state than they currently are.
I wouldn't say that precipitating a statistically average response from a primordial soup of training data is really following a moral code or "trying to stick to the truth".
Programmers and researchers can try as much as they want to get LLMs to behave as expected, but they're black boxes by nature.
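"Statistically average" is nearly literal: generation is sampling from a probability distribution the training data induced. A toy sketch of just the sampling step (the numbers are invented; no real model is involved):

```python
import math
import random

# Toy next-token distribution a model might assign after some prompt.
# The logits are made up; the point is that the reply is drawn from
# whatever the training soup made probable, not checked against reality.
logits = {"businessman": 2.1, "billionaire": 1.8, "genius": 0.3, "fraud": 0.9}

def sample(logits: dict[str, float], temperature: float = 1.0) -> str:
    # Softmax with temperature, then a weighted random draw.
    probs = {t: math.exp(l / temperature) for t, l in logits.items()}
    r = random.uniform(0, sum(probs.values()))
    for token, p in probs.items():
        r -= p
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

print(sample(logits))  # a plausible token, by corpus statistics alone
```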
-
I get that Grok has more credibility than Elmo at this point, but stuff that a chatbot says is no more newsworthy than stuff said by a parrot.
My wife gets most of her news from the talking fish.
-
I wouldn't say that precipitating a statistically average response from a primordial soup of training data is really following a moral code or "trying to stick to the truth".
Programmers and researchers can try as much as they want to get LLMs to behave as expected, but they're black boxes by nature.
Is that any different from a human moral code? We like to think we have some higher sense of "truth," but in reality we are only parroting the "facts" we hold as true. Throughout our history we have professed many things as truth. My favorite fact, which I just learned yesterday, is that oxygen wasn't discovered until around the founding of the United States. Are the humans before 1776 any less human than us? Or were they just trained on a limited data set, telling people that "miasma" was the cause of all their woes?
-
Something tells me that if AI took over the world, we'd actually be okay.
-
I get that Grok has more credibility than Elmo at this point, but stuff that a chatbot says is no more newsworthy than stuff said by a parrot.
If Elon had a parrot that constantly said "Elon is a Nazi", it would be in the news.
-
If Elon had a parrot that constantly said "Elon is a Nazi", it would be in the news.
You'd think, but he has a kid spouting off shit we're not talking about enough, and that kid's at the age where he repeats whatever his dad says.
-
Is that any different from a human moral code? We like to think we have some higher sense of "truth," but in reality we are only parroting the "facts" we hold as true. Throughout our history we have professed many things as truth. My favorite fact, which I just learned yesterday, is that oxygen wasn't discovered until around the founding of the United States. Are the humans before 1776 any less human than us? Or were they just trained on a limited data set, telling people that "miasma" was the cause of all their woes?
Even humans with limited data have the ability to discover ground truths.
https://en.m.wikipedia.org/wiki/Scientific_method
The "reasoning" LLMs have a long way to go before they can be close to learning, understanding, and acting on information, if they ever get to that point.
In my opinion, the LLM architecture is a dead-end in this respect.
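For contrast, here is the loop that Wikipedia link describes, as a runnable toy (the gravity setup is invented purely for illustration): beliefs get corrected by contact with reality, a step that appears nowhere in next-token training.

```python
import random

# A toy closed loop against "reality": estimate a hidden constant by
# predicting, measuring, and revising. The shape is the point -- the
# hypothesis is tested against the world, not against a text corpus.

TRUE_G = 9.81  # "reality", hidden from the reasoner

def run_experiment(drop_time: float) -> float:
    # Measure fall distance for a timed drop, with measurement noise.
    return 0.5 * TRUE_G * drop_time**2 + random.gauss(0, 0.05)

def scientific_method(rounds: int = 200) -> float:
    g_estimate = 5.0  # initial (wrong) hypothesis
    for _ in range(rounds):
        t = random.uniform(0.5, 2.0)
        predicted = 0.5 * g_estimate * t**2       # prediction from hypothesis
        observed = run_experiment(t)              # contact with reality
        error = observed - predicted
        g_estimate += 0.1 * error / (0.5 * t**2)  # revise toward the data
    return g_estimate

print(scientific_method())  # converges near 9.81 despite a bad starting belief
# An LLM's training loop, by contrast, only ever asks "what token usually
# comes next?" -- there is no run_experiment step anywhere in it.
```
-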
Something tells me that if AI took over the world, we'd actually be okay.
Kinda like how self-driving cars are already safer than the average driver, yeah. Do they make mistakes? For sure, though the bigger annoyance is how slow they are to turn sometimes. AI would be so-so at leading, but man, the bar is low with Americans.
-
Something tells me that if AI took over the world, we'd actually be okay.
What? Are you insane?
-
World does not accept internal US news. You want [email protected] or [email protected]
-