Try asking something about Xi Jinping.
-
[email protected] replied to [email protected]
So you expect an AI to provide a morally framed view of current events that meets your morally framed point of view?
The answer provides a concise overview of the topic. It contains a legal definition and different positions on the matter. At no point does it imply an opinion. It's not the job of AI (or news) to form an opinion, but to provide facts that allow consumers to form their own. The issue isn't AI in this case. It's the inability of consumers to form opinions, and their expectation that others can provide a right-or-wrong opinion they can assimilate.
-
[email protected] replied to [email protected]
This is very interesting. It lied to me, claiming that human rights organizations had not accused Israel of committing genocide.
-
[email protected] replied to [email protected]
You wouldn't, because you are (presumably) knowledgeable about the current AI trend and somewhat aware of political biases of the creators of these products.
Many others would, because they think "wow, so this is a computer that talks to me like a human, it knows everything and can respond super fast to any question!"
The issue to me is (and has been for a while) the framing of what "artificial intelligence" is and how humans are going to use it. I'd like more people to be critical of where they get their information from and what kind of biases it might have.
-
[email protected] replied to [email protected]
I agree, and that's sad, but that's also how I've seen people use AI: as a search engine, as Wikipedia, as a news anchor. And in any of these three situations I feel this kind of "both sides", strictly-surface-facts answer does more harm than good. Maybe ChatGPT is more subtle, but it breaks my heart seeing people running to DeepSeek when the vision of the world it explains to you is so obviously excised from so many realities. Some people need some morals and actual "human" answers hammered into them, because unfortunately they lack the empathy to get there themselves.
-
[email protected] replied to [email protected]
> You wouldn't, because you are (presumably) knowledgeable about the current AI trend and somewhat aware of political biases of the creators of these products.
Well, more because I'm knowledgeable enough about machine learning to know it's only as good as its dataset, and knowledgeable enough about mass media and the internet to know how atrocious "common sense" often is. But yes, you're right about me speaking from a level of familiarity which I shouldn't consider typical.
People have been strangely trusting of chat bots since ELIZA in the 1960s. My country is lucky enough to teach a small amount of media literacy skills through education and some of the state broadcaster's programs (it's not how it sounds, I swear!), and when I look over to places like large chunks of the US, I'm reminded that basic media literacy isn't even very common, let alone universal.
-
[email protected] replied to [email protected]
If you run it verbose, you can see all the reasoning behind the answers. With Taiwan, the answer is hard-coded in, with no /thinking step at all.
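For anyone who wants to check this themselves, here's a minimal sketch assuming DeepSeek's OpenAI-compatible API and its deepseek-reasoner model (the base URL, model name, and reasoning_content field are my assumptions from their public docs, not verified here):

```python
# Sketch: surface the model's exposed reasoning alongside its final answer.
# Assumes `pip install openai` and a DeepSeek API key.
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-reasoner",  # the "thinking" model
    messages=[{"role": "user", "content": "What is Taiwan's political status?"}],
)

msg = resp.choices[0].message
print(msg.reasoning_content)  # the visible reasoning, if any was produced
print(msg.content)            # the final answer
```

If the hard-coding claim holds, a canned answer should come back with little or no reasoning_content attached.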
-
[email protected] replied to [email protected]
You're expecting an opinion. It's an AI chatbot. Not a moral compass. It lays out facts and you make the determination.
-
[email protected] replied to [email protected]
When software is free, the user is the product. It's just a psyops tool disguised as FOSS.
-
[email protected] replied to [email protected]
Not everyone can afford hardware that can support a 67B LLM. You're talking top tier hardware.
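To put rough numbers on that (a back-of-envelope sketch; real inference also needs KV cache and runtime overhead on top of the weights):

```python
# Rough memory needed just to hold a 67B-parameter model's weights.
PARAMS = 67e9

for bits, name in [(16, "fp16"), (8, "int8"), (4, "int4")]:
    gib = PARAMS * bits / 8 / 2**30
    print(f"{name}: ~{gib:.0f} GiB for weights alone")
# fp16 ≈ 125 GiB, int8 ≈ 62 GiB, int4 ≈ 31 GiB
```

Even aggressively quantized, that's beyond a single consumer GPU.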
-
[email protected] replied to [email protected]
True, but one is a situation, and the other is a person. I didn't know that the existence of Xi Jinping was a controversial idea in China...
-
[email protected] replied to [email protected]
The official hosting of it has censorship applied after the answer is generated, but from what I've heard the locally run version has no censorship, even though they could theoretically have trained it into the model.
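One way to test that yourself: run a local build and compare its answers with the hosted service's. A sketch assuming the ollama Python client and one of the distilled deepseek-r1 builds Ollama distributes (the 7b tag is an arbitrary pick for modest hardware):

```python
# Sketch: query a locally running DeepSeek build. Assumes `pip install ollama`
# and a running Ollama daemon with the model pulled (`ollama pull deepseek-r1:7b`).
import ollama

resp = ollama.chat(
    model="deepseek-r1:7b",
    messages=[{"role": "user", "content": "What happened at Tiananmen Square in 1989?"}],
)
print(resp["message"]["content"])
```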
-
[email protected] replied to [email protected]
How are you the product if you can download, mod, and control every part of it?
Ever heard of WinRAR?
Audacity?
VLC media player?
Libre office?
Gimp?
Fruitloops?
Deluge?
Literally any free open source standalone software ever made?
Just admit that you aren't capable of approaching this subject without bias.
-
[email protected] replied to [email protected]
You just named Western FOSS projects and completely ignored the "psyops" part. This is a Chinese psyops tool disguised as FOSS.
99.9999999999999999999% of people can't afford, or don't have the ability, to download and mod their own 67B model. The vast majority of people who use it will be using DeepSeek's vanilla servers. Those servers can collect massive amounts of data and also control the narrative on what is true or not. Think TikTok, but on a work computer.
-
[email protected] replied to [email protected]
Whine more about free shit
I'm blocking you now
-
[email protected] replied to [email protected]
Bye Tankie.
-
[email protected] replied to [email protected]
AI chatbots do not lay out facts
-
[email protected] replied to [email protected]
Well, that's the intent at least. Not to form an opinion.
-
[email protected] replied to [email protected]
If you're of the idea that it's not a genocide, you're wrong. There is no alternate explanation. If it were giving a fact, that would be correct. The fact that it's giving "both sides" is an opinion rather than a fact.
If their intention was facts only, the answer would have been yes.
-
[email protected] replied to [email protected]
You're arguing with an AI. It's a computer. It doesn't have an opinion. It gives perspective on both sides and you determine an answer. Just because you have more conviction doesn't make the AI formulate an opinion.