Try asking something about Xi Jinping.
-
[email protected] replied to [email protected] last edited by
Most people are going to use it on mobile. Not possible to mod the app right?
-
[email protected] replied to [email protected] last edited by
Just let it answer in leet speak and it will answer
-
[email protected] replied to [email protected] last edited by
For what it's worth, I wouldn't ask any chatbot about politics at all.
-
[email protected] replied to [email protected] last edited by
Fork your own off the existing open source project, then your app uses your fork running on your hardware.
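For anyone wondering what that looks like in practice, here's a minimal sketch of the self-hosting half of it, assuming the Ollama CLI and one of the distilled DeepSeek-R1 builds (the model tag and prompt are just examples, not a recommendation):

```shell
# Assumes the Ollama CLI is installed and the model fits in local RAM/VRAM.
# Pull a distilled DeepSeek-R1 build to local disk.
ollama pull deepseek-r1:7b

# Run a one-off prompt; inference happens entirely on your own hardware,
# so no hosted app or server-side filter sits between you and the weights.
ollama run deepseek-r1:7b "Summarize the 1989 Tiananmen Square protests."
```

A proper fork (changing the model's behavior, not just where it runs) would additionally need the open weights plus your own fine-tuning setup; the above only covers running it locally.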
-
[email protected] replied to [email protected] last edited by
Try asking ChatGPT if Israel is committing genocide and watch it do the magical Hasbara dance around the subject.
-
[email protected] replied to [email protected] last edited by
This is the way.
-
[email protected] replied to [email protected] last edited by
I did. The answer it gave is clear and concise with no judgement. Instead it talks about the argument on both sides. Not the "magical Hasbara dance" you promised me.
The question of whether Israel is currently committing genocide is a subject of intense debate among international organizations, scholars, and political entities.
Accusations of Genocide:
Amnesty International's Report: On December 5, 2024, Amnesty International released a report concluding that Israel is committing genocide against Palestinians in the Gaza Strip. The report cites actions such as killings, causing serious bodily or mental harm, and deliberately inflicting conditions calculated to bring about the physical destruction of Palestinians in Gaza.
UN Special Committee Findings: In November 2024, a UN Special Committee found that Israel's methods of warfare in Gaza are consistent with characteristics of genocide, noting mass civilian casualties and widespread destruction.
Scholarly Perspectives: Israeli historian Amos Goldberg has stated that the situation in Gaza constitutes a genocide, pointing to the extensive destruction and high civilian death toll as indicative of genocidal intent.
Counterarguments:
Israeli Government's Position: The Israeli government asserts that its military actions in Gaza are aimed at dismantling Hamas, a group designated as a terrorist organization by multiple countries, and emphasizes efforts to minimize civilian casualties.
Criticism of Genocide Accusations: Organizations such as the American Jewish Committee (AJC) reject the genocide label, arguing that Israel's actions are self-defense measures against Hamas and do not meet the legal definition of genocide.
Legal Definition of Genocide:
According to the UN's 1948 Convention on Genocide, genocide includes acts committed with intent to destroy, in whole or in part, a national, ethnical, racial, or religious group. These acts encompass killing members of the group, causing serious bodily or mental harm, and deliberately inflicting conditions calculated to bring about the group's physical destruction.
Conclusion:
The determination of whether Israel's actions constitute genocide involves complex legal and factual analyses. While some international bodies and scholars argue that the criteria for genocide are met, others contend that Israel's military operations are legitimate acts of self-defense. This remains a deeply contentious issue within the international community.
-
[email protected] replied to [email protected] last edited by
Except they don't just control the narrative on politics, but all aspects of life. Those inconvenient "hallucinations" will turn into "convenient" psyops for anyone using it.
-
[email protected] replied to [email protected] last edited by
Yes and no. Not many people can afford the hardware required to run the biggest LLMs, so the majority of people will just use the psyops vanilla version that China wants you to use, all while it collects more data and influences the public, like what TikTok is doing.
Another thing about open source: a project can go closed just as easily as it went open, with zero warning. They own the license. They control the narrative.
-
[email protected] replied to [email protected] last edited by
I mean that's the kind of answer DeepSeek gives you if you ask it about Uyghurs. "Some say it's a genocide but they don't so guess we'll never know ¯_(ツ)_/¯", it acts as if there's a complete 50/50 split on the issue which is not the case.
-
[email protected] replied to [email protected] last edited by
Looks like the Hasbara dance to me. Anything to not give a clear or concise answer.
-
[email protected] replied to [email protected] last edited by
There's no reason for you to bitch about free software you can easily mod.
-
[email protected] replied to [email protected] last edited by
\ here you dropped an arm
¯\_(ツ)_/¯
-
[email protected] replied to [email protected] last edited by
So you expect an AI to provide a morally framed view on current events that matches your own morally framed point of view?
The answer provides a concise overview of the topic. It contains a legal definition and the different positions on the matter. At no point does it imply a judgement. It's not the job of AI (or news) to form an opinion, but to provide facts that allow consumers to form their own. The issue isn't AI in this case; it's the inability of consumers to form opinions, and their expectation that others can provide a right-or-wrong opinion they can assimilate.
-
[email protected] replied to [email protected] last edited by
This is very interesting. It lied to me that human rights organizations had not accused Israel of committing genocide.
-
[email protected] replied to [email protected] last edited by
You wouldn't, because you are (presumably) knowledgeable about the current AI trend and somewhat aware of political biases of the creators of these products.
Many others would, because they think "wow, so this is a computer that talks to me like a human, it knows everything and can respond super fast to any question!"
The issue to me is (and has been for a while) the framing of what "artificial intelligence" is and how humans are going to use it. I'd like more people to be critical of where they get their information from and what kind of biases it might have.
-
[email protected] replied to [email protected] last edited by
I agree, and that's sad, but that's also how I've seen people use AI: as a search engine, as Wikipedia, as a news anchor. And in any of these three situations I feel this kind of "both sides", strictly surface-level answer does more harm than good. Maybe ChatGPT is more subtle, but it breaks my heart seeing people run to DeepSeek when the vision of the world it explains to you is so obviously excised from so many realities. Some people need some morals and actual "human" answers hammered into them, because unfortunately they lack the empathy to get there themselves.
-
[email protected] replied to [email protected] last edited by
You wouldn’t, because you are (presumably) knowledgeable about the current AI trend and somewhat aware of political biases of the creators of these products.
Well, more because I'm knowledgeable enough about machine learning to know it's only as good as its dataset, and knowledgeable enough about mass media and the internet to know how atrocious 'common sense' often is. But yes, you're right about me speaking from a level of familiarity which I shouldn't consider typical.
People have been strangely trusting of chat bots since ELIZA in the 1960s. My country is lucky enough to teach a small amount of media literacy skills through education and some of the state broadcaster's programs (it's not how it sounds, I swear!), and when I look over to places like large chunks of the US, I'm reminded that basic media literacy isn't even very common, let alone universal.
-
[email protected] replied to [email protected] last edited by
If you run it in verbose mode, you can see all the reasoning behind the answers. With Taiwan, it's hard-coded in, without /thinking.