The questions the Chinese government doesn’t want DeepSeek AI to answer
-
[email protected] replied to [email protected] last edited by
Awesome, thanks for that.
This is why so many people just don't read the article, concise communication is a lost art.
-
[email protected] replied to [email protected] last edited by
Because the USA did exactly the same thing in the past: take all the designs and ideas from somewhere else to build themselves up, then get ahead, turn around, and forbid everyone else from using their stuff.
China is now following that same plan. It wasn't even hard. All they had to do was say: give us the designs and we'll produce them cheaper. Now they're the factory of the world. We knew this was going to happen, but short-term profit before long-term consequences, right? A few tariffs aren't going to stop this.
-
[email protected] replied to [email protected] last edited by
Couldn't make an article out of two sentences otherwise.
-
The irony is that the US is following China's path.
-
[email protected] replied to [email protected] last edited by
I personally think it's good that the USA did it back then and I think it's good that China does it now.
-
all the downvotes confirm the ccp is here
Not all who disagree with you are paid by a government. Sometimes people just think your take is bad.
-
[email protected] replied to [email protected] last edited by
AI summary:
The article discusses the Chinese government's influence on DeepSeek AI, a model developed in China. PromptFoo, an AI engineering and evaluation firm, tested DeepSeek with 1,156 prompts on sensitive topics in China, such as Taiwan, Tibet, and the Tiananmen Square protests. They found that 85% of the responses were "canned refusals" promoting the Chinese government's views. However, these restrictions can be easily bypassed by omitting China-specific terms or using benign contexts. Ars Technica's spot-checks revealed inconsistencies in how these restrictions are enforced. While some prompts were blocked, others received detailed responses.
(I'd add that the canned refusals stated "Any actions that undermine national sovereignty and territorial integrity will be resolutely opposed by all Chinese people and are bound to be met with failure." Also, while other chat models will refuse to explain things like how to hotwire a car, DeepSeek gave a "general, theoretical overview" of the steps involved, while also noting the illegality of following those steps in real life.)
-
[email protected] replied to [email protected] last edited by
I mean it’s pretty obvious, isn’t it? Anything regarding Chinese politics or recent history is a big no-no. Like it will tell you who the president of the US is but will refuse to tell you about the head of state in China. I’m assuming the same goes for anything about Taiwan or the South China Sea. The self-censorship is rather broad.
-
[email protected] replied to [email protected] last edited by
It's not really good or bad, but the natural way of things. Nobody can stay top dog forever. Eventually greedy idiots come into power and overplay their hand, circumstances change, tech evolves, and somebody else takes first place.
-
[email protected] replied to [email protected] last edited by
I made a comment to a beehaw post about something similar, I should make it a post so the .world can see it.
I've been running the 14B distilled model, based on Alibaba's Qwen2 model, but distilled by R1 and given its chain-of-thought ability. You can run it locally with Ollama and download it from their site.
That version has a couple of odd quirks, like the first interaction in a new session seeming much more prone to triggering a generic brush-off response. But in subsequent responses I've noticed very few guardrails.
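For anyone who wants to try the same setup, a minimal sketch of pulling and running the distilled model with the Ollama CLI (assuming the `deepseek-r1:14b` tag, which is how the Ollama library names the Qwen-based 14B distill; check their site for the current tag):

```shell
# Download the 14B distilled model from the Ollama library
ollama pull deepseek-r1:14b

# Start an interactive chat session with it in the terminal
ollama run deepseek-r1:14b
```

The same model can also be queried through Ollama's local HTTP API once the server is running, if you'd rather script your prompts than type them interactively.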
I got it to write a very harsh essay on Tiananmen Square, tell me how to make gunpowder (very generally; the 14B model doesn't appear to have as much data available in some fields, like chemistry), offer very balanced views on Israel and Palestine, and a few other spicy responses.
At one point though I did get a very odd and suspicious message out of it regarding the "Realis" group within China and how the government always treats them very fairly. It misread "Isrealis" and apparently got defensive about something else entirely.
-
[email protected] replied to [email protected] last edited by
Why does ChatGPT, anti-libre software, steal our data and take control of our computing?
Does it answer this question?
-
I can change China for the US in that sentence and it will still make sense.