DeepSeek AI raises national security concerns, U.S. officials say
-
[email protected] replied to [email protected] last edited by
Pretty sure they are talking about the app, not the model.
-
[email protected] replied to [email protected] last edited by
DeepSeek used distillation, which is a way of extracting training information from another model by querying it. In other words, some of the advances came from examining the outputs of OpenAI's models. Being first is hardest and took brute force.
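For anyone unfamiliar with the term: the basic idea is to query a "teacher" model and use its softened output distributions as training targets for a smaller "student". A toy sketch of that data-collection step (all names here are hypothetical illustrations, not DeepSeek's actual pipeline):

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw scores to probabilities; higher temperature
    # "softens" the distribution, which is what distillation trains on.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def teacher_logits(x):
    # Stand-in for the teacher model's raw output on input x
    # (a real teacher would be a full LLM queried over an API).
    return [2.0 * x, 1.0, -x]

def collect_distillation_data(inputs, temperature=2.0):
    # Query the teacher on each input and record its softened
    # distribution as the student's training target.
    return [(x, softmax(teacher_logits(x), temperature)) for x in inputs]

data = collect_distillation_data([0.5, 1.0, 1.5])
for x, target in data:
    assert abs(sum(target) - 1.0) < 1e-9  # each target is a valid distribution
```

The student is then trained to match those soft targets, which carry more information per example than hard labels alone, which is why distillation can be much cheaper than training from scratch.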
-
[email protected] replied to [email protected] last edited by
Like TikTok. The national security threat is actually just fear of profit loss.
-
[email protected] replied to [email protected] last edited by
It's open source, so if it's such a worry, why not just take the best parts of it and run it themselves instead of relying on their app and website?
-
[email protected] replied to [email protected] last edited by
Deepseek will happily tell you anything you want about Tiananmen Square from any perspective you ask it with a little creative prompting.
Show me. You can get it to say some slightly vague things about it, but you can’t have it say “anything from any perspective”. Can’t we just leave out a conspiracy theory while discussing AI?
-
[email protected] replied to [email protected] last edited by
Hmm, what are the previous examples?
-
[email protected] replied to [email protected] last edited by
I only commented because you said something stupid about Tiananmen Square that chapped my hide. The rest of it is fine. I'm only responding again because you doubled down. The CIA version? I was alive at the time and followed the news, including live TV reporting, for days.
Say what you want about the politics of US News channels at that time, they weren't all in lock step with the CIA. I watched as Zhao Ziyang visited with the hunger-striking students, and I watched as the tanks rolled in. Don't try to revise history because you need the US to be the #1 bad guy. China had a chance to reform, and they cracked down instead.
-
[email protected] replied to [email protected] last edited by
MVP in Technology. OpenAI just sat around throwing salt to the wind piling up "value" until they can convince people it is worth some obscene amount of money to sell out. Once you give someone a literal milestone and show them the path, boom.
This really really feels like a real life Tortoise and the Hare story. Like real hard, and I don't feel the least bit bad for the hare.
-
[email protected] replied to [email protected] last edited by
Like I said, it just takes creative prompting
-
[email protected] replied to [email protected] last edited by
Electric cars are a recent example
-
This is a decent read concerning the effectiveness of the systems designed to prevent abuse.
-
[email protected] replied to [email protected] last edited by
Hmm, I was talking about local running, but apparently that's just the non-distilled version, and someone got a very straightforward explanation with the 14B distilled version.
-
[email protected] replied to [email protected] last edited by
"Internally"
This guy really loves his oligarchs and their government haha
-
[email protected] replied to [email protected] last edited by
They're pissy cause it being open source and more efficient means that it's gonna be more cost effective for people to use. Which is real bad if your company overcommitted to the slop and needs to recover losses.
-
[email protected] replied to [email protected] last edited by
It sets a bad precedent that the owner class doesn't like.
Either way, the US government will need to either nationalize or break up these megacorps. This can't go on.
-
[email protected] replied to [email protected] last edited by
Then you should've specified that those were the parameters you wanted. Answers and thought processes will vary based on the prompt provided.
My point is that you can still use creative prompting to get answers you want that should be blocked due to its safety constraints. My point isn't that there's no guidelines to work around.
I'm not an AI researcher nor do I work professionally with AI so I'm not familiar with 100% of the background processes involved with these LLMs but if the question is "can you get Deepseek to talk about Tiananmen Square" then the answer is yes.
-
[email protected] replied to [email protected] last edited by
I don't understand why you're getting downvoted. Labor laws in China are shit. A ton of people there work way more than 40 hours a week for less money than US Americans get, live on company "campuses", and have suicide nets.
-
[email protected] replied to [email protected] last edited by
Many people online have been radicalised into thinking they have to be 100% for side A or for side B.
When you put any criticism towards A or B, the supporters go absolutely wild. They will deny any problems with the side they've chosen.
-
[email protected] replied to [email protected] last edited by
This.
I can easily see the national security argument for people sending queries to CCP-controlled servers (unfortunately people put all kinds of sensitive information into prompts).
Whether people like it or not, that is potentially risky. I don't know if China has blocked OpenAI-hosted stuff, but I wouldn't be surprised if they have for similar reasons.
But attempting any ban on the model itself, even when run locally, would be conclusive evidence that they're doing it just to harm a competitor.
-
[email protected] replied to [email protected] last edited by
"You did something cheaper and quicker, and it's more efficient? It must be bad." - the US