DeepSeek AI raises national security concerns, U.S. officials say
-
Most corporations will hand over the data without question whenever the government asks, warrant or not.
-
I'm not here to discuss the validity of Tiananmen Square; that was just the example I keep seeing used.
Why does it matter if one source doesn't provide the official CIA story? You can look up how America views that event anywhere.
How is that censorship any worse than US tech companies blocking you from being able to say the word "Republican" in a negative context?
Also, you left out the most important part: "without a little effort." DeepSeek will happily tell you anything you want about Tiananmen Square, from any perspective you ask for, with a little creative prompting.
-
Pretty sure they are talking about the app, not the model.
-
DeepSeek used distillation, which is a way of extracting training signal from another model by querying it. In other words, some of the advances came from examining OpenAI's models. Being first is the hardest part and took brute force.
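For anyone curious what "querying the model" looks like in practice, here's a minimal sketch of the data-collection half of distillation: ask a teacher model questions, save the prompt/response pairs, and later fine-tune a smaller student on them. The teacher model name, prompts, and file path here are placeholders for illustration, not anything DeepSeek has confirmed using.

```python
# Minimal sketch of distillation-style data collection (illustrative only).
# Assumes an OpenAI-compatible endpoint and the official `openai` Python client;
# the model name and prompts are placeholders, not DeepSeek's actual pipeline.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain why the sky is blue, step by step.",
    "Write a Python function that reverses a linked list.",
]

with open("distill_data.jsonl", "w", encoding="utf-8") as f:
    for prompt in prompts:
        # Query the "teacher" model and capture its full answer.
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder teacher model
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        # Each line becomes one supervised fine-tuning example for the student.
        f.write(json.dumps({"prompt": prompt, "response": answer}) + "\n")

# The resulting JSONL is then used as ordinary supervised fine-tuning data
# for a smaller "student" model, which is the training step distillation adds.
```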
-
Like TikTok. The national security threat is actually just fear of profit loss.
-
-
-
-
I only commented because you said something stupid about Tiananmen Square that chapped my hide. The rest of it is fine. I'm only responding again because you doubled down. The CIA version? I was alive at the time and followed the news, including live TV reporting, for days.
Say what you want about the politics of US News channels at that time, they weren't all in lock step with the CIA. I watched as Zhao Ziyang visited with the hunger-striking students, and I watched as the tanks rolled in. Don't try to revise history because you need the US to be the #1 bad guy. China had a chance to reform, and they cracked down instead.
-
MVP in Technology. OpenAI just sat around throwing salt to the wind, piling up "value" until they could convince people it was worth some obscene amount of money to sell out. Once you give someone a literal milestone and show them the path, boom.
This really, really feels like a real-life Tortoise and the Hare story. Like, real hard, and I don't feel the least bit bad for the hare.
-
Like I said, it just takes creative prompting
-
Electric cars are a recent example
-
This is a decent read concerning the effectiveness of the systems designed to prevent abuse.
-
Hmm, I was talking about running it locally, but apparently that was just the non-distilled version, and someone got a very straightforward explanation out of the 14B distilled version.
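For reference, this is roughly what running one of the distilled checkpoints locally looks like with Hugging Face transformers. Treat it as a sketch, not a benchmarked setup: the model id is the one I'd assume for the 14B distill, and the weights still want a decent GPU.

```python
# Rough sketch of running a distilled DeepSeek checkpoint locally via transformers.
# Model id assumed to be the published 14B distill; adjust dtype/device for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick bf16/fp16 automatically where supported
    device_map="auto",    # spread layers across available GPUs/CPU
)

messages = [{"role": "user", "content": "What happened at Tiananmen Square in 1989?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```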
-
"Internally"
This guy really loves his oligarchs and their government haha
-
They're pissy because it being open source and more efficient means it's gonna be more cost-effective for people to use. Which is real bad if your company overcommitted to the slop and needs to recover losses.
-
It sets a bad precedent that the owner class doesn't like.
Either way, the US government will need to either nationalize or break up these megacorps. This can't go on.
-
Then you should've specified that those were the parameters you wanted. Answers and thought processes will vary based on the prompt provided.
My point is that you can still use creative prompting to get answers that should be blocked by its safety constraints, not that there are no guardrails to work around.
I'm not an AI researcher, nor do I work professionally with AI, so I'm not familiar with 100% of the background processes involved in these LLMs, but if the question is "can you get DeepSeek to talk about Tiananmen Square," then the answer is yes.
-
I don't understand why you're getting downvoted. Labor laws in China are shit. A ton of people there work way more than 40 hours a week for less money than US Americans get, live on company "campuses," and those campuses have suicide nets.
-
Many people online have been radicalised into thinking they have to be 100% for side A or side B.
When you level any criticism at A or B, the supporters go absolutely wild. They will deny any problems with the side they've chosen.