Proton's very biased article on Deepseek
-
[email protected] replied to [email protected]
What are the minimum system requirements to run something like deepseek on your own computer in some kind of firewall container?
-
[email protected] replied to [email protected]
Is the chatbot interface that uses the model open source? If you self-host will it try to send data home?
-
[email protected] replied to [email protected]
Yes. The entire thing is open source. That's the thing and why you're here asking questions.
-
[email protected] replied to [email protected]
That's cool, I hope someone writes an article about how it works
-
[email protected] replied to [email protected]
It's Open Source. Don't need an article.
-
[email protected] replied to [email protected]
No, I mean for someone to read the source and explain what they found or didn't find.
-
[email protected] replied to [email protected]
There are plenty of ways, and they are all safe. Don't think of DeepSeek as anything more than a (extremely large, bigger than a AAA title) video game. It does take resources, e.g. disk space, RAM and GPU VRAM (if you have some), but you can use "just" the weights, and thus the executable might come from another project, an open-source one that will not "phone home" (assuming that's your worry).
I detail this kind of thing and more in https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence but to be more pragmatic I'd recommend ollama, which supports https://ollama.com/library/deepseek-r1
So, assuming you have a relatively entry-level computer, you can install ollama, then run
ollama run deepseek-r1:1.5b
and try.
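If you'd rather script it than chat in the terminal, here is a minimal sketch in Python (just an illustration, assuming ollama is installed and running and you've already pulled deepseek-r1:1.5b). The point is that everything talks to ollama's default local endpoint on 127.0.0.1:11434, so nothing has to leave your machine:

# minimal sketch: ask the locally running ollama server a question
# assumes `ollama pull deepseek-r1:1.5b` has already been run
import json
import urllib.request

payload = {
    "model": "deepseek-r1:1.5b",
    "prompt": "In one sentence, what is a large language model?",
    "stream": False,  # ask for a single JSON reply instead of a token stream
}
req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",  # ollama's default local REST endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
-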
[email protected] replied to [email protected]
FWIW I did just try deepseek-r1:1.5b (the smallest model available via ollama today) and ... not bad at all for 1.1 GB!
It's still AI BS generating slop without "thinking" at all ... but from the few tests I ran, it might be one of the "least worst" smaller models I've tried.
-
[email protected] replied to [email protected]
They explicitly said the Republicans were on the side of the little guy. I probably don't need to explain the awful shit that they're doing.
Saying they're "fighting for the little guys" while at the same time shitting on their political opponent is a clear show of support, and a clear show of bias.
-
[email protected] replied to [email protected]
Why do they even have to give their goddamn opinion? Who asked? Why should they care?
-
[email protected] replied to [email protected]
Surely Proton's own AI is without any of these problems... https://proton.me/blog/proton-scribe-writing-assistant
-
[email protected] replied to [email protected]
You could write this exact article about OpenAI too.
-
[email protected] replied to [email protected]
Of course it's biased. One company writing about another company is always biased. Imagine mods of one community collectively writing a post about another community - wouldn't that fact alone be enough? Or admins of one instance writing about another.
It was common sense back when I went online as a kid, writing all manner of awfully stupid things, memories of which still haunt me today.
You'd be friendly and respectful with everyone around you on the same forums and chats, but never ever would you believe them when they told you what to think about something.
We live in a strange time when, instead of applying this simple rule, people look for mechanisms like karma or fact-checking or even market share to allow themselves to uncritically believe things.
-
[email protected] replied to [email protected]
Given that you can download Deepseek, customize it, and run it offline in your own secure environment, it is actually almost irrelevant how people feel about China. None of that data goes back to them.
That's why I find all the "it comes from China, therefore it is a trap" rhetoric to be so annoying, and frankly dangerous for international relations.
Compare this to OpenAI, where your only option is to use the US-hosted version, where it is under the jurisdiction of a president who has no care for privacy protection.
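To make the "run it offline" part concrete, here is a rough Python sketch - not DeepSeek's or Proton's tooling, just one common route via Hugging Face transformers. It assumes torch and transformers are installed and that you've already downloaded the smallest distilled variant, deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B, into the local cache; the offline flags are the point, since with them set the library can't touch the network at all:

# rough sketch of fully offline inference with the open weights
# assumes the model files are already on disk in the local Hugging Face cache
import os
os.environ["HF_HUB_OFFLINE"] = "1"        # set before importing transformers, to be safe
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # belt and braces: no network access at all

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # smallest distilled variant
tokenizer = AutoTokenizer.from_pretrained(model_id, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(model_id, local_files_only=True)

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))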
-
[email protected] replied to [email protected]
This is true. However, Proton's big sell is that they can be trusted to be truthful about what is safe and what is not safe for your privacy.
I think given the context of the CEO's personal bias towards current US Republicans, and given that those Republicans are aggressively anti-China, when Proton releases an article warning of a successful Chinese AI, and seemingly purposefully leaves out the part about how people are already running it securely, it starts raising some important questions about their alignment.
-
[email protected] replied to [email protected]
Proton’s big sell is that they can be trusted to be truthful about what is safe and what is not safe for your privacy.
Which is something somebody who can actually be trusted wouldn't ever do.
Businesses sell goods, services, deals, not truth.
And privacy is not about trust.
-
[email protected] replied to [email protected]
The thing is, some people like Proton. Or liked, if this keeps going. When you build a business on trust and you start flailing like a headless chicken, people get wary.
-
[email protected] replied to [email protected]
A blog post telling people to be wary of a Chinese app running an LLM people know very little about is flailing?
-
[email protected] replied to [email protected]
It is open-weight; we don't have access to the training code or the dataset.
That being said, it should be safe for your computer to run DeepSeek's models, since the weights are distributed as .safetensors files, which should block any code execution from code injected into the model weights.
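To illustrate, a minimal sketch (assuming the safetensors and torch packages are installed; model.safetensors is just a placeholder filename): loading a .safetensors file only parses a small JSON header plus raw tensor bytes, while the older pickle-based checkpoints could execute arbitrary code when loaded.

# sketch: .safetensors is pure data (JSON header + raw tensors), nothing gets executed
from safetensors.torch import load_file

state_dict = load_file("model.safetensors")  # placeholder filename; returns name -> tensor
print(list(state_dict.keys())[:5])           # just tensor names, no code anywhere

# contrast: a classic pickled .bin/.pt checkpoint can run arbitrary code when loaded,
# which is why torch.load now pushes the weights_only=True option
# import torch
# state_dict = torch.load("pytorch_model.bin", weights_only=True)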
-
[email protected] replied to [email protected]
A few of my friends who are a lot more knowledgeable about LLMs than I am are having a good look over the next week or so. It'll take some time, but I'm sure they will post their results when they are done (pretty busy times unfortunately).
I'll do my best to remember to come back here with a link or something when I have more info.
That said, hopefully someone else is also taking a look and we can get a few different perspectives.