Proton's very biased article on Deepseek
-
[email protected] replied to [email protected]
The thing is, some people like Proton. Or liked, if this keeps going. When you build a business on trust and you start flailing like a headless chicken, people get wary.
-
[email protected] replied to [email protected]
A blog post telling people to be wary of a Chinese app running an LLM people know very little about is flailing?
-
[email protected] replied to [email protected]
It is open-weight; we don't have access to the training code or the dataset.
That being said, it should be safe for your computer to run DeepSeek's models, since the weights ship as .safetensors files, a format that should block any code execution injected into the model weights.
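For context on why the format matters: pickle-based checkpoint files can run arbitrary code the moment you load them, because unpickling calls whatever a stored object's `__reduce__` returns. A minimal, stdlib-only sketch (a harmless `eval` stands in for a real payload):

```python
import pickle

# Demo of why pickle-based weight files are dangerous to load:
# unpickling invokes whatever callable __reduce__ returns.
class Payload:
    def __reduce__(self):
        # A real attack would return something like (os.system, ("...",));
        # eval is a harmless stand-in that still proves code ran during load.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)   # the embedded expression executes here
print(result)                 # 42
```

.safetensors, by contrast, stores only raw tensor data plus a JSON header, so there is nothing executable for the loader to trigger.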
-
[email protected] replied to [email protected]
A few of my friends who are a lot more knowledgeable about LLMs than myself are having a good look over the next week or so. It'll take some time, but I'm sure they will post their results when they are done (pretty busy times unfortunately).
I'll do my best to remember to come back here with a link or something when I have more info
That said, hopefully someone else is also taking a look and we can get a few different perspectives.
-
[email protected] replied to [email protected]
TBF you almost certainly can't run R1 itself. The full model is way too big and compute-intensive for a typical system. You can only run the distilled versions, which are definitely a bit worse in performance.
Lots of people (if not most people) are using the service hosted by Deepseek themselves, as evidenced by the ranking of Deepseek on both the iOS app store and the Google Play store.
-
[email protected] replied to [email protected]
It might be trivial to a tech-savvy audience, but considering how popular ChatGPT itself is and considering DeepSeek's ranking on the Play and iOS App Stores, I'd honestly guess most people are using DeepSeek's servers. Plus, you'd be surprised how many people naturally trust the service more after hearing that the company open sourced the models. Accordingly I don't think it's unreasonable for Proton to focus on the service rather than the local models here.
I'd also note that people who want the highest-quality responses aren't using a local model, as anything you can run locally is a distilled version that is significantly smaller (at a small but non-trivial overall performance cost).
-
[email protected] replied to [email protected]
Exactly. If a company can be trusted to provide privacy respecting products, they'll come with receipts to prove it. Likewise, if they claim something else respects or doesn't respect privacy, I likewise expect receipts.
They did a pretty good job here, but the article only seems to apply to the publicly accessible service. If you download it and run it through your runner of choice, you're good. A privacy minded individual would probably already not trust new hosted services.
-
[email protected] replied to [email protected]
We're playing with it at work and I honestly don't understand the hype. It's super verbose and would take longer for me to read the output than do the research myself. And it's still often wrong.
-
[email protected] replied to [email protected]
We're running it at work on a Mac mini with 64GB RAM (48GB for the GPU), and while it's a little slow, it works fine.
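In case anyone wonders how the 48GB GPU split works: Apple Silicon GPUs share unified memory, and macOS caps how much of it the GPU may wire. A sketch of raising that cap, assuming a recent macOS where the `iogpu.wired_limit_mb` sysctl key exists (the key name is an assumption; older releases reportedly used `debug.iogpu.wired_limit` instead):

```shell
# Assumed sysctl key on Apple Silicon (macOS 14+); raises the GPU
# wired-memory cap to 48 GB (49152 MB). The setting resets on reboot.
sudo sysctl iogpu.wired_limit_mb=49152

# Check the current value.
sysctl iogpu.wired_limit_mb
```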
-
[email protected] replied to [email protected]
Exactly.
Also, none of the article applies if you run the model yourself, since the main risk is whatever the host does with your data. The model itself is just weights, with no logic of its own.
I would never use a hosted AI service, but I would probably use a self-hosted one. We are trying a few models out at work, and we're hosting them ourselves.
-
[email protected] replied to [email protected]
That will take a few weeks most likely.
That said, there's no way to verify what happens once the data leaves your machine, and the client isn't that interesting. I certainly won't trust any AI hosted by a third party for that reason.
-
[email protected] replied to [email protected]
Now this is something people can be mad at
-
[email protected] replied to [email protected]
DeepSeek is open source (unlike ClosedAI)
-
[email protected] replied to [email protected]
I see this everywhere. They published the weights. That doesn't make it open source.
-
[email protected] replied to [email protected]
Can't it be run standalone without network?
-
[email protected] replied to [email protected]
The same is also true of ChatGPT. On the surface the results are incredibly believable but when you dig into it or try to use some of the generated code it's nonsense.
-
[email protected] replied to [email protected]
Tutamail is a great email provider that takes security very seriously. Switched a few days ago and I'm very happy.
-
[email protected] replied to [email protected]
This focuses mostly on the app, though, which is #1 on the app stores atm.
We know it's censored to comply with Chinese authorities, just not how much. It's probably trained on some fairly heavy propaganda.
-
[email protected] replied to [email protected]
I certainly think it's cool, but the further you stray from the beaten path, the more newly janky it gets. I'm sure there's a good workflow here, it'll just take some time to find it.
-
[email protected] replied to [email protected]
They are absolutely right! Most people don't give a fuck about hosting their own AI; they just download "Deepsneak" and chat... and it is unfortunately even worse than "ClosedAI", cuz they are based in China. That's why I hope DuckDuckGo will host DeepSeek on their servers (as it is very lightweight in resources, yes?), then we will all benefit from it.