DeepSeek Proves It: Open Source is the Secret to Dominating Tech Markets (and Wall Street has it wrong).
-
-
-
the accepted terminology
No, it isn't. The OSI specifically requires that the training data be available, or at the very least that the source and fee for the data be given, so that a user could obtain the same copy themselves. Because that's the whole point of something being "open source". Open source doesn't just mean free to download and use.
https://opensource.org/ai/open-source-ai-definition
Data Information: Sufficiently detailed information about the data used to train the system so that a skilled person can build a substantially equivalent system. Data Information shall be made available under OSI-approved terms.
In particular, this must include: (1) the complete description of all data used for training, including (if used) of unshareable data, disclosing the provenance of the data, its scope and characteristics, how the data was obtained and selected, the labeling procedures, and data processing and filtering methodologies; (2) a listing of all publicly available training data and where to obtain it; and (3) a listing of all training data obtainable from third parties and where to obtain it, including for fee.
As per their paper, DeepSeek R1 required a very specific training data set, because when they tried the same technique with less curated data they got R1-Zero, which basically ran fast and spat out a gibberish salad of English, Chinese, and Python.
People are calling DeepSeek open source purely because DeepSeek called itself open source, but it seems to be just another free-to-download, black-box model. The best comparison is Meta's Llama, which, weirdly, nobody has decided is going to upend the tech industry.
In reality "open source" is a terrible terminology for what is a very loose fit when basically trying to say that anyone could recreate or modify the model because they have the exact 'recipe'.
-
Well, maybe. Apparently some folks are already doing that, but it's not done yet. Let's wait for the results. If everything is legit, we should have not one but plenty of similar and better models in the near future. If the Chinese did this with 100 chips, imagine what can be done with the 100,000 chips Nvidia can sell to a US company.
-
-
-
the accepted terminology nowadays
Let's just redefine existing concepts to mean things that are more palatable to corporate control, why don't we?
If you don't have the ability to build it yourself, it's not open source. DeepSeek is "freeware" at best. And that's to say nothing of what the data is, where it comes from, and the legal ramifications of using it.
-
Snowden really proved he wasn't a Russian spy when he *checks notes* immediately fled to Russia with troves of American secrets...
-
-
-
I think it's both. OpenAI was valued as highly as it was because of a perceived moat of training costs. The cheapness killed the myth, but open-sourcing it was the coup de grâce, since they couldn't use the courts to put the genie back in the bottle.
-
True, but I believe we'll probably see a continuation of the existing trend of building on and improving existing models, rather than always starting entirely from scratch. For instance, nearly every newly released model's announcement discusses the performance of its Llama-based variant, because combining the new technique with Llama's existing quality just produces better results.
I think we'll see a similar trend now, just with R1 variants instead of Llama variants as the primary starting point. It's fundamentally inefficient to start over from scratch every time, so it makes sense that newer iterations get built directly on previous ones.
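As a rough illustration of what "building on an existing model" looks like in practice, here's a minimal LoRA fine-tuning sketch using Hugging Face's transformers and peft libraries; the base model id and hyperparameters are placeholders for the sake of example, not anything claimed in this thread:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Start from someone else's released weights instead of pretraining from scratch.
# (Placeholder model id; any causal LM on the Hub works the same way.)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach small trainable adapter matrices to the attention projections;
# the billions of base parameters stay frozen.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
)
model = get_peft_model(base, config)

model.print_trainable_parameters()  # typically well under 1% of the base model
# From here you'd fine-tune on your own data, at a tiny fraction of the cost
# of training a new model from zero.
```

That cost asymmetry is exactly why R1 variants, like Llama variants before them, are likely to become the default starting point.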
-
The model weights and research paper are
I think you're conflating "open source" with "free"
What does it even mean for a research paper to be open source? That they release a docx instead of a pdf, so people can modify the formatting? Lol
The model weights were released for free, but you don't have access to their source, so you can't recreate them yourself. Microsoft Paint isn't open source just because the machine instructions ship for free. Model weights are the AI equivalent of an .exe file. To extend that analogy, quants, LoRAs, etc. are like community-made mods.
To be open source, they would have to release the training data and the code used to train on it. They won't do that because they don't want competition. They just want to do the Facebook Llama thing: hope someone uses it to build the next big thing, so that Facebook can copy them and destroy them with a much better model it didn't release, force them to sell, or kill them with the license.
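To make the .exe analogy concrete, here's a minimal sketch (assuming one of the distilled R1 checkpoints DeepSeek published on Hugging Face; the model id is just an example): you can run the released weights, but nothing they shipped lets you rebuild them.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The released weights: the AI equivalent of a compiled binary.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Running inference works fine: this is "running the exe".
inputs = tok("What does open source actually require?", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0]))

# What you cannot do is reproduce the weights, because the "source" was
# never published: no training corpus, no data pipeline, no training code.
# rebuild(training_data=???, training_code=???)  # hypothetical; not possible
# from anything in the release
```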
-
There's so much misinfo spreading about this, and while I don't blame you for buying it, I do blame you for spreading it. "It sounds legit" is not how you should decide to trust what you read. Many people think the earth is flat because the conspiracy theories sound legit to them.
DeepSeek probably did lie about a lot of things, but their results are not disputed. R1 is competitive with leading models, it's smaller, and it's cheaper. The good results are definitely not from "sheer chip volume and energy used", and American AI companies could have saved a lot of money if they had used those same techniques.
-
Ah, cool, a new account to block.
-
-
-
Governments and corporations still use the same playbooks because they're still oversaturated with Boomers who haven't learned a lick since 1987.
-
-
Not exactly sure what "dominating" a market means, but the title makes a good point: innovation requires much more cooperation than competition. And the 'AI race' between nations is an antiquated framing pushed by the media.