DeepSeek Proves It: Open Source is the Secret to Dominating Tech Markets (and Wall Street has it wrong).
-
-
-
I think it's both. OpenAI was valued as highly as it was because of a perceived moat of training costs. The cheapness killed that myth, but open-sourcing it was the coup de grâce, since they couldn't use the courts to put the genie back in the bottle.
-
True, but I believe we'll see a continuation of the existing trend of building on and improving existing models, rather than always starting entirely from scratch. For instance, nearly every newly released model reports the performance of its Llama-based variant, because combining the new technique with Llama's existing quality just produces better results.
I think we'll see the same trend now, just with R1 variants instead of Llama variants as the primary base. It's fundamentally inefficient to start over from scratch every time, so it makes sense that newer iterations are built directly on previous ones.
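A minimal sketch of what "building on an existing model" usually looks like in practice, using Hugging Face transformers with a LoRA adapter via peft (the model ID and hyperparameters are illustrative placeholders, not anyone's actual recipe):

```python
# Sketch: fine-tune an existing open-weights model with a LoRA adapter
# instead of training from scratch. Model ID and hyperparameters are
# placeholders for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"  # any released checkpoint works the same way
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach small trainable adapter matrices; the base weights stay frozen,
# so you inherit the base model's quality for a fraction of the compute.
config = LoraConfig(r=8, lora_alpha=16,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

The design choice is the whole point: because the base weights are frozen, a downstream team gets all of the base model's quality while training only a tiny fraction of the parameters, which is exactly why starting from an existing model beats starting over.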
-
The model weights and research paper are open source.
I think you're conflating "open source" with "free"
What does it even mean for a research paper to be open source? That they release a docx instead of a pdf, so people can modify the formatting? Lol
The model weights were released for free, but you don't have access to their source, so you can't recreate them yourself. Microsoft Paint isn't open source just because the machine instructions ship for free. Model weights are the AI equivalent of an .exe file. To extend that analogy, quants, LoRAs, etc. are like community-made mods.
To be open source, they would have to release the training data and the code used to train it. They won't do that because they don't want competition. They just want to do the Facebook Llama thing, where they hope someone uses it to build the next big thing, so that Facebook can copy them and destroy them with a much better unreleased model, force them to sell, or kill them with the license.
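To make the .exe analogy concrete, here's roughly what "open weights" buys you today (the repo ID is DeepSeek's real one, but this is just an illustration; actually running the full model takes serious hardware):

```python
# What "open weights" gets you: you can download and run the artifact,
# much like running an .exe. The training data and training code behind
# it were never published, so nothing here lets you rebuild the weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1"  # released weights only
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
# Inference works; reproducing the weights from scratch does not, because
# the "source" (data + training pipeline) is missing.
```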
-
There's so much misinfo spreading about this, and while I don't blame you for buying it, I do blame you for spreading it. "It sounds legit" is not how you should decide to trust what you read. Many people think the earth is flat because the conspiracy theories sound legit to them.
DeepSeek probably did lie about a lot of things, but their results are not disputed. R1 is competitive with leading models, it's smaller, and it's cheaper. The good results are definitely not from "sheer chip volume and energy used", and American AI companies could have saved a lot of money if they had used those same techniques.
-
Ah, cool, a new account to block.
-
-
-
Governments and corporations still use the same playbooks because they're still oversaturated with Boomers who haven't learned a lick since 1987.
-
-
Not exactly sure what "dominating" a market means, but the title makes a good point: innovation requires much more cooperation than competition. And the "AI race" between nations is an antiquated framing pushed by the media.
-
The way I view it: the source code of the model is the training data. The code they supplied is a bespoke compiler for it, which emits a binary blob (the weights). A compiler is written in code too, just like any other program. So what they released is the equivalent of the compiler's source code, plus the binary blob it emitted when fed the training data (the source code), which they did NOT release.
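A toy version of that analogy, scaled down to a few lines (everything here is made up for illustration, obviously not DeepSeek's actual pipeline):

```python
# Toy version of the analogy: the training script is the "compiler",
# the dataset is the "source code", and the saved weights are the
# emitted "binary". All names are made up for illustration.
import torch
import torch.nn as nn

def compile_model(dataset):              # the "compiler" (training code)
    model = nn.Linear(1, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for x, y in dataset:                 # the "source code" (training data)
        opt.zero_grad()
        loss = (model(x) - y).pow(2).mean()
        loss.backward()
        opt.step()
    return model

data = [(torch.tensor([[1.0]]), torch.tensor([[2.0]]))] * 100
model = compile_model(data)
torch.save(model.state_dict(), "weights.pt")  # the "binary blob" that ships
# Shipping only weights.pt is like shipping a compiled binary: anyone can
# run it, but nobody can rebuild it without the data and the compiler.
```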
-
Yeah. Steam and I are getting older. Would be nice to adjust simple things like text size in the tool.
-
Didn't it turn out that they actually used 10,000 Nvidia cards with the 100-series chips, and the "low-cost success" story is a lie?
-
There's actually a typo: I wrote "relies or bullshit" instead of "relies on bullshit".
-
-
I would say that, compared to the standards of top ML conferences, the paper is relatively light on details. But nonetheless, some folks have been able to reimplement portions of their techniques.
ML in general has a reproducibility crisis. Lots of papers are extremely hard to reproduce, even when they're open source, since the optimization process is partly random (ordering of batches, augmentations, nondeterminism in GPU kernels, etc.), and unfortunately, even with seeding, the randomness is not guaranteed to be consistent across platforms.
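For what it's worth, here's a sketch of the usual best-effort determinism knobs in PyTorch; even with all of them set, bit-identical runs across different GPUs or library versions are not guaranteed, which is exactly the problem described above:

```python
# Best-effort determinism in PyTorch (a sketch, not a guarantee): even
# with all of these, results can still differ across GPU models,
# driver versions, or library versions.
import os
import random
import numpy as np
import torch

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # needed by some CUDA matmuls

SEED = 42
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)  # also seeds the CUDA RNGs

# Prefer deterministic kernels; ops without one will raise an error
# instead of silently producing run-to-run differences.
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False  # autotuning can pick different kernels per run
```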
-
-
You’re right. That’s it!