It still relies on Nvidia hardware, so why would it trigger a sell-off?
-
[email protected] replied to [email protected] last edited by
That's becoming less true. The cost of inference has been rising with bigger models, and even more so with "reasoning models".
Regardless, at the scale of 100M users, big one-off costs start looking small.
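The amortization point above is easy to see with back-of-envelope numbers. The figures below are purely illustrative assumptions, not costs from the thread:

```python
# Toy amortization: a big one-off training bill spread over a large user base.
# Both numbers are assumptions picked for illustration.
training_cost_usd = 100_000_000   # assumed one-off training cost
users = 100_000_000               # the "100M users" scale mentioned above

cost_per_user = training_cost_usd / users
print(cost_per_user)  # 1.0 — about a dollar per user
```

At that scale, even a nine-figure training run looks like pocket change per user.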
-
[email protected] replied to [email protected] last edited by
But I’d imagine any Chinese operator will handle scale much better? Or?
-
[email protected] replied to [email protected] last edited by
Put that statement in front of less tech-savvy investors, and it could spook them into a sell-off.
-
[email protected] replied to [email protected] last edited by
True, but training is a one-off cost. And, as you say, this new model cuts costs by a factor of 100. By that logic, Nvidia just saw 99% of its expected future demand for AI chips evaporate.
It might also lead to 100x more power to train new models.
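The "99% evaporated" figure is just the naive arithmetic, holding the number of models trained constant. A quick sketch (the 100x factor is the claim from the thread, not a verified number):

```python
# If training a comparable model needs 100x less compute, and the same
# number of models get trained, only 1% of the previously expected chip
# demand remains. Illustrative arithmetic only.
efficiency_gain = 100                 # claimed cost-reduction factor
remaining_demand = 1 / efficiency_gain
evaporated = 1 - remaining_demand
print(f"{evaporated:.0%}")            # 99%
```

The counterpoint in the comment above is essentially the Jevons effect: if training gets 100x cheaper, people may simply train 100x more, leaving total chip demand unchanged or higher.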
-
[email protected] replied to [email protected] last edited by
If, on a modern gaming PC, you can get "breakneck" speeds of 5 tokens per second, then inference is actually quite energy-intensive too. Five per second of anything is very slow.
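A rough energy-per-token estimate makes the point concrete. The wattage below is an assumed whole-system draw for a gaming PC under GPU load, not a measurement:

```python
# Rough energy cost of local inference at the throughput quoted above.
system_power_w = 400       # assumed system draw under load (illustrative)
tokens_per_second = 5      # throughput quoted in the comment

joules_per_token = system_power_w / tokens_per_second
print(joules_per_token)    # 80.0 J per token

# A ~200-token reply would then cost about 16 kJ (~4.4 Wh).
reply_energy_wh = 200 * joules_per_token / 3600
```

Per reply that's tiny, but multiplied across millions of users and long chain-of-thought outputs, it adds up.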
-
[email protected] replied to [email protected] last edited by
I can also run it on my old Pentium from three decades ago. I'd have to swap 4 MiB of weights in and out constantly, and it would be very, very slow, but it would work.
-
[email protected] replied to [email protected] last edited by
I doubt that will be the case, and I'll explain why.
As mentioned in this article,
SFT (supervised fine-tuning), a standard step in AI development, involves training models on curated datasets to teach step-by-step reasoning, often referred to as chain-of-thought (CoT). It is considered essential for improving reasoning capabilities. DeepSeek challenged this assumption by skipping SFT entirely, opting instead to rely on reinforcement learning (RL) to train the model.
This bold move forced DeepSeek-R1 to develop independent reasoning abilities, avoiding the brittleness often introduced by prescriptive datasets. This totally changes the way we think about AI training: while OpenAI spent $100M training GPT-4 on an estimated 500,000 GPUs, DeepSeek used about 50,000, and likely spent roughly 10% of the cost.
So while operation, and even training, is now cheaper, it's also substantially less compute intensive to train models.
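To give a feel for the RL-only idea described above: instead of imitating curated step-by-step solutions (SFT), the model only receives a reward for verifiably correct final answers and shifts its policy toward whatever earns reward. The sketch below is a deliberately tiny toy with a three-answer "policy" and a REINFORCE-style update; it is an illustration of the general technique, not DeepSeek's actual method:

```python
# Toy RL-with-verifiable-reward: no worked examples to imitate, only a
# reward of 1 for a correct final answer. Everything here is illustrative.
import math
import random

random.seed(0)

answers = ["4", "5", "22"]   # canned candidate outputs for "2 + 2 = ?"
logits = [0.0, 0.0, 0.0]     # the "policy" parameters
lr = 0.5

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

for step in range(200):
    probs = softmax(logits)
    i = random.choices(range(3), weights=probs)[0]  # sample an answer
    reward = 1.0 if answers[i] == "4" else 0.0      # verifiable reward
    # REINFORCE-style update: push probability toward rewarded choices
    for j in range(3):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * reward * grad

best = answers[max(range(3), key=lambda j: logits[j])]
print(best)  # "4" — the policy converges on the rewarded answer
```

No dataset of reasoning traces is needed; the correct behaviour is discovered by trial and error against the reward signal, which is the core of the "skip SFT, use RL" claim.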
And not only is there less data than ever to train models on without making them worse by regurgitating lower-quality AI-generated content, but even if additional datasets were scrapped entirely in favor of this new RL method, there's a point at which an LLM is simply good enough.
If you need to auto generate a corpo-speak email, you can already do that without many issues. Reformat notes or user input? Already possible. Classify tickets by type? Done. Write a silly poem? That's been possible since pre-ChatGPT. Summarize a webpage? The newest version of ChatGPT will probably do just as well as the last at that.
At a certain point, spending millions of dollars for a 1% performance improvement doesn't make sense when the existing model already does what you need it to do.
I'm sure we'll see development, but I doubt we'll see a massive increase in training just because the cost to run and train the model has gone down.
-
[email protected] replied to [email protected] last edited by
Maybe? It depends on which costs dominate operations. I imagine Chinese electricity is cheap, and building new data centres there is likely much cheaper, percentage-wise, than in countries like the US.
-
[email protected] replied to [email protected] last edited by
Thank you. Sounds like good news.