Cancelling new data centers because DeepSeek has shown a more efficient path isn't proof that AI is dead, as the author claims.
-
If a new driver came out that gave Nvidia 5090 performance to games on GTX 1080-equivalent hardware, would you still buy a new video card this year?
It doesn't make any sense to compare games and AI. Games have a well-defined upper bound for performance. Even Crysis has "maximum settings" that you can't go above. Supposedly, this doesn't hold true for AI: scaling it should continually improve it.
So: yes, in your analogy, MS would still buy a new video card this year if they believed that kind of progress was possible and reasonably likely.
Just as games have diminishing returns on better graphics (it's already photorealistic; few people pay $2k on a GPU for a few more hairs), AI has a plateau where it gives good-enough answers that people will pay for the service.
If people are paying you money and the next level of performance isn't appreciated by the general consumer, why spend billions that will take longer to recoup?
And again, data centers aren't just used for AI.
-
Fiber buildouts were cancelled back in 2000 because multimode made existing fiber more efficient.
Sorry, but that makes no sense in multiple ways.
-
First of all, single-mode fiber provides orders of magnitude higher capacity than multi-mode.
-
Secondly, the modal patterns depend on the physics of the cable, specifically its core diameter: single-mode fiber has a 9-micrometer core, multi-mode 50 or 62.5 micrometers. So you can't change the light modes on existing fiber (see the rough calculation at the end of this comment).
-
Thirdly, multi-mode fiber existed first, so it couldn't be the improvement. Single-mode fiber was already becoming the way forward for long-distance transmission in 1982, and the first transatlantic cable using it was laid in 1988, so it couldn't be the improvement of 2000 either.
You must mean something else entirely.
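For the curious, here's a back-of-the-envelope check of the core-diameter point, using the standard step-index V-number formula. The core sizes are the ones quoted above; the numerical apertures and wavelengths are typical datasheet values I'm assuming for illustration, not figures from this thread.

    # Normalized frequency ("V-number") of a step-index fiber:
    #   V = (2 * pi * a / wavelength) * NA
    # where a is the core radius and NA the numerical aperture.
    # V < 2.405 -> only one guided mode; otherwise roughly V^2 / 2 modes.
    import math

    def v_number(core_diameter_um, wavelength_um, na):
        a = core_diameter_um / 2
        return 2 * math.pi * a / wavelength_um * na

    def mode_count(v):
        return 1 if v < 2.405 else round(v ** 2 / 2)

    # 9 um core, assumed NA ~0.12, operated at 1550 nm.
    v_sm = v_number(9, 1.55, 0.12)
    # 50 um core, assumed NA ~0.2, operated at 850 nm.
    v_mm = v_number(50, 0.85, 0.2)

    print(f"9 um core:  V = {v_sm:.2f}, ~{mode_count(v_sm)} guided mode(s)")
    print(f"50 um core: V = {v_mm:.2f}, ~{mode_count(v_mm)} guided modes")

The mode structure falls straight out of the glass geometry, which is why you can't software-update a multi-mode cable into a single-mode one.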
Sorry, I meant WDM. Multi-mode was for home installs.
-
Just as games have diminishing returns on better graphics (it's already photorealistic; few people pay $2k on a GPU for a few more hairs), AI has a plateau where it gives good-enough answers that people will pay for the service.
If people are paying you money and the next level of performance isn't appreciated by the general consumer, why spend billions that will take longer to recoup?
And again, data centers aren't just used for AI.
It's still not a valid comparison. We're not talking about diminishing returns; we're talking about an actual ceiling. There are only so many options implemented in games - once they're maxed out, you can't go higher.
That's not the situation we have with AI; it's supposed to scale indefinitely.
-
It's still not a valid comparison. We're not talking about diminishing returns; we're talking about an actual ceiling. There are only so many options implemented in games - once they're maxed out, you can't go higher.
That's not the situation we have with AI; it's supposed to scale indefinitely.
Current games have a limit. Current models have a limit. New games could scale until people don't see a quality improvement. New models can scale until people don't see a quality improvement.
-
Yeah, genAI as a technology and field of study may not disappear. GenAI as an overinflated product marketed as the be-all and end-all that would solve all of humanity's problems may. The bubble can't burst soon enough.
Historically, the field of AI research has gone through boom-and-bust cycles. The first boom came during the Vietnam War, with DARPA dumping money into it. As opposition to the war grew, DARPA funding dried up, and the field went into hibernation with only minor advances for decades. Then the tech-giant monopolies saw an opportunity for a new bubble.
It'd be nice if it could be funded at a more steady, sustainable level, but apparently capitalism can't do that.
-
Current games have a limit. Current models have a limit. New games could scale until people don't see a quality improvement. New models can scale until people don't see a quality improvement.
Games always have a limit; AI is supposed to get better with scale. Which part do you not understand?
I'm supposed to be able to take a model from today and scale it up 100x and get an improvement. I can't make the settings in Crysis 100x higher than they can go.
-
Games always have a limit; AI is supposed to get better with scale. Which part do you not understand?
I'm supposed to be able to take a model from today and scale it up 100x and get an improvement. I can't make the settings in Crysis 100x higher than they can go.
I'm supposed to be able to take a model architecture from today, scale it up 100x and get an improvement.
You can make Crysis run at higher fps. You can add polygons (remember ATI's clown feet?). You can add detail to textures: https://research.nvidia.com/publication/2016-06_infinite-resolution-textures
But really the "game" is the model. Throwing more hardware at the same model is like throwing more hardware at the same game.
Which part of diminishing returns not offering as much profit did you not understand?
Current models give MS an extra 30% revenue. If they spend billions on a new model, will customers pay even more? How much more would you pay for a marginally better AI?
-
I'm supposed to be able to take a model architecture from today, scale it up 100x and get an improvement.
You can make Crysis run at higher fps. You can add polygons (remember ATI's clown feet?). You can add detail to textures: https://research.nvidia.com/publication/2016-06_infinite-resolution-textures
But really the "game" is the model. Throwing more hardware at the same model is like throwing more hardware at the same game.
Which part of diminishing returns not offering as much profit did you not understand?
Current models give MS an extra 30% revenue. If they spend billions on a new model, will customers pay even more? How much more would you pay for a marginally better AI?
But really the "game" is the model. Throwing more hardware at the same model is like throwing more hardware at the same game.
No, it's not! AI models are supposed to scale. When you throw more hardware at them, they are supposed to develop new abilities. A game doesn't get a new level because you're increasing the resolution.
At this point, you either have a fundamental misunderstanding of AI models, or you're trolling.
-
But really the "game" is the model. Throwing more hardware at the same model is like throwing more hardware at the same game.
No, it's not! AI models are supposed to scale. When you throw more hardware at them, they are supposed to develop new abilities. A game doesn't get a new level because you're increasing the resolution.
At this point, you either have a fundamental misunderstanding of AI models, or you're trolling.
When you throw more hardware at them, they are supposed to develop new abilities.
I don't think you understand how it works at all.
Data is collected. Training is done on the data. Training is done on the trained data (DeepSeek). You now have a model. That model is a static software program. It requires 700 GB of RAM to run (DeepSeek). Throwing more hardware at the model does nothing but give you a quicker response.
If everyone pays you to use your model, you have no reason to develop a new one. Like Skyrim.
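To make the "static program" point concrete, here's a toy sketch: a made-up tiny model, nothing to do with any real LLM. Training produces one frozen set of weights; spreading inference across more devices only changes how quickly the same answer comes back.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Training": fit the weights once, then freeze them (the checkpoint).
    X = rng.normal(size=(1000, 16))
    y = X @ rng.normal(size=16) + 0.1 * rng.normal(size=1000)
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)

    def run_inference(x, n_devices=1):
        # More devices just shard the batch; the frozen weights (and thus
        # the answers) are identical, you only get them back sooner.
        chunks = np.array_split(x, n_devices)
        return np.concatenate([c @ weights for c in chunks])

    query = rng.normal(size=(8, 16))
    print(np.allclose(run_inference(query, n_devices=1),
                      run_inference(query, n_devices=4)))  # True: same output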
-
When you throw more hardware at them, they are supposed to develop new abilities.
I don't think you understand how it works at all.
Data is collected. Training is done on the data. Training is done on the trained data (DeepSeek). You now have a model. That model is a static software program. It requires 700 GB of RAM to run (DeepSeek). Throwing more hardware at the model does nothing but give you a quicker response.
If everyone pays you to use your model, you have no reason to develop a new one. Like Skyrim.
My god.
There are many parameters that you set before training a new model, one of which (simplified) is the size of the model, or (roughly) the number of neurons. There isn't any natural lower or upper bound for that size; instead, you choose it based on the hardware you want to run the model on.
Now, the promise from OpenAI (in their many papers, press releases, and so on) was that we'd be able to reach AGI by scaling. Part of the reason Microsoft invested so much money into OpenAI was their promise of far greater capabilities for the models, given enough hardware. Microsoft wanted to build a moat.
Now, through DeepSeek, you can scale even further with that hardware. If Microsoft really thought OpenAI could reach ChatGPT 5, 6 or whatever through scaling, they'd keep the GPUs for themselves to widen their moat.
But they're not doing that; instead, they're scaling back their investments, even though more advanced models will most likely still use more hardware on average. Don't forget that there are many players in this field that keep pushing the bounds. If ChatGPT 4.5 is any indication, they'll have to scale up massively to keep any advantage over the rest of the market. But they're not doing that.
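A minimal sketch of the "size is a knob you pick before training" point, using the common ~12 * layers * width^2 rule of thumb for dense transformer parameter counts. The configs and the roughly 100x jump at the end are hypothetical, purely to show what scaling today's architecture up against a hardware budget looks like on paper.

    def approx_params(n_layers: int, d_model: int) -> int:
        # Rough dense-transformer estimate (attention + MLP blocks);
        # the exact constant varies by architecture, so treat as ballpark.
        return 12 * n_layers * d_model ** 2

    # Hypothetical configs: a small one, a GPT-3-sized one, and a ~100x
    # scale-up of the latter. Nothing stops you from picking the big one
    # except the hardware needed to train and serve it.
    for n_layers, d_model in [(12, 768), (96, 12288), (192, 86016)]:
        params = approx_params(n_layers, d_model)
        weights_gib = params * 2 / 2**30  # fp16, 2 bytes per parameter
        print(f"{n_layers:4d} layers x d_model {d_model:6d}: "
              f"~{params / 1e9:9.1f}B params, ~{weights_gib:8.0f} GiB of weights")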
-
My god.
There are many parameters that you set before training a new model, one of which (simplified) is the size of the model, or (roughly) the number of neurons. There isn't any natural lower or upper bound for that size; instead, you choose it based on the hardware you want to run the model on.
Now, the promise from OpenAI (in their many papers, press releases, and so on) was that we'd be able to reach AGI by scaling. Part of the reason Microsoft invested so much money into OpenAI was their promise of far greater capabilities for the models, given enough hardware. Microsoft wanted to build a moat.
Now, through DeepSeek, you can scale even further with that hardware. If Microsoft really thought OpenAI could reach ChatGPT 5, 6 or whatever through scaling, they'd keep the GPUs for themselves to widen their moat.
But they're not doing that; instead, they're scaling back their investments, even though more advanced models will most likely still use more hardware on average. Don't forget that there are many players in this field that keep pushing the bounds. If ChatGPT 4.5 is any indication, they'll have to scale up massively to keep any advantage over the rest of the market. But they're not doing that.
"training a new model"
Is equivalent to "making a new game" with better graphics.
I've already explained that analogy several times.
If people pay you for the existing model, you have no reason to immediately train a better one.
-
Cancelling new data centers because DeepSeek has shown a more efficient path isn't proof that AI is dead, as the author claims.
Fiber buildouts were cancelled back in 2000 because multimode made existing fiber more efficient. The Internet investment bubble popped. That didn't mean the Internet was dead.
This is a good point. It’s never sat right with me that LLMs require such overwhelming resources and cannot be optimized. It’s possible that innovation has been too fast to worry about optimization yet, but all this BS about building new power plants and chip foundries for trillions of dollars and whatnot just seems mad.
-
Yeah, genAI as a technology and field of study may not disappear. GenAI as an overinflated product marketed as the be-all and end-all that would solve all of humanity's problems may. The bubble can't burst soon enough.
Sometimes the hype bubble bursts and then the product eventually grows to be even larger than the hype. But you never know how connected hype actually is to reality. Hype can pop like a cherry tree blossoming on the first sunny day of spring, thinking summer has arrived, only to get drenched by another few weeks of rain. And as stupid as that cherry tree is, summer will eventually arrive.
-