agnos.is Forums

Cancelling new data centers because deep seek has shown a more efficient path isn't proof that AI is dead as the author claims.

Technology · 25 Posts · 11 Posters

  [email protected] wrote:

    But really the "game" is the model. Throwing more hardware at the same model is like throwing more hardware at the same game.

    No, it's not! AI models are supposed to scale. When you throw more hardware at them, they are supposed to develop new abilities. A game doesn't get a new level because you're increasing the resolution.

    At this point, you either have a fundamental misunderstanding of AI models, or you're trolling.

    [email protected] replied (#21):

    When you throw more hardware at them, they are supposed to develop new abilities.

    I don't think you understand how it works at all.

    Data is collected. Training is done on the data. Training is done on the trained data (DeepSeek). You now have a model. That model is a static software program. It requires 700 GB of RAM to run (DeepSeek). Throwing more hardware at the model does nothing but give you a quicker response.

    If everyone pays you to use your model, you have no reason to develop a new one. Like Skyrim.
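
    The 700 GB figure above is roughly consistent with a simple back-of-envelope estimate. A minimal sketch follows; the parameter count and the 8-bit weight precision are illustrative assumptions about DeepSeek-V3/R1, not numbers taken from this thread.

```python
# Back-of-envelope memory estimate for serving a large trained model.
# Assumptions (illustrative, not from the thread): ~671B parameters,
# weights stored at 1 byte each (8-bit). KV cache and activations add more.
params = 671e9          # assumed total parameter count
bytes_per_param = 1     # assumed 8-bit weight storage
weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")  # ~671 GB, in the ballpark of 700 GB
```

    Once training is finished those weights are fixed; extra hardware lets you shard them or serve more requests in parallel, but it doesn't change what the model can do.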


      [email protected] replied (#22):

      My god.

      There are many parameters that you set before training a new model, one of which (simplified) is the size of the model, or (roughly) the number of neurons. There isn't any natural lower or upper bound for the size; instead, you choose it based on the hardware you want to run the model on.
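
      As a concrete illustration of "size is something you pick before training", here is a minimal sketch. The decoder-only transformer assumption, the example configs, and the ~12·d² parameters-per-block rule of thumb are all illustrative assumptions, not figures from any model discussed here.

```python
# Model size is a hyperparameter fixed BEFORE training, usually chosen so
# the resulting model fits the hardware you plan to train and serve on.
# Assumed architecture: plain decoder-only transformer, ~12 * d_model^2
# weights per block (attention + 4x-expansion MLP) plus an embedding table.
from dataclasses import dataclass

@dataclass
class ModelConfig:
    n_layers: int    # number of transformer blocks
    d_model: int     # hidden size
    vocab_size: int  # rows in the embedding table

    def approx_params(self) -> int:
        return 12 * self.n_layers * self.d_model ** 2 + self.vocab_size * self.d_model

for name, cfg in [("small", ModelConfig(24, 2048, 50_000)),
                  ("large", ModelConfig(80, 8192, 50_000))]:
    p = cfg.approx_params()
    # 2 bytes per parameter assumes FP16/BF16 weights.
    print(f"{name}: ~{p / 1e9:.1f}B params, ~{2 * p / 1e9:.0f} GB of weights in FP16")
```

      Nothing in the math forces either configuration; the hardware budget does.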

      Now the promise from OpenAI (from their many papers, and press releases, and ...) was that we'll be able to reach AGI by scaling. Part of the reason why Microsoft invested so much money into OpenAI was their promise of far greater capabilities for the models, given enough hardware. Microsoft wanted to build a moat.

      Now, through DeepSeek, you can scale even further with that hardware. If Microsoft really thought OpenAI could reach ChatGPT 5, 6 or whatever through scaling, they'd keep the GPUs for themselves to widen their moat.

      But they're not doing that; instead, they're scaling back their investments, even though more advanced models will most likely still use more hardware on average. Don't forget that there are many players in this field that keep pushing the bounds. If ChatGPT 4.5 is any indication, they'll have to scale up massively to keep any advantage over the rest of the market. But they're not doing that.


        [email protected] replied (#23):

        "training a new model"

        Is equivalent to "make a new game" with better graphics.

        I've already explained that analogy several times.

        If people pay you for the existing model you have no reason to immediately train a better one.

        [email protected] wrote:

          Cancelling new data centers because deep seek has shown a more efficient path isn't proof that AI is dead as the author claims.

          Fiber buildouts were cancelled back in 2000 because wavelength-division multiplexing (DWDM) made existing fiber more efficient. The Internet investment bubble popped. That didn't mean the Internet was dead.

          [email protected] replied (#24):

          This is a good point. It’s never sat right with me that LLMs require such overwhelming resources and cannot be optimized. It’s possible that innovation has been too fast to worry about optimization yet, but all this BS about building new power plants and chip foundries for trillions of dollars and whatnot just seems mad.

          A guest wrote:

            Yeah, genAI as a technology and field of study may not disappear. GenAI as an overinflated product, marketed as the be-all and end-all that would solve all of humanity's problems, may. The bubble can't burst soon enough.

            [email protected] replied (#25):

            Sometimes the hype bubble bursts and then the product eventually grows to be even larger than the hype. But you never know how connected hype actually is to reality. Hype can pop like a cherry tree blossoming on the first sunny day of spring, thinking summer has arrived, only to get drenched by another few weeks of rain. And as stupid as that cherry tree is, summer will eventually arrive.
