agnos.is Forums

Very large amounts of gaming gpus vs AI gpus

#1 · [email protected] wrote:

    cross-posted from: https://ani.social/post/16779655

| GPU | VRAM | Price (€) | Bandwidth (TB/s) | TFLOP16 | €/GB | €/(TB/s) | €/TFLOP16 |
|---|---|---|---|---|---|---|---|
| NVIDIA H200 NVL | 141GB | 36284 | 4.89 | 1671 | 257 | 7423 | 21 |
| NVIDIA RTX PRO 6000 Blackwell | 96GB | 8450 | 1.79 | 126.0 | 88 | 4720 | 67 |
| NVIDIA RTX 5090 | 32GB | 2299 | 1.79 | 104.8 | 71 | 1284 | 22 |
| AMD RADEON 9070XT | 16GB | 665 | 0.6446 | 97.32 | 41 | 1031 | 7 |
| AMD RADEON 9070 | 16GB | 619 | 0.6446 | 72.25 | 38 | 960 | 8.5 |
| AMD RADEON 9060XT | 16GB | 382 | 0.3223 | 51.28 | 23 | 1186 | 7.45 |
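
The derived columns are just price divided by the spec; a quick sketch to recompute them (numbers copied from the table above, small rounding differences vs the table are expected):

```python
# Recomputing the table's derived columns (€ per GB, per TB/s, per TFLOP16)
# from the raw price and specs listed above.
cards = [
    # (name, vram_gb, price_eur, bandwidth_tbs, tflop16)
    ("NVIDIA H200 NVL",               141, 36284, 4.89,   1671),
    ("NVIDIA RTX PRO 6000 Blackwell",  96,  8450, 1.79,   126.0),
    ("NVIDIA RTX 5090",                32,  2299, 1.79,   104.8),
    ("AMD RADEON 9070XT",              16,   665, 0.6446,  97.32),
    ("AMD RADEON 9070",                16,   619, 0.6446,  72.25),
    ("AMD RADEON 9060XT",              16,   382, 0.3223,  51.28),
]

for name, vram, price, bw, tflops in cards:
    print(f"{name:32s} €/GB={price / vram:6.1f}  "
          f"€/(TB/s)={price / bw:7.0f}  €/TFLOP16={price / tflops:5.1f}")
```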

    This post is part "hear me out" and part asking for advice.

Looking at the table above, AI GPUs are a pure scam, and it would make much more sense (at least on paper) to use gaming GPUs instead, either through a Frankenstein build of PCIe switches or a high-bandwidth network.

So my question is whether somebody has built a similar setup, what their experience has been, what the expected overhead performance hit is, and whether it can be made up for by having far more raw performance for the same price.

    • T [email protected]

      cross-posted from: https://ani.social/post/16779655

      GPU VRAM Price (€) Bandwidth (TB/s) TFLOP16 €/GB €/TB/s €/TFLOP16
      NVIDIA H200 NVL 141GB 36284 4.89 1671 257 7423 21
      NVIDIA RTX PRO 6000 Blackwell 96GB 8450 1.79 126.0 88 4720 67
      NVIDIA RTX 5090 32GB 2299 1.79 104.8 71 1284 22
      AMD RADEON 9070XT 16GB 665 0.6446 97.32 41 1031 7
      AMD RADEON 9070 16GB 619 0.6446 72.25 38 960 8.5
      AMD RADEON 9060XT 16GB 382 0.3223 51.28 23 1186 7.45

      This post is part "hear me out" and part asking for advice.

      Looking at the table above AI gpus are a pure scam, and it would make much more sense to (atleast looking at this) to use gaming gpus instead, either trough a frankenstein of pcie switches or high bandwith network.

      so my question is if somebody has build a similar setup and what their experience has been. And what the expected overhead performance hit is and if it can be made up for by having just way more raw peformance for the same price.

      H This user is from outside of this forum
      H This user is from outside of this forum
      [email protected]
      wrote last edited by [email protected]
      #2

Well, I wouldn't call them a "scam". They're meant for a different use case. In a datacenter you also have to pay for rack space and all the servers that accommodate the GPUs. You can either pay for 32 times as many servers filled with Radeon 9060XTs, or buy H200 cards. Sure, you'll pay 3x as much for the cards themselves, but you'll save on the number of servers and everything that comes with them: hardware cost, space, electricity, air-con, maintenance... And less interconnect makes everything way faster.

Of course, at home different rules apply. And it depends a bit on how many cards you want to run, what kind of workload you have, and whether you're fine with AMD or need CUDA...

      • H [email protected]

        Well, I wouldn't call them a "scam". They're meant for a different use-case. In a datacenter, you also have to pay for rack space and all the servers which accomodate all the GPUs. And you can now pay for 32 times as many servers with Radeon 9060XT or you buy H200 cards. Sure, you'll pay 3x as much for the cards itself. But you'll save on the amount of servers and everything that comes with it, hardware cost, space, electricity, air-con, maintenance... Less interconnect makes everything way faster...

        Of course at home different rules apply. And it depends a bit how many cards you want to run, what kind of workload you have... If you're fine with AMD or you need Cuda...

        T This user is from outside of this forum
        T This user is from outside of this forum
        [email protected]
        wrote last edited by
        #3

Yeah, I should have specified "for at home" when saying it's a scam; I honestly doubt the companies buying thousands of B200s for their datacenters are even looking at the price tags lmao.

Anyway, the end goal is to run something like Qwen3-235B at FP8. With some very rough napkin math, reaching ~300GB of VRAM with the cheapest option, the 9060XT, comes down to €7126 for 18 cards (18 × 16GB = 288GB), which is very affordable. But of course, the fact that this is theoretically possible doesn't mean it will actually work in practice, which is what I'm curious about.

The inference engine I'm using, vLLM, supports ROCm, so CUDA should not be strictly required.
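
For illustration, a minimal sketch of what such a deployment could look like with vLLM's Python API, assuming a Ray cluster already spans the machines. The tp=2 × pp=9 layout for 18 cards and the exact checkpoint name are assumptions, not a tested recipe:

```python
# Minimal sketch, not a tested configuration. Assumes:
#   * a Ray cluster already spans the machines (`ray start` on each node),
#   * an FP8 checkpoint (name below is assumed) that fits the aggregate VRAM,
#   * the 2 x 9 = 18-GPU parallel layout is purely illustrative.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen3-235B-A22B-FP8",     # assumed FP8 checkpoint id
    tensor_parallel_size=2,               # split each layer across 2 GPUs
    pipeline_parallel_size=9,             # chain 9 GPU groups across nodes
    distributed_executor_backend="ray",   # multi-node execution via Ray
)

outputs = llm.generate(["Hello, world"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```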

        • T [email protected]

          Yeah i should have specified for at home when saying its a scam, i honestly doubt the companies that are buying thousands of B200s for datacenters are even looking at their pricetags lmao.

          Anyway the end goal is to run something like Qwen3-235B at fp8, with some very rough napkin math 300GB vram with the cheapest option the 9060XT comes down at €7126 with 18 cards, which is very affordable. But ofcourse that this is theoretically possible does not mean it will actually work in practice, which is what im curious about.

          The inference engine im using vLLM supports ROCm so CUDA should not be strictly required.

          H This user is from outside of this forum
          H This user is from outside of this forum
          [email protected]
          wrote last edited by [email protected]
          #4

          I think there are some posts out there (on the internet / Reddit / ...) with people building crazy rigs with old 3090s or something. I don't have any experience with that. If I were to run such a large model, I'd use a quantized version and rent a cloud server for that.

And computers can't fit infinitely many GPUs. I don't know the exact number; let's say it's 4. So you'd need to buy 5 computers to fit your 18 cards, which adds a few thousand dollars, plus a fast network/interconnect between them.

I can't make any statement about performance. I'd imagine such a scenario might work for MoE models with an appropriate parallelism design, and that for everything else performance would be abysmal. But that's only my speculation; we'd need to find people who have actually done this.

Edit: Alternatively, buy an Apple Mac Studio with 512GB of unified RAM. They're fast as well (probably way faster than your idea?) and maybe cheaper. It seems an M3 Ultra Mac Studio with 512GB costs around $10,000. With half that amount of RAM, it's only $7,100.

          • T [email protected]

            cross-posted from: https://ani.social/post/16779655

            GPU VRAM Price (€) Bandwidth (TB/s) TFLOP16 €/GB €/TB/s €/TFLOP16
            NVIDIA H200 NVL 141GB 36284 4.89 1671 257 7423 21
            NVIDIA RTX PRO 6000 Blackwell 96GB 8450 1.79 126.0 88 4720 67
            NVIDIA RTX 5090 32GB 2299 1.79 104.8 71 1284 22
            AMD RADEON 9070XT 16GB 665 0.6446 97.32 41 1031 7
            AMD RADEON 9070 16GB 619 0.6446 72.25 38 960 8.5
            AMD RADEON 9060XT 16GB 382 0.3223 51.28 23 1186 7.45

            This post is part "hear me out" and part asking for advice.

            Looking at the table above AI gpus are a pure scam, and it would make much more sense to (atleast looking at this) to use gaming gpus instead, either trough a frankenstein of pcie switches or high bandwith network.

            so my question is if somebody has build a similar setup and what their experience has been. And what the expected overhead performance hit is and if it can be made up for by having just way more raw peformance for the same price.

            B This user is from outside of this forum
            B This user is from outside of this forum
            [email protected]
            wrote last edited by
            #5

They are a scam. But:

• Using gaming GPUs in datacenters is in breach of Nvidia's license for the GPU.

...So, yes, they are a scam. But also:

• Datacenters are all about batched LLM performance, and their VRAM pools are bigger than the models. In practice, you can get better parallel tokens/s on an H100 than on 2x RTX Pros or a few 5090s, especially with bigger models that take advantage of NVLink.
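
To make "batched performance" concrete, here's a rough sketch of how batched tokens/s is typically measured with vLLM; the small model name is just a stand-in so the script runs on one GPU:

```python
# Rough sketch: throughput is measured over a batch of concurrent requests,
# not a single stream. That's the metric datacenter cards are optimized for.
import time
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-0.6B")     # small stand-in; any HF model id works
prompts = ["Summarize: the quick brown fox jumps over the lazy dog."] * 64
params = SamplingParams(max_tokens=128)

start = time.time()
outputs = llm.generate(prompts, params)              # 64 requests batched together
generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{generated / (time.time() - start):.1f} generated tokens/s across the batch")
```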
            • H [email protected]

              I think there are some posts out there (on the internet / Reddit / ...) with people building crazy rigs with old 3090s or something. I don't have any experience with that. If I were to run such a large model, I'd use a quantized version and rent a cloud server for that.

              And I don't think computers can fit infinitely many GPUs. I don't know the number, let's say it's 4. So you need to buy 5 computers to fit your 18 cards. So add a few thousand dollars. And a fast network/interconnect between them.

              I can't make any statement for performance. I'd imagine such a scenario might work for MoE models with appropriate design. And for the rest performance is abysmal. But that's only my speculation. We'd need to find people who did this.

              Edit: Alternatively, buy a Apple Mac Studio with 512GB of unified RAM. They're fast as well (probably way faster than your idea?) and maybe cheaper. Seems an M3 Ultra Mac Studio with 512GB costs around $10,000. With half that amount, it's only $7,100.

              W This user is from outside of this forum
              W This user is from outside of this forum
              [email protected]
              wrote last edited by
              #6

There's also the upcoming Framework Desktop with 128GB of unified RAM for ~$2,500.

              • T [email protected]

                cross-posted from: https://ani.social/post/16779655

                GPU VRAM Price (€) Bandwidth (TB/s) TFLOP16 €/GB €/TB/s €/TFLOP16
                NVIDIA H200 NVL 141GB 36284 4.89 1671 257 7423 21
                NVIDIA RTX PRO 6000 Blackwell 96GB 8450 1.79 126.0 88 4720 67
                NVIDIA RTX 5090 32GB 2299 1.79 104.8 71 1284 22
                AMD RADEON 9070XT 16GB 665 0.6446 97.32 41 1031 7
                AMD RADEON 9070 16GB 619 0.6446 72.25 38 960 8.5
                AMD RADEON 9060XT 16GB 382 0.3223 51.28 23 1186 7.45

                This post is part "hear me out" and part asking for advice.

                Looking at the table above AI gpus are a pure scam, and it would make much more sense to (atleast looking at this) to use gaming gpus instead, either trough a frankenstein of pcie switches or high bandwith network.

                so my question is if somebody has build a similar setup and what their experience has been. And what the expected overhead performance hit is and if it can be made up for by having just way more raw peformance for the same price.

                D This user is from outside of this forum
                D This user is from outside of this forum
                [email protected]
                wrote last edited by
                #7

I figured most datacenter customers wouldn't be running PCIe cards; they'd be running OAM modules for higher density.

                • T [email protected]

                  cross-posted from: https://ani.social/post/16779655

                  GPU VRAM Price (€) Bandwidth (TB/s) TFLOP16 €/GB €/TB/s €/TFLOP16
                  NVIDIA H200 NVL 141GB 36284 4.89 1671 257 7423 21
                  NVIDIA RTX PRO 6000 Blackwell 96GB 8450 1.79 126.0 88 4720 67
                  NVIDIA RTX 5090 32GB 2299 1.79 104.8 71 1284 22
                  AMD RADEON 9070XT 16GB 665 0.6446 97.32 41 1031 7
                  AMD RADEON 9070 16GB 619 0.6446 72.25 38 960 8.5
                  AMD RADEON 9060XT 16GB 382 0.3223 51.28 23 1186 7.45

                  This post is part "hear me out" and part asking for advice.

                  Looking at the table above AI gpus are a pure scam, and it would make much more sense to (atleast looking at this) to use gaming gpus instead, either trough a frankenstein of pcie switches or high bandwith network.

                  so my question is if somebody has build a similar setup and what their experience has been. And what the expected overhead performance hit is and if it can be made up for by having just way more raw peformance for the same price.

                  Z This user is from outside of this forum
                  Z This user is from outside of this forum
                  [email protected]
                  wrote last edited by
                  #8

                  I spent chunks of 2023 and 2024 investigating and testing image gen models after a cryptobro coworker kept talking about it.

I rigged up an old system and ran it locally to see wtf these things are doing. Honestly, producing slop at 5 seconds per image vs 5 minutes is meaningless in terms of value if 0% of the slop can be salvaged. And still, a human has to figure out what to do with the best candidates.

In fact, at a certain speed it begins to work against itself, as no one can realistically analyze AI-generated output as fast as it is produced.

                  Conclusion: AI is mostly worthless. It just forces you to accept that human effort is the only thing with intrinsic value. And it's as tough to get out of AI as it is to put any in.

                  And that's looking past all the other gargantuan problems with AI models.
