agnos.is Forums


Niche Model of the Day: Nemotron 49B 3bpw exl3

LocalLLaMA · 6 posts, 4 posters, 40 views
[email protected] · #1
This is one of the "smartest" models you can fit on a 24GB GPU now, with no offloading and very little quantization loss. It feels big and insightful, like a better (albeit dry) Llama 3.3 70B with thinking, and with more STEM world knowledge than QwQ 32B, but it comfortably fits thanks to the new exl3 quantization!

[Image: Quantization Loss]

You need to use a backend that supports exl3, like (at the moment) text-gen-web-ui or (soon) TabbyAPI.
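The "fits on a 24GB GPU" claim checks out with simple arithmetic. A rough sketch (my numbers, not from the post: ~49e9 parameters at a flat 3.0 bits per weight; real exl3 quants carry some scale/metadata overhead, and the KV cache and activations need extra headroom on top):

```python
# Back-of-envelope VRAM estimate for quantized model weights.
# Assumptions (mine, not from the thread): ~49e9 parameters, a flat
# 3.0 bits per weight, and no KV cache / activation memory counted.

def weight_vram_gib(params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GiB: params * bits / 8 bytes."""
    return params * bits_per_weight / 8 / 1024**3

size = weight_vram_gib(49e9, 3.0)
print(f"{size:.1f} GiB")  # ≈ 17.1 GiB, leaving several GiB for KV cache on a 24 GiB card
```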

projectmoon · #2

What are the benefits of EXL3 vs the more normal quantizations? I have 16 GB of VRAM on an AMD card. Would I be able to benefit from this quant yet?

[email protected] · #3

> @projectmoon: What are the benefits of EXL3 vs the more normal quantizations? I have 16gb of VRAM on an AMD card. Would I be able to benefit from this quant yet?

        AFAIK ROCm isn't yet supported:
        https://github.com/turboderp-org/exllamav3

        I hope the word "yet" means that it might come at some point, but for now it doesn't seem to be developed in any form or fashion.

[email protected] · #4

> @projectmoon: What are the benefits of EXL3 vs the more normal quantizations? I have 16gb of VRAM on an AMD card. Would I be able to benefit from this quant yet?

^ What was said: not supported yet, though theoretically you can give it a shot.

Basically, exl3 means you can run 32B models entirely on GPU without a ton of quantization loss, if you can get it working on your machine. But exl2/exl3 is less popular, largely because it's PyTorch based and hence more finicky to set up (no GGUF single files, no Macs, no easy install, especially on AMD).
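The "32B totally on GPU" point can be sanity-checked the same way: invert the arithmetic to see what average bit-width a given VRAM budget allows. A rough sketch (my numbers, not from the post; the 2 GiB headroom for KV cache and activations is an assumption):

```python
# Rough check: largest average bits-per-weight whose weights fit a
# given VRAM budget. Assumptions (mine, not from the thread): a flat
# bit-width and a fixed 2 GiB reserved for KV cache / activations.

def max_bits_per_weight(params: float, vram_gib: float,
                        headroom_gib: float = 2.0) -> float:
    """Bit budget = usable bytes * 8 bits, spread over all parameters."""
    budget_bytes = (vram_gib - headroom_gib) * 1024**3
    return budget_bytes * 8 / params

# A 32B model on a 24 GiB card vs. a 16 GiB card:
print(f"{max_bits_per_weight(32e9, 24):.2f} bpw")  # ≈ 5.91: plenty of room
print(f"{max_bits_per_weight(32e9, 16):.2f} bpw")  # ≈ 3.76: tighter, but usable
```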

[email protected] · #5

There's a "What's missing" section there that lists ROCm, so I'm pretty sure it's planned to be added.

            • fisch@discuss.tchncs.deF [email protected]

              There's a "What's missing" section there that lists ROCm, so I'm pretty sure it's planned to be added

              B This user is from outside of this forum
              B This user is from outside of this forum
              [email protected]
              wrote on last edited by [email protected]
              #6

              That, and exl2 has ROCm support.

There was always the bugaboo of uttering a prayer to get ROCm flash attention working (come on, AMD...), but exl3 has plans to switch to flashinfer, which should eliminate that issue.
