agnos.is Forums

zai-org/GLM-4.5-Air · Hugging Face

LocalLLaMA (20 posts, 5 posters)

  • D [email protected]

    I'm currently using ollama to serve llms, what's everyone using for these models?

    I'm also using open webui as well and ollama seemed the easiest (at the time) to use in conjunction with that

[email protected] (#3):

ik_llama.cpp (and its API server) is the go-to for these big MoE models. Level1Techs just did a video on it, and check out ubergarm's quants on Hugging Face: https://huggingface.co/ubergarm

TabbyAPI (exllamav3 underneath) is great for dense models, or for MoEs that will just barely squeeze onto your GPU at 3bpw. Look for exl3 quants: https://huggingface.co/models?sort=modified&search=exl3

Both are massively more efficient than ollama's defaults, to the point that you can run models of roughly twice the parameter count ollama can, and they support more features too. ik_llama.cpp is also how folks are running these 300B+ MoEs on a single 3090/4090 (usually in conjunction with a Threadripper, Xeon, or EPYC).
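
As a rough sanity check on those claims (my own arithmetic, not from the post): quantized weight size is roughly parameter count × bits per weight / 8, before KV cache and runtime overhead.

```python
# Back-of-the-envelope weight size for a quantized model (ignores KV cache/overhead).
def weight_size_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9  # decimal GB

print(f"70B dense @ 3.0 bpw (exl3): ~{weight_size_gb(70, 3.0):.0f} GB")   # ~26 GB
print(f"70B dense @ 16 bpw (FP16):  ~{weight_size_gb(70, 16.0):.0f} GB")  # ~140 GB
print(f"27B dense @ 4.5 bpw (Q4):   ~{weight_size_gb(27, 4.5):.0f} GB")   # ~15 GB
```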

[email protected] (#4):

      I use Kobold most of the time.

[email protected] (#5):

I've moved to using RamaLama, mainly because it promises to probe your hardware and pick the best acceleration available for whatever model you launch.

[email protected] (#6):

It looks like it just chooses a llama.cpp backend to compile, so if you already know your GPU and which backend to pick, you're technically leaving a good bit of performance (and model size) on the table.

All of this stuff is horribly documented, though.

[email protected] (#7):

            Thanks, will check that out!

[email protected] (#8):

Just read the Level1Techs post, and I'm only now realizing this is mainly for running quants, which I generally avoid.

I guess I could spin it up just to mess around with, but it probably wouldn't replace my main model.

[email protected] (#9):

> Just read the Level1Techs post, and I'm only now realizing this is mainly for running quants, which I generally avoid.

                ik_llama.cpp supports special quantization formats incompatible with mainline llama.cpp. You can get better performance out of them than regular GGUFs.

                That being said... are you implying you run LLMs in FP16? If you're on a huge GPU (or running a small model fast), you should be running sglang or vllm instead, not llama.cpp (which is basically designed for quantization and non-enterprise hardware), especially if you are making parallel calls.

[email protected] (#10):

Yeah, I'm currently running the Gemma 27B model locally.
I recently took a look at vllm, but the only reason I didn't want to switch is that it doesn't have automatic offloading (it seems to be a manual thing right now).

[email protected] (#11):

Gemma 3 in particular has basically no reason to be run unquantized, since Google did a QAT (quantization-aware training) finetune of it. The Q4_0 is, objectively, almost indistinguishable from the BF16 weights. llama.cpp also handles its SWA (sliding window attention) well (whereas, last I checked, vllm does not).

vllm also doesn't support CPU offloading the way llama.cpp does.

                    ...Are you running FP16 models offloaded?
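
If you do want the QAT weights, here's a minimal sketch of pulling them with huggingface_hub; the repo id below is from memory and may need adjusting, and the repo is license-gated, so an HF token with the Gemma license accepted may be required.

```python
# Minimal sketch: download Google's QAT Q4_0 GGUF of Gemma 3 27B.
# The repo id is an assumption from memory; check Google's Hugging Face org for the exact name.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="google/gemma-3-27b-it-qat-q4_0-gguf",  # assumed repo id, license-gated
)
print("GGUF downloaded to:", path)
# Point llama-server (llama.cpp / ik_llama.cpp) at the .gguf file inside this directory.
```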

[email protected] (#12):

omg, I'm an idiot. Your comment got me thinking, and... I've been using Q4 without knowing it. I assumed ollama ran the FP16 by default 😬

About vllm: yeah, I see that you have to specify how much to offload manually, which I wasn't a fan of. I have 4x 3090s in an ML server at the moment, but I'm using them for all AI workloads, so the VRAM is shared between TTS/STT/LLM/image gen.

That's basically why I really want auto offload.

[email protected] (#13):

Oh jeez. Do they have NVLink?

Also, I don't know if ollama even defaults to the QAT weights, heh. It has a lot of technical issues.

My friend, you need to set up something different. If you want pure speed, run vllm and split Gemma evenly across three or four of those 3090s, manually limiting its VRAM to around 30% per card or whatever it takes. vllm will take advantage of NVLink for the split and make it extremely fast. Use an AWQ made from the Gemma QAT weights.

But you can also run much better models than that. I tend to run Nemotron 49B on a single 3090 via TabbyAPI, for instance, but you could run huge models with tons of room to spare for other ML stuff. You could probably run a huge MoE like DeepSeek on that rig, depending on its RAM capacity. Some frameworks like TabbyAPI can hot-swap models too, since different models have different strengths.
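
For reference, a minimal sketch of that vllm setup (the model id is a placeholder, not something from this thread; vllm's CLI flags mirror these constructor arguments):

```python
# Minimal sketch: vllm with tensor parallelism across 4 GPUs and a capped VRAM fraction.
from vllm import LLM, SamplingParams

llm = LLM(
    model="your-org/gemma-3-27b-it-awq",  # placeholder; use whichever AWQ quant you pick
    tensor_parallel_size=4,               # split evenly across the four 3090s
    gpu_memory_utilization=0.30,          # leave the rest of each card for TTS/STT/image gen
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Summarize mixture-of-experts models in two sentences."], params)
print(outputs[0].outputs[0].text)
```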

[email protected] (#14):

Unfortunately, I didn't set up NVLink, but ollama auto-splits models that require it.

I really just want a "set and forget" model server, lol (that's why I keep mentioning auto offload).

Ollama integrates nicely with OWUI (Open WebUI).


[email protected] (#15):

Basically any backend will integrate with Open WebUI, since they all support the OpenAI API. In fact, some support more sampling options, plus embedding models for RAG and such.

They mostly all do auto-split too. TabbyAPI (which I would specifically recommend if you don't have NVLink set up) splits across GPUs completely automatically, and vllm is fully automated as well. ik_llama.cpp (for the absolute biggest models) needs a more specific launch command, but there are good guides for it.

IMO it's worth drilling down a bit to build one "set it and forget it" config, as these 200B MoEs that keep coming out will blow ollama's Gemma 27B away, depending on what you use it for.
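
To illustrate the "any backend" point, here's a minimal sketch using the standard openai Python client against a local OpenAI-compatible server; the port and model name are placeholders and depend on which backend you launch (Open WebUI does essentially the same thing when pointed at the backend's /v1 URL).

```python
# Minimal sketch: any OpenAI-compatible local backend (TabbyAPI, vllm,
# llama.cpp / ik_llama.cpp server, ...) can be queried the same way.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder port; depends on the backend
    api_key="not-needed-locally",         # most local servers accept any key
)

resp = client.chat.completions.create(
    model="local-model",  # placeholder; use the name your backend reports
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
)
print(resp.choices[0].message.content)
```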

[email protected] (#16):

I'll take a look at both Tabby and vllm tomorrow.

Hopefully there's CPU offload in the works so I can test those crazy models without too much fiddling in the future (the server also has 128 GB of RAM).

[email protected] (#17):

If you want CPU offload, ik_llama.cpp is explicitly designed for that and is your go-to. It keeps the "dense" part of the model on the GPUs and offloads the lightweight MoE experts to CPU.

vllm and exllama are GPU-only. vllm's niche is that it's very fast at short-context parallel calls (i.e., serving dozens of users at once with small models), while exllama uses SOTA quantization for squeezing large models onto GPUs with minimal loss.
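
To make the GPU/CPU split concrete, here's a rough sketch of the kind of launch command people use for ik_llama.cpp. The model path is a placeholder and the flag names come from community guides rather than this thread, so double-check them against the ik_llama.cpp README for your build.

```python
# Rough sketch: launch ik_llama.cpp's llama-server with MoE expert tensors kept in system RAM.
# Paths and flag names are assumptions; verify against your build's --help output.
import subprocess

cmd = [
    "./llama-server",
    "-m", "/models/GLM-4.5-Air-IQ4_KSS.gguf",  # placeholder quant path
    "-ngl", "99",                              # keep all offloadable layers on the GPUs
    "-ot", "exps=CPU",                         # override-tensor: park expert weights in RAM
    "-c", "32768",                             # context length
    "--host", "0.0.0.0",
    "--port", "8080",
]
subprocess.run(cmd, check=True)
```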

[email protected] (#18):

                                  Oh, and I forgot, check out ubergarm's quants:

                                  https://huggingface.co/ubergarm

                                  They’re explicitly designed for RAM offloading.

[email protected] (#19):

ik_llama.cpp sounds promising!
I'll check it out to see whether it can run in a container.

[email protected] (#20):

I'm just gonna try vllm; it seems like ik_llama.cpp doesn't have a quick Docker method.
