agnos.is Forums - LocalLLaMA

need help understanding if this setup is even feasible.

21 Posts, 5 Posters
[email protected] wrote:

> I have the feeling that as soon as it ends up offloading some of the model into system RAM, it's going to slow down to a crawl.

Then don't offload! Since it's a 3000-series card, you can run an exl3 with a really tight quant.

For instance, Mistral 24B will fit in 12GB with no offloading at 3bpw, somewhere in the quality ballpark of a Q4 GGUF: https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/tfIK6GfNdH1830vwfX6o7.png

It's especially good for long context, since exllama's KV cache quantization is so good.

You can still use kobold.cpp, but you'll have to host it via an external endpoint like TabbyAPI. Or you can use croco.cpp (a fork of kobold.cpp) with your own ik_llama.cpp trellis-quantized GGUF (though you'll have to make that yourself since they aren't common... it's complicated, heh).

Point being that simply having an Ampere (3000-series RTX) card can increase efficiency massively over a baseline GGUF.

[email protected] (#10, replying to the above):

I'll have to check exllama once I build the system, to see if it can fit a 24B model in 12 GB. It should give me some leeway for 13B ones. Though I feel like I'll need to quantize to exl3 myself for the models I use. Worth a try in a container, though.

Thanks for the tip.
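A rough back-of-envelope check of the "24B at 3bpw fits in 12 GB" claim above (my own sketch, not from the thread; the bits-per-weight figures are approximate, and real usage adds KV cache and activation overhead on top):

```python
# Back-of-envelope VRAM math: memory for weights only, ignoring KV cache,
# activations and per-layer overhead, which all add on top of this.

def weight_vram_gib(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory needed just for the model weights."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

for label, bpw in [("exl3 3.0 bpw", 3.0), ("Q4_K_M-ish GGUF (~4.8 bpw)", 4.8), ("FP16", 16.0)]:
    print(f"24B @ {label}: ~{weight_vram_gib(24, bpw):.1f} GiB")

# ~8.4 GiB at 3 bpw vs ~13.4 GiB at ~4.8 bpw: only the tighter quant leaves
# headroom for context on a 12 GB card.
```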

[email protected] (#11, replying to #10):

You can definitely quantize exl3s yourself; the process is VRAM-light (albeit time-intensive).

What 13B are you using? FYI, the old Llama2 13B models don't use GQA, so even their relatively short 4096 context takes up a lot of VRAM. Newer 12Bs and 14Bs are much more efficient (and much smarter, TBH).
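To make the GQA point concrete, here is a rough per-token KV-cache sizing sketch (my own numbers: the Llama2 13B shape is the standard published config, while the "newer GQA 12B" shape is an assumption modeled on Mistral-Nemo-style configs, with an FP16 cache assumed for both):

```python
# KV cache per token: 2 (K and V) * layers * kv_heads * head_dim * bytes/elem.
def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 context: int, bytes_per_elem: int = 2) -> float:
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
    return per_token * context / 1024**3

# Llama2 13B: 40 layers, 40 KV heads (no GQA), head_dim 128.
print(f"Llama2 13B @ 4096 ctx:  ~{kv_cache_gib(40, 40, 128, 4096):.1f} GiB")

# Assumed newer GQA 12B: 40 layers, only 8 KV heads, head_dim 128.
print(f"GQA 12B    @ 16384 ctx: ~{kv_cache_gib(40, 8, 128, 16384):.1f} GiB")

# The no-GQA 13B burns ~3.1 GiB at just 4K context; the GQA model fits 16K in less.
```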

[email protected] (#12), replying to the original post:

> I have an unused Dell OptiPlex 7010 I wanted to use as a base for an inference rig.
>
> My idea was to get a 3060, a PCI riser, and a 500W power supply just for the GPU. Mechanically speaking, I had the idea of making a backpack of sorts on the side panel to fit both the GPU and the extra power supply, since unfortunately it's an SFF machine.
>
> What's making me wary of going through with it is the specs of the 7010 itself: it's a DDR3 system with a 3rd-gen i7-3770. I have the feeling that as soon as it ends up offloading some of the model into system RAM, it's going to slow down to a crawl. (Using koboldcpp, if that matters.)
>
> Do you think it's even worth going through with?
>
> Edit: I may have found a ThinkCentre that uses DDR4 and that I can buy if I manage to sell the 7010. Though I still don't know if it will be good enough.

Would you be willing to talk about what you're intending to do with this at all? No hard feelings if you'd rather not, for any reason.

For context on my request: I've been following this comm for a bit and there seems to be a real committed, knowledgeable base of folks here - the dialog just in this post almost brings a tear to my eye, lol.

I work fairly adjacent to this stuff, and have a slowly growing home lab. Time is limited of course, and I've gotta prioritize what to learn and play with - LLMs are obviously both of those, and useful, but I haven't yet encountered a compelling use case for myself (or maybe just enough curiosity about one) to actually dive in.

Selfishly, I just wish every post here would give some info about what they're up to, so I can start to fill in whatever is apparently missing in my sort of "drum up fun ideas" brain subroutine regarding this topic. Lol.

[email protected] (#13, replying to #11):

Right now I'm hopping between Nemo finetunes to see how they fare. I think I only ever used one 8B model from Llama2; the rest has been all Llama 3 and maybe some Solar-based ones. Unfortunately I have yet to properly dig into the more technical side of LLMs due to time constraints.

> the process is VRAM-light (albeit time-intensive)

So long as it's not interactive, I can always run it at night and make it shut off the rig when it's done. Power here is cheaper at night anyway 🙂

Thanks for the info (and sorry for the late response, work + cramming for exams turned out to be more brutal than expected).
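For the "run it overnight and shut the rig off" idea, a minimal wrapper could look like this (my own sketch: the quantization command is a placeholder you would swap for your actual exl3 conversion invocation, and the shutdown assumes a systemd-based Linux box where your user is allowed to power off):

```python
import subprocess
import sys

# Placeholder: substitute the real quantization command for your model here.
quant_cmd = ["python", "convert_model.py", "--in", "model_dir", "--out", "quant_dir"]

result = subprocess.run(quant_cmd)

if result.returncode == 0:
    # Only power down if the job finished cleanly, so a failed run
    # leaves logs and a live machine to inspect in the morning.
    subprocess.run(["systemctl", "poweroff"])
else:
    sys.exit(f"Quantization failed with exit code {result.returncode}; leaving the rig on.")
```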

[email protected] (#14, replying to #12):

At the moment I'm essentially lab-ratting the models; I just love to see how far I can push them, both in parameters and in complexity of request, before they break down. Plus it was a good excuse to expand my little "homelab" (read: workbench that's also stuffed with old computers) from just a Raspberry Pi to something more beefy.

As for more "practical" (still mostly to mess around) purposes: I was thinking about making a pseudo-realistic digital radio with an announcer, using a small model and a TTS model. That is, writing a small summary for the songs in my playlists (or maybe letting the model itself do it, if I manage to give it search capabilities), letting them shuffle, and using the LLM+TTS combo to fake an announcer introducing the songs. I'm quite sure there was already a similar project floating around on GitHub.

Another option would be implementing it in Home Assistant via something like Willow as a frontend, to have something closer to commercial assistants like Alexa, but fully controlled by the user.

> I've been following this comm for a bit and there seems to be a real committed, knowledgeable base of folks here - the dialog just in this post almost brings a tear to my eye, lol.

To be honest, this post might have been the most positive interaction I've had on the web since the BBS days. I guess the fact that the communities are smaller makes it easier to gather people that are genuinely interested in sharing and learning about this stuff; same with the homelab community. Like comparing a local coffee shop to a Starbucks, it just by nature filters for different people 🙂
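The radio-announcer idea above is basically a three-step loop: pick a song, ask the LLM for an intro, speak it. A minimal sketch, assuming a local OpenAI-compatible endpoint (kobold.cpp, llama.cpp's server and TabbyAPI all expose one; the URL, port and model name here are assumptions) and the offline pyttsx3 library for TTS:

```python
import random
import requests
import pyttsx3  # offline TTS; pip install pyttsx3

SONGS = ["Blue Monday - New Order", "Hyperballad - Björk"]  # stand-in playlist

def announcer_intro(song: str) -> str:
    # Ask the local model for a short, radio-style introduction.
    resp = requests.post(
        "http://localhost:5001/v1/chat/completions",  # assumed local endpoint
        json={
            "model": "local",
            "messages": [
                {"role": "system", "content": "You are an upbeat late-night radio announcer."},
                {"role": "user", "content": f"Introduce the next song in two sentences: {song}"},
            ],
            "max_tokens": 120,
            "temperature": 0.7,
        },
        timeout=120,
    )
    return resp.json()["choices"][0]["message"]["content"]

tts = pyttsx3.init()
song = random.choice(SONGS)
tts.say(announcer_intro(song))
tts.runAndWait()
# A real version would then hand the track to an audio player before looping.
```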

[email protected] (#15, replying to #13):

Yeah, it's basically impossible to keep up with new releases, heh.

Anyway, Gemma 12B is really popular now, and TBH much smarter than Nemo. You can grab a special "QAT" Q4_0 from Google (it works in kobold.cpp, but fits much more context with base llama.cpp) with basically the same performance as unquantized; I would highly recommend that.

I'd also highly recommend trying 24B when you get the rig! It's so much better than Nemo, even more than the size would suggest, so it should still win out even if you have to go down to 2.9 bpw, I'd wager.

Qwen3 30B A3B is also popular now, and would work on your 3770 and kobold.cpp with no changes (though there are speed gains to be had with the right framework, namely ik_llama.cpp).

One other random thing: some of kobold.cpp's sampling presets are very funky with new models. With anything newer than Llama2, I'd recommend resetting everything to off, then starting with something like 0.4 temperature, 0.04 MinP, 0.02/1024 repetition penalty and 0.4 DRY - not the crazy-high-temperature sampling the presets normally use.

I can host a specific model/quantization on the kobold.cpp API for you to try if you want, to save tweaking time. Just ask (or PM me, as replies sometimes don't send notifications).

Good luck with exams! No worries about response times; /c/localllama is a slow, relaxed community.
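If it helps to see those sampler numbers in one place, here is roughly how they might be sent to kobold.cpp's KoboldAI-style /api/v1/generate endpoint. This is a sketch: the endpoint and the common fields exist, but treat the exact parameter names (especially the DRY one) and the 1.02 reading of "0.02 rep penalty" as assumptions to check against your kobold.cpp version:

```python
import requests

# Conservative sampling for newer models, per the advice above:
# everything else off, then temp 0.4, MinP 0.04, rep penalty over 1024 tokens, DRY 0.4.
payload = {
    "prompt": "### Instruction:\nWrite a haiku about VRAM.\n### Response:\n",
    "max_length": 200,
    "temperature": 0.4,
    "min_p": 0.04,
    "top_p": 1.0,           # disabled
    "top_k": 0,             # disabled
    "rep_pen": 1.02,        # "0.02" penalty expressed as a multiplier (assumption)
    "rep_pen_range": 1024,
    "dry_multiplier": 0.4,  # field name is an assumption; varies by build
}

r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
print(r.json()["results"][0]["text"])
```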

[email protected] (#16, replying to #15):

Thanks for the advice. I'll see how much I can squeeze out of the new rig, especially with exl models and different frameworks.

> Gemma 12B is really popular now

I was already eyeing it, but I remembered the context being memory-greedy due to it being a multimodal model. Qwen3, meanwhile, was just way out of the Steam Deck's capabilities.

Now it's just a matter of assembling the rig and getting to tinkering.

Thanks again for your time and availability 🙂

[email protected] (#17, replying to #16):

> But I remembered the context being memory-greedy due to it being a multimodal model

No, it's super efficient! I can run 27B's full 128K context on my 3090, easy.

But you have to use the base llama.cpp server. kobold.cpp doesn't seem to support the sliding window attention (last I checked, like two weeks ago), so even a small context takes up a ton of memory there.

And the image input part is optional: delete the mmproj file and it won't load.

There are all sorts of engine quirks like this, heh. It really is impossible to keep up with.
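As a concrete example of "use the base llama.cpp server" for Gemma, a launch might look something like this (a sketch via Python's subprocess; the model filename is a placeholder, and while these are common llama-server options, double-check them against your build's --help):

```python
import subprocess

# Placeholder path to a Gemma QAT GGUF; adjust the context size to what your VRAM allows.
cmd = [
    "llama-server",
    "-m", "models/gemma-3-12b-it-qat-Q4_0.gguf",  # hypothetical filename
    "-c", "32768",                 # context window
    "-ngl", "99",                  # offload all layers to the GPU
    "-fa",                         # flash attention
    "--cache-type-k", "q8_0",      # quantized KV cache
    "--cache-type-v", "q8_0",
    "--host", "127.0.0.1",
    "--port", "8080",
]
# Not passing an --mmproj file keeps it text-only, per the post above.
subprocess.run(cmd)
```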

[email protected] (#18, replying to #17):

Oh, OK. That changes a lot of things then :-)

I think I'll finally have to graduate to something a little less guided than kobold.cpp. Time to read llama.cpp's and exllama's docs, I guess.

Thanks for the tips.

[email protected] (#19, replying to #18):

The LLM "engine" is mostly detached from the UI.

kobold.cpp is actually pretty great as a UI, and you can still use it with TabbyAPI (what you run for exllama) and the llama.cpp server.

I personally love this for writing and testing, though:

https://github.com/lmg-anon/mikupad

And Open Web UI for more general usage.

There's a big backlog of poorly documented knowledge too, heh; just ask if you're wondering how to cram a specific model in. But the gist of the optimal engine rules is:

• For MoE models (like Qwen3 30B), try ik_llama.cpp, which is a fork specifically optimized for big MoEs partially offloaded to CPU.

• For Gemma 3 specifically, use the regular llama.cpp server, since it seems to be the only thing supporting the sliding window attention (which makes long context easy).

• For pretty much anything else, if it's supported by exllamav3 and you have a 3060, it's optimal to use that (via its server, which is called TabbyAPI). And you can use its quantized cache (try Q6/Q5) to easily get long context.
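To show how detached the engine really is from the UI, here is that cheat sheet written down as a tiny router (my own sketch: all three servers speak an OpenAI-style completions API, but the ports are just each project's usual defaults and may not match your setup):

```python
import requests

# The rules above, as data: model family -> (engine, assumed default local endpoint).
ENGINE_FOR = {
    "moe":    ("ik_llama.cpp server",  "http://localhost:8080/v1"),
    "gemma3": ("llama.cpp server",     "http://localhost:8080/v1"),
    "dense":  ("TabbyAPI (exllamav3)", "http://localhost:5000/v1"),
}

def complete(model_family: str, prompt: str) -> str:
    engine, base_url = ENGINE_FOR[model_family]
    print(f"Routing to {engine} at {base_url}")
    # All three expose an OpenAI-compatible completions route, which is why the
    # frontend (kobold.cpp UI, SillyTavern, mikupad...) doesn't care which engine
    # is actually serving the model.
    r = requests.post(f"{base_url}/completions",
                      json={"prompt": prompt, "max_tokens": 128},
                      timeout=300)
    return r.json()["choices"][0]["text"]

if __name__ == "__main__":
    print(complete("dense", "Explain GQA in one sentence."))
```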

[email protected] (#20, replying to #19):

I'll have to check out mikupad. For the most part I've been using SillyTavern with a generic assistant card, because it looked like it would allow me plenty of space to tweak stuff, even if it's not technically meant for the more traditional assistant use case.

Thanks for the cheat sheet, it will come in really handy once I manage to set everything up. Most likely I'll use podman to make a container for each engine.

As for the hardware side: the ThinkCentre arrived today, but the card still has to arrive. Unfortunately I can't really ask more questions until I can set it all up, see what goes wrong, and get a sense of what I haven't understood.

I'll keep you guys updated with the whole case-modding stuff. I think it will be pretty fun to see it come along.

Thanks for everything.

[email protected] (#21, replying to #20):

> Most likely I'll use podman to make a container for each engine.

IDK about Windows, but on Linux I find it easier to just make a Python venv for each engine. There's less CPU/RAM(/GPU?) overhead that way anyway, and it's best to pull bleeding-edge git versions of the engines. As an added benefit, the Python that ships with some OSes (like CachyOS) is more optimized than what podman would pull.

Podman is great if security is a concern, though - i.e. if you don't "trust" the code of the engine runtimes.

SillyTavern is good, though its sampling presets are kinda funky, and I don't use it personally.
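A minimal version of the venv-per-engine workflow described above (a sketch under stated assumptions: the engine name and git URL are placeholders to replace with whatever Python-based engine you actually track; C++ engines like llama.cpp or ik_llama.cpp are built separately and don't need a venv at all):

```python
import subprocess
from pathlib import Path

# name -> pip install target. Both entries are placeholders, not real repo paths.
ENGINES = {
    "example-engine": "git+https://example.com/placeholder/engine.git",
}

for name, pip_target in ENGINES.items():
    env_dir = Path.home() / "llm-envs" / name
    # One isolated venv per engine, so their dependency pins never fight each other.
    subprocess.run(["python3", "-m", "venv", str(env_dir)], check=True)
    pip = str(env_dir / "bin" / "pip")
    subprocess.run([pip, "install", "--upgrade", "pip"], check=True)
    subprocess.run([pip, "install", pip_target], check=True)
```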
