Need help understanding if this setup is even feasible.
-
Oh, I LOVE to talk, so I hope you don't mind if I respond with my own wall of text.
It got really long, so I broke it up with headers.
TLDR: Bifurcation is needed because of how fitting multiple GPUs onto one PCIe x16 slot works, and because consumer CPUs can only manage a limited number of PCIe lanes. Context offloading is still partial offloading, so you'll still get hit with the same speed penalty, with the exception of one specific advanced partial-offloading inference strategy involving MoE models.
CUDA
To be clear about CUDA: it's NVIDIA's API for letting software use its cards for compute. When you use an NVIDIA card with Kobold or another engine, you tell it to use CUDA so it can drive the GPU efficiently. In Kobold's case, that means selecting the cuBLAS backend, which runs on CUDA.
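If you ever want to sanity-check that CUDA can actually see your card before pointing an engine at it, a few lines of PyTorch will do it (assuming you have a CUDA build of torch installed; this is a generic check, not anything Kobold-specific):

```python
# Quick sanity check that CUDA (and therefore cuBLAS-backed engines) can see the GPU.
# Assumes a CUDA-enabled build of PyTorch is installed; purely illustrative.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB VRAM")
else:
    print("No CUDA device visible - check drivers / CUDA toolkit")
```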
The PCIe bifurcation stuff is a separate issue when trying to run multiple GPUs on limited hardware. However, CUDA has an important place in multi-GPU setups. Using CUDA with multiple NVIDIA GPUs is the gold standard for homelabs because it's the most supported for advanced PyTorch fine-tuning, post-training, and cutting-edge academic work.
But it's not the only way to do things, especially if you just want inference on Kobold. Vulkan is a universal API that works on both NVIDIA and AMD cards, so you can actually combine them (like a 3060 and an AMD RX) to pool their VRAM. The trade-off is some speed compared to a full NVIDIA setup on CUDA/cuBLAS.
PCIe Bifurcation
Bifurcation is necessary in my case mainly because of physical PCIe port limits on the board and consumer CPU lane handling limits. Most consumer desktops only have one x16 PCIe slot on the motherboard, which typically means only one GPU-type device can fit nicely. Most CPUs only have 24 PCIe lanes, which is just enough to manage one x16 slot GPU, a network card, and some M.2 storage.
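To make the lane math concrete, here's a rough budget for a typical 24-lane consumer CPU (the exact split varies by platform, so treat the numbers as illustrative):

```python
# Rough PCIe lane budget for a typical 24-lane consumer CPU.
# Exact allocations vary by board/platform; these numbers are illustrative only.
lane_budget = {
    "GPU slot (x16)": 16,
    "M.2 NVMe slot (x4)": 4,
    "chipset link / NIC etc. (x4)": 4,
}
print(sum(lane_budget.values()), "lanes used of 24")
# -> 24: nothing left over for a second GPU at full width, hence bifurcation.
```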
There are motherboards with multiple physical x16 PCIe slots and multiple CPU sockets for special server-class CPUs like Threadrippers with huge PCIe lane counts. These can handle all those PCIe devices directly at max speeds, but they're purpose-built server-class components that cost $1,000+ USD just for the motherboard. When you see people on homelab forums running dozens of used server-class GPUs, rest assured they have an expensive motherboard with 8+ PCIe x16 slots, two Threadripper CPUs, and lots of bifurcation. (See the bottom for parts examples.)
Information on this stuff and which motherboards support it is spotty—it's incredibly niche hobbyist territory with just a couple of forum posts to reference. To sanity check, really dig into the exact board manufacturer's spec PDF and look for mentions of PCIe features to be sure bifurcation is supported. Don't just trust internet searches. My motherboard is an MSI B450M Bazooka (I'll try to remember to get exact numbers later). It happened to have 4x4x4x4 compatibility—I didn't know any of this going in and got so lucky!
For multiple GPUs (or other PCIe devices!) to work together on a modest consumer desktop motherboard + CPU sharing a single PCIe x16, you have to:
- Get a motherboard that lets you split one x16 PCIe slot into several smaller logical links (e.g., x4/x4/x4/x4) in the BIOS
- Get a bifurcation expansion card meant for the specific splitting (4x4x4x4, 8x8, 8x4x4)
- Connect it all together cable-wise and figure out mounting/case modification (or live with server parts thrown together on a homelab table)
A secondary reason I'm bifurcating: the used server-class GPU I got for inferencing (Tesla P100 16GB) has no display output, and my old Ryzen CPU has no integrated graphics either. So my desktop refuses to boot with just the server card—I need at least one display-output GPU too. You won't have this problem with the 3060. In my case, I was planning a multi-GPU setup eventually anyway, so going the extra mile to figure this out was an acceptable learning premium.
Bifurcation cuts into bandwidth, but it's actually not that bad: going from x16 to x4 only costs about a 15% speed decrease, which is fine IMO. Did you say you're using an x1 riser though? That drops you to a sixteenth of the bandwidth of x16, so maybe I'm misunderstanding what you mean by x1.
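For a rough sense of scale, here's the back-of-the-envelope bandwidth math, assuming PCIe 3.0 (which is what these older boards run; PCIe 4.0 doubles the per-lane figure):

```python
# Back-of-the-envelope PCIe bandwidth math, assuming PCIe 3.0 (~0.985 GB/s per lane).
# PCIe 4.0 doubles the per-lane figure. Model size is just an example.
per_lane_gbps = 0.985          # GB/s per PCIe 3.0 lane
model_size_gb = 11.0           # e.g. a quant that nearly fills a 12GB card

for lanes in (16, 8, 4, 1):
    bw = per_lane_gbps * lanes
    print(f"x{lanes:<2} ~{bw:5.1f} GB/s -> ~{model_size_gb / bw:5.1f}s to load the model")
# The bus mostly matters at load time; once the weights sit in VRAM,
# single-GPU inference barely touches PCIe.
```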
I wouldn't obsess over multi-GPU setups too hard. You don't need to shoot for a data center at home right away, especially when you're still getting a feel for this stuff. It's a lot of planning, money, and time to get a custom homelab figured out right. Just going from Steam Deck inferencing to a single proper GPU will be night and day. I started with my decade-old ThinkPad inferencing Llama 3.1 8B at about 1 TPS, and it inspired me enough to dig out the old gaming PC sitting in the basement and squeeze every last megabyte of VRAM out of it. My 8GB 1070 Ti held me for over a year until I started doing enough professional-ish work to justify a proper multi-GPU upgrade.
Offloading Context
Offloading context is still partial offloading, so you'll hit the same speed issues. You want to use a model that leaves enough memory for context completely within your GPU VRAM. Let's say you use a quantized 8B model that's around 8GB on your 12GB card—that leaves 4GB for context, which I'd say is easily about 16k tokens. That's what most lower-parameter local models can realistically handle anyway. You could partially offload into RAM, but it's a bad idea—cutting speed to a tenth just to add context capability you don't need. If you're doing really long conversations, handling huge chunks of text, or want to use a higher-parameter model and don't care about speed, it's understandable. But once you get a taste of 15-30 TPS, going back to 1-3 TPS is... difficult.
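If you want to sanity-check that "4GB is easily about 16k" claim, here's the rough KV-cache math for a Llama-3-8B-shaped model (the layer/head numbers are assumptions; swap in whatever you actually run):

```python
# Rough KV-cache size estimate for a Llama-3-8B-shaped model.
# Architecture numbers (layers, KV heads, head dim) are assumptions; adjust per model.
layers, kv_heads, head_dim = 32, 8, 128
bytes_per_value = 2                      # fp16 cache; engines can quantize this lower
ctx = 16_384

per_token = 2 * layers * kv_heads * head_dim * bytes_per_value   # K and V
total_gib = per_token * ctx / 1024**3
print(f"{per_token / 1024:.0f} KiB per token -> {total_gib:.2f} GiB for {ctx} tokens")
# ~128 KiB/token -> ~2 GiB at 16k context, comfortably inside a 4GB margin.
```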
MoE
Note that if you're dead set on partial offloading, there's a popular way to squeeze performance through Mixture of Experts (MoE) models. It's all a little advanced and nerdy for my taste, but the gist is that you can use clever partial offloading strategies with your inferencing engine. You split up the different expert layers that make up the model between RAM and VRAM to improve performance—the unused experts live in RAM while the active expert layers live in VRAM. Or something like that.
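To put very rough numbers on why that works, here's a sketch assuming a Qwen3-30B-A3B-style model (~30B total parameters, ~3B active per token) at roughly a 4-bit quant; all figures are illustrative:

```python
# Very rough MoE partial-offload math, assuming a Qwen3-30B-A3B-style model
# (~30B total params, ~3B active per token) at roughly 4.5 bits/param. Illustrative only.
total_params, active_params = 30e9, 3e9
bytes_per_param = 4.5 / 8

total_gb = total_params * bytes_per_param / 1e9     # ~17 GB of weights overall
active_gb = active_params * bytes_per_param / 1e9   # ~1.7 GB actually used per token
print(f"total ~{total_gb:.0f} GB of weights, but only ~{active_gb:.1f} GB touched per token")
# Keep the attention/shared layers and KV cache in VRAM, park the expert FFNs in RAM,
# and each token only needs a small fraction of the weights, so it stays usable
# even with most of the model sitting in system memory.
```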
I like to talk (in case you haven't noticed). Feel free to keep the questions coming—I'm happy to help and maybe save you some headaches.
Oh, in case you want to fantasize about parts shopping for a multi-GPU server-class setup, here are some links I have saved for reference.
GPUs used for ML can be fine on x8 PCIe lanes: (https://www.reddit.com/r/MachineLearning/comments/jp4igh/d_does_x8_lanes_instead_of_x16_lanes_worsen_rtx/)
A Threadripper Pro has 128 PCI lanes: (https://www.amazon.com/AMD-Ryzen-Threadripper-PRO-3975WX/dp/B08V5H7GPM)
You can get dual sWRX8 motherboards: (https://www.newegg.com/p/pl?N=100007625+601362102)
You can get a PCIe 4x expansion card on Amazon: (https://www.amazon.com/JMT-PCIe-Bifurcation-x4x4x4x4-Expansion-20-2mm/dp/B0C9WS3MBG)
Altogether, that's 256 PCIe lanes per machine and as many PCIe slots as you need. At that point, all you need to figure out is power delivery.
Did you say you're using an x1 riser though? That drops you to a sixteenth of the bandwidth of x16, so maybe I'm misunderstanding what you mean by x1.
Not exactly. What I mean by an x1 riser is one of these bad boys: they're basically extension cords for an x1 PCIe link, no bifurcation.
The ThinkCentre has one x16 slot and two x1 slots. My idea for the whole setup was to put the 3060 I'm getting now into the x16 slot of the motherboard, so it can be used for other tasks as well if needs be, while the second 3060 would go into one of the x1 slots via the riser, since from what I've managed to read it should only affect the time to first load the model. But the fact you only mentioned the x16 slot does make me worry there's some handicap to the other two x1 slots. Of course, the second card will come down the line; I don't have nearly enough money for two cards and the ThinkCentre :-P.
started with my decade-old ThinkPad inferencing Llama 3.1 8B at about 1 TPS
Pretty much the same story, but with the OptiPlex and the Steam Deck. Come to think of it, I do need to polish and share the scripts I wrote for the Steam Deck, since I designed them to be used without a dock; they're a wonderful gateway drug to this hobby :-).
there’s a popular way to squeeze performance through Mixture of Experts (MoE) models.
Yeah, that's a little too out of scope for me; I'm more practical with the hardware side of things, mostly due to lacking the hardware to really get into the more involved stuff. Though it's not out of the question for the future :-).
Tesla P100 16GB
I am somewhat familiar with these bad boys; we have an older PowerEdge server full of them at work, used for fluid simulation (I'd love to see how it's set up, but I can't risk bricking the workhorse). But the need to figure out a cooling system for these cards, plus the higher power draw, made them not really feasible on my budget, unfortunately.
-
I have an unused Dell OptiPlex 7010 I wanted to use as a base for an inference rig.
My idea was to get a 3060, a PCIe riser, and a 500W power supply just for the GPU.
Mechanically speaking, I had the idea of making a backpack of sorts on the side panel to fit both the GPU and the extra power supply, since unfortunately it's an SFF machine. What's making me wary of going through with it is the specs of the 7010 itself: it's a DDR3 system with a 3rd-gen i7-3770. I have the feeling that as soon as it ends up offloading some of the model into system RAM, it's going to slow down to a crawl. (Using koboldcpp, if that matters.)
Do you think it's even worth going through with?
Edit: I may have found a ThinkCentre that uses DDR4 and that I can buy if I manage to sell the 7010. Though I still don't know if it will be good enough.
I have the feeling that as soon as it ends up offloading some of the model into system RAM, it's going to slow down to a crawl.
Then don't offload! Since it's a 3000-series card, you can run an exl3 with a really tight quant.
For instance, Mistral 24B will fit in 12GB with no offloading at 3bpw, somewhere in the quality ballpark of a Q4 GGUF: https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/tfIK6GfNdH1830vwfX6o7.png
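The bit math roughly checks out, if you want to see it (weights only; KV cache and runtime overhead come on top):

```python
# Rough weight-size math for a 24B model at different bits per weight.
# Weights only; KV cache and overhead come on top, but 3bpw still fits a 12GB card.
params = 24e9
for bpw in (3.0, 4.0):
    print(f"{bpw} bpw -> ~{params * bpw / 8 / 1e9:.1f} GB of weights")
# 3.0 bpw -> ~9 GB, leaving ~3 GB for context on a 12GB 3060; 4.0 bpw would not fit.
```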
It's especially good for long context, since exllama's KV cache quantization is so good.
You can still use kobold.cpp, but you'll have to host it via an external endpoint like TabbyAPI. Or you can use croco.cpp (a fork of kobold.cpp) with your own ik_llama.cpp trellis-quantized GGUF (though you'll have to make that yourself since they aren't common... it's complicated, heh).
Point being that simply having an Ampere (3000-series RTX) card can increase efficiency massively over a baseline GGUF.
-
I'll have to check exllama once I build the system; if it can fit a 24B model in 12 GB, it should give me some leeway for 13B ones. Though I feel like I'll need to quantize to exl3 myself for the models I use. Worth a try in a container, though.
Thanks for the tip.
-
You can definitely quantize exl3s yourself; the process is VRAM-light (albeit time-intense).
What 13B are you using? FYI the old Llama2 13B models don’t use GQA, so even their relatively short 4096 context takes up a lot of vram. Newer 12Bs and 14Bs are much more efficient (and much smarter TBH).
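To put rough numbers on the GQA difference (the architecture figures here are approximations, so treat them as ballpark):

```python
# Approximate KV-cache cost per token: old Llama-2 13B (no GQA, 40 KV heads)
# vs a modern GQA 12B (8 KV heads). Figures are approximate; fp16 cache assumed.
def kib_per_token(layers, kv_heads, head_dim, bytes_per_value=2):
    return 2 * layers * kv_heads * head_dim * bytes_per_value / 1024

old_13b = kib_per_token(layers=40, kv_heads=40, head_dim=128)   # ~800 KiB/token
new_12b = kib_per_token(layers=40, kv_heads=8, head_dim=128)    # ~160 KiB/token
print(f"Llama-2 13B: ~{old_13b * 4096 / 1024**2:.1f} GiB of KV cache at just 4k context")
print(f"GQA 12B:     ~{new_12b * 16384 / 1024**2:.1f} GiB of KV cache even at 16k context")
```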
-
Would you be willing to talk about what you're intending to do with this at all? No hard feelings if you'd rather not for any reason.
For context on my request: I've been following this comm for a bit and there seems to be a really committed, knowledgeable base of folks here; the dialog just in this post almost brings a tear to my eye, lol.
I work fairly adjacent to this stuff, and have a slowly growing home lab. Time is limited of course, and I've gotta prioritize what to learn and play with. LLMs are obviously both of those, and useful, but I haven't yet encountered a compelling use case for myself (or maybe just enough curiosity about one) to actually dive in.
Selfishly I just wish every post here would give some info about what they're up to so I can start to fill in whatever is apparently missing in my sort of "drum up fun ideas" brain subroutine, regarding this topic. Lol.
-
Right now I'm hopping between Nemo finetunes to see how they fare. I think I only ever used one 8B model from Llama 2; the rest has been all Llama 3 and maybe some Solar-based ones. Unfortunately I have yet to properly dig into the more technical side of LLMs due to time constraints.
the process is vram light (albeit time intense)
So long as it's not interactive, I can always run it at night and have it shut off the rig when it's done. Power here is cheaper at night anyway.
Thanks for the info (and sorry for the late response; work + cramming for exams turned out to be more brutal than expected).
-
At the moment I'm essentially lab-ratting the models; I just love to see how far I can push them, both in parameters and in complexity of request, before they break down. Plus it was a good excuse to expand my little "homelab" (read: workbench that's also stuffed with old computers) from just a Raspberry Pi to something more beefy.
As for more "practical" (still mostly to mess around) purposes, I was thinking about making a pseudo-realistic digital radio with an announcer, using a small model and a TTS model: that is, writing a small summary for the songs in my playlists (or maybe letting the model itself do it, if I manage to give it search capabilities), letting them shuffle, and using the LLM+TTS combo to fake an announcer introducing the songs. I'm quite sure there was already a similar project floating around on GitHub.
Another option would be implementing it in Home Assistant via something like Willow as a frontend, to have something closer to commercial assistants like Alexa, but fully controlled by the user.
I've been following this comm for a bit and there seems to be a really committed, knowledgeable base of folks here; the dialog just in this post almost brings a tear to my eye, lol.
To be honest, this post might have been the most positive interaction I've had on the web since the BBS days.
I guess the fact that the communities are smaller makes it easier to gather people who are genuinely interested in sharing and learning about this stuff; same with the homelab community. It's like comparing a local coffee shop to a Starbucks: it just by nature filters for different people.
-
Yeah, it's basically impossible to keep up with new releases, heh.
Anyway, Gemma 12B is really popular now, and TBH much smarter than Nemo. You can grab a special “QAT” Q4_0 from Google (that works in kobold.cpp, but fits much more context with base llama.cpp) with basically the same performance as unquantized, would highly recommend that.
I'd also highly recommend trying 24B when you get the rig! It’s so much better than Nemo, even more than the size would suggest, so it should still win out even if you have to go down to 2.9 bpw, I’d wager.
Qwen3 30B A3B is also popular now, and would work on your 3770 and kobold.cpp with no changes (though there are speed gains to be had with the right framework, namely ik_llama.cpp).
One other random thing: some of kobold.cpp's sampling presets are very funky with new models. I'd recommend resetting everything to off, then starting with something like 0.4 temp, 0.04 MinP, a 0.02/1024 rep penalty, and 0.4 DRY for anything newer than Llama 2, rather than the crazy-high-temp sampling the presets normally use.
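If it helps, those settings translate to roughly this against kobold.cpp's API (field names are from memory of its KoboldAI-style endpoint, so double-check them against your local /api docs before relying on them):

```python
# Example request with the conservative sampler settings above, aimed at
# kobold.cpp's KoboldAI-style endpoint on its default port. Field names are
# from memory; verify against your local /api docs.
import requests

payload = {
    "prompt": "Write a short scene set in a lighthouse.",
    "max_length": 256,
    "temperature": 0.4,
    "min_p": 0.04,
    "rep_pen": 1.02,          # the "0.02" rep penalty
    "rep_pen_range": 1024,
    "dry_multiplier": 0.4,    # DRY, if your build supports it
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
print(r.json()["results"][0]["text"])
```

Same knobs as the sliders in the UI, just written out explicitly.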
I can host specific model/quantization on the kobold.cpp API to try if you want, to save tweaking time. Just ask (or PM me, as replies sometimes don’t send notifications).
Good luck with exams! No worries about response times, /c/localllama is a slow, relaxed community.
-
Thanks for the advice. I'll see how much I can squeeze out of the new rig, especially with exl models and different frameworks.
Gemma 12B is really popular now
I was already eyeing it, but I remember the context being memory-greedy due to it being a multimodal model. Qwen3, meanwhile, was just way out of the Steam Deck's capabilities.
Now it's just a matter of assembling the rig and getting to tinkering. Thanks again for the time and the availability.
-
But I remember the context being memory-greedy due to it being a multimodal model
No, it's super efficient! I can run 27B's full 128K on my 3090, easy.
But you have to use the base llama.cpp server. kobold.cpp doesn't seem to support the sliding window attention (last I checked like two weeks ago), so even a small context takes up a ton there.
And the image input part is optional. Delete the mmproj file, and it won't load.
There are all sorts of engine quirks like this, heh, it really is impossible to keep up with.
-
Oh ok. That changes a lot of things then :-).
I think i'll finally have to graduate to something a little less guided than kobold.cpp.
Time to read llama.cpp's and exllama's docs, I guess. Thanks for the tips.
-
The LLM "engine" is mostly detached from the UI.
kobold.cpp is actually pretty great, and you can still use it with TabbyAPI (what you run for exllama) and the llama.cpp server.
I personally love this for writing and testing though:
https://github.com/lmg-anon/mikupad
And Open Web UI for more general usage.
There's a big backlog of poorly documented knowledge too, heh; just ask if you're wondering how to cram a specific model in. But the gist of the optimal engine rules is:
- For MoE models (like Qwen3 30B), try ik_llama.cpp, which is a fork specifically optimized for big MoEs partially offloaded to CPU.
- For Gemma 3 specifically, use the regular llama.cpp server, since it seems to be the only thing supporting the sliding window attention (which makes long context easy).
- For pretty much anything else, if it's supported by exllamav3 and you have a 3060, it's optimal to use that (via its server, which is called TabbyAPI). And you can use its quantized cache (try Q6/Q5) to easily get long context. (There's a quick client sketch below.)
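Whichever backend you end up on, they all speak roughly the same OpenAI-style API, so the frontend side barely changes. Here's a minimal client sketch, assuming TabbyAPI on its default port with whatever API key you set in its config; the port, key, and model name are assumptions, so adjust to your setup:

```python
# Minimal client sketch against an OpenAI-compatible local server (TabbyAPI here,
# but llama.cpp's server works the same way on its own port). Port, key and model
# name are assumptions from a default-ish config; adjust to your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="your-tabby-api-key")
resp = client.chat.completions.create(
    model="whatever-exl3-you-loaded",
    messages=[{"role": "user", "content": "Give me a one-line status check."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```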
-
I'll have to check out mikupad. For the most part I've been using SillyTavern with a generic assistant card, because it looked like it would give me plenty of space to tweak stuff, even if it's not technically meant for the more traditional assistant use case.
Thanks for the cheatsheet; it will come in really handy once I manage to set everything up. Most likely I'll use Podman to make a container for each engine.
As for the hardware side: the ThinkCentre arrived today, but the card still has to arrive. Unfortunately I can't really ask more questions until I can set it all up, see what goes wrong, and get a sense of what I haven't understood.
I'll keep you guys updated with the whole case modding stuff. I think it will be pretty fun to see come along.
Thanks for everything.
-
Most likely I'll use Podman to make a container for each engine.
IDK about Windows, but on Linux I find it easier to just make a Python venv for each engine. There's less CPU/RAM(/GPU?) overhead that way anyway, and it's best to pull bleeding-edge git versions of engines. As an added benefit, the Python that ships with some OSes (like CachyOS) is more optimized than what Podman would pull.
Podman is great if security is a concern though. AKA if you don't 'trust' the code of the engine runtimes.
ST is good, though its sampling presets are kinda funky and I don't use it personally.