Help with Home Server Architecture and Hardware Selection?
-
[email protected] replied to [email protected]
It’s easy to feel like you know nothing when A) there’s seemingly infinite depth to a skill and B) there are so many options.
You’re letting perfection stop you from starting, my dude. Dive in!
-
[email protected] replied to [email protected]
Oh damn that’s YOUR photo lmao
-
[email protected] replied to [email protected]
Thanks for this! The jet-engine sound level and higher power draw were both things that made me a little wary of used enterprise stuff (plus jumping from never having a home server straight to rack-mounted felt like flying a little too close to the sun). And thanks also for the EPYC rec; based on other comments, it sounds like pairing that with dual 3090s is the most cost-effective option. I fear you're right on prices not being adjusted downward; not sure if the big hit Nvidia took this morning because of DeepSeek might change things, but ultimately, unless underlying demand drops, why would they drop their prices? Thanks again for taking the time to respond!
-
[email protected] replied to [email protected]
@cm0002 @aberrate_junior_beatnik That looks like a 15A receptacle (https://www.icrfq.net/15-amp-vs-20-amp-outlet/). If it was installed on a 20A circuit (with a 20A breaker and wiring sized for 20A), then the receptacle was the weak point. Electricians often do this with multiple 15A receptacles wired together for Reasons (https://diy.stackexchange.com/questions/12763/why-is-it-safe-to-use-15-a-receptacles-on-a-20-a-circuit) that I disagree with for exactly what your picture shows. That said, overloading it is not SUPER likely to cause a fire - just destroy the outlet and appliance plugs.
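To put rough numbers on "overloading" (simple arithmetic, assuming a standard 120V US circuit):

```python
# A 15A receptacle on a 120V circuit tops out at 1800W, and the usual
# 80% rule for continuous loads knocks that down to 1440W sustained.
volts, amps = 120, 15
print(volts * amps)             # 1800 W peak
print(int(volts * amps * 0.8))  # 1440 W continuous
```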
-
[email protected] replied to [email protected]
Thanks, will do!
-
[email protected] replied to [email protected]
That's the approach I take. I use Proxmox for a Windows VM which runs Ollama. That VM can then be used for gaming on the off chance an LLM isn't loaded. It usually is. I use only one 3090 because of the power load of my two servers on top of my [many] HDDs. The extra load of a second card isn't something I want to worry about.
I point to that machine through LiteLLM*, which is then accessed through nginx, which allows only local IPs. Those two are in a different VM that hosts most of my docker containers.
*I found using Ollama and Open WebUI causes the model to get unloaded since they send slightly different calls. LiteLLM reduces that variance.
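For anyone curious, a minimal sketch of that nginx local-only restriction (the hostname, upstream IP, and subnets here are assumptions; adjust to your LAN):

```nginx
# Hypothetical /etc/nginx/conf.d/litellm.conf -- proxy to LiteLLM, LAN clients only
server {
    listen 80;
    server_name llm.lan;

    # Allow the usual private ranges, refuse everything else
    allow 192.168.0.0/16;
    allow 10.0.0.0/8;
    deny  all;

    location / {
        proxy_pass http://192.168.1.50:4000;  # LiteLLM's default port
        proxy_set_header Host $host;
    }
}
```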
-
[email protected] replied to [email protected]
Thanks so much for flagging that, the Above 4G Decoding setting wasn't even on my radar. And I think you and another commenter have sold me on trying for an EPYC mobo and dual 3090 combination. If you don't mind my asking, did you get your 3090s new or used? I feel like used is the way to go from a cost perspective, but obviously only if it wasn't run 24/7 in a mining rig for years on end (and I'm not confident in my ability to make a good call on that yet; I guess I'd try to get current benchmarks and just visually inspect from photos?). But thanks again!
-
[email protected] replied to [email protected]
Yea, I keep the outlet around as a reminder lol
-
[email protected] replied to [email protected]
Thank you! I think I am just at the "Valley of Despair" portion of the Dunning-Kruger effect lol, but the good news is that it's hopefully mostly up from here (and as you say, a good finished product is infinitely better than a perfect idea).
-
[email protected] replied to [email protected]
They're the best site around for high-quality/high-capacity drives that don't cost an arm and a leg. Another great resource for tools n' stuff is awesome-selfhosted:
Website: https://awesome-selfhosted.net/
Github: https://github.com/awesome-selfhosted/awesome-selfhosted
-
[email protected] replied to [email protected]
Makes sense; this was also years ago, so the small details are fuzzy, and it could have been a 15 or possibly a 20. It was one circuit split between 2 rooms, which was apparently the norm for when it was built in the early 80s (and not a damn thing was ever upgraded, including the outlets)
It was also a small fire, handleable with an extinguisher, but it was enough to be scary AF LMAO
-
[email protected] replied to [email protected]
Don't worry about how a video card was used. Unless it was handled by HowToBasic, it's gonna break long after it's obsolete. You might worry about a bad firmware setup, but you avoid that by looking at the seller rating, not the video card.
There's an argument to be made that a mining GPU is actually the better card to buy, since it never went hot>cold>hot>cold (thus stressing the solder joints) like a regular user's card would. But it's just that: an argument. I have yet to find a well-researched article on the effects of long-term gaming as compared to long-term mining, but I can tell you that the breaking point for either is long after you would have kept the card in use, even second- or third-hand.
-
[email protected] replied to [email protected]
So, I'm a rabid selfhoster because I've spent too many years watching rugpull tactics from every company out there. I'm just going to list what I've ended up with; it's not perfect, but it is pretty damn robust. I'm running pretty much everything you talk about, except not much in the way of AI stuff at this point. I wouldn't call it particularly energy-efficient since the equipment isn't very new. But take a read and see if it provokes any thoughts on your wishlist.
Machine 1 is a Proxmox node with ZFS storage backing, and machine 2 is a mirror image of it, acting as a second Proxmox node for HA. Everything, even my OPNsense router, runs on Proxmox. My docker/k8s hosts are LXCs or VMs running on the nodes, and the nodes replicate nearly everything between them as a first-level, fast-recovery backup/high-availability failover. I can then live-migrate guests around very quickly if I want to upgrade and reboot or otherwise maintain a node. I can also snapshot guests before updates or maintenance that I'm scared will break stuff, or when I'm experimenting and want to roll back after I fuck up.
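For anyone new to Proxmox, that snapshot/migrate workflow is only a couple of commands on the node (the VM ID and node name here are made up):

```sh
# Snapshot VM 101 before risky maintenance, roll back if it goes sideways
qm snapshot 101 pre-upgrade --description "before kernel update"
qm rollback 101 pre-upgrade

# Live-migrate the running VM to the other node before rebooting this one
qm migrate 101 node2 --online
```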
Both nodes are backed up via Proxmox Backup Server for any guests I consider prod, and I take backups every hour and keep probably 200 backups at various intervals and amounts. These dedup in PBS so the space utilization for all these extra backups is quite low. I also backup via PBS to removable USB drives on a longer schedule, and swap those out offsite weekly. Because I bind mount everything in my docker compose stacks, recovering a particular folder at a point in time via folder restore lets me recover a stack quite granularly. Also, since it's done as a ZFS snapshot backup, it's internally consistent and I've never had a db-file mismatch issue that didn't just journal out cleanly.
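As a sketch of that bind-mount pattern (the service names, images, and paths are placeholders): keeping all of a stack's state under one host folder means a PBS file restore of that folder at any snapshot brings the whole stack back to that point in time.

```yaml
# Hypothetical docker-compose.yml -- all state lives under one
# bind-mounted host folder, /tank/appdata/nextcloud, so restoring
# just that folder from PBS recovers the stack granularly.
services:
  app:
    image: nextcloud:29
    ports:
      - "8080:80"
    volumes:
      - /tank/appdata/nextcloud/html:/var/www/html
  db:
    image: mariadb:11
    environment:
      MARIADB_ROOT_PASSWORD: changeme
    volumes:
      - /tank/appdata/nextcloud/db:/var/lib/mysql
```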
I also zfs-send critical datasets via syncoid to zfs.rent daily from each Proxmox node.
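For reference, a syncoid push like that is a one-liner; the dataset, user, and remote hostname here are invented:

```sh
# /etc/cron.d entry: nightly incremental ZFS replication offsite.
# syncoid handles the snapshot bookkeeping and resumes interrupted sends.
0 3 * * * root syncoid --no-privilege-elevation tank/critical user@remote.zfs.rent:tank/critical
```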
Overall, this has been highly flexible and very, very bulletproof over the last 5 or 6 years. I bought some decade-old 1U Dell servers with enough drive bays and dual Xeons, so I have plenty of threads and RAM, and I upgraded to IT-mode 12G SAS RAID cards, but it isn't a powerhouse server or anything; I might be $1000 into each of them. I have considered adding and passing through an external GPU. The PBS server is a little piece-of-trash i3 with an 8TB SATA drive and a gigabit NIC in it.
-
[email protected] replied to [email protected]
I’m rocking 4 used ones from 4 different people.
So far, all good
You can’t buy 3090s new anymore anyway.
4090s are twice as much for 15% better perf, and the 5090s will be ridiculously priced.
2x3090 is more than enough for basic inference; I have more for training and fine-tuning.
You want EPYC/Threadripper etc.
You want max pcie lanes.
-
[email protected] replied to [email protected]
Thanks so much for all of this info! You're almost certainly correct that I'm overthinking this (it's definitely a talent of mine). I had been leaning Z2 on the NAS only because I'd heard that the resilvering process can be somewhat intensive on the drives, especially when they're larger, but I had also seen folks say that this was probably overkill for most home settings, so I'm glad someone with experience on it could chime in. I think my biggest takeaway from what you shared is that keeping the file system bare metal and fiddling with it as little as possible is the strategy.

And I think you're totally right on the LLMs being the real sticking point; I'd had no idea just how resource-intensive they were, not just to train but even to operate, until I started looking into running one locally. It's honestly making me think that rolling this out in phases might be a better place to start: begin with the NAS (while also doing some other infrastructure upgrades, like running Cat 6a and swapping my ISP's all-in-one router for something that can run OPNsense paired with some WAPs), then, if I can get some early successes under my belt, move on to the LLM arena and see how much time, money, and tears I want to spend getting that up and running.

Oh, and thanks also for mentioning TiB; it sent me down a very interesting rabbit hole on base-10 vs. base-2 byte measurement and how drive companies use the difference to pump up the number they get to advertise. I had no idea that was what accounted for the discrepancy in drive size, but it's definitely not surprising.
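For anyone else headed down that rabbit hole, the arithmetic is quick (a sketch in Python):

```python
# An "8 TB" drive is sold as 8 * 10^12 bytes; the OS reports in TiB
# (2^40 bytes), which is where the "missing" space goes.
advertised = 8 * 10**12
print(f"{advertised / 2**40:.2f} TiB")  # ~7.28 TiB
```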
-
[email protected] replied to [email protected]
I would definitely scale things out slowly. While the NAS will eventually be the cornerstone of your setup, it will be an investment. You could also try setting up a cheap server as a stand-alone to get the feel for running applications, maybe even something as cheap as a Raspberry Pi or another small single-board system. Some of them have pretty decent specs at very affordable prices.
There are sometimes ways to upgrade a RAID later. In one scenario, I replaced the drives one at a time with larger drives and created a second RAID on the same disks (in a second partition). Wasn't a great idea perhaps, but it worked! I just expanded my LVM pool onto the new RAID and was off to the races (roughly the steps sketched below). I'm sure performance took a hit with two RAIDs on the same disks, but it did the job and worked well enough for me.
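A hedged sketch of that LVM expansion step (the array, volume group, and LV names are placeholders):

```sh
# Add the second RAID (built on the new partitions) to the existing
# volume group, then grow the logical volume and its filesystem.
pvcreate /dev/md1
vgextend vg_data /dev/md1
lvextend -l +100%FREE /dev/vg_data/lv_storage
resize2fs /dev/vg_data/lv_storage  # assuming ext4
```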
I'm not as familiar with ZFS, so I don't know what options it has for expansion. With MD these days, I think you can just fail and replace each disk one by one and expand the RAID to the new size once they're all replaced. MD can be pretty picky about drives having exactly the same number of sectors, though, so care must be taken to use the same disks or to partition a bit smaller than the drive... Waiting for each disk to sync can take ages, but it's possible. There may be other options for ZFS (scaling with more disks, maybe?).
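From memory, that disk-by-disk MD dance looks roughly like this (array and device names are assumptions), with the nearest ZFS equivalent at the end:

```sh
# For each member: fail it out, swap in the bigger drive, re-add,
# and wait for the resync before touching the next disk.
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ...physically replace the drive, partition it slightly small, then:
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat                 # watch the resync finish

# Once every member is replaced, grow the array into the new space.
mdadm --grow /dev/md0 --size=max

# Rough ZFS equivalent: replace each disk, let the vdev auto-expand.
zpool set autoexpand=on tank
zpool replace tank sdb
```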
Good luck with your project!
-
[email protected] replied to [email protected]
For high-VRAM AI stuff, it might be worth waiting and seeing how the 24GB B580 variant turns out.
Intel has a bunch of translation-layer sort of stuff that I think generally makes it easy to run most CUDA AI things on it, but I'm not sure if common AI software supports multi-GPU with it.
IDK how cash-limited you are, but if it's just the VRAM you need and not necessarily the tokens/sec, it should be a much better deal when it releases.
Not entirely related, but I have a full, half-hourly-snapshotted computer backup going to a large HDD in my home server using Kopia. It's very convenient, and you don't need to install anything on the server except a large drive and the ability to use ssh/sftp (or another method; it supports several). It supports many compression formats and also avoids storing duplicate data. I haven't needed to use it yet, but I imagine it could become very useful in the future. I also have the same setup in the CLI on the server, largely so I can roll back in case some random person happens upon it and decides to destroy everything in my Minecraft server (which is public and doesn't have a whitelist...). It's pretty easy to set up, and since it can back up over the internet, it's something you could easily use for a whole family.
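If it helps anyone, the Kopia-over-SFTP setup is only a few commands (the host, user, key, and paths here are invented):

```sh
# One-time: create a repository on the home server over SFTP.
kopia repository create sftp \
  --host homeserver.lan --username backup \
  --keyfile ~/.ssh/id_ed25519 --known-hosts ~/.ssh/known_hosts \
  --path /mnt/bigdrive/kopia

# Then snapshot whatever you like, as often as you like.
kopia snapshot create ~/Documents
kopia snapshot list
```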
My home server (with a bunch of used parts plus a computer from the local university surplus store) was probably about ~$170 in total (i7-6700, 16GB DDR4, 256GB SSD, 8TB HDD) and is enough to host all of the stuff I have (very lightly modded MC with Geyser, a GitLab instance, and the backup) very easily. But it is very much not expandable (the case is quite literally tiny and I don't have space to leave it open; I could get a PCIe storage controller, but the PSU is weak and there aren't many SATA ports), probably not all that future-proof either, and definitely isn't something I would trust to perform well with AI models.
This is the HDD I got; I did a lot of research, and they're supposed to be super reliable. I was worried about noise, but after getting one I can say that as long as it isn't within 4 feet of you, you'll probably never hear it.
Anyways, it's always nice to really do something the proper way and have something fully future-proof, but if you just need to host a few light things, you can probably cheap out on the hardware and still get a great experience. It's worth noting that a normal Minecraft server, backups, and a document editor, for example, are all things you can run on a Raspberry Pi if you really wanted to. I have absolutely no experience using a NAS, metasearch, or heavy mods, however; those might be a lot harder to get fast for all I know.
-
[email protected] replied to [email protected]
Thanks so much for sharing! I just poked around for the IronWolf 8TB drives I was thinking of, and it unfortunately looks like they're sold out for now (as are the 8TB WD Reds, it seems), but I'll definitely keep an eye out for them here (and honestly maybe explore some different size options; the drive costs I was seeing on other sites were more than I expected, but I wasn't sure if that was just the new normal; glad to have another option!). And thanks so much for the awesome-selfhosted list!! I don't think I'd seen everything collected in one place like that before; that will be super helpful!
-
[email protected] replied to [email protected]
This is super interesting, thanks so much for sharing! In my initial poking around, I'd seen a lot of people suggest that virtualizing TrueNAS within Proxmox is a bit of a headache (especially when something inevitably goes wrong and everything goes down), but I hadn't considered cutting out TrueNAS entirely, just running directly on Proxmox, and pairing that virtualization with k8s and robust backups (I'm pleasantly shocked that PBS can manage that many backups without eating up crazy amounts of space).

After the other comments, I was sort of aligning around starting off with a TrueNAS build and then growing into some of the LLM stuff I mentioned, but I have to admit this is really intriguing as an alternative (even if as something to work towards once I've got some initial prototypes; figuring out k8s would be a really fun project, I think). Just out of curiosity, how noisy do you find the old Dell servers? I've been hesitant both because of power draw and noise, but would love feedback from someone who has them. Thanks so much again for taking the time to write all of this out, I really appreciate it!
-
[email protected] replied to [email protected]
(Also very curious about all of the HA stuff; it's definitely on my list of things to experiment with, but probably down the line once I've gotten some basic infrastructure in place. Very excited at the prospect though)