Proxmox vs. Debian: Running a media server on older hardware
-
I'm still running a 6th-generation Intel CPU (i5-6600k) on my media server, with 64GB of RAM and a Quadro P1000 for the rare 1080p transcoding needs. Windows 10 is still the OS from when it was a gaming PC, and I want to switch to Linux. I'm a casual Linux user on my personal machine, and I also run OpenWRT on my network hardware.
Here are the few features I need:
- MergerFS with a RAID option for drive redundancy. I use multiple 12GB drives right now with my media types separated between them. I'd like one pool where I can flexibly divide space between each share.
- Docker for *arr/media downloaders/RSS feed reader/various FOSS tools and gizmos.
- I'd like to start working with Home Assistant. Installing with WSL hasn't worked for me, so switching to Linux seems like the best option for this.
Guides like Perfect Media Server say that Proxmox is better than a traditional distro like Debian/Ubuntu, but I'm concerned about performance on my 6600k. Will LXCs and/or a VM for Docker push my CPU to its limits? Or should I do standard Debian or even OpenMediaVault?
I'm comfortable learning Proxmox and its intricacies, especially if I can move my Windows 10 install into a VM as a failsafe while building a storage pool with new drives.
My server runs Debian VMs in Proxmox on an i7-2600, which benchmarks lower than the 6600k. I also used the Perfect Media Server guide: 2 x 8TB data drives pooled with MergerFS plus 1 for SnapRAID parity, all passed through to the main VM from Proxmox with 'qm set'. One thing I often forget after deleting and restoring this VM is to run 'qm set' again so the disks carry the flag that excludes them from backups; otherwise backups fail and I have to go uncheck the backup option to fix it. If I need to spin up another VM for tinkering, it's easy enough to mount the NFS share as a volume with docker compose.
Proxmox rarely shows CPU usage above 50%, and that's handling the whole *arr stack plus Usenet and torrents in a single VM and compose file. I don't even have GPU passthrough set up, because the motherboard on this older rig doesn't support IOMMU. Never had issues with Plex or Jellyfin transcoding for Chromecast.
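For anyone searching later, the passthrough looks roughly like this (the VM ID and disk IDs below are made up; list your real ones with ls -l /dev/disk/by-id/):

```shell
# Pass two data drives and the parity drive into VM 100 (IDs are examples).
# backup=0 is the flag that excludes a disk from vzdump backups -- this is
# the setting you have to re-apply after a delete/restore of the VM.
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DATA_1,backup=0
qm set 100 -scsi2 /dev/disk/by-id/ata-EXAMPLE_DATA_2,backup=0
qm set 100 -scsi3 /dev/disk/by-id/ata-EXAMPLE_PARITY,backup=0
```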
-
Proxmox is pretty much focused on ZFS, LXC containers, and VMs. You want MergerFS and Docker. I say avoid Proxmox and go for Debian or another distro.
-
MergerFS and SnapRAID could be good for you. It's not immediate parity like ZFS RAID (you run a regular cronjob to calculate parity), but it supports mismatched drive sizes, expanding the pool at any time, and other features that suit a media server where live parity isn't critical.
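A minimal sketch of what that looks like in practice (mount points and drive count are assumptions; adjust to your layout):

```shell
# /etc/snapraid.conf (sketch): two data drives plus one parity drive
cat > /etc/snapraid.conf <<'EOF'
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
EOF

# The "regular cronjob" that calculates parity -- nightly at 3am here
echo '0 3 * * * root snapraid sync' > /etc/cron.d/snapraid
```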
Proxmox and TrueNAS are nice because they wrap ZFS management and remote administration in a nice UI, but really you can just use Debian with SSH and do the same stuff. DietPi adds a few nice utilities on top of Debian (a DDNS manager and CLI fstab utilities, for example), but they're not strictly necessary.
Personally I use TrueNAS now, but I ran DietPi/Debian for years; both have benefits, and it really comes down to your workflow.
Docker and LXC containers won't hurt your performance, btw. There's supposedly some tiny overhead, but both are designed to share the host's kernel as much as possible, so they're way faster than WSL. For hardware acceleration, most things defer to the GPU, and there's lots of documentation for setting it up. The best thing about Docker is that every application is kept separate from the others: updates can be done incrementally, and rollbacks are possible too!
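As a rough example of what GPU-accelerated transcoding in Docker looks like with an NVIDIA card like that P1000 (assumes the nvidia-container-toolkit is installed; the container name and paths here are illustrative):

```shell
# Jellyfin with the GPU exposed to the container for hardware transcoding
docker run -d --name jellyfin \
  --gpus all \
  -p 8096:8096 \
  -v /opt/jellyfin/config:/config \
  -v /mnt/pool/media:/media:ro \
  jellyfin/jellyfin
```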
-
I don't know about your first need (MergerFS), but in case it's useful: I have an old Intel NUC 6i3SYH (i3-6100U) with 16GB of RAM, and I was running Windows 10 for Plex + *arr plus Home Assistant in VirtualBox. I kept running into issues until I switched to Proxmox. Now Proxmox runs Docker with a bunch of containers (Plex + *arr and others) plus a virtual machine with Home Assistant, and everything is smooth. I have to say there is a learning curve, but it's very stable.
Seconding this. I'm currently running Proxmox on 3 small NUC-type PCs (two Dell OptiPlexes and a Topton from AliExpress). The Topton has a slower Celeron; the two Dells have an i5-6500 and an i3-8100T and are both very snappy running a few different containers and VMs (including Home Assistant).
-
I have 33 database servers in my homelab across 11 Postgres clusters, all with automated Barman backups to S3.
This stuff is all automated these days.
Ah thanks, I'll go through it!
-
The production-scale Grafana LGTM stack only runs on Kubernetes, fwiw. Docker and VMs are not supported. I'm a bit surprised that Kubernetes wouldn't have enough availability to co-locate your general workloads and your observability stack, but it's totally fair to segment those workloads.
I've heard the "Kubernetes has more moving parts" argument a lot, and I think it's a misunderstanding. At a base level, all computers have countless moving parts: QEMU has a lot, containerd has a lot. The reason people use Kubernetes is that all of those moving parts are automated and abstracted away, reducing the daily cognitive load for us operations folk. As an example, I don't run manual updates for minor versions in my homelab. A cronjob runs Renovate, which updates my Deployments, and ArgoCD automatically deploys the changes. Technically that's a lot of moving parts, but it saves me a lot of manual work and thinking, and turns my whole homelab into a sort of automated cloud service that I can go a month without thinking about.
I'm not sure if container break-out attacks are a reasonable concern for homelabs. See the relatively minor concern in the announcement I made as an Unraid employee last year when Leaky Vessels happened. Keep in mind that containerd uses cgroups under the hood.
Yeah, AppArmor/SELinux isn't very popular in the k8s space. I think it's easy enough to use them, and there's plenty of documentation out there, but OpenShift/OKD is the only distribution that runs SELinux out of the box.
By more moving parts I mean:
Running ElasticSearch on RHEL:
- add repo and dnf install elasticsearch.
- check SELinux
- write config
- firewall-cmd to open ports.
In k8s:
- grab elasticsearch container image
- edit variables in manifest (we use helm)
- depending on whether the automatically configured SVC is good, leave it alone or edit it.
- write the VS and gateway (we use Istio)
- firewall-cmd to open ports
Maybe it's just me but I find option 1 easier. Maybe I'm just lazy. That's probably the overarching reason lol
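Spelled out as commands, the two paths look something like this (sketch only; the chart values file and port are placeholders):

```shell
# Option 1: RHEL (after adding the Elastic repo)
dnf install -y elasticsearch
firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --reload

# Option 2: k8s via Helm (values.yaml holds the edited manifest variables)
helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch -f values.yaml
```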
-
Not calling you out specifically, OP, but can someone tell me why this is a thing on the internet?
multiple 12GB drives
GB??? I automatically assume TB when people say this, but it's still a speedbreaker when I'm reading the post.
-
Thanks everyone, I feel much better about moving forward. I'm leaning towards Proxmox at this point because I could still run Windows in a VM while playing around and setting up a new drive pool. I'd like a setup I can upgrade gradually, because I rarely have a full day to dedicate to these matters.
MergerFS still seems like a good fit for my media pool, if only to solve the issue where one media type fills a whole drive while another sits at 50% capacity. I've lost this data before, and it was easy to recover via my preferred backup method (private torrent tracker with paid freeleech). A parity drive with SnapRAID might be a nice stopgap. I don't feel confident enough with ZFS to risk sacrificing uptime.
My Docker containers and server databases, however, live on a separate SSD that could benefit from ZFS. These files are backed up regularly so I can recover easily, and I'd like as many failsafes as possible to protect myself. Having my Radarr database was indispensable when I lost a media drive a few weeks ago.
-
You're not using a reverse proxy on RHEL, so you'll also need to make sure the ports you want are available, set up a DNS record for it, and set up certbot.
On k8s, I believe Istio gateways are meant to be reused across services. You're using a reverse proxy, so the ports will already be open and there's no need for firewall-cmd. What would be wrong with the Service included in the elasticsearch chart?
It's also worth looking at the day 2 implications.
For backups, you're looking at bespoke cronjobs to either rsync your database or clone your entire 100GB disk image, versus using Velero or backing up the underlying storage.
For updates, you need to run system updates manually on RHEL, likely requiring a full reboot of the node, while in Kubernetes, Renovate can handle rolling updates in the background with minimal downtime. Not to mention the process of finding a new repo when RHEL 11 comes out.
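For a concrete comparison, the Velero side is roughly one command (the schedule name and namespace here are placeholders):

```shell
# Nightly namespace backup with Velero, retained for 30 days
velero schedule create es-nightly \
  --schedule="0 2 * * *" \
  --include-namespaces elasticsearch \
  --ttl 720h
```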
-
I am using a reverse proxy in production. I just didn't mention it here.
I'd have to set up a DNS record for both. I'd also have to create and rotate certs for both.
We use LVM; I simply mounted a volume at /usr/share/elasticsearch. The VMware team handles the underlying storage.
I agree about manually dealing with the repo. I don't think I'd set up unattended upgrades for my k8s cluster either, so that's moot. Downtime isn't a big deal: this isn't external, and I've got 5 nodes. I guess without Ansible it would be a bit more legwork, but that's about it.
Overall I think we missed each other here.
-
Well, my point was to explain how Kubernetes simplifies devops to the point of being simpler than most proxmox or Ansible setups. That's especially true if you have a platform/operations team managing the cluster for you.
Some more details missed here: the external-dns and cert-manager operators usually handle the DNS records and certs for you in k8s; you just specify the hostname in the HTTPRoute/VirtualService and in the Certificate. For storage, Ansible probably simplifies some of this, but LVM is likely more manual to set up and manage than pointing a PVC at a StorageClass and saying "100Gi".
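For a rough idea of scale, a cert-manager Certificate is about this much YAML (the hostname and issuer name are made up; a DNS operator can pick the hostname up from the route):

```shell
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: elasticsearch-tls
spec:
  secretName: elasticsearch-tls
  dnsNames:
    - elasticsearch.example.com
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
EOF
```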
Either way, I appreciate the discussion, it's always good to compare notes on production setups. No hard feelings even in the case that we disagree on things. I'm a Red Hat Openshift consultant myself these days, working on my RHCE, so maybe we'll cross paths some day in a Red Hat environment!
-
Considering I am the operations team, just goes to show how much I have left to learn. I didn't know about the external-dns operator.
Unfortunately, my company is a bit strange with certs and won't let me handle them myself. Something to check out at home I guess.
I agree with you about the LVM. I have been meaning to set up Rook forever but never got around to it. It might still take a while but thanks for the reminder.
Wow. That must have been some work. I don't have those certs myself, but I'm looking at the CKA and CKS (or whatever it's called). For sure, I loved our discussion. Thanks for your help.
-
Yes, but that's a supported way to install Proxmox.
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
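The short version of that wiki page, for the curious (sketch only; check the linked page for the current repo line and key before copying):

```shell
# Add the Proxmox VE no-subscription repo on Debian 12 and install
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi chrony
```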
-
Awww man. I hope you didn't think I was questioning you. I was just curious and I never knew that or would have guessed. I learned something. Thanks.
It's also, I find, supported on a much wider variety of hardware, with easier config automation.
-
use nixos! you won't regret it
-
My needs are pretty similar to yours, and I've recently moved back to hypervisors after running everything from Debian to Arch to NixOS bare-metal over the last decade or so. It's so easy to bring up and tear down environments, which is great for testing things and pretty much the whole point of a homelab. I've got a few VMs plus one LXC running on Proxmox with headroom to spare on a 6th-gen i7, so you should be fine resource-wise, tbh. Worth mentioning that you'll most likely need to pass your drives through to the guest VM, which isn't supported via the web UI, but the config is documented on their wiki.
Overall, I'm happy with this setup and loving CoreOS as a base OS for VMs, with rootless Podman containers for applications.
-
Don't worry, I didn't
-
Yeah, I think you pick up things from all over the place as a consultant. I see lots of different environments and learn from them.
Ah yeah, the external-dns operator is great! It's maybe a bit basic at times, but it's super convenient to just have A/AAAA records appear for all your LoadBalancer Services and HTTPRoutes. Saves a ton of time.
That's super unfortunate that the certs are siloed off. Maybe they can delegate an NS record for a subdomain so you can use ACME on it? I've seen that at some customers. Super important that all engineers have access to self-service certs, imo.
Rook is great! It can definitely be quite picky about hardware and balancing, as I learned trying to set it up on two nodes at home with spare HDDs and SSDs.
Very automated once it's all set up and you understand its needs, though. An NFS provisioner is also a good first-step option for a StorageClass; that's what I used in my homelab from 2021 to 2023.
Here's my Rook config:
https://codeberg.org/jlh/h5b/src/branch/main/argo/external_applications/rook-ceph-helm.yaml
https://codeberg.org/jlh/h5b/src/branch/main/argo/custom_applications/rook-ceph
Up to 3 nodes and 120TiB now, and I'm about to add 4 more nodes. I'd probably recommend adding disks automatically instead of manually; I'm just a bit more cautious and hands-on with my homelab "pets".
I'm not very far along on my RHCE yet, tbh. The Red Hat courses are a bit hard to follow, but hopefully I'll make some progress before the summer.
The CKA and CKS certs are great! There are some really good courses for them on Udemy and A Cloud Guru, a good lab environment on killer.sh, and the practice exams are super useful. I definitely recommend those certs; you learn a lot, and it's a good way to demonstrate your expertise.
-
Not to mention: Snapshots.