agnos.is Forums


Proxmox vs. Debian: Running media server on older hardware

Selfhosted · 66 Posts · 25 Posters · 363 Views
In reply to [email protected]:
> Proxmox is Debian. 🙂
> I do always suggest installing Debian first, and then installing Proxmox on top. This allows you to properly set up your disks and networking as needed, as the Proxmox installer is a bit limited: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
> Once you have it up and running, have a look at the CT Templates. There's a whole set of pre-configured templates from TurnkeyLinux (again, Debian-based) that make it trivial to set up all kinds of services in lightweight LXC containers.

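For readers who don't follow the link: the procedure on that wiki page boils down to roughly the following. This is a condensed sketch, not a substitute for the wiki; check the page for the current repo line, key fingerprint, and package names for your release.

```shell
# On a fresh Debian 12 (Bookworm) install, as root.
# (The wiki also requires that the hostname resolve to the machine's IP in /etc/hosts.)

# Add the no-subscription Proxmox VE repository and its signing key
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

apt update && apt full-upgrade

# Install and boot into the Proxmox kernel first...
apt install proxmox-default-kernel
systemctl reboot

# ...then install Proxmox VE itself and drop the stock Debian kernel
apt install proxmox-ve postfix open-iscsi chrony
apt remove linux-image-amd64 'linux-image-6.1*'
update-grub
```

This also answers the kernel question that comes up later in the thread: yes, you end up running the Proxmox kernel, with the stock Debian one removed afterwards.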
[email protected] wrote (#19):
> I do always suggest installing Debian first, and then installing Proxmox on top.

Correct me if I'm wrong, but isn't Proxmox its own OS unto itself? What would be the advantage of installing Proxmox 'on top of' Debian when, as you pointed out, it's Debian already?
In reply to [email protected] (OP):
> I'm still running a 6th-generation Intel CPU (i5-6600K) on my media server, with 64GB of RAM and a Quadro P1000 for the rare 1080p transcoding needs. Windows 10 is still my OS from when it was a gaming PC, and I want to switch to Linux. I'm a casual user on my personal machine, as well as with OpenWRT on my network hardware.
>
> Here are the few features I need:
>
> • MergerFS with a RAID option for drive redundancy. I use multiple 12GB drives right now and have my media types separated between each. I'd like to have one pool where I can be flexible with space between each share.
> • Docker for *arr/media downloaders/RSS feed reader/various FOSS tools and gizmos.
> • I'd like to start working with Home Assistant. Installing with WSL hasn't worked for me, so switching to Linux seems like the best option for this.
>
> Guides like Perfect Media Server say that Proxmox is better than a traditional distro like Debian/Ubuntu, but I'm concerned about performance on my 6600K. Will LXCs and/or a VM for Docker push my CPU to its limits? Or should I go with standard Debian, or even OpenMediaVault?
>
> I'm comfortable learning Proxmox and its intricacies, especially if I can move my Windows 10 install into a VM as a failsafe while building a storage pool with new drives.

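On the MergerFS pool point in the quote above: pooling several data drives into one mount is typically a single fstab line. A minimal sketch (mount paths and option values here are illustrative, not a recommendation; check the mergerfs docs for the create policy that fits your layout). The redundancy part is usually handled separately, e.g. with SnapRAID:

```
# /etc/fstab -- pool /mnt/disk1, /mnt/disk2, ... into one mount (placeholder paths)
/mnt/disk* /mnt/pool fuse.mergerfs category.create=mfs,moveonenospc=true,minfreespace=50G,fsname=pool 0 0
```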
[email protected] wrote (#20):
OP, I'm running Proxmox on an old Dell T320 with 32GB of RAM, and I'm not having any real issues doing so. I run Docker and a handful of Docker containers. I'm really not into the *arr stack, but I wouldn't think you'd have much of an issue.
In reply to [email protected]:
> I'm not saying it's bad software, but the times of manually configuring VMs and LXC containers with a GUI or Ansible are gone.
>
> All new build-outs are gitops and containerd-based containers now.
>
> For the legacy VM appliances, Proxmox works well, but there's also OpenShift Virtualization (aka KubeVirt) if you want to take advantage of the Kubernetes ecosystem.
>
> If you need bare metal, then that usually gets provisioned with something like Packer, nixos-generators, or cloud-init.

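For anyone unfamiliar with the cloud-init approach mentioned in the quote: provisioning is driven by a declarative user-data file rather than manual setup. A minimal hypothetical example (the hostname, user name, and package list are placeholders, not from this thread):

```yaml
#cloud-config
hostname: media01            # hypothetical hostname
users:
  - name: admin              # placeholder admin user
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...  # your public key here
packages:
  - docker.io
runcmd:
  - systemctl enable --now docker
```

The same file works for cloud instances and for local VMs booted from a cloud image with a NoCloud seed.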
[email protected] wrote (#21):
Why would you install a GUI on a VM designated to run a Docker instance?

You should take a serious look at what actual companies run. It's typically nested VMs running k8s or similar. I run three nodes, with several VMs (each running Docker, or other services that require a VM) that I can migrate between nodes depending on my needs.

For example: one of my nodes needed a fan replaced. I migrated the VM and LXC containers it hosted to another node, then pulled it from the cluster to do the job. The service saw minimal downtime, the kids/wife didn't complain at all, and I could test it to make sure it was functioning properly before reinstalling it into the cluster and migrating things back.
In reply to [email protected]:
> > I do always suggest installing Debian first, and then installing Proxmox on top.
>
> Correct me if I'm wrong, but isn't Proxmox its own OS unto itself? What would be the advantage of installing Proxmox 'on top of' Debian when it's Debian already, as you pointed out?

[email protected] wrote (#22):
You have some options that aren't in the installer, e.g. full-disk encryption.
In reply to [email protected]:
> You have some options that aren't in the installer, e.g. full-disk encryption.

[email protected] wrote (#23):
Hmmm. Wouldn't you have to remove the Debian kernel and use the Proxmox kernel? Sorry, not trying to be obtuse; I've just never installed Proxmox 'on top' of Debian. I've always opted for the clean install.
In reply to [email protected]:
> You are going to what, install Kubernetes on every node?
>
> It is far easier and more flexible to use VMs, and maybe some VM templates and Ansible.

[email protected] wrote (#24):
Yes.

It is not easier to use Ansible. My customers are trying to get rid of Ansible.
In reply to [email protected] (#21):
> Why would you install a GUI on a VM designated to run a Docker instance?
>
> You should take a serious look at what actual companies run. It's typically nested VMs running k8s or similar. […]

[email protected] wrote (#25):
I'm a DevOps/Platform Engineering consultant, so I've worked with about a dozen different customers on all different sorts of environments.

I have seen some of my customers use nested VMs, but that was because they were still using VMware or similar for all of their compute. My coworkers say they're working on shutting down their VMware environments now.

Otherwise, most of my customers are running Kubernetes directly on bare metal or directly on cloud instances. Typically the distributions they're using are OpenShift, AKS, or EKS.

My homelab is all bare metal. If a node goes down, all the containers get restarted on a different node.

My homelab is fully gitops; you can see all of my Kubernetes manifests and NixOS configs here:

https://codeberg.org/jlh/h5b
In reply to [email protected]:
> Yes, but no. There are still a lot of places using old-fashioned VMs; my company is still building VMs from an AWS AMI and running Ansible to install all the stuff we need. Some places will move to containers, and that's great, but containers won't solve every problem.

[email protected] wrote (#26):
Yes, it's fine to still have VMs, but you shouldn't be building out new applications and new environments on VMs or LXC.

The only VMs I've seen in production at my customers recently are application test environments for applications that require kernel access. Those test environments are managed by software running in containers, and often even use something like OpenShift Virtualization so that the entire VM runs inside a container.
In reply to [email protected]:
> Kubernetes is also designed for clustered workloads, so if you are mostly hosting on one or two machines, YAGNI applies.
>
> I recommend people start with docker compose due to documentation, but I personally am switching to podman quadlets with rootless containers.

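For anyone curious what the quadlet route in the quote above looks like: each container is described by a small systemd-style unit file that Podman's systemd generator turns into a service. A hypothetical rootless example (image, port, and paths are placeholders), dropped into `~/.config/containers/systemd/jellyfin.container`:

```ini
[Unit]
Description=Jellyfin media server (example quadlet)

[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096
Volume=%h/jellyfin/config:/config:Z
Volume=%h/media:/media:ro

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it starts with `systemctl --user start jellyfin.service` and, with lingering enabled (`loginctl enable-linger`), comes back on boot.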
[email protected] wrote (#27):
Yeah, definitely true.

I'm a big fan of single-node Kubernetes though, tbh. Kubernetes is an automation platform first and foremost, so it's super helpful to use Kubernetes in a homelab even if you only have one node.
In reply to [email protected]:
> What are you going to run containers on? You need VMs to power everything.

[email protected] wrote (#28):
I don't have any VMs running in my homelab.

https://codeberg.org/jlh/h5b

Most of my customers run their Kubernetes nodes either on bare metal or on a cloud-provisioned VM from AWS/GCP/Azure, etc.
In reply to [email protected] (#27):
> Yeah, definitely true.
>
> I'm a big fan of single-node Kubernetes though, tbh. Kubernetes is an automation platform first and foremost, so it's super helpful to use Kubernetes in a homelab even if you only have one node.

[email protected] wrote (#29):
What's so nice about it? Have you tried quadlets or docker compose? Could you give a quick comparison to show what you like about it?
In reply to [email protected]'s original post:

[email protected] wrote (#30):
Not calling you out specifically, OP, but can someone tell me why this is a thing on the internet?

> multiple 12GB drives

GB??? I automatically assume TB when people say this, but it's still a speed bump when I'm thinking through the post.
In reply to [email protected]:
> Yeah, and QEMU and LXC are very much legacy at this point. Stick with Docker/Podman/Kubernetes for containers.

[email protected] wrote (#31):
QEMU is legacy? Pray tell, how are you running VMs on architectures other than x86 on modern computers without QEMU?
In reply to [email protected]:
> I'm not saying it's bad software, but the times of manually configuring VMs and LXC containers with a GUI or Ansible are gone. […]

[email protected] wrote (#32):
Sometimes VMs are simply the better solution.

I run a semi-production DB cluster at work. We have 17 VMs running, and it's resilient (a different team handles VMware and the hardware).
In reply to [email protected]'s original post:

[email protected] wrote (#33):
I'm surprised no one's mentioned Incus. It's a hypervisor like Proxmox, but it's designed to install onto Debian no problem. It does VMs and containers just like Proxmox, and snapshots too. The web UI is essential; you add a repo for it.

Proxmox isn't reliable if you're not paying them; the free users are the test users, and a while back they pushed a bad update that broke shit. If I'd updated before they pulled it, I'd have been hosed.

Basically, you want a device that you don't have to worry about updating, because updates are good for security. And Proxmox ain't that.

On top of their custom kernel and stuff, it just has fewer eyes on it than, say, the kernel Debian ships. Proxmox isn't worth the lock-in and brittleness just for making VMs.

So, to summarize: Debian with Incus installed. BTRFS if you're happy with one drive or two RAID 1 drives; BTRFS gets scrubbing and bitrot detection (protection with RAID 1). ZFS for more drives. Toss on Cockpit too.

If you want less hands-on, go with OpenMediaVault. No room for Proxmox in my view, especially if you're not clustering.

Also, the iGPU on the 6600K is likely good enough for whatever transcoding you'd do (especially if it's rare and 1080p; it'll do 4K no problem, and multiple streams at once). The Nvidia card is just wasting power.

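A rough sketch of the Debian + Incus setup described above. Package availability varies by release (Incus is in Debian 13's main archive; Bookworm users need backports or the upstream Zabbly repo), so verify against the Incus documentation before running any of this:

```shell
# Install Incus and let your user manage it without root
apt install incus
adduser "$USER" incus-admin

# Interactive first-time setup: storage pool (btrfs/zfs), network bridge
incus admin init

# The btrfs scrubbing mentioned above, on a placeholder mount point;
# typically wired up as a periodic systemd timer or cron job
btrfs scrub start /mnt/pool
```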
In reply to [email protected] (#26):
> Yes, it's fine to still have VMs, but you shouldn't be building out new applications and new environments on VMs or LXC. […]

[email protected] wrote (#34):
Some of us don't build applications; we use them as built by other companies. If we're really unlucky, they refuse to support running on a VM.
In reply to [email protected]:
> QEMU is legacy? Pray tell, how are you running VMs on architectures other than x86 on modern computers without QEMU?

[email protected] wrote (#35):
Not QEMU in particular; poor phrasing on my part. I just mean setting up new environments that run applications on VMs.
In reply to [email protected]:
> Not QEMU in particular; poor phrasing on my part. I just mean setting up new environments that run applications on VMs.

[email protected] wrote (#36):
I prefer some of my applications to be on VMs. For example, my observability stack (ELK + Grafana), which I like to keep separate from other environments. I suppose the argument could be made that I should spin up a separate k8s cluster if I want to do that, but it's faster to deploy directly on VMs, and there are also fewer moving parts (I run two 50-node k8s clusters, so I'm not averse to containers, just saying). An easier and relatively secure tool for the right job. Sure, I could mess with cgroups and play with kernel parameters and all of that jazz to secure k8s more, but why bother when I can make my life easier by trusting Red Hat? Also, I'm not yet running a k8s version that supports SELinux, and I tend to keep it enabled.
In reply to [email protected]:
> What's so nice about it? Have you tried quadlets or docker compose? Could you give a quick comparison to show what you like about it?

[email protected] wrote (#37):
Sure!

I haven't used quadlets yet, but I did set up a few systemd services for containers back in the day, before quadlets came out. I also used docker compose back in 2017/2018.

Docker Compose and Kubernetes are very similar from a homelab admin's perspective. Docker Compose syntax is a little less verbose, and it has some shortcuts for storage and networking. But that also means it's less flexible if you are doing more complex things. Docker Compose doesn't start containers on boot by default, I think(?), which is pretty bad for application hosting. Docker Compose has no way of automatically deploying from git like ArgoCD does.

Kubernetes also has a lot of self-healing automation: health checks that can disable the load balancer and/or restart the container if an app is failing, automatic killing of containers when resources are low, preventing the scheduling of new containers when resources are low, gradual roll-out of containers so that the old version of a container doesn't get killed until the new version is up and healthy (helpful in case the new config is broken), mounting secrets as files in a container, and automatic retry of failed containers.

There are also a lot of ubiquitous automation tools in the Kubernetes space, like cert-manager for setting up certificates (both ACME and local CA), Ingress for setting up a reverse proxy, CNPG for setting up Postgres clusters with automated backups, and first-class instrumentation/integration with Prometheus and Loki (both were designed for Kubernetes first).

The main downsides to Kubernetes in a homelab are that there is about a 1-2GiB RAM overhead for small clusters, and that most documentation and examples are written for Docker Compose, so you have to convert apps into a Deployment (you get used to writing Deployments for new apps, though). I would say installing things like Ingress or CNPG is probably easier than installing similar reverse-proxy automations on Docker Compose, though.

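The compose-to-Deployment conversion mentioned above is mostly mechanical. A minimal hypothetical example (image, port, and probe path are placeholders) showing one of the self-healing features from the same post, a liveness probe that restarts the container when the app stops answering:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin          # example app, not from the thread
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: docker.io/jellyfin/jellyfin:latest
          ports:
            - containerPort: 8096
          livenessProbe:           # kubelet restarts the container on repeated failures
            httpGet:
              path: /health        # assumed health endpoint
              port: 8096
            initialDelaySeconds: 30
          resources:
            requests:
              memory: "512Mi"
```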
In reply to [email protected] (#34):
> Some of us don't build applications; we use them as built by other companies. If we're really unlucky, they refuse to support running on a VM.

[email protected] wrote (#38):
Yeah, that's fair. I have set up OpenShift Virtualization for customers using 3rd-party appliances. I've even worked on some projects where a 3rd-party appliance is part of the original spec for the cluster, so installing OpenShift Virtualization to run VMs is part of the day-1 installation of the Kubernetes cluster.