agnos.is Forums

Filesystem and virtualization decisions for homeserver build

Selfhosted · 26 Posts · 6 Posters
  • I [email protected]

    Nfs, it's good enough, and is how everyone accesses it. I'm toying with ceph or some kind of object storage, but that's a big leap and I'm not comfortable yet

    Zfs snapshot to another machine with much less horsepower but similar storage array.
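    That send/receive flow can be sketched roughly like this (pool, dataset, and host names are placeholders, not from the post):

    ```shell
    # One-time full replication of a snapshot to the low-power box.
    zfs snapshot tank/data@base
    zfs send tank/data@base | ssh backup-host zfs receive backup/data

    # Afterwards, send only the delta between the last common snapshot and a new one.
    zfs snapshot tank/data@hourly-01
    zfs send -i tank/data@base tank/data@hourly-01 | ssh backup-host zfs receive backup/data
    ```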

    Debian boots off a 128GB SATA SSD or something, just something mindless that makes it more stable; I don't want to f with ZFS root.

    My pool isn't encrypted, I don't consider it necessary, though I've toyed with it in the past. Anything sensitive I keep on separate USB keys and duplicate them, and I use LUKS.

    I considered virtiofs, it's not ready for what I need, it's not meant for this use case and it causes both security and other issues. Mostly it breaks the demarcation so I can't migrate or retarget to a different storage server cleanly.

    These are good ideas, and would work. I use zvols for most of this; in fact I think I pass through an NVMe drive to FreeBSD for its jails.

    Docker fucks me here, the volume system is horrible. I made an lxc based system with python automation to bypass this, but it doesn't help when everyone releases as docker.

    I have a simple boot drive for one reason: I want nothing to go wrong with booting, ever, everything after that is negotiable, but the machine absolutely has to show up.

    It has a decent UPS, but as I mentioned earlier, I live in San Jose and have fucking PG&E, so weeks without power aren't fucking unheard of.

    [email protected] replied (#17):

    Aight thank you so much, confirms I'm on the right path! This clarifies a lot, I'll keep the ext4 boot drive šŸ™‚

    • T [email protected]

      Right, so my aversion to live backups comes initially from Louis Rossmann's guide on the FUTO wiki, where he mentions it's non-trivial to reliably snapshot a running system. After a lot of looking elsewhere I haven't found much to suggest that's bad advice, and I want to err on the side of caution anyway. The hypervisor is QEMU/KVM, so in theory it should be able to do live snapshots afaik, but I'm not familiar enough with the consistency guarantees to fully trust it. I don't wanna wake up one day to a server crash, try to mount the backed-up qcow2 in a new system, and find it won't work and I just lost data.

      It won't matter though as I'll just place all the important data on the zpool and back that up frequently as a simple data store. The VMs can keep doing their nightly shutdown and snapshot thing.

      Guest replied (#18):

      I work for a medium size enterprise as a backup architect. All of our backups are crash consistent and we’ve never had an issue.

      Windows has an easy way of dealing with this in the form of VSS. As long as the application supports it, VSS can prepare the system and application for a backup, putting it in an application-consistent state before the snapshot is taken. Unfortunately, there is no equivalent for Linux. The best you can do is pre-freeze and post-thaw scripts to put the application/OS in a backup-ready state. Really though, I wouldn’t worry too much about it. Unless you are running an in-memory database, you really don’t need to worry about application consistency. If you are running an in-memory database, take database level backups (can also be done with pre-freeze/post-thaw scripts) and back up the backups.
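      On libvirt/QEMU, the pre-freeze/post-thaw idea maps to the guest agent's filesystem freeze. A rough sketch, assuming a VM named `myvm` (a placeholder) with qemu-guest-agent installed inside it:

      ```shell
      # Quiesce the guest's filesystems, take a disk-only snapshot, then resume writes.
      virsh domfsfreeze myvm
      virsh snapshot-create-as myvm pre-backup --disk-only --atomic
      virsh domfsthaw myvm

      # Shortcut: --quiesce performs the freeze/thaw through the guest agent for you.
      # virsh snapshot-create-as myvm pre-backup --disk-only --atomic --quiesce
      ```

      Pre-freeze/post-thaw hook scripts inside the guest (e.g. to flush a database) run as part of the agent's freeze step, so application-level quiescing can piggyback on the same mechanism.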

      Just remember to test whatever solution you end up going with, and make reminders to frequently re-test your backups. You never know what might change in a year’s time, so re-testing periodically is a good way to make sure everything is still functioning properly and make sure your data is still protected. And testing needs to be more than just making sure the VM powers on. Make sure the application can start up and function properly before calling it a successful test.

      • T [email protected]

        Hi Lemmy! First post, apologies if it's not coherent šŸ™‚

        I have a physical home server for hosting some essential personal cloud services like smart home, phone backups, file sharing, kanban, and so on. I'm looking to re-install the platform as there are some shortcomings in the first build. I loosely followed the FUTO wiki, so you may recognise some of the patterns from there.

        For running this thing I have a mini PC with 3 disks: a 240GB SSD and 2x 960GB SSDs. This is at capacity, though the chassis and motherboard would in theory fit a fourth disk with some creativity, which I'm interested in making happen at some point. I also have a Raspberry Pi in the house and a separate OPNsense box for firewall/dns blocking/VPN etc that works fine as-is.

        In the current setup, I have Ubuntu Server on the 240GB disk with ext4, which hosts the services in a few VMs with QEMU and does daily snapshots of the qcow2 images onto the 960GB SSDs which are set up as a mirrored zfs pool with frequent automatic snapshots. I copy the zpool contents periodically to an external disk for offsite backup. There’s also a simple samba share set up on the pool which I thought to use for syncthing and file sharing somehow. This is basically where I’m stopping to think now if what I’m doing makes sense.

        Problems I have with this:

        • When the 240GB disk eventually breaks (and I got it second hand so it might be whatever), I might lose up to one day of data within the services such as vikunja, since their data is located on the VMs, which are qcow2 files on the server’s boot drive and only backed up daily during the night because it requires VM shutdown. This is not okay, I want RPO of max 1 hour for the data.
        • The data is currently not encrypted at rest. The threat model here is data privacy in case of theft.

        Some additional design pointers:

        • Should be able to reboot remotely in good weather.
        • I want to avoid any unreliable or ā€œstupidā€ configurations and not have insane wear on my SSDs.
        • But I do want the shiny snapshotting and data integrity features of modern filesystems, especially for my phone’s photo feed.
        • I wish to avoid btrfs as I have already committed to zfs elsewhere in the ecosystem.
        • I may want to extend the storage capacity later with mirrored HDD bulk storage.
        • I don’t want to use QEMU snapshots for reaching the RPO as it seems to require guest shutdown/hibernation to be reliable and just generally isn’t made for that. I’m really trying to make use of zfs snapshots like I already do on my desktop.
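        For a 1-hour RPO on the pool itself, a cron-driven sketch (dataset name `tank/data` is a placeholder; zfs-auto-snapshot is the Debian/Ubuntu package, sanoid is another common choice):

        ```shell
        # Hourly snapshot, keeping the most recent 24, via the zfs-auto-snapshot package:
        zfs-auto-snapshot --label=hourly --keep=24 tank/data

        # Or a plain crontab entry calling zfs directly
        # (retention then needs its own pruning; % must be escaped in crontab):
        # 0 * * * * /usr/sbin/zfs snapshot tank/data@auto-$(date +\%Y-\%m-\%d-\%H)
        ```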

        My current thoughts revolve around the following - comments most welcome.

        • Ditch the 240GB SSD from the system to make space for a pair of HDDs later. So, the 960GB pair would have both boot and data, somehow. (I'm open to having a separate NAS later if this is just not a good idea)
        • ZFS raidz1 w/ zfs-auto-snapshot + ZVOLs + ext4 guests? Does this hurt the SSDs?
        • Or: ext4 mdadm raid1 + qcow2 guests running zfs w/ zfs-auto-snapshot? Does this make any sense at all?
        • ZFS raidz1 + qcow2 + ext4 guests? This destroys the SSDs, no?
        • In any case, native encryption or LUKS?
        • Possibly no FDE, but dataset level encryption instead if that makes it easier?
        • I plan to set up unattended reboots with the Pi as key server running something like Mandos. Passphrase would be required to boot the server only if the Pi goes down as well. So, any solution must support using a key server to boot.
        • What FS should the external backup drives have? I'm currently leaning into ZFS single disk pools. Ideally they should be readable with a mac or windows machine.
        • Does Proxmox make things any easier compared to Ubuntu? How?
        • I do need at least one VM for home assistant in any case. The rest could pretty much all run in containers though. Should I look into this more or keep the VM layer?
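        On the native-encryption-plus-key-server question: recent OpenZFS (2.0+) can fetch a key over https when loading it, which fits the Pi-as-key-server idea without extra software like Mandos. A sketch with placeholder names and no error handling beyond a prompt fallback:

        ```shell
        # Encrypted dataset whose passphrase is served by the Pi (placeholder URL).
        zfs create -o encryption=aes-256-gcm \
                   -o keyformat=passphrase \
                   -o keylocation=https://pi.lan/keys/tank-secure.key \
                   tank/secure

        # At boot: try the key server first, fall back to an interactive prompt
        # if the Pi is down (matching the "passphrase only when Pi is down" goal).
        zfs load-key tank/secure || zfs load-key -L prompt tank/secure
        zfs mount tank/secure
        ```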

        I'm not afraid to do some initially complex setting up. I'm a full stack web developer, not a professional sysadmin though, so advice is welcome. I don’t want to buy tons of new shit, but I’m not severely budget limited either. I’m the only admin for this system but not the only user (family setting).

        What’s the 2025 way of doing this? I’m most of all looking for inspiration as to the ā€œwhyā€; I can figure out ways to get it done if I see the benefits.

        tldr: how to best have reliable super-frequent snapshots of a home server’s data with encryption, preferably making use of zfs.

        [email protected] replied (#19):

        Boy. You asked about Proxmox. Nobody said anything.

        How does Proxmox make it easier? Have you used it? All sorts of ways. Like, it's a full virtual infrastructure management system instead of just an OS. Proxmox loves ZFS. It does many of the things you've mentioned here.

        Proxmox does have its own backup system that can work with an NFS target or with their smart dedupe storage and replication server product. https://www.proxmox.com/en/products/proxmox-backup-server/overview

        You've got some pretty advanced ideas and perhaps have already moved beyond the Proxmox question. But if you are curious and haven't used it, spin up a server and give it a whirl.

        • T [email protected]

          Aight thank you so much, confirms I'm on the right path! This clarifies a lot, I'll keep the ext4 boot drive šŸ™‚

          [email protected] replied (#20):

          FYI, ZFS is pretty fucking fragile, it breaks a lot, especially if you like to keep your kernel up to date. The kernel ABI is just unstable and it takes months to catch up.

          Which is part of why I don't trust zfs on root.

          Worst case you can sometimes recover with zfs-fuse.

          • I [email protected]

            FYI, zfs is pretty fucking fragile, it breaks a lot, especially if you like to keep your kernel up to date. The kernel abi is just unstable and it takes months to catch up.

            Which is part of why I don't trust zfs on root.

            Worst case you can sometimes recover with zfs-fuse.

            [email protected] replied (#21):

            Right, thanks for the heads up!
            On the desktops I have simply installed zfs as root via the Ubuntu 24.04 installer. Then, as the option was not available in the server variant I started to think maybe that is not something that should be done šŸ˜›

In reply to the Guest (#18).

              [email protected] replied (#22):

              Always a good reminder to test the backups; no, I would not sleep properly if I didn't test them šŸ˜›

              Aiming to keep it simple; there are too many moving parts in the VM snapshots, and it's hard to figure out best practices and notice mistakes without work experience in the area, so I'll just back up the data separately and call it a day. But thanks for the input! I don't think any of my services have in-memory DBs.

In reply to [email protected] (#19).

                [email protected] replied (#23):

                I guess I'll give it a spin. There seems to be a big community around it. I initially thought I might migrate later, so I kept the host OS layer as thin as possible. Ubuntu was mainly an easy start as I was familiar with it from before, and the spirit in this initiative is DIY over framework - but if there's a widely used solution for exactly this... yeah.

                • T [email protected]

                  Right, thanks for the heads up!
                  On the desktops I have simply installed zfs as root via the Ubuntu 24.04 installer. Then, as the option was not available in the server variant I started to think maybe that is not something that should be done šŸ˜›

                  [email protected] replied (#24):

                  It's good, but be aware you want to stick to LTS kernels or at least don't upgrade casually.

                  Arch is the worst for this; Ubuntu and Debian are better but still get hit.

                  https://forums.opensuse.org/t/zfs-on-tumbleweed-how-to-keep-a-working-kernel-version/151323

                  https://github.com/openzfs/zfs/issues/15759

                  https://zfsonlinux.topicbox.com/groups/zfs-discuss/T2ea24fcfd1b7778e/zfs-2-2-5-compatible-with-kernel-6-10-or-not

                  https://www.reddit.com/r/archlinux/comments/137pucy/zfs_not_compatible_with_kernel_63/

                  Hit this recently on an arch build, switched to kernel-lts and it worked, but basically once every year or so the abi breaks and zfs is dead for 3-6 months on github.com/torvalds/linux@master. Just FYI.
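                  One way to reduce that risk on Debian/Ubuntu is to hold the kernel metapackages so an upgrade can't run ahead of the ZFS DKMS module; a sketch (package names are the Ubuntu generic-kernel ones):

                  ```shell
                  # Pin the kernel until the ZFS module is known to build against the next one.
                  sudo apt-mark hold linux-image-generic linux-headers-generic

                  # Before unholding, check which kernels the ZFS module is built for:
                  dkms status zfs
                  sudo apt-mark unhold linux-image-generic linux-headers-generic
                  ```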

In reply to [email protected] (#19).

                    [email protected] replied (#25):

                    Proxmox is Debian, so many of your ideas could translate directly across. That said, I try to mod the PVE server as little as possible.

                    Proxmox makes it so easy to spin up yet another VM or LXC to handle services with its core offerings. Also google "proxmox helper scripts" to find tteck's additional stash of ready-made LXCs.

                    • I [email protected]

                      It's good, but be aware you want to stick to LTS kernels or at least don't upgrade casually.

                      Arch is the worst for this, ubuntu and debian are better but still get hit.

                      https://forums.opensuse.org/t/zfs-on-tumbleweed-how-to-keep-a-working-kernel-version/151323

                      https://github.com/openzfs/zfs/issues/15759

                      https://zfsonlinux.topicbox.com/groups/zfs-discuss/T2ea24fcfd1b7778e/zfs-2-2-5-compatible-with-kernel-6-10-or-not

                      https://www.reddit.com/r/archlinux/comments/137pucy/zfs_not_compatible_with_kernel_63/

                      Hit this recently on an arch build, switched to kernel-lts and it worked, but basically once every year or so the abi breaks and zfs is dead for 3-6 months on github.com/torvalds/linux@master. Just FYI.

                      [email protected] replied (#26):

                      Really good to know. I planned to keep using very mainstream LTS versions anyway, but this solidifies the decision. Maybe on a laptop I'll install something more experimental, but that's throwaway-style anyway.
