Homelab upgrade - "Modern" alternatives to NFS, SSHFS?
-
I'm using Ceph on my Proxmox cluster, but only for the server data. All my Jellyfin media goes onto a separate NAS over NFS, since it doesn't really need the high availability and everything else that comes with Ceph.
It's been working great. You can set everything up through the Proxmox GUI and it shows up like any other storage for the VMs. You do need enterprise-grade NVMe drives, though, or Ceph will chew through them in no time, plus a separate network connection for Ceph traffic if you're moving a lot of data.
Very happy with this setup.
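For reference, the same NFS storage the GUI creates can also be added from the Proxmox CLI with `pvesm`; the storage name, server IP, and export path below are placeholders:

```shell
# Add an NFS share as a Proxmox storage (equivalent to the GUI dialog).
# "nas-media", 192.168.1.50, and /export/media are made-up examples.
pvesm add nfs nas-media \
    --server 192.168.1.50 \
    --export /export/media \
    --content images,iso \
    --options vers=4.2

# Verify it shows up alongside the other storages
pvesm status
```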
-
-
Generally yes, but it can be useful as a learning exercise. A lot of my homelab use is practicing with different tech in a setting where, if it melts down, it's just my stuff. At work they tend to take offense if you break prod.
-
I tried it once. NFSv4 isn't as simple as NFSv3 is, and fewer systems support it too.
-
Are you having trouble reading context?
No, I'm not applying 2005 security; I'm saying NFS hasn't evolved much since 2005, so throw it on a dedicated link by itself with no other traffic and call it a day.
Yes, iSCSI allows the use of mounted LUNs as datastores like any other; you just need to use the user-space iSCSI driver and tools so that iscsi-ls is available. Do not use the kernel driver and arguments. This is documented in many places.
If you're gonna make claims to strangers on the internet, make sure you know what you're talking about first.
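For anyone who finds this later, a rough sketch of the user-space path with libiscsi's tools — the portal address and IQN below are made up:

```shell
# Enumerate targets and LUNs on a portal with libiscsi's user-space tool
# (192.168.1.10 and the IQN are placeholders).
iscsi-ls --show-luns iscsi://192.168.1.10/

# QEMU can then consume a LUN directly through its user-space iSCSI
# driver, with no kernel-side /dev/sdX device involved:
qemu-system-x86_64 \
    -drive file=iscsi://192.168.1.10/iqn.2004-04.com.example:storage.lun1/0,if=virtio
```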
-
I agree, but it’s clear that OP doesn’t want a real solution, because those apparently are boring. Instead, they want to try something new. NVMe/TCP is something new. And it still allows for having VMs on one system and storage on another, so it’s not entirely off topic.
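If anyone wants to try it, the initiator side of NVMe/TCP looks roughly like this with `nvme-cli`; the address, port, and subsystem NQN are placeholders:

```shell
# Load the NVMe-over-TCP transport
modprobe nvme-tcp

# See what subsystems the target exports (placeholder address/port)
nvme discover -t tcp -a 192.168.1.60 -s 4420

# Connect; the namespace then appears as a local /dev/nvmeXnY block device
nvme connect -t tcp -a 192.168.1.60 -s 4420 \
    -n nqn.2024-01.com.example:storage.vmdata
```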
-
-
-
Yeah, I've ended up setting up VLANs in order to not deal with encryption.
-
Sure, if you have exactly one client that can access the server and you can guarantee physical security of the actual network, I suppose it's fine. Still, those are severe limitations, and they show how limited the ancient NFS protocol is, even in version 4.
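For completeness: NFSv4 can encrypt traffic in flight via Kerberos (`sec=krb5p`), though that means running a KDC, which is why many people fall back to VLANs instead. A sketch with placeholder host and paths:

```shell
# Server side, /etc/exports (placeholder subnet/path):
#   /export/media  192.168.1.0/24(rw,sec=krb5p)

# Client side: krb5p gives authentication, integrity, and privacy
# (encryption) on the wire, at some CPU cost.
mount -t nfs4 -o sec=krb5p nas.example.lan:/export/media /mnt/media
```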
-
-
-
It was the only easy option I could figure out. I didn't manage to understand iSCSI in the time I was patient with it, and I was desperate to finish the project and use my stuff. Thus NFS.
-
Wouldn't the sync option also confirm that every write has actually arrived on the disk?
If you're mounting with the NFS sync option, that'll avoid the "wait until close and probably reorder writes at the NFS layer" issue I mentioned, so that'd address one of the two issues, and the one that's specific to NFS.
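A sketch of what that looks like, with placeholder host and paths:

```shell
# The client-side "sync" option makes each write go to the server before
# the syscall returns: slower, but no delayed flush-at-close or
# client-side write reordering.
mount -t nfs -o sync,vers=4.2 nas.example.lan:/export/data /mnt/data

# Or the equivalent /etc/fstab entry:
# nas.example.lan:/export/data  /mnt/data  nfs  sync,vers=4.2  0  0
```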
-
NFS gives me the best performance. I've tried GlusterFS (not at home, for work), and it was kind of a pain to set up and maintain.
-
-
I agree as well. No reason not to use it. If there were a better way to do it, someone would have built it by now.
-
I use Ceph/CephFS myself for my own 671TiB array (382TiB raw used, 252TiB-ish data stored) -- I find it a much more robust and better-architected solution than Gluster. It supports distributed block devices (RBD), filesystems (CephFS), and object storage (RGW). NFS is still pretty solid for basic remote filesystem mounting, though.
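For anyone curious, mounting CephFS with the kernel client looks roughly like this; the monitor address, user name, and key file are placeholders:

```shell
# Mount CephFS via the kernel client (placeholder monitor/user/keyfile).
# The secretfile holds the CephX key for the "admin" user.
mount -t ceph 192.168.1.20:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```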