Question about what to put on RAID and what to put on NVMe
-
[email protected] replied to [email protected]
Any reason why that board? Not 100% sure what you are trying to do, but it seems like an expensive board for a home NAS. I feel like you could get more value with other hardware. Again, you don't need a RAID controller these days; they're a pain to deal with and provide less protection than software RAID. It looks like the x16 slot on that board can be split 8/8, so if needed you can add an adapter for 2 NVMe drives.
You can just get an HBA card and add a bunch of drives to that as well if you need more data ports.
I would recommend doing a bit more research on hardware and trying to figure out what you need ahead of time. Something like an ASRock motherboard might be better in this case. The EPYC CPU is fine, but maybe get something with RDIMM memory. I would just make sure it has a management port like IPMI, as on the Supermicro.
-
[email protected] replied to [email protected]
I wanted to get something with a lot of upgrade potential, and this was the cheapest option to get my foot in the door with an EPYC processor.
Also needed two PCIe slots that could do at least x8, for the HBA card and the Intel Arc for video streaming.
-
[email protected] replied to [email protected]
All a matter of your risk tolerance and how often the data changes.
-
[email protected] replied to [email protected]
Don't make the same mistake I did. Get a backup in place before using ZFS. Using ZFS and RAIDing your drives together makes them a single point of failure. If ZFS fucks up, you're done. The only way to mitigate this is having another copy in a different pool, and preferably on a different machine. I got lucky that my corrupted ZFS pool was still readable and I could copy files off, but others have not been so lucky.
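For what it's worth, the "different pool on a different machine" part doesn't have to be fancy. Here's a minimal sketch of the shape of it, assuming plain zfs send/recv over ssh; the dataset names and host are made up, and a real setup would do incremental sends with `zfs send -i` and prune old snapshots:

```python
#!/usr/bin/env python3
"""Sketch: snapshot a dataset and replicate it to a second machine.
Dataset names and the ssh host are placeholders; assumes the zfs tools
exist on both ends and the ssh user is allowed to run `zfs recv`."""
import subprocess
from datetime import datetime

DATASET = "tank/media"           # hypothetical local dataset
REMOTE = "backup-box"            # hypothetical ssh host with its own pool
REMOTE_DATASET = "backup/media"  # hypothetical dataset on that pool

snap = f"{DATASET}@repl-{datetime.now():%Y%m%d-%H%M%S}"

# Snapshot first so we ship a consistent point-in-time copy.
subprocess.run(["zfs", "snapshot", snap], check=True)

# zfs send | ssh <remote> zfs recv
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["ssh", REMOTE, "zfs", "recv", "-F", REMOTE_DATASET],
               stdin=send.stdout, check=True)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")
```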
-
[email protected] replied to [email protected]
Yeah, I wouldn't dare.
The fact that I migrated from a 3-drive to a 6-drive mdadm RAID without losing anything is a damn miracle.
-
[email protected] replied to [email protected]
Oh, I wasn't saying not to, I was just saying make sure you're aware of what recovery entails, since a lot of RAID controllers don't just write bytes to the disk and can, if you don't have spares, make recovery a pain in the ass.
I'm using md RAID for my boot SSDs and yeah, the install was a complete pain in the ass, since the Debian installer will let you, but it's very much in the Linux sense of 'let you': you can do it, but you're figuring it out on your own.
-
[email protected] replied to [email protected]
Where I've landed now is
A) just migrate everything over so I can continue working.
B) Migrate my mdadm to ZFS
C) Buy another NVMe down the road and configure it with the onboard RAID controller to prevent any sudden system downtime.
D) Configure nightly backups of anything of import on the NVMe RAID to the ZFS pool.
E) Configure nightly snapshots of the ZFS pool to another webserver on-site.
F) rsync the ZFS pool to cold storage every six months and store off-site.
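If it helps, D) is small enough to sketch: something like this from a nightly cron job (paths and dataset names are placeholders, not a recommendation of layout), with E) and F) then mostly being zfs send / rsync of those snapshots:

```python
#!/usr/bin/env python3
"""Sketch of a nightly backup job: rsync the important NVMe data onto a
ZFS dataset, then snapshot it. All paths/names below are placeholders."""
import subprocess
from datetime import date

NVME_DIRS = ["/srv/appdata", "/etc"]  # hypothetical "anything of import" on the NVMe RAID
POOL_MOUNT = "/tank/backups/nvme/"    # hypothetical mountpoint of the target dataset
DATASET = "tank/backups/nvme"         # that same dataset, for snapshotting

for src in NVME_DIRS:
    # -aH preserves permissions and hardlinks, --delete keeps the copy an exact mirror
    subprocess.run(["rsync", "-aH", "--delete", src, POOL_MOUNT], check=True)

# Snapshot after the copy so every night is its own restore point.
subprocess.run(["zfs", "snapshot", f"{DATASET}@nightly-{date.today()}"], check=True)
```
-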
[email protected] replied to [email protected]
I have run photoprism straight from mdadm RAID5 on some ye olde SAS drives with only a reduction in indexing speed (about 30K photos, which took ~2 hours to index with GPU TensorFlow).
That being said, I'm in a similar boat doing an upgrade, and here are some warnings I've found helpful:
- Consumer-grade NVMe drives are not designed for tons of write ops, so they should optimally only be used in RAID 0/1/10. RAID 5/6 will literally start with a massive parity rip across the drives, and the default timer for RAID checks on Linux is 1 week. The same goes for ZFS and mdadm caching; just proceed with caution (i.e. 3-2-1 backups) if you go that route. Even if you end up doing RAID 5/6, make sure you get quality hardware with decent TBW, as server-grade NVMe drives often have triple the TBW rating.
- ZFS is a load of pain if you're running anything related to Fedora or Red Hat, and the performance implications, even after lots and lots of testing, are still arguably inconclusive for a NAS/homelab setup. Unless you rely on the specific feature set or are building an actual hefty storage node, stock mdadm and LVM will probably fulfill your needs.
- Btrfs has all the features you need but its performance is a load of trash; I highly recommend XFS for the file-integrity features plus built-in data dedup, and mdadm/LVM for the rest.
I'm personally going with the NVMe scheduled backups to RAID, because the caching just doesn't seem worth it when I'm gonna be slamming huge media files around all day along with running VMs and other crap. For context, the 2TB NVMe drive I have is only rated for 1200 TBW. That's probably more than enough for a file server, but for my homelab server it would just be caching constantly with whatever workload I'm throwing at it. It would still probably last a few years with no issues, but SSD pricing has just been awful these past few years.
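If you want to keep an eye on how quickly you're eating into that 1200 TBW, smartctl can report lifetime writes in JSON. Rough sketch, assuming smartmontools 7.x and an NVMe at /dev/nvme0 (adjust the device path and rated TBW for your drive):

```python
#!/usr/bin/env python3
"""Rough check of how much of an NVMe drive's TBW rating has been used.
Assumes smartmontools 7.x (--json) and a drive at /dev/nvme0."""
import json
import subprocess

DEVICE = "/dev/nvme0"   # adjust to your drive
RATED_TBW = 1200        # from the drive's spec sheet

# smartctl's exit code is a bitmask of warnings, so don't treat nonzero as fatal here.
out = subprocess.run(["smartctl", "--json", "-a", DEVICE],
                     capture_output=True, text=True)
health = json.loads(out.stdout)["nvme_smart_health_information_log"]

# NVMe "data units" are 1000 * 512 bytes = 512,000 bytes each.
tb_written = health["data_units_written"] * 512_000 / 1e12
print(f"{tb_written:.1f} TB written "
      f"({100 * tb_written / RATED_TBW:.1f}% of the {RATED_TBW} TBW rating), "
      f"drive reports {health['percentage_used']}% used")
```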
On a related note, Photoprism needs to upgrade to TensorFlow 2 so I don't have to compile an antiquated binary for CUDA support.
-
[email protected] replied to [email protected]
Some databases support snapshotting (which won't take the database down), and I believe that backup systems can be aware of the DBMS. I'm not a good person to ask about best practices, because I don't admin a DBMS, but it's an issue that I do mention when people are talking about backups and DBMSes -- if you have one, be aware that a backup system is going to have to take the DBMS into account one way or another if you want to avoid backing up a partially-written database.
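As a tiny concrete example of letting the DBMS make the copy instead of copying the file out from under it -- SQLite here only because it fits in a few lines, the paths are made up, and bigger databases have their own tools (pg_dump/pg_basebackup, mysqldump, etc.):

```python
#!/usr/bin/env python3
"""Use the DBMS's own backup mechanism instead of copying the live file.
Paths are placeholders."""
import sqlite3

src = sqlite3.connect("/srv/app/data.db")       # the live database
dst = sqlite3.connect("/tank/backups/data.db")  # the copy on the backup pool

with dst:
    # sqlite3's backup API copies a consistent snapshot of the database,
    # even if other connections keep writing while it runs.
    src.backup(dst)

src.close()
dst.close()
```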
-
[email protected] replied to [email protected]
No, because the DBMS is designed to permit power loss in the middle of a write without being corrupted. It'll do something vaguely like this if you are, for example, overwriting an existing record with a new one:
1) Write that you are going to make a change, in a way that does not affect existing data.
2) Perform a barrier operation (which could amount to just syncing to disk, or could just tell the OS's disk-cache system to place some restrictions on how it later syncs to disk, but in any event will ensure that all writes prior to the barrier operation are on disk before those subsequent to it).
3) Replace the existing record. This may be destructive of existing data.
4) Potentially remove the data written in Step 1, depending upon the database format.
If the DBMS loses power and comes back up, and the data from Step 1 is present and complete, it'll consider the operation committed and simply continue the steps from there. If Step 1 is only partially on disk, it'll treat the commit as not having gone through and delete it. From the DBMS's standpoint, either the change happens as a whole or it does not happen at all.
That works fine for power loss or if a filesystem is snapshotted at an instant in time.
However, if you are a backup program and happily reading the contents of a file, you may be reading a database file with no synchronization, and may wind up with bits of one or multiple commits -- a corrupt database after the backup is restored.
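If it helps to see the ordering spelled out, here's a toy version of those steps in Python -- a plain file standing in for the database and a side file for the Step 1 record. No real DBMS lays things out like this, but the barrier-then-overwrite sequence is the point:

```python
#!/usr/bin/env python3
"""Toy illustration of the commit sequence above: journal the intent,
barrier, overwrite in place, then drop the journal. Recovery checks whether
the journal is complete to decide if the commit "happened"."""
import json
import os

DB = "records.db"            # the toy "database" file
JOURNAL = "records.journal"  # the Step 1 intent record

open(DB, "ab").close()       # make sure the toy database file exists


def commit(offset: int, new_record: bytes) -> None:
    # Step 1: describe the change somewhere that doesn't touch existing data.
    with open(JOURNAL, "w") as j:
        json.dump({"offset": offset, "data": new_record.hex()}, j)
        j.flush()
        os.fsync(j.fileno())   # Step 2: barrier -- intent is on disk before we touch the DB

    # Step 3: now overwrite the record in place (destructive).
    with open(DB, "r+b") as db:
        db.seek(offset)
        db.write(new_record)
        db.flush()
        os.fsync(db.fileno())

    # Step 4: the change is durable, so the intent record can go.
    os.remove(JOURNAL)


def recover() -> None:
    # On startup: a complete journal means the commit counts, so replay Step 3;
    # a partial or missing journal means it never happened, so discard it.
    if not os.path.exists(JOURNAL):
        return
    try:
        with open(JOURNAL) as j:
            intent = json.load(j)
    except json.JSONDecodeError:   # only partially written -> not committed
        os.remove(JOURNAL)
        return
    with open(DB, "r+b") as db:
        db.seek(intent["offset"])
        db.write(bytes.fromhex(intent["data"]))
        db.flush()
        os.fsync(db.fileno())
    os.remove(JOURNAL)
```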
-
[email protected] replied to [email protected]
Very good to know! Thanks.
-
[email protected] replied to [email protected]
Thanks for the tips. I'll definitely at least start with mdadm since that's what I've already got running, and I've got enough other stuff to worry about.
Are you worried at all about bit rot? I hear that's one drawback of mdadm or RAID vs. ZFS.
Also, any word on when photoprism will support the Coral TPU? I've got one of those and haven't found much use for it.