Question about what to put on RAID and what to put on NVMe
-
[email protected] replied to [email protected]
Doesn't this just push the issue to when the snapshot is made? If the snapshot is created mid-database update, won't you have the same problem?
-
[email protected] replied to [email protected]
Wouldn't this require the service to go down for a few minutes every night?
-
[email protected] replied to [email protected]
If you use hardware RAID, make sure you know what happens when your controller dies.
Is the data in a format you can access easily? Do you need a specific RAID controller to be able to read it in the future? How are you going to get a new controller if you need one?
That's a big reason why people nudge you toward software RAID: if you're using md and doing a mirror, that'll work on any damn drive controller on earth that Linux can talk to, and you don't need to worry about how you're getting your data back if a controller dies on you.
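To make that concrete, here's a rough sketch (Python just wrapping the mdadm commands; /dev/md0 and the /dev/sdX names are placeholders, and the create step wipes both disks):

```python
#!/usr/bin/env python3
"""Sketch only: an md mirror is controller-agnostic, so an array built on
one box can be re-assembled on any other machine Linux can see the disks
through. Device names below are placeholders."""
import subprocess
import sys

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if sys.argv[1:] == ["create"]:
    # On the original machine: build a two-disk RAID1 (destroys data on both!)
    run(["mdadm", "--create", "/dev/md0", "--level=1",
         "--raid-devices=2", "/dev/sda", "/dev/sdb"])
elif sys.argv[1:] == ["recover"]:
    # On a replacement machine: the RAID metadata lives on the disks
    # themselves, so a scan finds and re-assembles the array.
    run(["mdadm", "--assemble", "--scan"])
    run(["cat", "/proc/mdstat"])
```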
-
[email protected] replied to [email protected]
I meant software RAID of course. Hardware RAIDs just cause headaches, but fake RAIDs that are built into motherboards are a real nightmare.
-
[email protected] replied to [email protected]
Yup (although minutes seems long, and depending on usage, weekly might be fine). You can also combine it with updates, which require going down anyway.
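The whole window can be a tiny script run from cron: stop the service, snapshot, start it again. A minimal sketch, assuming systemd and a ZFS dataset; the unit and dataset names are made up:

```python
#!/usr/bin/env python3
"""Sketch of a short maintenance window: stop the service, snapshot the
dataset it writes to, bring it back up. Names are placeholders."""
import subprocess
from datetime import datetime

SERVICE = "myapp.service"   # hypothetical systemd unit
DATASET = "tank/appdata"    # hypothetical ZFS dataset

def run(cmd):
    subprocess.run(cmd, check=True)

stamp = datetime.now().strftime("%Y-%m-%d_%H%M")
run(["systemctl", "stop", SERVICE])
try:
    # Nothing is writing now, so the snapshot is consistent.
    run(["zfs", "snapshot", f"{DATASET}@{stamp}"])
finally:
    run(["systemctl", "start", SERVICE])
```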
-
[email protected] replied to [email protected]
So I'm kind of on the fence about this. I ran a RAID boot disk system like 12 years ago, and it was a total pain in the ass. Just getting it to boot after an update was a bit hit or miss.
Right now I'm leaning towards hardware NVMe RAID for the boot disk just to hide that from Linux, but I'd still treat it delicately and back up anything of importance nightly to a proper software RAID, and ultimately to another medium as well.
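The nightly copy itself can be as dumb as an rsync job. A minimal sketch, assuming the important stuff lives in a few directories and the software-RAID array is mounted somewhere (all paths are placeholders):

```python
#!/usr/bin/env python3
"""Sketch of the nightly copy: sync the important bits from the boot disk
onto the software-RAID array (and from there to other media)."""
import subprocess

SOURCES = ["/etc", "/home", "/srv/appdata"]   # hypothetical "stuff that matters"
DEST = "/mnt/raid/nightly/"                   # hypothetical mount of the array

# -a preserves permissions/ownership; --delete keeps the copy an exact mirror
subprocess.run(["rsync", "-a", "--delete", *SOURCES, DEST], check=True)
```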
-
[email protected] replied to [email protected]
Sounds like a good plan to me
-
[email protected] replied to [email protected]
2 HDDs (mirrored zpool), 1 SATA SSD for cache, 32 GB RAM
First read: 120 MB/s
Read while fully cached (obviously in RAM): 4.7 GB/s
-
[email protected] replied to [email protected]
- You don't need ZFS cache; stay away from it. It isn't going to help with what you want to do anyway. Just have enough RAM.
- You need to back up your stuff. Follow the 3-2-1 rule. RAID is not a backup.
- Don't use hardware RAID; there are many benefits to using software RAID these days.

With that said, let's dig into it. You don't really need NVMe drives tbh; SATA is probably going to be sufficient here. Mirrored drives will be fine as long as you are backing up your data. This also depends on how much space you will need.
I just finished building out my backup and storage solution and ended up wanting NVMe drives for certain services. I just grabbed a few 1 TB drives and mirrored them. Works great and I do get better performance, even with other bottlenecks. This is then replicated to another server for backup and also to a cloud backup.
You also haven't said what hardware you are currently using or whether you are using any software for the RAID. Are you currently using ZFS? Unraid? What hardware do you have? You might be able to use a PCIe slot to install multiple NVMe drives on one adapter card, though this requires bifurcation.
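If you do go the ZFS route, a two-drive mirror is only a couple of commands. A minimal sketch, with a made-up pool name and device paths (use /dev/disk/by-id/ names on a real system, and note the create step wipes the drives):

```python
#!/usr/bin/env python3
"""Sketch: create a simple ZFS mirror out of two NVMe drives.
Pool name and device paths are placeholders."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Two-way mirror named "fastpool" (destroys existing data on both drives!)
run(["zpool", "create", "fastpool", "mirror",
     "/dev/disk/by-id/nvme-drive-a", "/dev/disk/by-id/nvme-drive-b"])

# Verify layout and health
run(["zpool", "status", "fastpool"])
```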
-
[email protected] replied to [email protected]
Doing that every day feels a bit impractical. I already do that every few months.
-
[email protected] replied to [email protected]
Current hardware is an ancient fanless motherboard from 2016. RAID6 is through mdadm. Four of the drives are on a super slow PCIe 2.0 x1 card.
The new motherboard (just ordered) is a Supermicro H13SAE-MF, which has dual NVMe slots and a built-in RAID controller for them.
-
[email protected] replied to [email protected]
Any reason why that board? Not 100% sure what you are trying to do, but it seems like an expensive board for a home NAS. I feel like you could get more value with other hardware. Again, you don't need a RAID controller these days; they are a pain to deal with and provide less protection than software RAID. It looks like the x16 slot on that board can be split to 8/8, so if needed you can add an adapter card to fit two NVMe drives.
You can also just get an HBA card and add a bunch of drives to that if you need more data ports.
I would recommend doing a bit more research on hardware and trying to figure out what you need ahead of time. Something like an ASRock motherboard might be better in this case. The EPYC CPU is fine, but maybe get something with RDIMM memory. I would just make sure it has a management port like the IPMI on the Supermicro.
-
[email protected] replied to [email protected]
I wanted to get something with a lot of upgrade potential, and this was the cheapest option to get my foot in the door with an EPYC processor.
I also needed two PCIe slots that could do at least x8 for the HBA card and an Intel Arc card for video streaming.
-
[email protected] replied to [email protected]
All a matter of your risk tolerance and how often the data changes.
-
[email protected] replied to [email protected]
Don't make the same mistake I did. Get a backup in place before using ZFS. Using ZFS and RAIDing your drives together makes them a single point of failure. If ZFS fucks up, you're done. The only way to mitigate this is having another copy in a different pool, preferably on a different machine. I got lucky that my corrupted ZFS pool was still readable and I could copy files off, but others have not been so lucky.
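If you're on ZFS, getting that second copy can be as simple as shipping snapshots to another box with zfs send/receive. A rough sketch, with made-up pool, dataset, snapshot, and host names:

```python
#!/usr/bin/env python3
"""Sketch: push a ZFS snapshot to a second machine with zfs send/receive,
so a broken pool on the primary isn't the only copy. All names are
placeholders."""
import subprocess

SNAP = "tank/data@nightly"       # hypothetical snapshot on the primary
REMOTE = "backup-host"           # hypothetical SSH host
REMOTE_POOL = "backuptank/data"  # hypothetical dataset on the backup box

# Equivalent of: zfs send tank/data@nightly | ssh backup-host zfs receive -F backuptank/data
send = subprocess.Popen(["zfs", "send", SNAP], stdout=subprocess.PIPE)
subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", REMOTE_POOL],
               stdin=send.stdout, check=True)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")
```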
-
[email protected] replied to [email protected]
Yeah, I wouldn't dare.
The fact that I migrated from a 3-drive to a 6-drive mdadm RAID without losing anything is a damn miracle.