What do people use for a shelf-stable backup?
-
This will do nothing at all. Drives don't die of rust. They usually die because the motor can no longer get the platters spinning, very often because the lubricant has dried out, which can happen if you leave the drive powered off for too long.
-
So, I have a server with a backup drive, automated backups, and replication to laptops as well as cloud storage in Backblaze B2. What I'm looking for is something completely separate from the automation: a backup in case I screw up the automation, and one a layman can access (i.e. no encryption, media that is usable by anyone). I have had some very bad experiences with flash drives, but I am thinking an HDD with a SATA->USB cable attached (I already have the cable).
From the other conversations in this thread mentioning many options, the hard drive option seems the best for my use case. I've also been convinced of the benefit of printing out some physical photos, so my current plan is to get a big container, put a couple of mirrored hard drives in it (to validate against each other as protection against bit rot), and print 100 photos each year to add to the container as an extra layer of redundancy.
-
Any file system Windows can read out of the box is not a good file system. What can Windows read? FAT and NTFS. The former is so basic it has no mechanism to detect errors or bitrot, and the latter is a mess.
You should stick to ext4, btrfs, or zfs. If you want to make it fool-proof, add a sticker that says 'bring me to a computer shop to access my content'.
-
I have considered that exact message. It does seem that making it easily plug-and-play may be out of the question if I want the error-correction capabilities.
-
Btrfs and zfs are self-healing.
You can write a script that checks for errors and auto-corrects them yourself, but that needs at least a second HDD. Both drives hold the same data plus a file or database with the checksums of that data. The script then compares the current checksums of the two copies against the stored DB checksum. If all three match -> perfect. If they don't, the copy whose checksum matches one of the others is the good one and either replaces the faulty copy or corrects the DB entry, whichever is defective. That's it. It doesn't have to be more complicated.
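A minimal sketch of that idea in Python, assuming the two copies are mounted at /mnt/backup_a and /mnt/backup_b and the checksum "database" is a plain JSON manifest (all paths and names are placeholders, not a tested implementation):

    import hashlib, json, shutil
    from pathlib import Path

    COPY_A = Path("/mnt/backup_a")        # first drive (placeholder)
    COPY_B = Path("/mnt/backup_b")        # second, mirrored drive (placeholder)
    MANIFEST = COPY_A / "checksums.json"  # stored checksums from the last run

    def sha256(path: Path) -> str:
        # Stream the file so large files don't have to fit in RAM.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    manifest = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}

    for file_a in COPY_A.rglob("*"):
        if not file_a.is_file() or file_a == MANIFEST:
            continue
        rel = str(file_a.relative_to(COPY_A))
        file_b = COPY_B / rel
        if not file_b.exists():                        # missing on B: re-mirror it
            file_b.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(file_a, file_b)
        a, b, db = sha256(file_a), sha256(file_b), manifest.get(rel)

        if a == b == db:
            continue                                   # all three agree: perfect
        # Two-out-of-three vote: whatever two sources agree on is taken as good.
        if a == b:
            manifest[rel] = a                          # the DB entry was stale or wrong
        elif a == db:
            shutil.copy2(file_a, file_b)               # copy B is the corrupted one
        elif b == db:
            shutil.copy2(file_b, file_a)               # copy A is the corrupted one
        else:
            print(f"UNRECOVERABLE: {rel} (no two checksums agree)")

    MANIFEST.write_text(json.dumps(manifest, indent=2))

New files simply get their checksum added to the manifest on the next run, and the script never deletes anything, which is what you want for a cold backup.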
-
-
Yip, I think this is the setup I will want (probably both: ZFS plus a custom script for validation, just to be sure). Two mirrored drives. I do need to read up a bit on ZFS mirroring to understand it better, but I think I have a path to follow now.
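For what it's worth, a rough sketch of what the "just to be sure" validation script could look like on top of a ZFS mirror (the pool itself would be created once with something like zpool create coldbackup mirror <disk1> <disk2>; the pool name is a placeholder and this assumes the ZFS command-line tools are installed):

    import subprocess, sys, time

    POOL = "coldbackup"   # placeholder pool name

    # Start a scrub: ZFS re-reads every block and, because the pool is a mirror,
    # repairs any block whose checksum doesn't match from the other copy.
    subprocess.run(["zpool", "scrub", POOL], check=True)

    # Wait for the scrub to finish, then ask ZFS whether anything is unhealthy.
    while "scrub in progress" in subprocess.run(
            ["zpool", "status", POOL], capture_output=True, text=True).stdout:
        time.sleep(60)

    result = subprocess.run(["zpool", "status", "-x", POOL],
                            capture_output=True, text=True)
    print(result.stdout)
    sys.exit(0 if "is healthy" in result.stdout else 1)

On ZFS the scrub itself already does the repair, so the custom script ends up being more of a health check and reminder than a corrector.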
-
-
-
The printed photos are only there as an extra layer of redundancy in case everything else fails. It's ok if they get a bit discoloured; that never put me off going through my grandparents' suitcases of photos. Ideally the digital files survive; if not, at least there is something rather than nothing.
Is an SSD really necessary? Everything I look up says SSDs have worse retention than HDDs in cold storage. A couple of TB of HDD is pretty cheap these days and seems like a better cold-storage option.
You can't exactly make it fool-proof. Outsiders will never know what you did to create your backup or what to do to access it. Who knows if the drive's file system or file types will still be readable after 20 years? Who knows if SATA and USB connectors will still be around by then?
Yes, so now I'm thinking of a rotation cycle: about every 5 years, replace the drives with new ones and copy over all the data. If newer technology exists by then, I can move to it. This way I keep it up to date as long as I can.
For example, it is very likely that SATA will disappear within the next 10-15 years, as HDDs are becoming more and more an enterprise thing and consumers are switching to M.2 SSDs.
Does this matter if I have a SATA->USB cable stored with it? Only if the USB-A standard changes or gets abandoned for USB-C, but that should be covered by the review every 5 years.
-
-
-
I've decided I should have a small number of physical prints, as extra redundancy. I'm thinking I'll print 100 each year to store with the hard drive backup.
-
I decided instead to use ZFS. Better protection than just letting something sit there. Your backups are only as good as your restores. So, if you are not testing your restores, those backups may be useless anyway.
ZFS with snapshots, replicated to another ZFS box. The replica also keeps the snapshots, and they are read-only. I have snapshots running every hour.
I have full confidence that my data is safe and recoverable.
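Roughly, each hourly cycle boils down to something like this (a sketch only; dataset, host and snapshot names are placeholders, it assumes the previous snapshot already exists on both sides, and the read-only part would be the readonly=on property on the receiving dataset):

    import datetime, subprocess

    DATASET = "tank/data"            # placeholder dataset on the primary box
    REMOTE = "backupbox"             # placeholder SSH host with the second ZFS pool
    PREV = "tank/data@hourly-prev"   # placeholder: last snapshot both sides already have

    # Take the new hourly snapshot: an atomic, read-only point-in-time copy.
    new = f"{DATASET}@hourly-{datetime.datetime.now():%Y%m%d-%H%M}"
    subprocess.run(["zfs", "snapshot", new], check=True)

    # Send only the blocks changed since the previous snapshot and receive them
    # on the second box; the replica keeps the snapshots too
    # (-u = don't mount the received dataset).
    send = subprocess.Popen(["zfs", "send", "-i", PREV, new], stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-u", DATASET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()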
-
-
-
Reminds me of Project Silica. Historically, media was more durable (stone, ink and cloth paper, etc.) but had a low data density. As density increased, so did fragility.
-
If you need something that can withstand some bitrot on a single drive, just use par2. As long as the filesystem is readable, you can recover files even if bits of data get corrupted.
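A hedged sketch of that workflow driven from Python, assuming the par2 command-line tool (par2cmdline) is installed and the files sit in a flat placeholder directory:

    import subprocess
    from pathlib import Path

    ARCHIVE = Path("/mnt/backup_a/photos")   # placeholder directory with the files
    INDEX = "photos.par2"                    # par2 also writes .volXX+YY.par2 siblings

    # Create ~10% recovery data next to the files (-r10 = 10% redundancy), enough
    # to repair scattered bitrot without keeping a full second copy.
    files = [p.name for p in ARCHIVE.glob("*") if p.is_file() and p.suffix != ".par2"]
    subprocess.run(["par2", "create", "-r10", INDEX, *files], check=True, cwd=ARCHIVE)

    # Later (e.g. during the 5-year review): verify, and repair if anything rotted.
    verify = subprocess.run(["par2", "verify", INDEX], cwd=ARCHIVE)
    if verify.returncode != 0:
        subprocess.run(["par2", "repair", INDEX], check=True, cwd=ARCHIVE)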
-
-