Internet Archive played crucial role in tracking shady CDC data removals
-
-
It wouldn't take much; they had multiple breaches and other problems last fall, seemingly due to very avoidable reasons.
-
archive.is or its mirrors should also be used, as archive.org has proven vulnerable to takedown requests from corporations; it wouldn't surprise me if they could be coerced into removing data by a US government request as well.
-
very avoidable reasons.
They're understaffed for the amount of work they do, and their staff are probably even more busy fighting lawsuits at the moment. Things are going to slip through the cracks, unfortunately.
-
Any idea the size of IA? Could it be packaged in some torrents and distributed to the masses for decentralized archiving? I'm guessing it's way more than I could store.
-
As of five years ago, 70 petabytes: https://blog.adafruit.com/2020/12/01/donate-to-the-internet-archive-digital-library-of-free-borrowable-books-movies-music-wayback-machine-internetarchive/
In 2012 it was 10 petabytes. Now it's probably well over 100 petabytes. I think it's well beyond the scope of torrents by now.
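For a rough feel of the scale, here's a back-of-envelope Python sketch based on the two figures cited above (~10 PB in 2012, ~70 PB in 2020). Assuming constant exponential growth is shaky, so treat the output as a guess, not a real estimate:

    # Back-of-envelope extrapolation from the two figures cited above:
    # ~10 PB in 2012 and ~70 PB in 2020. Constant exponential growth is
    # a shaky assumption; this is only to get a sense of scale.

    pb_2012, pb_2020 = 10, 70
    annual_growth = (pb_2020 / pb_2012) ** (1 / (2020 - 2012))  # ~1.28x per year

    for year in (2023, 2025):
        estimate = pb_2020 * annual_growth ** (year - 2020)
        print(f"{year}: roughly {estimate:.0f} PB if growth held steady")

For what it's worth, the 2023 number that falls out of this lands in the same ballpark as the 152 PB figure someone cites further down the thread.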
-
That's a bit more than my home server can handle. I could maybe take some CDC data, but definitely not the full shebang. It would be neat if someone could segment the data so we could each save some of the more critical things.
-
Is there a way to distribute it so everyone just has parts of it? Aren't there p2p cloud storage solutions that exist?
-
Here's how to help them: https://github.com/ArchiveTeam/warrior-dockerfile
-
A couple of years ago I read that Filecoin had teamed up with the Internet Archive to synchronize the data on the blockchain. I'm not sure how far along they are yet, but it's something that could work if it doesn't turn out to be just crypto hype in the end.
-
Although it may have been avoidable, it raises the question of who was behind the attacks.
I think we can safely say it was Peelon Shmusk, the world's worst spy!
-
When the Internet Archive was attacked a few months ago, we were like, "Who would be dumb and mean enough to do that?" We have new suspects!
-
Oh, cool, didn't know about this, throwing it on my home lab now.
-
The problem is you'd need to split it down to an amount that people would be happy hosting, and then host it multiple times in case any node goes offline.
Another comment in the thread says it's likely over 100 PB today (100,000 terabytes). I'd say 4 copies (spread over different time zones) is a relatively minimal level of redundancy (people may host on machines that aren't powered all the time), and you'd get the network with the most participants at around the 150 GB per node mark.
That comes to nearly 3 million participants needed (rough math below).
Which isn't insurmountable, but also not remotely easy to get going from nothing.
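A quick Python sketch of that arithmetic; all three inputs are the assumptions above, not measured values:

    # The arithmetic behind "nearly 3 million participants". All three
    # inputs are assumptions from the comment above, not measured values.
    total_pb = 100        # assumed total archive size, in petabytes
    copies = 4            # full copies kept for redundancy
    gb_per_node = 150     # storage contributed per participant, in GB

    total_gb = total_pb * 1_000_000                    # 1 PB = 1,000,000 GB
    nodes_needed = total_gb * copies / gb_per_node
    print(f"~{nodes_needed:,.0f} participants needed")  # ~2,666,667

Bumping per-node storage to 1 TB drops that to around 400,000 nodes, but either way it's a big coordination problem to bootstrap.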
-
That's not the Internet Archive; that's a separate group (ArchiveTeam). They use the Internet Archive for storage but are otherwise unrelated, and the data archived by Archive Team Warrior does not go into the Wayback Machine.
-
This comment from 8 months ago says 152PB: https://www.reddit.com/r/DataHoarder/comments/1cu79ke/the_archiveteam_has_a_cost_shameboard_of_the_top/l4om4m6/
-
These guys seem cool, but they're not the archive.org from the OP article.
-
As I understand it, their data does in fact end up in the Wayback Machine; it's just also available as direct WARC archive files (which IMO sounds beneficial for the idea of exporting in bulk to another backup host). At least that's how their FAQ reads.
And given that they focus on web crawling, and not the other arbitrary data formats that IA accepts, 2.8% of over 100 petabytes is still a respectable amount of data.
That said, help is help. If another archival project team wants me to run a worker node so they can distribute load and dodge crawler blocks, let me know; I've got space.
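If anyone wants to poke at those WARC files locally before mirroring them anywhere, here's a minimal Python sketch using the warcio package. The filename is just a placeholder and this is untested example code, not something from the ArchiveTeam docs:

    # Minimal sketch: list the URLs captured in a (gzipped) WARC file.
    # Requires the warcio package (pip install warcio); "example.warc.gz"
    # is a placeholder filename.
    from warcio.archiveiterator import ArchiveIterator

    with open("example.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":  # captured HTTP responses
                print(record.rec_headers.get_header("WARC-Target-URI"))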
-
It's a team of volunteers who help scrape and upload things to archive.org.
-
It does go into the Wayback Machine, AFAIK.