This is an automated archive made by the Lemmit Bot.
The original was posted on /r/datahoarder by /u/Special_Agent_Gibbs on 2024-11-08 13:22:19+00:00.
Does anyone have advice on how data from a website, primarily file-based data, can be downloaded and preserved in an automated way? The website I'm thinking of (data dot gov) hosts thousands of CSV files (among other formats), and I'd like to see those files preserved before they are potentially deleted as early as next year.
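One common starting point for this: data.gov's catalog is served by CKAN, which exposes a JSON search API at `catalog.data.gov/api/3/action/package_search`. Below is a minimal sketch (not a production scraper) that pages through search results filtered to CSV resources and downloads each file, skipping anything already on disk. The endpoint and response fields follow CKAN's documented API; the destination directory name and query filter are illustrative choices.

```python
import urllib.parse
import urllib.request
from pathlib import Path

CKAN_API = "https://catalog.data.gov/api/3/action/package_search"

def build_search_url(query: str = "res_format:CSV", rows: int = 100, start: int = 0) -> str:
    # Build a CKAN package_search URL filtering for CSV resources.
    # 'fq' is CKAN's filter-query parameter; 'rows'/'start' page the results.
    params = urllib.parse.urlencode({"fq": query, "rows": rows, "start": start})
    return f"{CKAN_API}?{params}"

def csv_urls(package_search_result: dict):
    # Extract CSV resource URLs from a parsed package_search JSON response.
    for pkg in package_search_result["result"]["results"]:
        for res in pkg.get("resources", []):
            if res.get("format", "").upper() == "CSV" and res.get("url"):
                yield res["url"]

def download(url: str, dest_dir: str = "data_gov_csv") -> Path:
    # Save one file into dest_dir, skipping files already downloaded
    # so the job can be re-run (e.g. from cron) without duplicating work.
    Path(dest_dir).mkdir(exist_ok=True)
    target = Path(dest_dir) / Path(urllib.parse.urlparse(url).path).name
    if not target.exists():
        urllib.request.urlretrieve(url, target)
    return target
```

In a driver loop you would fetch `build_search_url(start=0)`, `start=100`, and so on until `result["result"]["count"]` is exhausted, parsing each response with `json.load` and feeding the URLs to `download`. For true preservation, pairing this with checksums and periodic re-runs (or pointing ArchiveBox / the Internet Archive at the same URLs) guards against silent link rot.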