I know you're too poor to get duplicate storage but it's still time to back up your data Jow Forums

Excerpt from buttblasted Jow Forums user in denial:
"I lost all my data, nothing important really (non-work related), but I really wish I could recover them."

Attached: 3-2-1-Backup-Rule.png (1079x532, 63K)

I back up in triplicate monthly
>2 external HDDs
>one flash drive with the more important stuff

That rule is, in my own experience, less important than having ONE reliable copy that is actually being automatically updated and occasionally verified for correctness and completeness.

You might consider that included in the rule, but frankly, usually it's the thing that isn't being done.
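The "automatically updated" half is usually just a scheduler entry; a minimal sketch, assuming your backup job lives at a hypothetical /usr/local/bin/backup.sh:

# crontab -e: run the backup nightly at 03:00, keep a log of the results
0 3 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1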

>one external HDD at home
>one magnetic tape in bank
>one SD card in my ass
>hashes of all files to protect against corruption
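The hash part of that scheme needs nothing beyond coreutils; a sketch with example paths:

# build a manifest of every file's SHA-256
find /data -type f -exec sha256sum {} + > /backups/manifest.sha256
# later, re-run from the same root; only failures are printed
sha256sum --check --quiet /backups/manifest.sha256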

Attached: 1538478945548.png (800x778, 328K)

I have 40TB in use on my server in an Unraid array.
How do you suppose I back that up cheaply and effectively?

(cont'd)
A good way to deal with this for Linux/BSD/... users is to just set up borg in some variant of this:
#!/bin/sh
# abort on any error (note: borg also exits nonzero on warnings)
set -e

# repository must already exist; create it once with `borg init`
BORG_DIR="/path/to/where/you/want/backup"

# one timestamped archive per series per run ({now} expands to the current date/time)
borg create --stats --show-rc --compression zstd "$BORG_DIR"::'important-{now}' /etc /somethingelse /otherthings
borg create --stats --show-rc --compression zstd "$BORG_DIR"::'home-{now}' /home

# verify repository consistency
borg check "$BORG_DIR"

# thin out old archives, separately per series
borg prune --list --show-rc --keep-daily 7 --keep-weekly 4 --keep-monthly 3 --keep-yearly 3 "$BORG_DIR" --prefix 'important-'
borg prune --list --show-rc --keep-daily 7 --keep-weekly 4 --keep-monthly 3 --keep-yearly 3 "$BORG_DIR" --prefix 'home-'


Borg can back up to most destinations you can imagine, and it should be obvious how to adjust the compression, pruning scheme, and so on.
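For example, a remote box over SSH is just a different repository path; host and path here are made up, and borg has to be installed on the remote end too:

# create the remote repo once (key stored in the repo, protected by passphrase)
borg init --encryption=repokey ssh://user@backuphost/srv/borg/myrepo

# then the script above works unchanged with:
BORG_DIR="ssh://user@backuphost/srv/borg/myrepo"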

If any of you fa/g/s have stuff worth keeping, you can afford CrashPlan. $10/mo and completely unlimited. Versioning, remote retrieval, and it really is unlimited (I have 10TB up there now).

I use weekly backups to an offline drive and that's it.
Fuck the offsite botnet clouds.

0-0-0 been going strong for years. I've never had a hard drive failure.

the absolute madtranny

All of my family photos / music / nostalgic files have been burnt to Blu-ray. I have not come across many people who use Blu-ray for backup, but I think it's a great solution.
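For anyone wanting to copy the approach on Linux: a minimal burn sketch, assuming dvd+rw-tools, a BD-R in /dev/sr0, and an example source directory:

# master and burn an ISO9660 image with Rock Ridge and Joliet names in one step
growisofs -Z /dev/sr0 -R -J -V "photos_2019" /path/to/photos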

>4TB and 2TB external hard drives
>Both big enough to hold everything
>Careful not to keep either one running when it's not actively being used

I'm fine.

It's probably okay if you don't have enough photos to exceed a handful of Blu-rays.

Once you use modern cameras and shoot a decent amount, it becomes increasingly annoying to keep track of what's where and/or keep it updated, depending on the scheme you're using.

Verifying it's all still intact (in duplicate maybe, or just with the Reed-Solomon coding/checksums) also takes time.
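For the Reed-Solomon part, one option is the par2 tool; a sketch with example paths and 10% recovery data:

cd /path/to/photos
# create parity files covering everything in the directory
par2 create -r10 recovery.par2 *
# later: check integrity; if blocks rotted, try `par2 repair recovery.par2`
par2 verify recovery.par2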

I have always used and will continue to use my RAID-like solution.
The risk is intentional, because my personal data is not more important than the principle of being able to accept loss and give up on things that aren't actually important.
The mitigations I have in place have never actually been proven effective and are likely overkill to begin with.
All I have been doing for decades now is keeping a ZFS pool in a mirror configuration.
The other day I was looking at multiple datasets that I have built over years, taking up terabytes upon terabytes of space, thinking I should just delete it all.

I bet my entire life's work, anything of importance, fits in less than 200GB. That'd be worth backing up if it weren't already distributed. Everything else should likely be purged.

I just drag n drop my home folder onto an external drive

>I know you're...

Attached: Bar Black Sheep - The Japan Times.jpg (870x652, 65K)

This is a complex feeling when you have mitigations in place. It's good that things are more reliable now, but it also means you wasted time and money on the assumption that they're not reliable when they actually are.

Running a backup to a separate ZFS array now.

Attached: Screenshot_20190204_124639.png (410x155, 77K)
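If anyone wants to replicate that: the usual ZFS-to-ZFS route is snapshot plus send/receive; the pool names tank and backup are placeholders:

# take a recursive snapshot and replicate the whole pool
zfs snapshot -r tank@backup-20190204
zfs send -R tank@backup-20190204 | zfs recv -Fdu backup
# later runs can send only the delta:
# zfs send -R -i tank@backup-20190204 tank@backup-20190211 | zfs recv -Fdu backup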

Maybe I will when my data is actually worth something; for now, periodically copying my projects onto one drive is good enough.

I don't have an offsite backup because if it came to losing local hardware in e.g. a house fire, I'd ultimately have bigger things to worry about.
Also, does anyone know how to safely make automatic backups without corrupting files like open SQLite databases? If I rsync automatically, my Firefox profile gets corrupted over time. So I just do it manually and make sure FF is closed.

Probably best to actually use SQL tools to handle the clone if the DB isn't completely locked from access.
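For SQLite in particular, the sqlite3 shell exposes the online backup API, which takes a consistent copy even while the database is in use (as long as it isn't exclusively locked); the profile path here is made up:

# copy a consistent snapshot of the live database to an example destination
sqlite3 ~/.mozilla/firefox/xyz.default/places.sqlite ".backup '/backups/places.sqlite'"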

But the feeling of a disaster happening and then acting like nothing happened because you had an offline backup is glorious.

>I've never had a hard drive failure.
The first time it happens will be awesome.