What do you think about ZFS?

I'm looking to use it on a NAS but I don't know anything about it, so tell me what Jow Forums thinks about it, please

Attached: zfs.jpg (670x413, 23K)

rtfm

where can i find tfm??

Makes Chrome look like a RAM-efficient masterpiece written by God himself

Don't bother setting it up on hardware RAID, since ZFS needs direct disk access for data integrity checking, self-healing, etc.
Been running FreeNAS at work for a few months without any issues. The boot pool got corrupted during an update and repaired itself, which was neat. Otherwise, it's fine. ZFS isn't magic like the BSD fanboys would have you believe.
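If you're wondering what direct disk access buys you in practice, here's a rough sketch (pool and device names are placeholders, adjust for your box):

    # hand ZFS the raw disks, no hardware RAID in between
    zpool create tank raidz2 da1 da2 da3 da4
    # a scrub walks every block and verifies checksums against redundancy
    zpool scrub tank
    zpool status tank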

>ZFS isn't magic like the BSD fanboys would have you believe
name one other good alternative to zfs
hint: you can't

It's okay. I prefer btrfs.

It's good, but be prepared:

1. you can't grow an existing raidz vdev; expanding means buying a whole new set of disks for another vdev
2. you need a lot of money if you want more space
3. All of your unused RAM (yes, all of it) becomes used RAM as the ARC cache, and that's with deduplication and other features disabled. This is a good thing, but it freaks people out for whatever reason, even though unused RAM is wasted RAM. You can cap it if it bothers you; see below.
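If the ARC behaviour bothers you, you can cap it; the 8G figure here is just an example:

    # FreeBSD: in /boot/loader.conf
    vfs.zfs.arc_max="8G"
    # Linux (ZoL): at runtime, value in bytes
    echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max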

Hopefully BTRFS will get their shit together when it comes to RAID5/6 and we can start using that instead.

but it is magic, assuming you're using proper hardware and not AMD pajeet tier shit

if it detects a checksum error, it'll repair it from redundancy
if a drive is dead, you can take it out, put in a new one, wait for the resilver to finish, and then continue with your day as usual
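the whole dance is roughly this (pool and device names made up):

    zpool status -x             # tells you which pool/disk is unhappy
    zpool replace tank da2 da6  # swap the dead da2 for the new da6
    zpool status tank           # watch the resilver, then carry on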

ZFS updates can corrupt your entire vdev, and importing ZFS volumes is extremely buggy

it's for those truly paranoid not just about drive loss but silent data corruption/bit rot as well.

just be wary of ZFS on Linux. that shit just had a horrible bug in the last release that orphaned files. not sure if they've made a utility to recover them yet.

Shit on Linux and always will be.
Fine on BSD.

Better than BTRFS for what it does, but worse than XFS in a lot of respects (outside of the awesome CoW and RAID-Z)

Looking to change from OpenFiler to FreeNAS?

>data corruption isn't real

uhm sweety, it is, and on a much larger scale than you could imagine

>Hopefully BTRFS will get their shit together when it comes to RAID5/6 and we can start using that instead.
Actually never going to happen.
It'll take a new filesystem - BTRFS raid5/6 is fatally flawed and can't be fixed at all; it's a complete write-off.

XFS has COW and deduplication now.
MD + XFS is better than BTRFS will ever be unfortunately.
I guess we can pray to the gods of madness that Oracle decides to relicense ZFS sometime.
Or maybe IBM will finally do something worthwhile in the FS sector.
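The CoW part is reflinks, if you haven't played with it; paths here are just examples:

    mkfs.xfs -m reflink=1 /dev/md0          # turn reflink support on at mkfs time
    cp --reflink=always big.img clone.img   # CoW copy, shares blocks until modified
    duperemove -dr /data                    # offline dedup is a separate tool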

Yes, my boss told me to search for "what people say about ZFS", and I figured this is the place for that

>bit rot
good 2014 meme, almost forgot about that one

The meme I've been told about ZFS is that it's like Apple's Time Machine when it comes to backups (which would be of interest to me). Is that true?

BTRFS is proof that the road to hell is paved with good intentions.

oh, it's happening to somebody, somewhere, continuously.
the question is how likely is it to happen to any individual, and how much do they really care, when they've already probably uploaded anything of value or privacy to the (((cloud))) anyway.

>the question is how likely is it to happen to any individual

every day. run a checksum on all of your files, let it sit for 30 days, then verify the files and you'll see it
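something like this does it (paths are examples; anything you deliberately modified in the meantime will show up too, obviously):

    # build a manifest of every file's hash
    find /data -type f -exec sha256sum {} + > /root/manifest.sha256
    # ...30 days later, print only the files that no longer match
    sha256sum -c --quiet /root/manifest.sha256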

ZFS on Linux is a bad idea. It's half-baked. Just recently, there was a major regression that caused data loss.

Yes, it has snapshot support.
History time: Time Machine was originally based on ZFS snapshots - but after Sun sold to Oracle, Apple's relationship with them soured and the whole ZFS-as-a-replacement-for-HFS+ project was shitcanned.
Time Machine was rejiggered to use sparse disk images and scripting to automate backups to those disk images as a last-minute replacement for the snapshot support in ZFS.

The more you know.

Any CoW filesystem can do that.

there are tons of FSs that do continuous snapshotting nowadays.

ZFS's bigger features are mandatory strong checksumming and a standard delta serialization format that lets you send/receive/save/encrypt snapshots and updates.
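The send/receive bit looks like this in practice (dataset names invented):

    zfs snapshot tank/data@monday
    # full stream the first time...
    zfs send tank/data@monday | zfs receive backup/data
    zfs snapshot tank/data@tuesday
    # ...then just the delta between snapshots after that
    zfs send -i tank/data@monday tank/data@tuesday | zfs receive backup/data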

don't do tech with that attitude

I don't even have to go all spooky shit, literally check the hard drive specs for unrecoverable read errors from the manufacturer. It's usually something like 1 in 10^14 bits read, which works out to one error per ~12.5 terabytes. Better ones might be 10^15 or so. Shit ones are like 10^13.

Eventually you'll get hit, and it'll corrupt shit if you're not lucky. How lucky do you feel, and is your data worth the bother?
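Back-of-envelope, treating the spec as an independent chance per bit read - say 12TB of reads against a 10^14 drive:

    awk 'BEGIN { bits = 12e12 * 8; printf "P(>=1 URE) = %.0f%%\n", (1 - exp(-bits/1e14)) * 100 }'
    # prints P(>=1 URE) = 62%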

I thought Jobs scrapped ZFS in OS X because Sun got too cocky and announced that Apple would definitely be switching over to ZFS?

Nah, it was the Oracle purchase; both just happened pretty close to each other and are slightly related.
McNealy was using it to prop up the share price before selling.

that's actually my point.
memes and all, bit rot is real, but for the average person, one lost sector in tens of TB is most probably inconsequential, if it's even noticed, 99-point-whatever percent of the time.

I actually use ZFS for archiving, but I consider it an indulgence and a learning activity more than something that actually matters.

google ran tests, and 4 out of 10 bits would get corrupted over 30 days

Instead of saying 'bit rot', which is on par with the 'rotational velocidensity' meme, use the industry term - URE, unrecoverable read error.
At the 10^14 spec, reading a 4TB HDD end to end has roughly a one-in-three chance of throwing a URE, even on a brand-new drive with freshly written data.

It's why RAID5 is no longer considered safe with anything bigger than 2TB drives (and isn't advised for anything over 500GB), since a URE during a rebuild will likely wreck your data recovery, whereas RAID6 is still acceptable because the odds of UREs hitting two drives on the same block are lottery-ticket territory.
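To put numbers on the RAID5 case: rebuilding a 4x2TB array means reading all 6TB off the survivors, so at the 10^14 spec:

    awk 'BEGIN { bits = 6e12 * 8; printf "%.0f%%\n", (1 - exp(-bits/1e14)) * 100 }'
    # ~38% chance of at least one URE somewhere during the rebuild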

what about RAID10 with 4 drives? I'm running 4x8TB drives in RAID10 - what's the likelihood of my shit going corrupt?
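Back-of-envelope with the same spec-sheet assumption - a mirror rebuild only has to read the one surviving 8TB partner:

    awk 'BEGIN { bits = 8e12 * 8; printf "%.0f%%\n", (1 - exp(-bits/1e14)) * 100 }'
    # ~47% per the spec, though as pointed out below, real drives do a lot better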

i was a *nix admin and zfs was the best for recovery, patching, etc.

i once ran rm -rf / and still brought it back thanks to zfs CoW snapshot shit

i use freebsd with zfs as a nas at home now. set it and forget it.
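the recovery was basically just rolling back to the last snapshot (names invented):

    zfs rollback -r tank/root@nightly   # -r also discards snapshots newer than the target
    # and going forward, take them automatically:
    zfs snapshot -r tank@$(date +%F)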

>8TB disks
ahahaha naive bastard

Nobody uses Solaris shit.

?

I am

Attached: 1519095360976.png (134x20, 569)

Have fun with extra disk failures when you spend ages resilvering.

i have backups

mdadm + XFS
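for reference, the basic setup looks like this (device names are placeholders):

    mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
    mkfs.xfs /dev/md0
    mount /dev/md0 /data
    # schedule this periodically so the array actually gets scrubbed
    echo check > /sys/block/md0/md/sync_action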

>XFS

enjoy ur bit rots

?

It's a meme; manufacturers' official URE specs are several orders of magnitude worse than what standard consumer drives actually exhibit.

That’s good but not a proper replacement.