Btrfs is kill

btrfs is kill

Attached: Screen Shot 2019-06-25 at 11.39.03 AM.png (1218x772, 95K)

I heard bcachefs is finally getting an official release

btrfs is a copy-on-write filesystem with block-level checksums to detect corruption. It will never be faster than ext2 or XFS, but it will keep your data safer.
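If you want to see the checksumming in action, a scrub walks every block and verifies it against the stored checksum. Rough sketch; /dev/sdX and /mnt are placeholders for your own device and mountpoint:
# checksums (crc32c by default) are on from the moment you make the fs
mkfs.btrfs /dev/sdX
mount /dev/sdX /mnt
# read every block back and verify it against its stored checksum
btrfs scrub start -B /mnt
btrfs scrub status /mnt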

Not sure what's up with that ZFS score but it doesn't make sense. I suspect a warm cache.

ZFS does a bunch of complicated prefetching and block caching. It's pretty fast in my experience, but it's not as fast as XFS with DAX, on my machine anyway.
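You can watch the caching yourself if you're on Linux; these are the stock OpenZFS kstat/module paths, assuming a standard install:
# ARC size and hit/miss counters
grep -E '^(size|hits|misses) ' /proc/spl/kstat/zfs/arcstats
# prefetch is controlled by a module parameter (0 = enabled)
cat /sys/module/zfs/parameters/zfs_prefetch_disable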

zfs can do anything except not destroy your data and drive

Stop using cheap faulty RAM, poorfag. ZFS isn't for your 400 dollar shitbox.

lol just use FAT32

>not using fat16 instead.

>ZFS does a bunch of complicated prefetching and block caching.
Is this why ECC RAM is always recommended for it?

Kinda. ECC is recommended because when you scrub the pool it copies the data into RAM to verify the checksums. ZFS assumes the RAM is safe.
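If you want to kick one off yourself it's a single command; 'tank' is a placeholder pool name:
zpool scrub tank
zpool status tank   # shows scrub progress and any checksum errors it found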

Also forgot to mention deduplication. The dedup table effectively has to live entirely in RAM, and it's important for it to stay intact.

>took years to support trim because it wasn't being used with ssds

>deduplication
Don't turn this on, it slows everything to a crawl and as you say it uses a ton of RAM. It's nowhere near worth it 99% of the time.
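If you're tempted anyway, at least estimate what it would buy you first. zdb can simulate dedup against an existing pool without enabling anything ('tank' is a placeholder):
# dry run: builds a DDT in memory and prints the histogram plus projected ratio
zdb -S tank
# rule of thumb people quote is ~320 bytes of RAM per unique block once it's on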

Sauce

Thanks for spoonfeeding me.

btrfs is literally the only Linux fs that fucked up my system after a hard reboot. It doesn't even have a proper fsck like ext4.

is it btrfs or reiserfs that turns you into a murderer?

It's only useful for certain applications. I use it for all my LXD and Docker containers; since they're almost identical, I save about 90 percent of the space I'd use without deduplication.
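If anyone wants to copy this setup: you can scope dedup to one dataset instead of the whole pool, so the DDT only covers the containers. 'tank/containers' is a placeholder:
zfs create tank/containers
zfs set dedup=on tank/containers
zpool get dedupratio tank   # check what you're actually saving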

How about you not keep anything of value on one drive?

Spoonfeeding further.
You can pop in a kernel module parameter to checksum pool data whenever it's read out of the ARC.

The CPU costs are immense but you can operate with certainty even without ECC.
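For reference, I believe the knob is the ZFS_DEBUG_MODIFY bit (0x10) of zfs_flags; check the value against your OpenZFS version before relying on it:
# at runtime:
echo 0x10 > /sys/module/zfs/parameters/zfs_flags
# or persistently, in /etc/modprobe.d/zfs.conf:
#   options zfs zfs_flags=0x10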

It does now.

Really? Do you have source on that? Sounds interesting.

>btrfs
>data safer

>You can pop in a kernel module parameter to checksum pool data whenever it's read out of the ARC.
Does this slow it down (aside from using more CPU)?

You have to double-checksum everything and it's not worth it at all.

Just run a memtest on your RAM and save yourself the grief.
It's meant for debugging only; it will murder ARC read speed and cost you all that high performance ZFS gives.
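A quick userspace pass looks like this (it can't test memory the kernel is sitting on, so a bootable memtest86+ is the thorough option):
# lock and test 4G of RAM for 3 passes; needs root to mlock that much
sudo memtester 4G 3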

>memtest
Bad RAM isn't the only source of flipped bits. These can happen naturally due to normal usage or targeted attacks (e.g. row hammer), especially on DDR3 and older. Just get ECC memory.
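And if you're not sure your ECC is actually active, the kernel's EDAC counters will tell you, assuming your platform has an EDAC driver:
# nonzero ce_count means ECC caught and corrected flipped bits
grep . /sys/devices/system/edac/mc/mc*/ce_count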

That is a good use case for deduplication, but I prefer the manually initiated (out-of-band) dedup scans that btrfs supports. No huge table in RAM, no steady-state performance hit; just run one every so often to find duplicate blocks and combine them.
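duperemove is the usual tool for this; it hashes extents and asks the kernel to merge identical ones via the dedupe ioctl. '/mnt/data' is a placeholder:
# -r recurse, -d actually submit the dedupe requests (dry run without it)
duperemove -dr /mnt/data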

You need to do both. Without block checksums, you won't even know that a file was damaged unless you manually maintain .sfv files or something.

Once the filesystem detects damage, that's when the second copy is used. Without a redundant RAID level, btrfs and ZFS can detect corruption but can't repair it.
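That's the whole point of adding redundancy: with a second copy, a scrub can rewrite the bad block from the good one. E.g. a two-disk btrfs mirror (devices and mountpoint are placeholders):
# mirror both data and metadata across two disks
mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY
mount /dev/sdX /mnt
# a scrub now repairs any block whose checksum fails, using the mirror copy
btrfs scrub start -B /mnt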