File Systems

What are you using? Have you not yet bought into the modern CoW filesystem meme, and are you still on plain volumes/partitions?

I'm currently using ZFS on the physical side of things, and until recently I was using XFS on a zvol for space allocation for stuff like Steam and NFS shares to legacy systems. That changed when btrfs was declared production ready for single-disk use, and I've been on it since for the first-class snapshots. With the aggressive caching and my workload, I don't even notice the CoW-on-CoW performance hit.
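For anyone curious, the shape of that setup is roughly this (pool/dataset names and the size are made up):
zfs create -s -V 100G tank/steam        # sparse zvol carved out of the ZFS pool
mkfs.btrfs /dev/zvol/tank/steam         # btrfs on top, i.e. CoW on CoW
mount /dev/zvol/tank/steam /mnt/steam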

I know this may not need to be said, but ext4 and NTFS users need not apply.

How good is Btrfs?

I've tried a few file systems, and XFS is the fastest one I've used, at least on SSDs. Boot time is slightly faster, opening programs is slightly faster, and opening larger programs is significantly faster. I've also noticed a huge speed increase when extracting archives, especially tar.gz, which used to be slow as shit on EXT4. I think the "XFS is fragile" thing is a meme: I've had two power cuts during storms in the last two years (too poor for a proper UPS), and each time I turned my machine back on everything was fine.

I use EXT4 on HDDs still, just because I know it works and it's the default option in GParted.

Tangentially related: I recently set up bcache for my root and home, and it's awesome.
I've known for a while that it makes a good amount of sense to use SSDs to cache HDDs, but I was put off by needing to shift data around to create the bcache backing volume. I can now say it's worth it.
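For reference, the whole dance is roughly this (device names are placeholders; make-bcache reformats the backing device, hence the data shuffling):
make-bcache -C /dev/sdb -B /dev/sdc     # sdb = SSD cache, sdc = HDD backing; created together they auto-attach
mkfs.xfs /dev/bcache0                   # the combined device shows up as /dev/bcache0
mount /dev/bcache0 /mnt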

The main thing I don't like about XFS is that it can't be shrunk.
I don't often shrink volumes, but I don't want to be stuck in a situation where I need to and can't.
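For comparison (volume and mount names are hypothetical):
resize2fs /dev/vg0/data 50G             # ext4 can shrink, offline (unmount it first)
xfs_growfs /mnt/data                    # XFS can only grow, and only while mounted; no shrink tool exists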

It's OK. I don't think the tools are as good as ZFS's: better in some ways, worse in others. It feels a bit fragile, maybe, since the issue where an array gets stuck in read-only mode when you lose a drive is still around (and you can't replace drives while in read-only mode). Not the biggest deal, I guess, but it is very annoying. You should always keep backups if the data is important to you anyway.
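When the array does come up degraded rather than read-only, the replacement itself is simple enough (mount point, devid, and device names are examples):
mount -o degraded /dev/sdb /mnt/pool
btrfs replace start 3 /dev/sdd /mnt/pool    # 3 = devid of the missing drive
btrfs replace status /mnt/pool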

I prefer ZFS, but I don't want to deal with the kernel modules on Linux, so for my file server I use BTRFS.

I want to ditch ZFS and use BTRFS, but BTRFS is still in a buggy state, while ZFS has been around forever and is extremely resilient against random shutdowns, among other things.

Right now I'm running a mirrored pair of 8TB drives alongside another mirrored pair of 8TBs. I plan to buy a pair of 12TBs at some point next year.
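For what it's worth, that layout is just a stripe of two mirror vdevs; with made-up device names:
zpool create tank mirror sda sdb mirror sdc sdd    # two mirrored pairs, striped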

I don't know much about filesystems, but I've heard good things about ZFS. If I use Linux and want to sleep well at night without any worries about my data, is there a preferred FS?

Is btrfs raid56 good now? Anyone have links to up-to-date sources that demonstrate this either way?

Yeah, I'm fine with it since I never shrink my file systems. I actually use one partition per disk as well. If I need more space or a new partition, I just add another disk. Storage is cheap enough these days.

ZFS is as safe as they come. It's not the fastest FS, nor the most flexible, but it's very safe and mature.
If you want to sleep well, have a separate machine doing automated, incremental backups. ZFS/RAID isn't a backup.
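A minimal sketch of that with ZFS, assuming a dataset called tank/data and a backup host called backuphost:
zfs snapshot tank/data@monday
zfs send -i tank/data@sunday tank/data@monday | ssh backuphost zfs receive backup/data    # incremental since @sunday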

From my use of it: don't. From what I've experienced, ZFS going down on the rust is the best option. Putting btrfs on top of that gets you the advantage of very aggressive caching and the possibility of better dedup options, while still having the same "advantages" over ZFS, like first-class snapshots. I don't really trust btrfs redundancy/parity, as the whole industry seems to be moving towards ZFS for that feature and using btrfs for containers because of the first-class snapshots.

If you're going to be doing regular shrinking on something "high performance" like XFS, I recommend putting lvm2 down on the disk and changing your practices slightly. ext4, while shite, has so many tools (like shrinking) because it's a classic "put your file down on raw disk and you *will* get it back, eventually" filesystem, so shrinking it makes sense. With XFS I would start from smaller volumes on lvm2, or better yet zvols, maybe 5GB~20GB, and grow them as you fill them with games/applications. Since ext4 exists, you should really be using that for large archival stuff like film/music etc.
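A rough sketch of the start-small-and-grow approach (volume group name and mount point are made up):
lvcreate -L 20G -n games vg0
mkfs.xfs /dev/vg0/games
mount /dev/vg0/games /mnt/games
lvextend -L +20G /dev/vg0/games         # later, when it fills up
xfs_growfs /mnt/games                   # grows online; shrinking isn't possible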

Also, seriously look into ZFS. It's dkms on nearly every distro these days, if not compiled into the kernel on Ubuntu-based distros.

I've used ZFS before, but it's not very flexible; I don't like not being able to add one disk at a time to a raid5.
Currently using btrfs over mdadm. Not exactly ideal, but it works.
I'm keeping an eye on bcachefs.

It's more doable if you use a plain "RAID10" setup, where you just add another vdev with 2 drives. But once added, you can't remove a vdev, and you can't shrink them. You can't rebalance the array either, so you don't get the performance gains of striping data across vdevs until you write new data to the filesystem.
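Concretely, growing that kind of pool is one command (device names hypothetical), but the existing data stays where it is:
zpool add tank mirror sde sdf           # new 2-drive mirror vdev; only new writes stripe across it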

In many ways, plain Linux software raid can be a better option. I don't do this kind of stuff often, so for me the weaknesses of ZFS are fine. I'll move my data over to ZFS once I have a backup server up and running.

Don't. BTRFS has garbage volume management. Having two-plus mirrors in your pool doesn't increase reliability at all, since BTRFS only has 2-way mirroring. If a drive from each mirror dies, on ZFS that's a perfectly recoverable scenario; on BTRFS you've just lost your data and will have to use their shitty recovery tools to get at whatever is left.

BTRFS only has 2-way mirroring, meaning any 1GB chunk exists on at most 2 devices. If 2 devices die, you've lost whatever 1GB chunks were common to both disks, and you've irreparably damaged your filesystem. ZFS, by contrast, writes entire logical blocks to a single vdev, so as long as you don't kill a whole vdev your data is fine. ZFS is far more flexible and has far better data redundancy primitives.

ZFS was absolutely right to bring volume management into the filesystem layer. Bcachefs looked promising, but from what I can tell they're repeating the same mistake of avoiding volume management.

Is ext4 fine for simple desktop usage? And ext4 or XFS on an SSD?

Redpill me on filesystems. I've used ext4 exclusively ever since I moved from Windows for good. Why would I want anything other than it on my workstation/laptop?

I'd just go with ext4. It's plain, but it works well. In recent years it has also gained metadata checksums, though I don't know how well that works.
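If you want to try it, the feature is metadata_csum, and it only covers metadata, not your file data (device name is an example):
mkfs.ext4 -O metadata_csum /dev/sdb1
tune2fs -O metadata_csum /dev/sdb1      # or retrofit an existing, unmounted filesystem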

There used to be a performance difference between ext3 and XFS, but I doubt it's significant with ext4.

>I know this may not need to be said, but ext4 and NTFS users need not apply.
Seriously though, is there any reason to use anything else?

BtrFS has been treating me really well so far.

People get scared of it easily. Meanwhile, I've been using it in production for years. So has Facebook, who contributed the excellent zstd compression to it.
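Turning it on is just a mount option (device/path are examples; the :3 compression level is optional):
mount -o compress=zstd:3 /dev/sdb /mnt/data
# or in /etc/fstab:
/dev/sdb  /mnt/data  btrfs  compress=zstd:3  0 0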

btrfs is technically superior to ZFS (supporting adding and removing devices on the fly, along with both read-only AND read-write snapshotted subvolumes). Weirdly enough, both were primarily contributed to by Oracle, which is why things have been kind of slow going for both over the past few years: Oracle prefer the GPL-incompatible-licensed one of the two (which Ubuntu are shipping anyway, because their lawyers are apparently #YOLO about Oracle; that doesn't seem too sensible, but it's none of my business).

Listen to the warnings. In particular: never use raid5 or raid6 with btrfs, as they were never finished or made safe. Do use raid1 (which makes sure to put two copies of every file/etc. on different disks). And never, EVER fill the filesystem.
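A minimal raid1 setup along those lines, with a balance filter to reclaim space before the disk-full failure mode bites (devices and paths are placeholders):
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool
btrfs filesystem usage /mnt/pool            # keep an eye on unallocated space
btrfs balance start -dusage=50 /mnt/pool    # compact chunks that are less than 50% full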

It could really use some better erasure coding (N-to-M) rather than the old raid5/6 shit, and someone tried to contribute that, but Oracle gatekeep the maintenance and are against adding it.