Btrfs Compression

Anybody have experience with Btrfs compression? I have a 16G Chromebook and I want to install Arch Linux on it. I was hoping filesystem compression would help stretch drive space a little further.

Attached: btrfs.jpg (620x485, 21K)

>btrfs on a 16gb drive

Attached: lul.gif (500x491, 355K)

Everything's literally written in the ArchWiki:
wiki.archlinux.org/index.php/Btrfs#Compression
And if you don't know which compression algorithm to use, see this benchmark on phoronix:
phoronix.com/scan.php?page=article&item=btrfs-zstd-compress&num=1
And the bootloader ArchWiki page for compatibility reasons:
wiki.archlinux.org/index.php/Arch_boot_process#Boot_loader
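for anyone skimming: the gist of that wiki page is just a mount option. a sketch, assuming zstd and that / is the btrfs mount; the UUID is a placeholder, use your own:

```shell
# /etc/fstab line (UUID is a placeholder):
#   UUID=xxxx-xxxx  /  btrfs  rw,noatime,compress=zstd  0 0

# enable on a live system; only newly written data gets compressed
mount -o remount,compress=zstd /

# recompress what's already on disk (optional, takes a while)
btrfs filesystem defragment -r -czstd /
```

swap zstd for lzo if you'd rather have faster decompression than a better ratio.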

use freebsd + zfs

let him use btrfs. if his drive is full he can't delete files :D

bcachefs is the hot new filesystem for neets

just fiddle with the block size and node count. i already use btrfs without compression and it works fine.
i already read those. i was looking for an estimate of compression ratio from somebody who uses it.
i wish, but none of the bsds are compatible with my hardware. i often run openbsd in a vm on my laptops.
why not? i've never had that problem before.

because btrfs is copy-on-write, it needs to allocate new metadata even to delete files

yeah i know, but it's not like metadata is read-only or anything. you can delete files and their associated metadata easily. i currently use it, and cleaning it up every month or two keeps it running well.
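for reference, the "cleaning up" is usually a filtered balance. a sketch; all of this needs root and a mounted btrfs:

```shell
# see how much space is allocated to chunks vs actually used
btrfs filesystem usage /

# repack data chunks that are <=10% full, returning the rest to unallocated
btrfs balance start -dusage=10 /

# if the disk is truly 100% full and even balance fails, zeroing a big
# file first releases its extents without needing new metadata
truncate -s 0 /path/to/bigfile && rm /path/to/bigfile
```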

i looked into it but it doesn't provide anything i need that btrfs can't do. if it were in the mainline kernel i would consider it but it's not worth the work for me.

I use Btrfs with LZO compression on all of my root partitions. Even with LZO, the space savings are significant; I've never had any issues, and decompression is almost instantaneous.

it's shit
don't use it unless you want to lose data

ZFS is barely out of the sketchy phase. Even now I have some reservations about recovery in the event of certain types of cascading failure.
>b-b-but muh backups

thanks for the info. i just found somebody's blog that showed a compression ratio of 43% with zlib which is very impressive. i was thinking of going lzo just because it's well supported and has the fastest decompression. thanks again, user.
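back-of-the-envelope for what that ratio would buy on a 16G drive. I'm assuming the blog's "43%" means 43% of space saved; real ratios depend entirely on what you store (text and binaries compress well, media basically not at all):

```python
disk_gib = 16.0        # the Chromebook's drive
space_saved = 0.43     # assumed meaning of the blog's "43% with zlib"

# if files shrink to (1 - 0.43) = 57% of their size, the disk
# effectively holds 1/0.57 times as much data
effective_gib = disk_gib / (1.0 - space_saved)
print(f"effective capacity: {effective_gib:.1f} GiB")  # ~28.1 GiB
```

once installed, the `compsize` tool will report the actual compression ratio of files already on a btrfs filesystem, so you don't have to guess.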

Attached: Hans_Reiser_2005.jpg (366x524, 41K)

I used to run MurderFS3 unironically on Gentoo

BCACHEFS FGTS

>digits
it's still too much work for a fresh install on most distros. if i was using gentoo i would consider it.

>28 28 555
I for one welcome our new Bcache Filesystem overlord

tfs > all your meme shit

Attached: 1568720742184.jpg (190x266, 7K)

is zfs even meant for personal use? I read it's only used on servers because it's storage- and CPU-hungry

Dragonflybsd + hammer2

I don't know about CPU hungry, I've certainly had no issues of that sort.
>storage hungry
What does that even mean? I assume you mean ram hungry? The ram usage is kinda deceptive. Its cache (the ARC) is reported as used ram, not as cache, and is only freed when the amount of usable (free + cached) memory runs low. So it may look like it's hogging all your ram when it isn't. Plus you can cap the maximum if you want; something like 0.5G should be enough afaik.
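capping it is a single module option, for what it's worth. sketch for ZFS on Linux; the value is in bytes, 512 MiB shown:

```shell
# /etc/modprobe.d/zfs.conf -- persistent cap on the ARC (value in bytes)
#   options zfs zfs_arc_max=536870912

# or set it at runtime without reloading the module:
echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max
```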