Is btrfs still unstable and fragile?

Attached: Btrfs_logo-540x344.png (540x344, 6K)

only on raid56

No. Docker actually uses it as the preferred fs

BTRFS ?! MORE LIKE BTRFO LMAO !! AHAHAHAHAHA

LE CHAD NTFS LE VIRGIN BTRFS

btrfs.wiki.kernel.org/index.php/Status
It's getting patches all the time, so if you have a recent kernel (4.14+) you should be fine.
Don't use it with RAID 5 and 6, though; that advice applies generally, since those two RAID levels suck.

mfw it's been a decade and they still haven't fixed raid5/6

and if they manage to get raid5/6 working in another decade, it's prolly going to be worse than raidz1/raidz2

>butterfs is fine
>don't use it
ffs

btrfs is a virgin neet who is almost-smart but always spills his spaghetti
ntfs is the reliable retard who always gets a just-passing grade, but eats glue and shits on himself when you ask him to do anything outside his practiced answers

It's not completely and utterly fragile any more, but what the fuck does it offer to make it worth even considering a non-rock-solid FS?

Even if it weren't, I still wouldn't use it, since all CoW filesystems suck for any random-access files.

>mfw it's been a decade and they still haven't fixed raid5/6
I wouldn't mind if they dropped RAID 5/6 entirely. It still has a lot of problems beyond the write hole (yes, even on ZFS).

>a very specific bug in a very specific and avoidable part of the filesystem means you should never use it at all
ok

Attached: 1482076258658.jpg (498x598, 109K)

Why would you use it when xfs and bcachefs exist?

I get there are advantages to incorporating RAID into the file systems, but do they outweigh the clean separation of block device layers?
Arbitrary combinations of block devices is one of the greatest functions of Linux tbqh senpai

>a very specific 10-year-old bug in standard setups
Quite possibly even worse shit lurks waiting to be discovered

XFS doesn't even remotely have the same feature set, and still can't be shrunk at all, which sucks terribly if you use LVM like everyone not stuck in the 80s.

bcachefs has a very small community for the scope they target. Filesystems are hard to do, CoW filesystems even more so. ZFS was eating data for breakfast, lunch, and dinner for several years, same with BTRFS and APFS, and those had the benefit of good funding and a large number of developers. I'd be even more careful about bcachefs just because it has fewer eyeballs, so to speak.

That's a non-sequitur. The RAID 5/6 write hole is well understood and in no way specific to BTRFS. It's actually inherent to the whole concept of RAID 5/6, and usually solved at the controller level with a battery backup.
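To make the write hole concrete: RAID 5 keeps one XOR parity block per stripe, and a crash between writing the data and writing the parity leaves the two out of sync. A toy Python sketch (small integers standing in for whole disk blocks; purely illustrative, not how any real array is implemented):

```python
# Toy RAID 5 stripe: two data "blocks" plus XOR parity.

def parity(blocks):
    """XOR parity over a list of data blocks."""
    p = 0
    for b in blocks:
        p ^= b
    return p

# A consistent stripe spread over three "disks".
data = [0b1010, 0b0110]
disks = data + [parity(data)]
assert disks[0] ^ disks[1] == disks[2]   # parity checks out

# The write hole: update data block 0, then "lose power" before
# the matching parity update hits the disk.
disks[0] = 0b1111
# (crash here -- the parity block still describes the old data)

inconsistent = (disks[0] ^ disks[1]) != disks[2]

# Worse: if disk 1 dies now, rebuilding it from disk 0 and the
# stale parity silently reconstructs garbage:
rebuilt = disks[0] ^ disks[2]   # != the real contents of disk 1
```

This is why hardware controllers pair their write cache with a battery: the pending stripe update survives the power loss and gets replayed on boot.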

then why does my zfs raidz1 setup work with off-the-shelf hardware without a hiccup while btrfs still spills its spaghetti?

Works fine, but be aware that in certain scenarios it is much, much slower than a non-copy-on-write fs like EXT4 or XFS.

Is reiserfs still relevant?

Attached: confused banana.png (500x500, 192K)

pretty stable for normie desktop use and most of the supported raid scenarios.
been using it since 2014 or so with no issues.

that said i hope either bcachefs or xfs catches up to it, because their code quality is better (more maintainable and better thought out).

ZFS solved this by making writes atomic with CoW - first writing the full stripe to a new location and only then changing the block pointer to it.
Why hasn't BTRFS done something similar? I don't know. I never cared too much about RAID 5/6 support because it has other conceptual problems beyond the write hole.
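A toy sketch of that ZFS behaviour, with a dict standing in for the disk and a single variable as the block pointer (illustrative only, all names made up; real pools commit through the überblock and transaction groups):

```python
# Toy CoW commit: write the full new stripe somewhere fresh, and only
# then flip one pointer to it. The pointer flip is the atomic commit;
# a crash before it leaves the old, consistent stripe visible.

storage = {}           # block address -> stripe contents
root_pointer = None    # the one pointer that defines "current"

def commit_stripe(addr, blocks):
    global root_pointer
    storage[addr] = list(blocks)   # step 1: full stripe to a new place
    root_pointer = addr            # step 2: atomic pointer flip

commit_stripe("stripe_v1", [0b1010, 0b0110])

# Start an update but "crash" after step 1, before step 2:
storage["stripe_v2"] = [0b1111, 0b0110]
# (crash here -- root_pointer was never flipped)

# After reboot, the filesystem still sees the old consistent stripe;
# the half-finished v2 is just unreferenced garbage to be reclaimed.
visible = storage[root_pointer]
```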

No. ReiserFS (a.k.a. Reiser3) is old and not actively maintained, and Reiser4 was never merged; it likewise has few developers after Hans Reiser went to jail for murdering his wife and his company developing the filesystem went out of business.

linux itself is unstable and fragile

>his wife
that mail order gold digger was a psyop by ((them)) to fuck over GNU/FLOSS FS development by a decade
Damn glow in the darkies fucked us in the ass for the nth time, and we just keep taking it.
Pro tip for the spergs changing the world via Stallman/FLOSS: don't let them honeytrap yo ass, just fleshlight it till you retire from the world-changing and are ready to settle down the wagecuck route.
Remember the Tor/Appelbaum SJW takeover.

It was completely replaced by ZFS after Sun open-sourced it in 2005.

>Oracle
dropped

Lol, Oracle owns half the IT world in patents.

I literally only hear unsubstantiated claims about it.

As far as I can tell it works as intended.

they haven't fixed it, just like thousands of hardware raid5/6 implementations haven't "fixed" it either. the problems with the parity code in btrfs are real problems that exist in all of them.

If you don't want write holes, don't let your hardware randomly poweroff.

"built in" thin provisioning and incremental snapshots?

one of the things I use it for is to deploy each app onto its own btrfs subvolume, which I tar and snapshot daily/weekly/monthly.

it's extremely useful since I don't need to waste time calculating space management per app anymore, unlike LVM or its "not stable" thin-provisioning solution that still requires sizing.

>As far as I can tell it works as intended.
>where is X file?

EXT4 is fine for most of the Jow Forums users. You just need a decent backup.
Once I lost my shit at a client who lost a server and did not have a backup because he thought RAID would save his furry ass.

>>where is X file?

got any proof of this, tard ass? because last time I did a scrub, nothing wrong was detected.

Why are you mad? It was just a joke about BTRFS early adoption that caused some files to go missing.
Dude, if you can't read a joke without reacting like it was a life-threatening insult, you will not last long.

Attached: 1523705895651.jpg (473x604, 51K)

got any source for your stupid claim faggot

...

Oracle is probably the most relevant IT business out there. Only literal faggots think that apple actually means anything.

you realize oracle pretty much started btrfs and sponsored its adoption into the kernel, right?

hell that's why RedHat and their overpaid XFS cucks are so buttmad about btrfs.

Real hardware raid cards don't have the issue though.

>XFS is redhat
Are you for real idiot?

it hasn't been an SGI project for some time. literally all the recent changes came from RedHat, and RedHat is pushing their Linux devmapper + XFS stack for ZFS/Btrfs feature parity.

>hardware raid cards
Those have another set of issues, the most prominent being the closed firmware and the fact that if a card dies on you and the manufacturer discontinued it or went out of business, you're fucked.

Are you? RedHat adopted it as their filesystem of choice and drives most of the development.

>the fact that if a card dies on you and the manufacturer discontinued it or went out of business you're fucked.
I don't think you've used many hardware raid cards.
For reference, a modern HP P440 will support arrays from a P400, released almost 14 years ago.

Also RAID isn't a backup solution.

>btrfs
Unstable garbage with performance that is only reasonable in a fresh-fs/benchmark situation, and quickly degrades to a crawl with real usage.
>zfs
Good FS with unfortunate inefficient use of resources and annoying restrictions on resizing pools which forces careful planning.
>zol
A garbage attempt by linux soybois to port ZFS to Linux. Eats your data.
>HAMMER2
The actual answer. Reliable and proper FS from Dragonfly BSD.
Stable as of the new release. 5.2: dragonflybsd.org/release52/
By Matt Dillon of Amiga 'DICE C', Linux, and FreeBSD fame. Forked FreeBSD because it was heading in the wrong direction.
Matt's homepage: backplane.com

Attached: 1500746812798.png (150x167, 11K)

There's also dm-raid which supports the better-known disk formats.

>FreeBSD fame
funny how things change
these days I'd be ashamed to put FreeBSD on my resume

Why would you put FreeBSD on your resume instead of *BSD?

Yeah, but when Matt was at the helm, FreeBSD was the fastest server OS. Linux was shit next to it.
Unsurprisingly, Dragonfly's now mopping the floor with everything else, too. Particularly the recent network stack optimizations give it an edge over both FreeBSD and Linux.
>ashamed
I'd be too. FreeBSD is a cucked OS with a soyboi Code of Conduct, and their technical direction is basically "Let's copy linux's mistakes!" ever since the big push for the SMP fuckup that caused Matt to leave.

Attached: 1494027643907.jpg (1636x1309, 445K)

Use ext4. Don't fall for memes if your drive is tiny.

>ext4
Is basically a minixfs 1.0 tier design with a shitload of hacks on top of it. It's crap, seriously.
If you absolutely have to use Linux, I'd suggest XFS, as that's the mainlined option that sucks less, by far.

leftie cred?

you can still layer btrfs over other things
for example, you can use mdadm under btrfs instead of its own RAID, or put it on LVM instead of using its own subvolumes, or put it over LUKS for encryption, etc, etc

Thank you Jow Forums. I actually feel comfortable using btrfs now.
Too late; that's the default on openSUSE and I don't feel like going back to the installer and changing the file system

But why would anyone want to use btrfs, ever?

for its compression, subvolumes, snapshots, flexible raid, checksumming, etc?
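Checksumming is the one on that list worth a concrete example, since it's what lets a scrub actually detect bitrot. A toy Python sketch with made-up `fs_write`/`fs_read` names (real btrfs stores CRC32C checksums in a dedicated tree):

```python
# Toy per-block checksumming: store a checksum at write time, verify
# on read, so silently flipped bits are detected instead of returned.
import zlib

blocks = {}   # block id -> (data, stored checksum)

def fs_write(blkid, data):
    blocks[blkid] = (bytearray(data), zlib.crc32(data))

def fs_read(blkid):
    data, stored = blocks[blkid]
    if zlib.crc32(bytes(data)) != stored:
        raise IOError(f"checksum mismatch on block {blkid}")
    return bytes(data)

fs_write(7, b"important data")
assert fs_read(7) == b"important data"

# Bitrot: one bit flips on disk behind the filesystem's back.
blocks[7][0][0] ^= 0x01

# A non-checksumming fs would happily serve the corrupted block;
# here the read fails loudly. A scrub is this check over every block.
try:
    fs_read(7)
    detected = False
except IOError:
    detected = True
```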

Hahaha what?
I'd take ext4 over XFS any day. It's notably simpler while the difference in feature set is small. And ext4's fsck is miles better than the crap XFS comes with. Plus you can't shrink XFS at all which makes it unsuitable to use with LVM.

>hacks on top of it
Someone forgets that XFS came from IRIX and required a whole translation layer to work, along with other unholy hacks.
Oh wait, you're probably too young to actually know about it.

Sure, it has FEATURES, but have you looked at the design and implementation?
It's hideous.

>It's notably simpler
No, far from it. XFS has a simple design based on a btree. ext4 is insanely complex, thanks to all the hacks involved in forcing an ancient, inadequate design to do new tricks.

>XFS has a simple design
design != implementation
XFS is full of cruft and even non-cruft parts are more complex than in ext4. The main reason being that XFS was always developed with the "features first" mindset while ext devs exercised restraint to avoid code complexity.

>"features first" mindset
You're confusing btrfs with xfs.
XFS hasn't changed that much since IRIX times, although it has gained a few important features.

>btrfs
Is deprecated garbage. Fedora got rid of it already, I expect opensuse to follow sometime soon.

No. Ext filesystems were designed to limit implementation complexity and increase reliability.
And again, you're neglecting the implementation, where there was lots of work done on XFS. The Linux port was initially rather bad and needed many hacks just to somewhat work. (Not to speak of the stuff that had to be ripped out, like the volume manager.)
Only through titanic effort was the code cleaned up and made reliable, because for years a simple power outage would corrupt the shit out of XFS volumes.

I'm aware (I've been around since Linux 2.0, using it as my main since 2.2) it wasn't smooth. XFS was a 2.6.0-ish thing, and it was a clusterfuck back then. The famous power-outage ^@^@^@ data-loss bug was fixed sometime around 2010. There's no reason left not to use XFS now.
>Ext filesystems were designed to limit implementation complexity and increase reliability.
No, they weren't. Ext was simple at first, and wasn't designed to do anything but what it did back then. It evolved by piling hack upon hack, like most of the kernel.

SUSE used ReiserFS as their preferred filesystem; what's your point?

uh, pretty sure they recommend overlayfs over their btrfs storage driver. even have to specifically set { "storage-driver": "btrfs" } in daemon.json to use it over overlayfs.
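For reference, that's this fragment in /etc/docker/daemon.json (the documented daemon config path), followed by a daemon restart:

```json
{
  "storage-driver": "btrfs"
}
```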

There's no way they will ever do that unless they have to

You could have said the same for Fedora. And yet.
Btrfs is a sinking ship.

they won't because there is literally no reason to do so and people could just load it as a kmod. also I'm using fc27 right now and anaconda still allows you to make installs with btrfs.

>There's no reason left not to use XFS now.
You can't shrink it so I wouldn't put it on system volumes. Good for data that only grows, though, like NAS or production databases.

Sweetie, only RedHat dropped BTRFS. Everyone else is keeping it.

What's yours? Reiser was good while it was actively developed.

>can't shrink
I'm aware, but I've never needed to shrink a volume. Grow? Several times.
>so I wouldn't put it on `system volumes`.
I don't get this connection. What do you mean by system volumes?
>redhat
"only".
>Everybody else
opensuse... and who else?
>reiser
I remember Hans Reiser being annoyed at some kernel idiot overriding him and merging absolute crap into reiser3 at some point.
I do wish he'd taken reiser4 to some BSD rather than snapping. Fucking Linux.

No

Hasn't been for years.

The problem is that the people who keep saying it's shit use crappy distros that are years behind on patches.

Well maybe you'd be unstable and fragile too, if you had a butterface.

>opensuse... and who else?
Ubuntu, and come to think of it, every distro out there. I mean, none of them are making it the default in the way that OpenSUSE is, but at the same time, none of them are dropping it.

>I'm aware, but I've never needed to shrink a volume.
I did a few times. Usually as a temporary measure on /var, /tmp, or /home.

>I don't get this connection. What do you mean by system volumes?
In my case, the volume group with the system, e.g. /, /var, /tmp, /boot, /home.
None of those should get the kind of I/O that XFS is beneficial for, and the bit of extra flexibility is welcome.

>>redhat
>"only".
You got that right. RHEL is the only distro to date that's dropping BTRFS.

>>Everybody else
>opensuse... and who else?
I remember seeing a graph of contributions from companies. I can't find it now but RedHat wasn't a big contributor.
In any case, the enterprise distros shipping it are SUSE and Ubuntu, with SLES 12 having it as the default fs.

>you should be fine
>Don't use it with RAID 5 and 6
Why the fuck would I use it over ZFS then?

>Is btrfs still unstable and fragile?
Was it ever? I've been running it for years

To be honest you shouldn't use RAID 5 or 6 anyway.

>Why the fuck would I use it over ZFS then?
Because it's native on Linux and ZoL isn't stable.

It's more because overlayfs is zero-config while btrfs takes actual configuration.

No. It's used as Synology's default filesystem.

>ZFS solved this by making writes atomic with CoW - first writing the full stripe to a new location and only then changing the block pointer to it.
>Why hasn't BTRFS done something similar?
I thought that was the basic premise of CoW, and that that was what btrfs did as well?

>Why hasn't BTRFS done something similar?
btrfs is also fully CoW

Fucking glow in the dark Cia niggers
I ran one over in 1999

Canonical knows btrfs is a lost cause, that's why they ship Ubuntu with ZFS now even though it is illegal
They still support btrfs, but they are willing to break the law to have something better.

>Canonical knows btrfs is a lost cause, that's why they ship Ubuntu with ZFS now even though it is illegal
No, they ship it as a supplementary service to differentiate their brand from RedHat and SUSE.
People wanting to run ZFS now don't have to run single-purpose FreeBSD, Illumos, or Solaris machines but can run Linux like the rest of their infrastructure, thus saving on maintenance and personnel.
As you said, ZFS on Linux is legally dubious so thinking they'd abandon BTRFS to solely focus on ZFS is pretty retarded.

>No. Docker actually uses it as the preferred fs
Lying on the internet... who would have thought?

Looks like DF-BSD spent some of its donation bucks on paid shilling.

>that's the filesystem that sucks less
suckless fs when

I'm not a paid shill.
I honestly use and love Dragonfly and HAMMER2. And you should, too.
>suckless fs when
It's called HAMMER2. And it's available TODAY from your friendly neighbouring mirror.

I hope not, I just converted my home server to a btrfs root.

>It's still has a lot of problems beyond the write hole (yes, even on ZFS).
Like what?

>>zol
>A garbage attempt by linux soybois to port ZFS to Linux. Eats your data.
ZOL has been as good for me as ZFS on FreeBSD ever has

Did you miss the recent ZoL issue?

ReiserFS still has some interesting features; like "tail packing"; basically it can take data you don't expect to frequently access, chop it up into small pieces, and "bury" it in the unused space in partially used blocks scattered across the filesystem. Kind of like what he did to his wife.

That's a terrible description
>tail packing
Tail is the end of the file. In fs-land, shit is aligned to disk blocks, but file sizes aren't necessarily block-aligned. Thus the last block of a file, the "tail", will tend to be an incomplete block, and space is wasted.
Tail packing just stores these tails efficiently packed together. Many such "tails" are actually small files that don't even fill a single block. A lot of space is saved, and fewer blocks are read to reach these files.
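Rough numbers on what that saves, as a Python back-of-the-envelope (the 4K block size and the file sizes are assumptions for illustration):

```python
# Slack-space estimate: block-aligned allocation rounds every file up
# to whole blocks, so each file wastes part of its final "tail" block.

BLOCK = 4096  # common filesystem block size

def allocated(size, block=BLOCK):
    """Bytes consumed with plain block-aligned allocation."""
    nblocks = max(1, -(-size // block))  # ceil division, min one block
    return nblocks * block

# 10,000 tiny files of 100 bytes each (dotfiles, mail, metadata...):
files = [100] * 10_000

plain = sum(allocated(s) for s in files)   # 40,960,000 bytes on disk
packed = sum(files)                        #  1,000,000 bytes if tails
                                           #  are packed tightly
savings = plain - packed                   # ~40 MB reclaimed
```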

The snapshots are pretty awesome.

>The snapshots are pretty awesome.
They've got nothing on HAMMER2 snapshots. Those are the best, full stop.

>HAMMER2 performance
phoronix.com/scan.php?page=article&item=dragonfly-52bsd-hammer2&num=1

Attached: 1498315129436.png (1280x720, 904K)

writing modifications to a new place and updating a pointer is what CoW is about, but it's not directly related to RAID
maybe btrfs doesn't tie the idea into its raid5/6 implementation
my guess as to why raid5/6 has been iffy in btrfs while the rest is fine is that it's primarily developed by people at big companies. the main dev currently works at Facebook, and I doubt Facebook has a use for raid5/6

>btrfs
>facebook
It's official: btrfs is botnet.

it was originally designed by oracle, so this should not be news to anyone

>btrfs
>oracle
>facebook
Quite the pedigree.
Absolute botnet.