/HSG/ Home Server General - Multi-Terabyte Edition

Discuss home servers. Large scale storage at home.
ZFS
XFS
BtrFS
Linux
BSD
NachOS

Attached: product-72376.jpg (500x481, 54K)

Other urls found in this thread:

github.com/bun-dev/twitterDL
paste.debian.net/hidden/10416e34/
freenas.org/blog/yes-you-can-virtualize-freenas/
chris.beams.io/posts/git-commit/
snapraid.it/compare
youtube.com/watch?v=pv9smNQ5fG0
jodybruchon.com/2017/03/07/zfs-wont-save-you-fancy-filesystem-fanatics-need-to-get-a-clue-about-bit-rot-and-raid-5/

>ZFS
>XFS
>BtrFS
>Linux
>BSD
>NachOS
NTFS

Is ZFS still a meme?

What is the absolute state of ZFS in 2019?
Can I rely on RAIDZx or will it still spontaneously implode?
I'm looking for solutions for storing 30-80TB of data for my dad who shoots video.

Attached: Zfs_logo.jpg (400x320, 17K)

>NTFS

Attached: 41f568ea9ce684253af465c459dde22f.jpg (500x504, 30K)

Hi I'm that faggot from the /sqt/. I want to connect a bunch of drives to a machine, and apparently the way to do that is a SAS HBA card with breakout cables. Is there any way to do this that doesn't involve fucking around with flashing firmware to the card that you got off some forum somewhere? I would very much like to be able to just buy a card, plug it in, and use it.

PERC H710 lets you change from RAID to HBA in its BIOS.

mdadm + lvm + xfs
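Roughly, assuming four blank drives at /dev/sdb through /dev/sde (device names, sizes and mount points are placeholders):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]   # redundancy layer
pvcreate /dev/md0                                                 # LVM on top for flexible volumes
vgcreate vg_storage /dev/md0
lvcreate -n data -l 90%FREE vg_storage                            # leave some VG space free for snapshots
mkfs.xfs /dev/vg_storage/data
mount /dev/vg_storage/data /mnt/storage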

that looks like a shit sandwich

You can buy a card with the correct firmware already on it (search for "IT mode")

Attached: d5e.jpg (200x297, 10K)

It works, but it isn't an ideal solution like ZFS is. Also, it doesn't do COW, so you need to keep free space around for snapshots.

Posting so that guy shares more about his script/bot that takes stuff from emails and posts it on twitter.

Where are you, /g/entleman?

>booting into EFI and running 3 commands is hard
here you go retard, pic related

Attached: Screen Shot 2019-03-04 at 6.18.51 PM.png (668x196, 26K)
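For reference, the usual sequence for an LSI 9211-8i type card from the EFI shell is roughly the following; the firmware filenames and SAS address are placeholders, not necessarily what's in the pic:

sas2flash.efi -o -e 6                            # wipe the existing IR firmware
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom    # flash the IT firmware (boot ROM optional)
sas2flash.efi -o -sasadd 500605bxxxxxxxxx        # restore the SAS address from the card's sticker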

Where can I find a small old case for my server?

Anyone here have experience with remote coding on your server? I wanna run shit on it, but I'm still not great at getting solutions set up, and I'm looking for tips/advice.

Remote Desktop Services to run VisualStudio as a RemoteApp

Five terabytes of usable space is still multi-terabyte, right?

I chose btrfs years ago

Attached: IMG_3063.jpg (2272x1704, 503K)

Is RAIDZ (raid5) feasible for a 4 drive pool?
Currently running a non-redundant JBOD. It's used partly as a read-only archive with an offsite backup as the only fallback, and partly as a backup location for other systems.
Looking into rebuilding the whole architecture: I'll be buying 4 new HDDs, possibly switching to FreeBSD (Debian now), and maybe getting a pair of PCIe SSDs for caching.
Will RAIDZ be a good option or am I better off running two separate raid1 arrays (either hardware or mirrored vdevs)?

Attached: hsg2.jpg (3968x2240, 1.13M)

this, I bought my card from some chink on ebay for a song

I've been having a hard time getting that to run, I'm not sure how to really get it configured. It's a trial and error process as I'm still fairly new to server operations.

She gets the job done.
OpenVPN, Nginx, Media Center, Backup, Network frontend

Attached: lain_info.png (560x731, 15K)

>ZFS
>XFS
>BtrFS
>Linux
>BSD
>NachOS
snapraid

What do you need a GPU in a server for?

>ssh yourserver
>vim bla.whatever
Not sure why you want to do that though.
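If you'd rather keep your local editor, you can also just mount the remote tree with sshfs; hostname and paths here are made up:

sshfs user@yourserver:/srv/projects ~/projects   # edit locally, files live on the server
fusermount -u ~/projects                         # unmount when you're done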

xfs isn't bad. Had to use it somewhere around 2011 because ext4 couldn't create filesystems larger than 16TB. Would prefer this combo over btrfs to be honest.

It was already really stable a few years ago. I only know of one serious bug, and it got fixed a day after release. The biggest downside is that you can't just add a disk to your RAIDZ or change a RAIDZ1 into a RAIDZ2, so you need to plan your setup accordingly.

>Is ZFS still a meme?
It isn't a meme, it's just problematic.

You can't grow RAIDZ vdevs, it generally needs a lot more hardware than other solutions to perform equally, and even then access latencies and so on aren't great.

> Can I rely on RAIDZx or will it still spontaneously implode?
You can rely on RAIDZx as such, yes.

> I'm looking for solutions for storing 30-80TB of data for my dad who shoots video.
Probably just mdadm RAID or snapraid or something, really. At least he can add another drive then, and it runs fast without using a lot of RAM and other hardware.

>Is RAIDZ (raid5) feasible for a 4 drive pool?
Yes, but obviously it has 1/4 less capacity than your current JBOD.

> Debian now
I'd just do mdadm RAID6 or RAID5. Or possibly equivalent snapraid.

These are easier to grow later on and have more solid performance.
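Growing an mdadm array later is just a couple of commands, e.g. (device names and member count are examples):

mdadm /dev/md0 --add /dev/sdh           # add the new drive
mdadm --grow /dev/md0 --raid-devices=7  # reshape the array onto it
xfs_growfs /mnt/storage                 # then grow the filesystem (resize2fs /dev/md0 for ext4)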

I nearly forgot, here:
github.com/bun-dev/twitterDL

Haven't tested it on anything but my machine, so issues are bound to happen. You need to set up a Gmail auth password and, of course, a Twitter app.

Attached: 1513494837825.jpg (425x437, 24K)

>gmail
Fuck you, making that gmail-only is completely pointless. paste.debian.net/hidden/10416e34/ untested but you should get the point.

Oh right, I should just put the server/port in the config to allow for other providers. I haven't set up my own mail server to test that, though.

Updated. Aside from the gmail thing, let me know of any other issues. I think it should be obvious, but you have to create a dummy app in your twitter account to get the consumer token/secret/etc.

Attached: screenshot.5.png (593x513, 36K)

I have 4x2tb drives for my hp micro

what raid setup is best running on freenas?

when I need more storage can I just swap one drive at a time with a bigger capacity drive and rebuild?

Attached: 2d65329b-3949-41f7-8550-9addc9424138..jpg (300x124, 5K)

>when I need more storage can I just swap one drive at a time with a bigger capacity drive and rebuild?
Not with RAIDZ, the vdevs on ZFS don't grow. AFAIK the only thing you can grow on ZFS is space inefficient mirrored setups (by 2 drives each).

It works with Linux mdadm RAID or snapraid or other solutions, try these.
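On mdadm the bigger-drive swap is roughly this, assuming a reasonably recent mdadm and made-up device names:

mdadm /dev/md0 --add /dev/sdf                       # the new, bigger drive
mdadm /dev/md0 --replace /dev/sdb --with /dev/sdf   # copy onto it without running degraded
# repeat per member, waiting for each rebuild; once every member is the bigger size:
mdadm --grow /dev/md0 --size=max
xfs_growfs /mnt/storage                             # or resize2fs for ext4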

>space inefficient mirrored setups
What do you mean?

Well, a mirror is a 1:1 copy of one drive onto another, so half of the drives go to redundancy. One drive in a pair can fail; after that the data is at risk.

RAID5/6 etc. reserve a fixed number of drives' worth of parity (1, 2, ..., n) to get that many drives of redundancy across the whole array. Do RAID5, which reserves 1 drive for parity, and with 4 drives you can still use 3/4 of the capacity. Or do RAID6 with 10 drives and you can use 8/10.

This is more space efficient, yes.

I see. Mirror still offers the best performance and flexibility though.

You can also add more raidzn vdevs to a pool but you can't add drives to a vdev.
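i.e. something like this, with a made-up pool name and devices:

zpool add tank raidz1 sde sdf sdg sdh   # adds a whole new raidz1 vdev to the pool
# there is no command that turns an existing 4-disk raidz1 vdev into a 5-disk one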

>Mirror still offers the best performance
The difference compared to a fast RAID5/6 implementation like mdadm's is actually not big. Again, consider using that instead - also for the ability to grow the array.

But RAIDZ isn't all that fast, yes.

> flexibility
Again a ZFS problem. They'll probably eventually allow growing/shrinking RAIDZ vdevs. It was actually a feature someone from the dev team aimed to get in last year or even before, it just didn't happen.

Mdadm and snapraid on the other hand are already flexible, you can grow these arrays and even change raid levels or "raid levels" in SnapRAID's case.

>You can also add more raidzn vdevs to a pool
You can also trivially do this with LVM2 or so many other solutions, but it's only very rarely what you want.

What will happen is that your 4 drive RAIDZ1 array runs out of space. So you want to add one more drive to make it a 5 drive RAIDZ1 vdev. Which you can't do on ZFS.

You don't want to buy another 4-5 drives, build another RAIDZ1 array, and then pool these two arrays together, which is the only thing you can do with ZFS RAIDZ.
Even if you actually bought 4-5 extra drives, it is far more likely that you'd then prefer to make an 8-9 drive RAIDZ2 array (so, more redundancy, bigger array) with your existing data.

Currently have a Synology NAS.

Need something to back it up to.

Was thinking about virtualising FreeNAS on ESXi with HBA passthrough.

Is this a bad idea? ZFS would still directly touch the disks but I reckon if something went wrong I wouldn't be able to fix it.

It's always better to run it on bare metal, but if done properly, then you'll be fine

freenas.org/blog/yes-you-can-virtualize-freenas/

>It's always better to run it on bare metal
Thanks for that. It's also really just intended for a backup (and hypervisor for running non critical stuff). I know I should probably have a separate hypervisor and FreeNAS box. Just can't afford both.

I could run a non ZFS filer as a VM I suppose. Kind of want to get some experience with FreeNAS though.

> Was thinking about virtualising FreeNAS on ESXi with HBA passthrough.
It's possible to virtualize, but why?

Just create some account on your Linux/BSD host OS, push backups with borg. Or such.
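A rough borg sketch of that, with made-up user, host and paths (on a Synology you'd run borg out of Docker or a community package):

borg init --encryption=repokey backup@yourhost:/srv/borg/nas      # one-time repo setup
borg create backup@yourhost:/srv/borg/nas::{now} /volume1/share   # nightly archive, run from cron
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 backup@yourhost:/srv/borg/nas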

I hope you guys can help me.

I'm installing Foreman on a fresh CentOS 7 server.
Seemingly it works, but when I go to Provisioning Setup I get three "prerequisites" that I need to fix before I can proceed.


1. missing registered host foreman.user.local, please ensure it is checking in

2. missing registered smart proxy foreman.user.local, please ensure it is registered.

3. No network interfaces listed in $interfaces fact

Can anyone pls try to help me or point me in the right direction with this?

> Foreman
Never saw anyone else in these threads use that.

Maybe try plain Ansible / Salt / Puppet / Chef, ... not because we're usually discussing these, but because they're documented better on the internet.

>let me know of any other issues
>TwitterDL.py:2:0: E0001: unexpected indent (, line 2) (syntax-error)
>Your code has been rated at -10.93/10
And don't call it gmail_auth, it's just the password for the smtp login. And the new if is also bad (even wrong syntax, because login requires two arguments): either you log in with username and password or you don't log in at all. And on most smtp servers you need to log in.

> Update TwitterDL.py
> Update config.cfg
> Update TwitterDL.py
chris.beams.io/posts/git-commit/

What the fuck are you talking about dude? Why would you need to flash a random firmware?

You buy a SAS HBA and one enclosure for your disks, connect everything together and it's done.
Also not sure why you'd need a breakout cable...

My boot ssd is giving me smart errors that it's about to die within 24h, so it looks like I'm rebuilding my server at the weekend.
Already ordered a 240gb ssd.
It's an old dual xeon machine with 28gb ram and 9 hard drives ranging from 500gb to 5tb.
I don't care about backups, nothing on here but media I can get back very easily (photos are backed up to the cloud).
I use docker for most of my services, so is there any reason I shouldn't just download the latest LTS Ubuntu and use that? Unraid/FreeNAS/Proxmox just seem a little overkill for what I need.

> is there any reason I shouldn't just download the latest LTS Ubuntu and use that?
I don't particularly like Ubuntu, but it should work.

> different drive sizes
Setting up snapraid should be one of the easier options, consider that for the storage layer.

snapraid.it/compare
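With mismatched drive sizes the snapraid setup is roughly: the largest drive holds parity, the rest stay plain filesystems, plus a config along these lines (paths are examples):

# /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/

# then, periodically:
snapraid sync     # update parity after files change
snapraid scrub    # verify checksums against bitrot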

>> is there any reason I shouldn't just download the latest LTS Ubuntu and use that?
>I don't particularly like Ubuntu, but it should work.
Anything else worth looking at? I only used Ubuntu last time because of the number of guides everywhere, and that's not really important now that I use docker for most things.

I'll take a look at snapraid tonight, thanks

>Anything else worth looking at?
I prefer dnf, emerge and other package managers to apt. I figure you don't want Gentoo, but Fedora / CentOS might be worth a look even if you're not using the "enterprise" additions (web UI and stuff).

>I chose btrfs years ago
RAID1 or RAID5/6? How's it turned out for you? Any major problems?

not him but it's a laptop, probably too much hassle to remove the gpu

It's raid1, no problems ever. I hear raid5/6 are still not ready after all these years. But since raid1 can have many drives too and just copies every file twice, it seems enough for my needs (one drive redundancy).

I even replaced one of the drives once. I'd started with a green and a red and eventually went all red: installed the new red, did the balancing thingamajig to migrate the data over, and finally removed the green. My volume now has device IDs 1 and 3, which bothers my OCD slightly, but ah well.

Attached: butter.png (761x347, 38K)
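For reference, that dance is roughly this (device names and mount point made up):

btrfs device add /dev/sdd /mnt/pool      # add the new red
btrfs device remove /dev/sdb /mnt/pool   # rebalances data off the old green, then drops it
# or in one step, which as far as I know reuses the old devid:
btrfs replace start /dev/sdb /dev/sdd /mnt/pool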

Let's say I found a site with a price error on a 16TB NAS. What can I do to make sure I get it at the obvious mis-price?

Attached: 4hGOEbZ.jpg (1680x2520, 1.54M)

BTRFS vs ZFS lads?

>And the new if is also bad (even wrong syntax, because login requires two arguments): either you log in with username and password or you don't log in at all. And on most smtp servers you need to log in.
That was just a placeholder line until I can actually test a non-gmail provider, which is why I commented it as untested. I also need to see how the html/non-html message will look.

>chris.beams.io/posts/git-commit/
this is just a simple script, so I didn't really think proper commit messages were necessary

> I hear raid5/6 are still not ready after all these years.
There is really not much motivation to get the filesystem internal btrfs RAID working, because mdadm RAID5/6 works very well and you can just as easily put btrfs on top of that.
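i.e. something along these lines (device names are placeholders):

mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
mkfs.btrfs /dev/md0                            # single-device btrfs on top of the md array
mount -o compress=zstd /dev/md0 /mnt/storage   # still get checksums, snapshots, compression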

Nothing. You can try to order and they'll probably cancel.

Ext4 or xfs, probably? I prefer to delegate some of the extra features to LVM2 and so on.

>mdadm RAID5/6 works very well and you can just as easily put btrfs on top of that.
You can, but as far as I understand it you're then giving up on the data integrity checks btrfs (and zfs) can do. If btrfs has direct access to the drives, it'll be able to detect and correct bitrot, should it occur.

youtube.com/watch?v=pv9smNQ5fG0
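In practice that detect-and-repair is just a scrub (mount point made up):

btrfs scrub start /mnt/pool    # reads everything, verifies checksums, repairs from the good copy in raid1
btrfs scrub status /mnt/pool
btrfs device stats /mnt/pool   # per-device error counters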

>Currently have a Synology NAS.
>
>Need something to back it up to.
Install Hyper Backup from the Package Center, and point it where you want to back up to.

Alternatively, install Docker, set up a Duplicati container, and have it back up to those cloud storage services you never use.

I have two 12TB HDDs that I use to store movies, music, and photos mostly. I want to be able to stream movies both to devices on my home network and to devices connecting remotely. Given that I have never used Linux but I'm eager to learn, what is the best option for me?
Accessing files for file transfer remotely is a must too.

Last thread I mentioned one of my drives was considered faulty by freenas. Well my new HDD came today and it looks like the resilvering process is going well so far. This is the first time I've had to do a replacement, and it was incredibly simple.

I took the faulty drive out and am using it as a 'shit backup', since it still technically works just fine in Windows at least.

Attached: 0.jpg (466x108, 36K)

No for the checks. Even mdadm RAID5/6 can actually do these checks on a scrub. And don't forget drives do have ECC themselves already.
Maybe btrfs RAID5/6 is theoretically better able to repair detected errors.
If you really want this capability as an add-on, you can basically get it with snapraid, par2 and other methods.
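For the record, the mdadm check and the par2 route look like this (array and file names are examples):

echo check > /sys/block/md0/md/sync_action   # kick off a consistency check of the whole array
cat /sys/block/md0/md/mismatch_cnt           # non-zero means parity and data disagree somewhere
par2 create -r10 movie.mkv                   # writes movie.mkv.par2 + recovery volumes (~10% redundancy)
par2 verify movie.mkv.par2
par2 repair movie.mkv.par2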

Maybe I'll watch that video soon. On a tangentially related note, you might however want to read:
jodybruchon.com/2017/03/07/zfs-wont-save-you-fancy-filesystem-fanatics-need-to-get-a-clue-about-bit-rot-and-raid-5/

Emby or Plex are your simplest options. I recommend Emby, as it's not tied to an 'account', but Plex is more mainstream.

Have a basic networking question about port forwarding and exposing services to the internet.

Setup up at home is basically: Internet/WAN -> Router #1 -> Router #2 -> Server

I port forward the IP of router #2 on router #1, along with the port I want, and then what do I do on router #2? Port forward the IP of the server and the port?
If I had two servers behind router #2 being forwarded on the same port, would there be an issue and some sort of randomness involved?

Probably what the other anon said.

Note that the "devices remotely connected" should probably be done over secured connections (with wireguard, VPN, ssh tunnels... some such).

The UPNP DLNA media server things are rather convenient to use, but not usually exactly safe-to-expose-directly-to-the-internet software.
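The lazy but reasonably safe version is an ssh tunnel from whatever device you're out with, e.g. (8096 being Emby's default HTTP port, hostname made up):

ssh -L 8096:localhost:8096 you@yourserver
# then open http://localhost:8096 on that device; nothing gets exposed to the internet except sshd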

>and then what do I do on router #2
You port forward the local IP of router #2 in router #1, and then port forward the local IP of the server + port in router #2.

>if I had two servers behind router #2 being forwarded on the same port, would there be an issue and some sort of randomness involved?
No, they should have two different local IPs.

IE, server 1 = 192.168.1.2:5000
server 2 = 192.168.1.3:5000

etc, at least that's how it worked for me.

Attached: screenshot.6.jpg (383x179, 26K)

Also, if you're exposing your servers to the public and using the external IP, then yeah, I don't think using the same port for both would work.

Thanks. Probably sounds like a bit of a dumb and straight-forward question when I read it back. Networking just happens to be a bit of a weak point for me; I attempted it before and it didn't appear to work, in that I couldn't connect remotely.

Do some ISPs block ports, in that they wouldn't want residential customers serving ports like 22 and 80? I was hosting over an LTE connection at the time.

Ever run into the 'unlock the door using the key kept inside the room' situation?

I'm moving everything onto my NAS so that my priceless, irreplaceable stuff like passwords.kdbx is stored safely and backed up incrementally to Google Drive overnight. Hooray, I've got a local copy and a remote backup.

However ... I'm thinking if something happens, and an evil wizard destroys my home while I'm at work, I'm still shit out of luck: as my password DB is 'securely' encrypted and stored over at Google's house. And I can't unlock the backup without accessing the password DB inside, etc etc.

So for now I'm also syncing my password DB to Google Drive, separate from my big Backups folder, so that I've got a copy I can access from my phone if something were to happen - like anybody else these days my phone's always with me, so I can lose my NAS, or I can lose my phone, but it's unlikely I'll lose both unless a meteor's landed on my house overnight or something.

So, sanity check my plan please?

>Thanks. Probably sounds like a bit of a dumb and straight-forward question when I read it back.
It just so happens I have a similar setup and spent hours solving a port forward issue a few weeks ago. I have an eero connected to my Arris surfboard modem, and my Asus RTAC68U connected to the eero.

>Do some ISPs block ports, in that they wouldn't want residential customers serving ports like 22 and 80?
I've heard they do, but you're probably better off just testing and seeing for yourself. I port forwarded 80 just fine from Comcast

If you haven't already, disable UPnP on both routers. It might conflict with manual port forwards. It's also just better security wise to not have that enabled.

Attached: 61Tad4qpnLL._SY300_.jpg (471x300, 5K)
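An easy way to check whether the ISP is eating a port: forward it, run something on it, then probe your public IP from outside your own network (phone on LTE, a cheap VPS, whatever). Addresses and ports here are examples:

nc -zv your.public.ip 80               # quick single-port check
nmap -Pn -p 22,80,443 your.public.ip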

> So, sanity check my plan please?
If in doubt, have another copy of the password db or unlocking key(s) somewhere else. Storing that tiny amount of data isn't hard.

> passwords.kdbx
Not on pass / gopass yet?

You know something? I'm having real difficulty actually setting up one-way sync from NAS to Google Drive. I literally just want Thing A to sync to Thing B, yet I'm having to play around with fucking file filters and everything's zipped up and I'm like, fuck off, this should be the simplest fucking thing. Fuck's sake.
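For what it's worth, rclone does exactly this kind of dumb one-way push; a sketch, with the remote name and paths made up:

rclone config                                               # one-time: create a remote, say "gdrive"
rclone sync /volume1/Backups gdrive:NAS-Backups --dry-run   # see what it would do
rclone sync /volume1/Backups gdrive:NAS-Backups             # then for real / from cron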

>Not on pass / gopass yet?
I don't know what that is. I've been using Keepass since 2007 and I'm unlikely to change

>gopass
Haven't seen this before either, and I'm also on KeePass (KeePassXC). What does it do differently/better?

>no for the checks
Yes for the checks, as shown in the video. I see Linux has actually added journaling to the mdadm raid, which is neat. That wasn't there when I started using btrfs. But does mdadm actually do checksumming? Can it detect bitrot in the way btrfs can? I'm not sure.

Side note, I'm not sure bitrot is a common thing or if it has ever even happened to me, but since using btrfs has no downsides for me, why not? I am 100% convinced that bitrot CAN happen, though.

That article reads like a /g/ thread and makes me not want to take it very seriously. That guy seems to have written a filesystem of sorts so probably knows what he's talking about, but god damn is that ever written like a fanboy while accusing others of being fanboys.
So his tl;dr seems to be
>Hard drives already do this
sure, and that's good, but it doesn't hurt to have fs checks too
>the risks of loss are astronomically low
probably true, and that's also good, but it doesn't hurt to have fs checks too
>the computational power used for data integrity checking is 'wasted' because of the above
eh, no biggie. I have all the cpu cycles I could possibly want.
>ZFS is useless for many common data loss scenarios
he talks about stuff like power failures, ram problems etc, which generally don't affect my holiday photos from 10 years ago that mostly just sit on the disk untouched (except by the dreaded bitrot)
>start backing your data up you lazy bastards
I do, it would be stupid not to
>data CRC gimmick doesn’t hold much value for data integrity and it’s only useful for detecting damage, not correcting it and recovering good data
that's simply incorrect, as shown in the video

So in short, he didn't convince me not to run btrfs. And I really like the raid1 model where I can have many disks but any file is just written on two of them. Seems like the perfect level of redundancy for me.

What are the temps on your HDD's like? There must be an easy way to keep everything under 35C during the summer.

OK I've had it. I'm going to keep passwords.kdbx entirely separate from my NAS, and leave it in Google Drive instead. I'm not going to fuck around with extra syncing and janitoring, this way it's in the cloud and it's everywhere for disaster-recovery purposes. Fuck it

like the whole purpose of using a NAS is to gently de-Google myself, and that's fine - a single password database is still safe where it is.

My pictures, docs, and LINUX MOVIES are safely on the little black box in the corner though. I am at peace once again.

I don't understand. Why can't you just automatically 7zip the keepass db to google drive, then keep the password to the archive on your phone or something? I feel you're making this way more complex than it needs to be.

Because my goal was to have the NAS be the 'everything storage'; the master record. So, the NAS is the live copy, and there's a nightly encrypted backup to cloud storage.

However, the problem is that with this setup, the keys to that cloud storage backup are kept on the NAS - so to unlock the backup in case of a disaster, I'd need the key stored in the backup.

So my solution is to keep the keys to my backup, and everything else of course, in Google Drive. This way there's a permanent live copy available on my phone, instead of along with the backed-up NAS files.

Make sense?

I suppose an alternative would be to simply leave the cloud-storage backups unencrypted - I'm not likely to be the target of GCHQ or GRU or anything - but with the fappening and all, better to be safe than sorry.

Uh, it's not like this is hard to solve. Just have a key for either storage in one or more places?

> with the fappening and all, better to be safe than sorry
With that policy, you'd not be using any cloud storage ever, I imagine.

>Because my goal was to have the NAS be the 'everything storage'; the master record. So, the NAS is the live copy, and there's a nightly encrypted backup to cloud storage.
Yes that's what I'm saying.

> the keys to that cloud storage backup are kept on the NAS - so to unlock the backup in case of a disaster, I'd need the key stored in the backup.
That's why I'm suggesting to keep the encryption keys on your phone, as you most likely keep your phone on you whenever you're out.

If a wizard blows your house up, you just download the latest keepass db off Google Drive and transfer the keys from your phone to your new PC. If you want to be extra paranoid, keep the encryption keys on a different site such as MEGA/Dropbox/etc.

But don't keep both the keys AND the keepass db in the same cloud. You're giving Google employees free access to your db.

Anyone use Terraform (or similar automation tools) to do some neat stuff in the cloud?

>tfw I just converted my NTFS data storage partitions to ext4

feelsgoodman.exe

>Just have a key for either storage in one or more places?
That's an option, but then I'd need to think about keeping that key up-to-date, where do I keep it, if the house burns down where's the OTHER copy, etc etc.

>fappening
Imaginary scenario: someone somehow completely pwns Google and has access to everything in everyone's Drive. My rationale is that even if someone has my encrypted Duplicati backups, they're unlikely to be able to break into them. And if they can, well, we're all fucked. Same reason I'm happy to have passwords.kdbx stored in GDrive.

>You're giving Google employees free access to your db.
No, Google (probably) can't break into my keepass database. Like I said above, if they can, we're all fucked.

like I'm not going to figure 'what if someone hacks the gibson and pwns Google' into my home server backup plans, that's absurd.

I'm not comfortable with keeping my unencrypted files on their cloud storage service, but I'll happily keep passwords.kdbx on their cloud storage service, for the ease of access and peace of mind that doing so provides.

>to ext4
the fuck

Also this is veering into infosec discussion lol

One thing I'm genuinely impressed by, with my Synology, is the fact that it can run Docker - it's so useful being able to spin up stuff like qBitTorrent or Handbrake and leave stuff running overnight, without having to leave the big, power-thirsty PC running.

>but then I'd need to think about keeping that key up-to-date
No, why would you need to? It's just one of the possible software or hardware keys that can unlock the passwords database.

That you can't meaningfully keep a safe's key/password code/[...] itself in said safe is nothing terribly new.

> if they can, well, we're all fucked
If one encryption scheme is broken, my password wallets etc. might be wiped and re-encrypted with an unbroken scheme the next day or next weekend. And nobody else has this data yet.

OTOH Google probably has your data in 5 locations, may keep it around for years, and if a breach happens who knows if they or some employees won't just break all the passwords.kdbx files they can get their hands on. All while not really giving you (m)any contractual guarantees that they won't lose your data.

I guess we got different sensitivities here, though.

At some point you just have to trust that people will do their jobs? I know that Google are an evil amoral sack of shit company and I earnestly wish someone sacks up and gives them a bloody nose, even though I know it'll never ever ever happen. However, at the end of the day I'm also reasonably sure that the stuff I put into Google Drive is safe and secure. I'll still do due diligence with the material I put into Gdrive, but still.

Analogy: my money's safer in the bank than it is in my mattress, and although ITS JUST NUMBERS ON A COMPUTER WHAT IF SOMEONE DELETES THAT NUMBER HMMM I know that's not likely to happen.

OP here. I am trying to figure out what to use it for. Any ideas would be appreciated. But as the other anon said, it is a laptop I picked up for a good price.

Nothing really, at least nothing useful from a laptop's GPU. Nearly all server benefits are CPU related.

Most secure and anonymous way to register a domain name?

Would also like to know actually. Guessing bitcoin or something.

I say btrfs for these reasons

>duplicate metadata, checksums even on single disk filesystems
>copy on write: no need for fsck, the filesystem is always in a consistent state
>can add drives and tell it you're raid1 now, or raid0, or raid10 and it just does it, no need to even remount let alone reformat
>the above works even if you have only one disk to begin with, again without even remounting
>metadata and actual data have different levels: in raid0 metadata is still raid1 so when a drive dies the filesystem can still be accessed (though there will be data loss)
>you can even tell it a single drive is raid1, so it will write everything twice (not useful but shows the possibilities)
>can change raid levels on the fly
>snapshots are cheap because of copy on write
>can do a block send to a different system over a network: makes backups potentially much faster than rsync
>has subvolumes, no need to partition drives anymore
>transparent compression
>is "Linux-original", no need to install anything extra

Just don't use raid5/6, those aren't ready.
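The add-a-drive-and-convert point above is literally just this (device and mount point made up):

btrfs device add /dev/sdc /mnt
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt   # now it's raid1, no remount or reformat needed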

Why not zfs? It has most if not all of that already
>higher hardware requirements (memory)
>inflexible when it comes to willy-nilly adding disks

>i dont know what HCI is

>i dont know what IT and IR is

>A linux appliance running docker is impressive
god you're easily impressed

also
>not running a failover cluster for uTorrent with two different HCI solutions

> i dont know what HCI is
Marketing wank you only hear around Cisco / VMware / Windows admins.

That approach has almost nothing to show for it, so almost everyone with more to do uses cloud computing setups largely unrelated to hyperconverged infrastructure.

bixnood get the fuck out of here