I've got four 8TB drives. what's the best way to setup my ZFS pool?

I've got 6 folders on my backup drives called Movies, Porn, 2D Porn, Images, Music and Documents. Should I just create one dataset for each of these folders, 6 datasets basically? What's a good setup, Jow Forums?

RAIDZ1 (RAID5 equivalent) and a single dataset.
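Something like this if you go that route ("tank" and the sdX device names are just placeholders; use /dev/disk/by-id paths on a real box so a device reshuffle doesn't bite you):

zpool create -o ashift=12 tank raidz1 sda sdb sdc sdd
zfs set compression=lz4 tank

ashift=12 assumes 4K-sector drives, which 8TB disks almost certainly are.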

But why a single dataset? And isn't RAIDZ1 too risky with 4 drives? That's just one drive's worth of parity.

You seem to think that 4 drives is a lot, which it isn't.
Z1 and a single dataset is just fine for you.

what's the reason for one single dataset and raidz1?

>one drive fails out of four drives
>plug in the new drive
>while it's resilvering, another drive fails because of the amount of work it has to do
>entire pool is lost

>8TB drives.

RAIDZ2 (RAID6 equivalent), on one dataset to make snapshots easier. Or two mirrors in one pool; resilvering a RAIDZ of 8TB drives can take days and it will drive your performance into the ground.
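Rough sketch of both options, with sda-sdd and "tank" as placeholder names:

zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd
# or the striped-mirror ("RAID10") layout:
zpool create -o ashift=12 tank mirror sda sdb mirror sdc sdd

Both give you roughly two drives' worth of usable space out of 4x8TB; the difference is how failures and rebuilds play out.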

THIS.
Do not fucking use any kind of RAID-5 if you care about your data.

New to RAID in general. How's RAID-10? Good choice?

RAID-10 is the only sane choice considering that HDDs are fairly cheap.

RAID levels themselves are deprecated, software and hardware alike; today you want a filesystem that can do RAID-like things while also being able to checksum, snapshot and dedup (if necessary).

2x raid 1 arrays

It really depends; BTRFS, ZFS and other CoW filesystems are shit for heavy workloads.
Even disabling copy-on-write does not make things much better.

On 4 drives the odds are minuscule. If you're so afraid, at least recommend OP RAID 10 and butcher his storage altogether.

Yeah, but for heavy workloads you need those checksums, because even ECC RAM can fail to detect corruption. Also, ZFS performance depends on how much cache it has; a combination of RAM and SSDs can handle a lot of data.
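If you do bolt on an SSD read cache (L2ARC) it's a single command; pool and device names are just examples:

zpool add tank cache nvme0n1

Keep in mind the L2ARC headers live in RAM, so it only really pays off once you've already given the ARC as much memory as you're willing to.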

8TB drives are fragile; they have at least 4 platters.

Yeah, it seems like RAID 10 would be extremely limiting on your storage. You'd have only half the space you actually bought. Keep in mind, of course, that I've heard ZFS can run into fragmentation issues on top of that once the pool is over 80% full.
Maybe RAID-6 then?
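Both numbers are exposed as pool properties if you want to see where an existing pool sits ("tank" being whatever yours is called):

zpool list tank
zpool get capacity,fragmentation tank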

Is Btrfs RAID56 working properly yet?

Apparently not, but RAID10 works great afaik.
Also I saw some user claiming that they fixed it in a very recent kernel release, but can't confirm.

Spending quite a lot of money on drives and still having to factor in the odds of losing data is retarded.

People are already complaining about RAID10 "eating away" half the storage; they are not going to buy a shitton of RAM or set up caching on an SSD.
Using CoW filesystems on budget builds is not really doable.
Even a RAID0 of NVMe SSDs runs like shit with ZFS and copy-on-write enabled; Phoronix just released an article on the matter, and ZFS has 1/3 of the performance of XFS or ext4.

That's the price you have to pay if you care about your data.
You will still have 16 TB of usable space with the added security and performance of RAID10.
If you need more space buy more drives.

Probably not; I would not touch BTRFS RAID5* with a 10-meter pole.
BTRFS in general works quite well if you ask me; still, just like ZFS, performance is sub-optimal to say the least.

>performance is sub-optimal to say the least.
For what OP and a lot of hobbyists are storing, performance isn't much of a concern. They aren't doing heavy random-write database operations, they're hoarding anime. Yeah it's nice for scrubs and resilvers to be fast, but if your highest performance target is "pretty much saturates gigabit ethernet on sequential read" then any filesystem can do that.

why do u have 2D porns?

Is the ARC "Adaptive Replacement Cache" any good in practice?

it's good in proportion to the amount of memory you give it.
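On Linux/OpenZFS you can cap it with the zfs_arc_max module parameter if it's fighting your applications for memory (value in bytes; 8 GiB here is just an example):

echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
# or persistently:
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

arc_summary will show you the hit rate so you can tell whether more RAM would actually help.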

RAID10, or mirror in ZFS/BTRFS, is not that good, but it's still better than nothing

Basically, if the two wrong drives fail, you lose your entire pool. This is for ZFS, I'm not sure how BTRFS handles it.

What's wrong with RAID10/mirror

>People are already complaining about RAID10 "eating away" half the storage,
that's because it's a mirror, two 8TB drives in a mirror will give you 8TB of usable data.

The problem is that the amount of disk space you lose access to in a RAID10 is massive compared to a RAID5 or even RAID6. Maybe I'm not seeing something, but even considering the safety aspects, RAID10 seems extremely excessive.

Single dataset, 1 zpool of 2x8TB, mirrored to the other 2x8TB.

4 drives in RAID10 offers a bit more storage than RAID6

why not create one zpool of 2x2 + 2x2?

>Literally a third less space
>What's wrong with RAID10/mirror
Nothing, but we're talking consumer use here. Unless OP has money to burn, I'd rather suggest dropping a quarter of available space than half of it just to avoid the 0.1% probability of everything crashing.

RAID6 would be the same as 10 on just 4 drives.

Speaking generally, not just about OP's 4 drive situation

RAID10 is still a bit more storage than RAIDZ2 with 4 drives of equal size

That's pretty much the same for any RAID with only 4 hard drives. If 2 fail... RIP.

Try ReiserFS, it will murder your hard drive.

A 2-way mirror is faster than RAIDZ.

RAID/ZFS: I think you kinda miss the point. Both are geared, in general, toward keeping the data volume up long enough for you to BACK UP the contents before the whole volume crashes for good. As long as you keep separate backups, it doesn't really matter much in the end which one you use. But RAID 5 build time is painfully slow (software). I'd cap it at 9TB max. Even then you're talking over a day for a build/rebuild. If time ain't a factor then go for RAID 5; the space you gain vs other RAID/ZFS levels is unmatched. If time is important then RAID 10 would be good, or you could run multiple arrays. With multiple arrays your backup time is cut down a lot, because instead of backing up a single massive volume, which could take a week, you're only talking a day.

One big zpool, RAIDZ. Then a dataset with its own mountpoint for each kind of data.
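For example, roughly matching OP's folders (names and mountpoint are just placeholders):

zfs create tank/movies
zfs create tank/music
zfs create tank/documents
zfs set mountpoint=/srv/documents tank/documents

The point is per-dataset properties and snapshots, e.g. zfs snapshot tank/documents@before-cleanup, without touching the media datasets.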

Protip: if you haven't used ZFS before, do some testing with VirtualBox and create and tinker with some virtual ZFS storage. ZFS can be dangerous and result in total data loss and death.
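You don't even need a VM for the basics; file-backed vdevs are fine for throwaway experiments (paths and sizes are arbitrary):

truncate -s 1G /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4 /tmp/d5
zpool create testpool raidz1 /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
zpool offline testpool /tmp/d3           # simulate a dead disk
zpool replace testpool /tmp/d3 /tmp/d5   # practice a resilver
zpool status testpool
zpool destroy testpool

Obviously never do this with real data; it's purely for learning the commands before touching the real drives.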

Yeah. The whole GUI system that FreeNAS/NAS4Free both use needs a lot of work. Look at the steps involved in creating a volume, assigning permissions/adding accounts, and enabling sharing. Then look at Windows Server. Windows wins hands down. With Windows there are no "zvols", "datasets", etc., just volume creation, nice and simple.

Fuck off microshit blow your windaids trash out your faggot asshole.

Retarded with 8TB drives, but whatever. Wait for weeks-long rebuild times and data loss from bit errors during that time.

Much better overall. Better rebuild times and much safer than any single parity shit some retards who have never dealt with data loss will recommend. Easy to fix and recover from since it is just a pair of mirrors. Only downside is cost since it takes 50% capacity for the mirror. But if you can afford that I would go with any mirror solution over parity any day.

Fuck no they are not, not with 8TB SATA drives. The expected number of unrecoverable read errors during a rebuild is more than one: at the 10^-14 URE rate consumer drives are specced for, rebuilding from the remaining 3x8TB means reading roughly 1.9x10^14 bits. Single parity has not been recommended for SATA drives over 600 or so GB for exactly this reason; the chance of hitting a read error during rebuild gets close to certain. Stop using RAID 5.

buy 2 more hard drives and do raidz2

>ZFS can be dangerous and result in total dataloss and death.
>death.
I thought only ReiserFS was deadly.

u can do raidz2 with 4 drives

FreeBSD is probably the superior OS for ZFS.

Too bad it is deprecated now with no virtual hugs.

it's always been deprecated

nothing works on BSD because hardware manufacturers hate it

>folders
you have to go back

Depends on how much storage you're willing to give up. I run a 5 x 2TB array in RAIDZ1. Rebuilds will fucking suck, if they succeed at all, but hey: that's what backups are for. But at 4 disks you'll basically halve your available capacity in RAIDZ2. Guess it depends how much space you think you'll need.

I disagree for various reasons, mostly money. RAID5 is perfectly fine and sensible for 3 and 4 drive RAID setups. Two drives for parity doesn't make sense for 3, kind of makes sense for 4 but not really. I do prefer two parity drives but my drive count is five or above.

complete waste, do raidz1, raidz2 on 6 or more

>ZFS
>not XFS

Have you considered HAMMER or HAMMER2 OP?

Well, RAID5 is retarded, and RAID6 with 4 drives is pointless, so I'd say go for RAID10.
RAID10 has the advantage that every rebuild is a simple mirror, while RAID6 requires recomputing of parity which is both CPU-intensive and needlessly spins up the rest of the disks. With 8TB disks the rebuild times will be long as fuck so RAID10 looks like the best option.
As for redundancy, RAID10 will live through 1 disk failure for sure, and has a chance to survive 2 failed disks, which puts it right between RAID5 and 6.
All that being said, RAID is not a backup solution, so make sure you have proper backups ready in case everything goes up in flames.
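In ZFS terms a mirror rebuild is also dead simple, and the resilver only has to read the surviving half of that one mirror pair (disk names are placeholders):

zpool status tank            # identify the FAULTED/UNAVAIL disk
zpool replace tank sdb sde   # sde = the fresh drive
zpool status tank            # watch the resilver progress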

RAID 6 looks mostly good because it scales better. You can just keep piling drives into it without really worrying about running out of parity. Up to a limit of course; a 20-drive RAID6 is likely a bad idea.

RAID5 with 4 very-high-capacity drives will never be able to rebuild; I would never even consider it as an option.
3 drives is a shit choice to begin with; at that point using RAID5 might be a decent option.
More than 4 drives, split them into multiple RAID volumes.

I'm trying to build a data storage solution for my house (currently have about 6TB of data, although I don't need to back all of it up). I was thinking of buying maybe 8 4TB drives (was gonna buy whatever has the best stats according to Backblaze's quarterly report). I'd like to have as much protection as needed so that there isn't a big chance of some failure wiping out everything or a large chunk of everything, but I'd also like to maximize the amount of space I have. I have an old PC (my current PC, I'm going to be building a new desktop as well around the same time) which I will be using to hold all the drives and so on and operate as a home server. What kind of setup should I do?

raid 1
everything else is a fucking meme and will fuck you up later

This doesn't deserve its own thread.

UNPOPULAR OPINION:

Prioritize your data, i.e. no movies and pron, and just hardware-mirror NTFS drives and keep half offline.

I have 4 x 1TB online in the desktop and 4 x 1TB laptop USB externals offline in a safe. I can pack up and go overseas in an emergency and have 99% of my data with me.

Everyone's priorities are different, but for me it was portability, recoverability and filesystem interoperability, and I split the hardware risk over multiple drives... I back up every few months, but as the mirrored 3TB is archive-only, only one drive is really being written to. Good strategy. Just delete movies, TV and porn; keep music, pictures, documents and memes.

I started this strategy as a noob, and now 10 years later it makes sense with the I/O bottleneck and rebuild times of 8TB RAID drives... haven't lost any data in a decade despite two 2TB Seagates falling over a month apart.

Admittedly I am running out of space with my hi-res audio habit and need to purchase two new drives soon.

Actually that's my advice. First rule of government spending: why have one when you can have two at twice the price?

you are retarded, NTFS is a piece of shit and enjoy your corrupted data

yes it does, nobody posts in that garbage general

>using BSD cancer
nobody should ever use BSD and let them rot to death