NAS, storage, etc.

I'm a data hoarder and I need some tips.
- Currently I have four 4TB WD Reds in an Ubuntu mdadm RAID 10, giving me 8TB of storage space.
- I know RAID isn't a backup, and I have external HDDs in place to serve as a real backup.
- I've already filled up the 8TB completely and I'm looking to expand my NAS, which in mdadm's case means I have to rebuild it entirely, which I'm largely OK with.
- I want the rebuild to be virtually future-proof in terms of storage - I don't want to have to expand it again - and cost is an issue for me.
- I'm also concerned that mdadm may not be the best choice. I'm using a shitty WD Blue that's probably near death as my OS drive for Ubuntu, and like 8GB of crappy non-ECC RAM.

Additional info:
The drives are housed in an old Antec 900 chassis with an M4A79XTD EVO mobo, which has only 7 SATA ports.

cont'd


Questions:
- What is the best free option for software RAID? I'm assuming mdadm is one of the top choices, since I've heard that even if the OS drive dies, it's just a matter of plugging the RAID drives into another computer with Ubuntu installed and it should treat the array as intact (see the sketch after this list). If there's a better choice (that's also economical) I'd like to know.
- Is RAID10 still the best in terms of balance between general speed/redundancy?
- Do I need ECC RAM or cache drives? Keep in mind this is mdadm.
- I'm thinking of adding four more 4TB WD Reds to increase the storage to 16TB, but even then I'm concerned it might not be enough. I filled the 8TB readily enough and already have 4TB of additional content, which would leave only 4TB free. But going for 32TB will be much more expensive, I think, unless I go for larger drives? How much space do you guys have, and how do you deal with archiving the ever-growing amount of content?
- Adding the drives, I run into the issue of not having enough hard drive bays (the Antec 900 has 9 bays, so with 1 reserved for the OS drive, the remaining 8 should be JUST enough), but I'll have to remove my optical drives etc. I'll also have to add a PCI-E SATA controller card for more ports, since there are only 7 on my mobo. I'm wondering if it would be better to just buy a new motherboard and chassis for the specific purpose of the NAS, but very few consumer-grade motherboards have that many SATA ports anyway (Supermicro ordering is too expensive for me and probably not an option), and the same goes for the chassis... barring server form factor chassis, most consumer-grade NAS enclosures and PC towers don't have enough hard drive bays unless they're super expensive.
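
For reference, my understanding of what that dead-OS-drive recovery looks like (a sketch; the device names and mount point are guesses on my part):

# mdadm scans the drives' superblocks and reassembles the array
sudo mdadm --assemble --scan
# or point it at the member disks explicitly
sudo mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mount /dev/md0 /mnt/nas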


cont'd


Anyway, I'm looking for an overall solution, so if you guys have a better idea of how I can get a 16TB+ RAID in a not-too-expensive manner, I'd like to hear it.
In terms of:
1. What software RAID to use
2. What RAID type to use
3. Size/number of drives
4. SATA port solutions and enclosure/hard drive bay solutions.

Thanks in advance.


Nice blogpost. If you're a hoarder at the beginning, why do you seem to know nothing by the end of it?

Well, mdadm was my choice when I researched this years ago, but things can change. I haven't tried ZFS or FreeNAS like others have, but I'd like the input of people who have tried all of them.
Second, just because I've done things a certain way with my build doesn't mean there isn't a simpler overall solution, which is why I'm asking.

I'm a BDR engineer. Firstly, how do you perform backups? Are they automated? Any offsite? There's definitely a better way to do what you need, by the sounds of things.

Also, what sort of data are you storing, and how frequently is it used? Do you have a rough budget, and is power a concern?

Your three main choices are MD, ZFS, and Btrfs. The main deficit of MD RAID is that it doesn't give you checksums or self-healing like ZFS or Btrfs do. ECC RAM is nice if you can get it, but if you can't, don't worry about it. (And disregard the ZFS people being autistic about it).
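
If you want to see what that self-healing buys you, it's the periodic scrub that exercises it. A rough sketch (pool names and mount points are made up):

# Btrfs: read everything, verify checksums, repair from the good copy
sudo btrfs scrub start /mnt/pool
sudo btrfs scrub status /mnt/pool
# ZFS equivalent
sudo zpool scrub tank
zpool status tank

MD has a check too (echo check > /sys/block/md0/md/sync_action), but with no checksums it can only tell you that the mirror halves differ, not which half is right.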

RAID 10 is generally the fastest, but as a home user you probably don't care - any RAID layout on vaguely modern hardware will saturate gigabit ethernet (gigabit tops out around 125MB/s raw, ~112MB/s in practice, and even a single modern HDD manages ~150MB/s sequential). And as a home user, the miserable 50% space efficiency is probably a lot more harmful to your budget than it would be to some sysadmin running a datacenter. RAID 10 also isn't the safest - it'll survive the loss of any one drive, but dies if both sides of one mirror fall down at once. The plus side is that it's much faster to rebuild than parity RAID. Again, all this is a bit academic for home use.

If you're a hoarder then you're gonna have to drop the "I don't want to have to expand again" idea, unless you're willing to drop serious money on tremendous overkill. If you're on anything even vaguely resembling a budget, you will have to expand again a year or three down the line, and you should plan for it. Mdadm and Btrfs both make this straightforward; the difficulty of expanding is the chief problem with ZFS for home use.
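
For a sense of how painless that is on Btrfs, it's two commands on a live filesystem (a sketch; device name and mount point assumed):

# add the new disk, then rebalance existing data across all members
sudo btrfs device add /dev/sdf /mnt/pool
sudo btrfs balance start /mnt/pool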

Are there any good mATX cases that support 4+ HDDs and would fit in one of those cube shelves?
The Phenom M is almost perfect but has some dumbshit cooling.
The Lian Li PC-V354B has a better layout, but the drive cages are unacceptably thin/flimsy.
I got nothing else right now.

I do backups manually to external hard drives.
I'm storing a lot of movies, music, TV shows, photos, e-books, podcasts, etc. Nothing really vital or work-related. It doubles as a home media server, so I guess I use it daily.

I'm not too concerned about backing up specifically, just about whether I'm doing something wrong in my attempt to expand the capacity of / rebuild the mdadm RAID10.
For example, could I just buy a bunch of SATA cables, SATA power cables, and a SATA expansion card, plug everything in, and call it a day? Will there be any conflict between the onboard SATA ports and the expansion card's ports?
Should I just buy a new enclosure and mobo that will support more drives out of the box, and if so, what consumer-grade models are there?
Should I get more 4TBs, or just start over with larger drives so I won't need as many?
Assuming I just get more drives and end up with 16TB, I still only have 4GB of non-ECC RAM, so I'm assuming I'll need more... but I'm not sure if it's necessary.

You don't have to answer any of these questions specifically. I'm just wondering if it sounds like I'm doing something wrong. I have 8TB full of content and already at least 4TB of additional content, meaning 16TB might not be enough in the near future. Am I just storing too much unnecessary stuff? Should I just try and cut down and maintain below 16TB if I expand it to that? Should I just go pure JBOD?

Thanks for your post. The thing I like about mdadm (ext4) is that with Samba installed I can just manually browse/drag and drop files between my Windows computer (NTFS) and the NAS through Windows Explorer. I'm not sure if ZFS and the other choices let you do that.
I might stick with mdadm for now in that case... I'm not sure about the checksum and self-healing stuff; I'll have to read up on it.

Regarding RAID type... it seems like every RAID type is crap. Yeah, RAID10 has the disadvantages you state (halves storage, worst case tolerates only 1 disk failure), but what put me off RAID6 is that I heard rebuilds take so long and are so taxing on the drives that, even though you have 2 disks of parity, it's highly likely additional disk failures will occur during the rebuild, nullifying the extra-failure advantage. I'm not sure how correct that is.

Also... expanding an mdadm RAID10 is not that straightforward, I think. I'm not sure how correct this is, but people say there's no way to grow an mdadm RAID10 - meaning if I want to expand the capacity I'd have to back everything up, delete the array, and rebuild it with the extra drives. That's not very flexible, and it's another reason I want to ask what setups people around here have.

What are the measurements of this "cube shelf"?

>Currently I have four 4TB WD Reds in an ubuntu mdadm RAID 10 and external HDD as second backup
Beautiful. Man this is my dream setup. Like I've been dreaming about buying and setting up exactly this for like 2 years now. I can't help or answer your questions but I'm jealous.

>mdadm
>dream setup

~33cm, but depth isn't restricted.

I cannibalized my old desktop PC to assemble it, so the most expensive parts were the hard drives, and they're not so expensive right now.

And I wanted to add: yeah, exceeding the pitiful 8TB (closer to 7.4TB usable) of storage is exactly the problem I'm facing right now.
It seems there's no elegant and cheap solution if you want a single storage unit with many drives... even most of the popular hot-swap enclosures that aren't loud-as-fuck server racks don't exceed 8 bays, and even then you'd have to worry about a motherboard that supports that many SATA ports, or else buy an expansion card, which adds another point of failure and another piece of hardware that might eventually become hard to find/expensive to replace...

>Using WD Red drives

You do realize the point of Red drives is that they're designed to give up on error recovery and report failure quickly the moment anything goes wrong with them: "Time Limited Error Recovery".

They make sense for broken hardware RAID cards that can't handle a drive timing out, but they are generally not suitable for anything you care about.

OK, then different drives. Right now I have some 1TB Intel SSDs, but nothing like this guy:

Samba is just something that takes files and serves them over the network using the protocol Windows speaks (Server Message Block, hence the name). Samba doesn't care what those files live on: ext4 on top of MD, ext4 on top of LVM on top of MD, XFS on top of MD, Btrfs, Btrfs on top of dm-crypt loopback devices, ZFS, ext4 on a zvol pseudo-block device exported from an underlying ZFS filesystem...
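
Which is why the share definition looks identical no matter what's underneath. A minimal sketch (the share name, path, and user are made up):

# /etc/samba/smb.conf
[nas]
    path = /mnt/pool/share
    read only = no
    valid users = youruser

# then reload it: sudo systemctl restart smbd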

The risk of a drive dying during a rebuild is a main reason why people have been using RAID 5 (Z1 in ZFS-land) less, in favor of RAID 6, which hurts space efficiency but does keep the array redundant during a rebuild. How much of a problem this is depends at least in part on what you're doing - if the array is busy with tons of user I/O, you care very much about reducing the amount of time rebuilds take so that they're less intensive, less disruptive, and get you back to full redundancy/performance faster. But that's probably not your situation.
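
Back-of-envelope for why people worry (spec-sheet numbers, so take with salt): consumer drives are rated around one unrecoverable read error per 10^14 bits. Rebuilding a failed drive in a 4x4TB RAID 5 means reading the 3 survivors end to end: 12TB = 9.6x10^13 bits, the same order of magnitude as 10^14. So on paper you have decent odds of hitting a URE mid-rebuild with no parity left to cover it. Real drives usually beat the spec, but that's the argument for the second parity disk.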

You most definitely can grow (and shrink) MD RAID devices. The problem is that whereas Btrfs and ZFS roll the RAID layer, the volume manager, and the filesystem all into one, MD doesn't. It only does RAID, and knows nothing about what might be on that RAID. You can expand ext4 and several other filesystems though, and of course LVM can handle it. But they are extra steps that you have to do, and do carefully and in the right order.
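
For example, growing a 4-disk RAID10 to 6 and then the ext4 on top (a sketch; needs a reasonably recent kernel/mdadm for RAID10 reshape, and the device names are made up):

# add two new members as spares, then reshape across them
sudo mdadm /dev/md0 --add /dev/sde /dev/sdf
sudo mdadm --grow /dev/md0 --raid-devices=6
# watch progress in /proc/mdstat; once the reshape finishes,
# grow the filesystem to fill the new space
sudo resize2fs /dev/md0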

>another piece of hardware that might eventually become hard to find/expensive to replace...
Not a concern if you're using software RAID, since anything that can talk to the disks will work. I doubt plain SATA PCI-E cards will become hard to find any time soon.

>8tb
>data hoarder
I'm running a Debian machine with SnapRAID as my backup: 16x 8TB Seagate IronWolf with 3 parity drives, weekly sync, and mergerfs on top.
I also have 2x 6TB HGST Ultrastar in RAID 1 that I run virtual machines off of.
My setup is some Supermicro mobo, 96GB of ECC RAM, a Xeon E5-2690 v2, and two SAS RAID cards flashed to HBA mode for my SnapRAID disks; can't remember which ones exactly.
I like SnapRAID because it allows me to add drives to my array freely.
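
For anyone curious, the whole SnapRAID config is just a text file along these lines (a sketch; the paths and disk names are made up):

# /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity
3-parity /mnt/parity3/snapraid.3-parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/

The weekly job is just "snapraid sync" (plus the occasional "snapraid scrub"), and adding a drive is one new "data" line.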


Anyway, remember 3-2-1 and get a third backup (another HDD if you want) somewhere offsite (encrypt it first) - give it to a family member or something. If you trust cloud services, that could work too, but I don't.
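
For the "encrypt it first" part, a LUKS-formatted external drive plus rsync is the simple version (a sketch; /dev/sdX is a placeholder, check it with lsblk first):

# one-time setup of the offsite drive
sudo cryptsetup luksFormat /dev/sdX
sudo cryptsetup open /dev/sdX offsite
sudo mkfs.ext4 /dev/mapper/offsite
# each backup run: unlock, mount, sync, lock
sudo cryptsetup open /dev/sdX offsite
sudo mount /dev/mapper/offsite /mnt/offsite
rsync -a --delete /mnt/nas/ /mnt/offsite/
sudo umount /mnt/offsite
sudo cryptsetup close offsite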

all porn or just everything?

Who the fuck is actually downloading and saving (non-child) porn in 2019?

I like to organize it by the location where it was shot, so I need to build up a large enough corpus so that I can train a system to recognize places from the background.

>I like to organize all my porn videos by location so I can train my computer to recognize these places from the background
wat the fuck

damn anon, that's nifty

Bixnood retard, go away. Do something useful instead of bragging for e-peen points on 4channel.
I can't justify giving away half the disks, so I went with RAID 6. It can survive a failure of two drives anywhere in the array.

Anime, hentai, JAV and other porn, TV, movies, music, LNs, VNs, eroge, other games, software, scraped websites, books, and various other things.

Use RAID6 for more drives. Consider ZFS if 12+ is on the table, which it likely isn't, because that moves into the realm of specialized hardware (motherboards mostly have 6-8 SATA ports).
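
Creating one is basically a one-liner anyway (a sketch; device names and drive count are examples):

# 6-drive RAID 6: capacity of 4 drives, survives any 2 simultaneous failures
sudo mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
sudo mkfs.ext4 /dev/md0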

As for RAID10's speed benefit: don't care. It's a NAS. I'm not going to do live video editing directly on files from it, and I don't have 10 gigabit ethernet anyway.

You're right about mdadm being a huge advantage in terms of hardware failure. If that expensive specialized hardware RAID controller dies after 6 years and is no longer available, you're screwed. Same applies to the PSU in those commercially sold "NAS boxes". If a capacitor on the motherboard you're using for your mdadm setup dies, then any other standard computer capable of running GNU/Linux with enough SATA ports will do, and you're fine.

As for your question about extending your setup down the line: just build another NAS when your current one has 8 drives (or in your case 7, due to the mobo limit). One of my NAS boxes is an Athlon II X3 with 4GB of DDR3. It's fine. I turn it on, it shows up on the network, I copy files, I turn it off. The cost would be a new PSU and drives. You can get an old motherboard/CPU/RAM that's good enough for a NAS really cheap. Case too, if you go for an old one with 80mm fans and enough drive cages. It doesn't have to be "modern"; you're not going to have a powerful GPU in there.

On the ECC thing, I personally don't care. If you're using mdadm RAID5 for up to 4 HDDs and RAID6 for 5+, you don't really need ECC RAM. If you're running 20+ HDDs with ZFS, then that's a slightly different story.

If I can get cheap old used enterprise-class hardware, and one box has ECC RAM and another doesn't and they're about the same price, then sure, I'll get the one with ECC RAM. It's nice to have. But I see zero reason to buy expensive new hardware and expensive new ECC RAM to run a home NAS for my movie collection; that's just silly.