>plex
nmaggioni.xyz
/hsg/ Home Server General
unraid is paid garbage, you're literally paying for slackware + snapraid
just go for debian or any other stable headless gnu/linux and configure snapraid to your liking
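a minimal snapraid.conf sketch to get started — the mount points and array names here are placeholders, adjust to your own layout:

```
# /etc/snapraid.conf — hypothetical two-data-disk, one-parity layout
parity /mnt/parity1/snapraid.parity

# keep multiple copies of the content file on different disks
content /var/snapraid/snapraid.content
content /mnt/data1/snapraid.content

data d1 /mnt/data1/
data d2 /mnt/data2/

exclude *.tmp
exclude lost+found/
```

then run `snapraid sync` after adding files and `snapraid scrub` periodically to verify the array.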
this
I use a Debian host with containers and VMs. ZFS-on-Linux for data and backup arrays, it's easy enough to set up manually on most Linux distros and BSD which is why I never looked at FreeNAS.
So I've been thinking about building a home server for a while now. I'd mainly be using it to store my photography work (10TB+ and growing), backups from my day job (less than a few TB), and to store and stream media, in up to 4K to at least two devices at once (a few TB again for that). I've no idea where to start when it comes to the hardware for this, so any advice would be helpful.
I currently have 20x 4TB drives in a 10 + 10 raid z2 configuration, and I am running out of space. So I want to build another server.
Does Threadripper and its motherboards really support ecc ram? I haven't been able to find clear answers. If not I'll go Intel again.
Is 11 + 11 raidz3 a fine setup? I remember reading years ago that it was bad to have zpools that did not have an even number of drives, for some reason. This would have 3 drives for redundancy per vdev, and 8 for storage. Probably 12TB drives this time.
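back-of-envelope math for that layout (vdev width and drive size taken from your post; decimal TB, before ZFS metadata/slop overhead eats into it):

```shell
#!/bin/sh
# 2x 11-wide raidz3 vdevs of 12TB drives: 3 parity + 8 data per vdev
VDEVS=2
WIDTH=11
PARITY=3
TB_PER_DRIVE=12

RAW=$(( VDEVS * WIDTH * TB_PER_DRIVE ))
USABLE=$(( VDEVS * (WIDTH - PARITY) * TB_PER_DRIVE ))

echo "raw: ${RAW} TB, usable before overhead: ${USABLE} TB"
# -> raw: 264 TB, usable before overhead: 192 TB
```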
I want a NAS. Should I go appliance or build my own? If so, what's the best PC case for maximum HDD capacity?
>Setup openstack and report back.
Challenge accepted, what would you want to do with it?
All the Ryzens support ECC, but the mobo maker has to enable it. Only a few makers bothered with it on AM4 boards; I imagine they'd be more likely to on TR4. Then again all the TR4 mobos seem to be gamer-targeted, I don't think anyone's done a "workstation" one like you see on the low end of the Xeon market. I'd find a board you like and then go dig up its manual from the maker's website, and/or any in-depth reviews with BIOS screenshots that you can find.
The stuff about needing a certain number of drives in a vdev is voodoo. Ideally you do want your vdevs to be the same width, though, since ZFS will stripe across all of them. Remember that parity raid gives you the IOPS of one drive per vdev, no matter how wide it is.
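a rough way to see the IOPS point — random IOPS scale with vdev count, not drive count (the 150 IOPS/drive figure is an assumed typical number for a 7200rpm HDD, not from the thread):

```shell
#!/bin/sh
# parity raid ~ one drive's worth of random IOPS per vdev, regardless of width
DRIVE_IOPS=150   # assumed figure for a 7200rpm HDD

for VDEVS in 1 2 4; do
  echo "${VDEVS} vdev(s): ~$(( VDEVS * DRIVE_IOPS )) random IOPS"
done
```

so splitting 20 drives into more, narrower vdevs trades some capacity to parity for better random I/O.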
rule of thumb: build your own if you want (or anticipate wanting) more than four drives for any reason.
no ryzen chip lacks it as far as i know, though only (i think) epyc are actually tested, validated and guaranteed by amd to fully support it