Home file server

I want to get a home file server to hoard data (mostly media and books) and I'm considering what I should use. I've been thinking about FreeNAS, but apparently ZFS corrupts your data unless you are using server-grade RAM, and that limits you to a server-grade motherboard, and that kind of setup gets expensive.
I just want something small and quiet that consumes relatively little electricity.

What is your setup and why?

Attached: freenas.png (644x355, 9K)

The chance of data corruption because of non-ECC RAM is present on any filesystem; that's why you have to scrub them once in a while if you can. The nice thing about COW filesystems is that you can detect and repair such corruption, which is already a one-in-a-billion chance (unless your RAM is actively trying to fuck with you).
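For reference, a scrub is a single command; "tank" below is just an example pool name, substitute your own:

# kick off a scrub (run it from cron every few weeks)
zpool scrub tank
# check progress and see whether any checksum errors were found and repaired
zpool status -v tank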

Depending on your budget, the chinks on AliExpress have small NAS motherboards with ~14 SATA ports; if you want something more upgradable, just build a regular PC with PCI-E SAS cards and SAS-to-SATA adapters.

Also, depending on your use, the 1GB of RAM per TB rule of thumb might not apply, especially if you add an SSD as cache.
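If you do go the SSD cache route, attaching it to an existing pool is roughly this (pool and device names are placeholders, double-check which disk is the SSD first):

# add an SSD as an L2ARC read cache to the pool "tank"
zpool add tank cache /dev/sdX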

I can post my own setup if you are interested.

ZFS simply tells you about the corruption that would have happened on any other filesystem, and it will repair it automatically if you have it set up for that. Use FreeNAS. I typically recommend Ubuntu with ZFS to people who actually get it, but if you start with FreeNAS you can create your pools there and migrate them to other systems later via export/import.
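The migration itself is just an export on the old box and an import on the new one, something along these lines ("tank" again being a placeholder pool name):

# on the FreeNAS box: cleanly detach the pool
zpool export tank
# move the disks, then on the new system: scan for and import it
zpool import tank
# if the pool was last used by a different system, a force may be needed
zpool import -f tank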

>to scrub them once in a while

What exactly do you mean by "scrubbing"?


>if you want something more upgradable

Not sure what exactly I'd want to upgrade except RAM and more hard drives

>I can post my own setup if you are interested.

Please do

Like everyone else is saying, use FreeNAS unless you're comfortable enough with ZFS to use it on a plain Linux/FreeBSD install.

>unless you're comfortable enough with ZFS to use it on a plain Linux/FreeBSD install.

I'm comfortable with Linux, but FreeNAS just comes with all the convenient stuff like a web interface out of the box. I don't want to spend too much time configuring everything from scratch.

>What is your setup and why?
An Opteron 4000 with some server-grade RAM and a server motherboard in a server chassis.

>I don't want to spend too much time configuring everything from scratch.
Then just use Linux + ZFS; if you actually do know Linux and aren't just a webdev, then you'll find ZFS absolutely amazing. Ubuntu supports it out of the box via "apt install zfsutils-linux". Don't boot from ZFS until root-on-ZFS is properly supported, though; performance for root devices is horrid unless you are a ZFS dev.
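Roughly, the whole "from scratch" part on Ubuntu looks like this; disk paths are placeholders (use /dev/disk/by-id/ names for a real pool):

# install the ZFS userland tools
sudo apt install zfsutils-linux
# create a RAIDZ1 pool named "tank" out of four disks
sudo zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# carve out a dataset for media
sudo zfs create tank/media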

>>I don't want to spend too much time configuring everything from scratch.
>Then just [do it from scratch]
Oof.

Honestly though, I wish I'd done this. FreeNAS is kinda vague when it comes to granularity, and it gets annoying. Still, having a proper interface is nice, and it constrains me from doing stupid shit that might otherwise harm my data.

I'm also terrified to update FreeNAS, since so many versions lost their damn minds on my platform.

Do an 8 drive RAID 0.

Attached: raid.jpg (4011x2100, 3.73M)

>installing zfsutils on ubuntu is doing it from scratch
like.
just.
Have you ever actually used linux or do you just use """"instances""""

That's a joke, right?

Yes.

Scrubbing is the equivalent of chkdsk on Windows or fsck on Linux, but with better chances of recovering your data.

One of the few possible upgrades you might want over a dedicated NAS motherboard is a faster NIC (10Gb/100Gb), if your network can support it; that's it.

My setup is this:
MB: HCIPC M42S-7 HCM19NVR3 (chinkboard made for NAS)
RAM: Kllisre DDR3L 8GB 1600 SODIMM (more chinkshit)
Cache SSD: KingSpec mSATA SSD 64GB (even more chinkshit)
PSU: EVGA 500W 80 Plus
Disks: 5*2TB Toshiba + 4*4TB Seagate + 500GB Hitachi (for jails and the system dataset; FreeNAS likes to wear out flash drives, so I have to install those on spinning rust)
Case: Some /diy/ shit I made with rivets and aluminum.

The motherboard comes with its own CPU, a Celeron J1900, and it also has two 1Gbps ports to do link aggregation with. It works well as a NAS, but don't expect to be able to do anything intensive in the jails like hosting VMs and shit; the official plugins run well enough though.
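On FreeNAS you'd set the lagg up through the web UI, but for reference, on plain FreeBSD the equivalent is a few rc.conf lines, roughly like this (igb0/igb1 are example interface names, and your switch has to support LACP):

# /etc/rc.conf - LACP link aggregation over two gigabit ports
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 DHCP"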

>FreeNAS is kinda vague when it comes to granularity, and it gets annoying


Any specific examples?

Equivalent is a strong word to use when comparing COW scrubbing to chkdsk.

Still one of my favorite threads.

Attached: bane.jpg (805x492, 41K)

Both check for data integrity and try to repair damaged blocks; Windows is just really shitty at it, both the tools and the filesystems.

One walks through metadata to check for consistency;
the other walks through the data itself to verify checksums.
Only one checks for that other strong word people throw around, "integrity".

I'm thinking of getting some small motherboard (microATX or something like that) with a built-in CPU, 16 gigs of RAM, one SSD for the OS, and a pair of WD drives for data.

I just want a few terabytes of storage easily accessible over the network, nothing too demanding.

It's mostly the ambiguity between volumes, vdevs, and zpools. Instead of building a vdev and appending it to a pool, you extend a volume and it does this in the background. It works, but it absolutely isn't clear whether it will make a second RAIDZ and append it, or expand the existing one by [x] many disks.

For pool growth you probably should always be competent enough to know about using zpool add; we need to start telling people this now that raidz vdev growth has been introduced.
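For anyone following along, growing a pool by appending a second vdev is this kind of thing (pool and disk names are examples):

# preview what the new layout would look like without changing anything
zpool add -n tank raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh
# actually add a second RAIDZ1 vdev of four disks to the pool "tank"
zpool add tank raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh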

Yeah, but I'm talking specific to FreeNAS. At least my version. That stuff is all tucked way behind a curtain. Maybe it's gotten better since 9.10 though.

Is it relevant if I just want to have a network file storage without any specific demands?

When you make a zpool, you intend for it to never die, this means eventually you will want it grown. I've destroyed several zpools because I rescue zfs snaps as a side job and let me tell you, some people have actually asked for a few minutes alone with the server. This is all to say, you will expect to grow it, when you do this info will be highly relevant.

I probably should read some manual on ZFS, because I mostly use ext4 in my everyday computing and it just werks; whenever ZFS is discussed it feels like rocket science.

You can SSH in as root on FreeNAS and use zpool add; using the web interface is optional. There is some stuff it won't let you do though, like using the boot pool for anything other than booting.
The official documentation at Oracle is very good.

Only if you plan on expanding it at some point. I bought 5 2TB HDDs and added another 5, and I plan on adding another stack of 5 at some point. How that gets added is VERY important to data integrity and reliability. A vdev of 15 drives would be a terrible idea in my setup.

What ZFS does is rocket science;
using ZFS most certainly is not, but you need to unlearn doing everything by hand at the beginning and follow simple, well-defined rules. Some things at creation time are important, as with all filesystems, but ZFS has so many features and functions that you have to know exactly what you want the pool's ashift, for example, to be, and whether your platform supports leaving it blank, which FreeNAS does. That's why people love FreeNAS: it hides the unlearning from you. But really, the ZFSonLinux git wiki is everything anyone ever needs to know about USING ZFS.
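The ashift thing, for the record, is just a create-time property on the command line; the value and layout below are examples, not universal advice:

# ashift=12 means 4K physical sectors, the usual safe choice for modern drives
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb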

You have a link to one of these motherboards?

See the board model I posted above; google it and it will take you straight to the one I have.

I set up a Synology with btrfs on mirrored 10TB drives for auto backups, which also sync to Backblaze, and for hosting all my PDFs and movies. A bit of fucking around setting up borg on it, but otherwise it's been plain sailing since.
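If anyone wants the borg part, the gist is a one-time repo init plus a cron'd create/prune; the paths here are made up:

# one-time: create an encrypted repository on the NAS
borg init --encryption=repokey /volume1/backups/borg
# nightly: archive the documents tree, then thin out old archives
borg create --stats /volume1/backups/borg::docs-{now} /home/anon/documents
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /volume1/backups/borg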

Attached: IMG_20190904_210545.jpg (3024x4032, 2.85M)

I have an RPi 4 with two external WD My Book 8TB drives, samba for fileshare. Easy setup, and it just werks for my purpose.
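For the curious, the Samba side is a short stanza in /etc/samba/smb.conf plus one user command; the share name, path, and user are examples:

# /etc/samba/smb.conf - minimal share of the external drives
[storage]
   path = /mnt/storage
   read only = no
   valid users = pi

# set a Samba password for the user and restart the service
sudo smbpasswd -a pi
sudo systemctl restart smbd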

sounds kind of unreliable

Why so? It's an ARM SBC designed to run 24/7 with Raspbian, and the drives inside those enclosures are WD Reds, which are also designed for 24/7 operation. It's been running for only like 2 months, but so far not even a hitch. I'll admit I did not build this setup for reliability, more out of convenience: the setup was easy and the drives were cheap (~$150 for 8TB).

Why don't you use a real server?

Why would I when my setup does everything I want?

NAS RAID 5 10 TERRA
and go fuck yourself

mergerfs + snapraid on Debian. I will be expanding storage very soon, and this setup makes it very easy with no major reconfiguration or rebuilding. It hosts some VMs and a couple of services in containers (qbittorrent, sonarr, emby, DNS + reverse proxy, etc).
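In case anyone wants to copy this, the core of it is one fstab line for the mergerfs pool and a small snapraid.conf; mount points and disk counts below are placeholders:

# /etc/fstab - pool three data disks into one mount with mergerfs
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/storage  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0  0

# /etc/snapraid.conf - one parity disk protecting the data disks
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3

# then run "snapraid sync" (e.g. nightly from cron) to update parity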

>samba for fileshare
Is it easy with a phone?

Sure, I use the X-plore file manager and it lets me connect without any problems; streaming video or anything else you'd expect out of a NAS works just fine.

I bought a Synology box. I tried FreeNAS and OpenFiler, and they were fun to build, but all have drawbacks. So far nothing beats the convenience and features of a Synology box.

Exactly my setup, works great

nginx with autoindex
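For reference, that's about six lines of config; the root path is an example:

# nginx: serve a plain directory listing over HTTP
server {
    listen 80;
    root /srv/files;
    location / {
        autoindex on;
    }
}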

>Hurrr why don't you replace this functional SOC with an 8 watt draw with a 200 watt loud as fuck server rack for zero change in performance

Not that user, but I live in an apartment, and I'm going to set up a similar thing (but for cold storage).
I have a shitbox external 8TB Seagate and a RasPi. I'm gonna stick Raspbian on it, set WoL and rsync to run maybe once a week (it does incremental transfers), and stick all my dotfiles and media and shit on it as a backup; something like the sketch at the end of this post. It does literally everything it needs to do, is very inexpensive, and doesn't sound like a jet engine taking off.
People have very different use cases; acknowledge that and don't respond with "why didn't you do this completely different, totally unrelated thing" every single time.
>be me
>setting up some service
>ask any question ever
>"WhY AReN't yoU UsIng DOCkEr"

fucking right though, the docker fags do this every single time

my_niggy.bat

Can anyone explain why NAS is so expensive? Currently I am putting internal hard drives into one of those 2-bay USB3 docks that you just slide an internal HD into. I use a cloning program to duplicate each one. They are used as file backup.

The 2-bay USB3 interface costs $35 but a 4-bay Synology NAS costs $500? I am a nerd but I'm not sure I want to fuck around with trying to set up my data backup software myself using OpenNAS or whatever is mentioned here.

I just don't understand why multi-bay NAS enclosures are so expensive, and I know a real NAS has its own RAM/etc (Synology)... but it would just be nice to have a NAS, and it seems like I could just build another PC for the same price... how do I learn more about the options I have? Thank you for your help.

Attached: 1565704218370.jpg (587x440, 75K)