/hsg/ - Home server general

>Hosting your own DOOM server edition!

>Quick Questions, Quick Replies

>Why would I want a NAS/Homeserver?
If you have to ask why, you don't need it.

>I want a NAS/HTPC/Plex what should I get?
RPi3 or Odroid XU4/HC1. The higher-end Odroid models have USB 3, with the USB bus separated from the Ethernet one.

>B-But muh ARM
Then check out onboard x86 boards like the J4105B-ITX, J4205B-ITX or J4205-ITX. All of them have SATA and USB 3.

>What's the best [software] for doing [ask]?
Specify your question and elaborate. If you want help, put in some effort from your side.

>Which disk is better for my homeserver?
Seems like WD Greens aren't sold nowadays, so WD Reds are okay for the price if you want "NAS drives". Otherwise HGST and Toshiba are your friends.

>FAQ & Tips / Chat

Attached: Screenshot_2018-08-28_12-16-01.png (1501x902, 1.38M)

Other urls found in this thread:

tp-link.com/us/products/details/cat-5582_TL-SG108.html
yubico.com/product/yubikey-neo/

FUCK. Why is FreeNAS so difficult? The SMB server keeps timing out on my FireTV, and I can't install any plugins because of a "hostname nor servname provided" error. Should I just reinstall Windows 10?

Use Arch.

I'm a brainlet when it comes to this shit, user. I need a GUI.

you can do it bro. it's not that hard. just follow the arch installation guide wiki. try it!

I thought configuring a network would be easier without a GUI.

If I have the time, energy, and motivation to learn Arch networking, I can learn FreeNAS in half the time. I need something that just werks.

yo dawgs what is a default route?

Is a server based on one of them Atom boards good enough for web dev stuff?
Thinking like a J5005, or maybe one of those 8th-series i3 things for the RAM limit?

consider OpenMediaVault if you want something similar to FreeNAS but simpler.

It's where your router sends traffic when it doesn't know where else to send it.
You generally point it at the next "highest" router in the hierarchy, be it closer to your core or ISP.
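On Linux you can look at / set it with iproute2, something like (gateway and interface made up):
ip route show default
ip route add default via 192.168.1.1 dev eth0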

>Is a server based on one of them Atom boards good enough for web dev stuff?
For most web dev stuff it's enough, but it's always hard to tell what huge bloated thing you MIGHT (but won't necessarily) want to use.

Yep, one of the current AsRock Intel onboard machines is probably what I'd suggest for that.

Or if you're gonna want the option to launch 20+ containers of your web app, maybe even get a low-end Ryzen.

Well, I only started learning web dev about 4 weeks ago.
Doing a refresher course on HTML5/CSS stuff first, then it's either JavaScript or PHP/MySQL.
Don't really want to clutter my main rig with work stuff, so I'd want to make a cheap shitbox that has all the applications and server stuff needed for webdev, and remote into it from the main PC.

i want a riscv homeserver to install fuchsia os on

Attached: zoomer.gif (652x562, 626K)

I pretty much think you could just run a VM or container on the main machine.

But having a lower power machine would work for this too, might as well keep the main rig entirely turned off and save power when you web dev and torrent. Also might help you move stuff from what I assume to be a less trustworthy Windows machine to a Linux / BSD one.

If you have an old laptop laying around I would recommend using that since it probably won't cost you much or anything, and it doesn't use much power. That way you can save more money to build a powerful server when you need one.

>tfw my server is finally up to par to run FreeNAS
>tfw I finally admit I'm just scared of change, which is why I haven't made the switch from windows 7

I don't want to fuck anything up. I have Windows-based mirror arrays, Windows remote desktop, and use Windows automated backup extensively, and I don't know if I'll be capable of setting everything up properly in FreeNAS.

Attached: 1527974946496m.jpg (1024x576, 51K)

Don't do it, ebinanon. Don't mess up a good thing.

>That way you can save more money to build a powerful server when you need one.
Saving money isn't too much of an issue for me. I'm just not too sure what kind of rig I would need for web dev stuff.
Lots of RAM or cores?

Yeah, but everyone swears their file system is a must-have...

Maybe I'll install it on a throwaway machine and try setting up just a single drive before I dive in head first with setting up an mdadm array and automated rsync backups. I really don't feel like losing terabytes of data.

Why not use Plex or Emby? Makes accessing media easier for brainlets.

Attached: ca.png (586x264, 12K)

I can't install Plex because I keep getting urlopenerror 8.

I've had amazing luck recovering from multi-disk failures in a 3-drive array with FreeNAS. It detected them soon enough to replace and rebuild while only losing 2 files I could live without. The last Windows server that happened to lost everything.

Attached: ca.png (1175x208, 31K)

> Saving money isn't too much of an issue for me.
Unlike the other user, I'd personally not suggest saving up for a ridiculously powerful & surely more power-consuming server if you don't need it.

People always forget how powerful even an Atom or smartphone processor already is if you throw it at the "boring" server tasks like downloading torrents or generating websites or managing databases...

>Lots of ram or cores ?
Neither, in most cases. Only really fat deployments (e.g. a maximum-size GitLab) need like 1-2GB RAM or more. Other stuff is fine with 256 or 512MB RAM for the web server AND the OS.

Basically, again, you could run this stuff in a VM on an ordinary home machine as well; I'd mainly recommend a separate box to get away from Windows, or to have a machine that can run 24/7 without the power consumption of some fatter device.

That's not very descriptive user. If you're wanting help we need deets. Plex will make your life easier.

Docker on a raspi or old laptop is a perfect example of this. My entire docker swarm uses 2G of RAM with a webserver, Elasticsearch, a SQL db, a Telegraf server and Pi-hole. That's being generous with the RAM too.
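If anyone wants to poke at something similar, a rough sketch with swarm memory limits (service / image names as examples):
docker swarm init
docker service create --name pihole --reserve-memory 64m --limit-memory 256m pihole/pihole
docker stats --no-stream    # see what each container actually eats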

FreeNAS may actually have noticeably higher hardware requirements / worse performance, because ZFS RAIDZ bottlenecks and/or because you enable any of the bloat features on ZFS.

I'd recommend Linux mdadm RAID5/6 and borgbackup either way.
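A minimal sketch of that combo, assuming 4 disks and made-up device / path names, double-check before running:
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[bcde]
mkfs.ext4 /dev/md0
borg init --encryption=repokey /mnt/backup/repo
borg create /mnt/backup/repo::{now} /mnt/array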

Yep. Even RPi are powerful enough for most of this.

BTW, have you ever tried rkt on the RPi? I was kinda curious how it performs on ARM.

Well, I'd like a dedicated machine to work on for webdev stuff rather than a VMware image.
Those NUC-type things don't cost that much and seemed kind of nice for simple webdev things; I just didn't know if they could support enough RAM or had enough cores to work properly during web dev stuff.

ZFS will use as much RAM as it has available, but it doesn't actually need all that much of it. It'll live pretty happily in 4GB or so. Unless you turn on dedup. Don't turn on dedup.
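If you want to rein the ARC in anyway, you can cap it (ZFS on Linux, pool name made up):
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max    # 4GB cap
zfs get dedup tank    # confirm dedup is off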

>mdadm RAID5/6
no bit-rot protection

I've thankfully never had an array fail on me. But yeah, I've heard Windows isn't always the greatest.
Is ZFS that bad? I transfer all the movies I rip and encode from my main machine over the network to the server, so usually about 100GB worth of stuff at a clip, as my physical media collection is large. Is ZFS prone to super slow sustained speeds?
>Raid 5/6
Eh. I'd prefer RAID 1 with a dedicated backup. Having parity is nice and all, but I know from experience that RAID 5 is god-awfully slow for sustained writes (usually lucky to get 35MB/s with newer-model 1TB drives) and the rebuild time is very long if the disks are more than 2TB.

And I was considering just using Ubuntu or some other Linux and doing what you said minus the RAID 5 or 6, but is that enough for data integrity? Is ZFS a MUST HAVE? Or is ext4 enough? Is NTFS just that awful?

I'm just gonna use Libreelec. The PC is going to be right next to the TV anyway.

My server that I was planning to run FreeNAS on has a Haswell Pentium dual core and 16GB of DDR3-1600 ECC on a Supermicro C224 board. I also have an LSI HBA in IT mode for use with JBOD. How badly will I suffer from speed drops during sustained reads/writes? The most this server will do is host all my movies and music for me and 1 other person to pull from.

> ZFS will use as much RAM as it has available, but it doesn't actually need all that much of it.
It needs a lot of extra hardware (RAM, CPU, maybe even an SSD buffer/cache) to perform equally well on even small-ish arrays. And then it still has far less stable latencies.

> no bit-rot protection
RAID6 has basically full bit rot protection in realistic terms if you scrub periodically (bi-weekly / monthly is my suggestion).

Also the filesystem or software [backup software or otherwise] can provide checksums and/or erasure codes. However, bit rot tends to be a minor issue of the order of one bit error in a few hundred TB, and most just ignore it.
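For reference, kicking off a scrub by hand is just (array / pool names made up):
echo check > /sys/block/md0/md/sync_action   # mdadm
zpool scrub tank                             # ZFS equivalent
Put either in a monthly cron job and forget about it.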

Although they can be heavy in cost, NUCs are amazing devices.
If an RPi or Odroid won't work out in terms of CPU/RAM, the next step up is a NUC.

you won't. Sequential is easy, and if you're storing media, that's going to be almost all of your I/O. Getting gigabit speed for one user out of that can be done with much, much weaker hardware than a dual-core haslel.

Now if you had a dozen or two users all streaming at the same time from a server with a 10G uplink, yeah, then you'd want to start thinking about beefier hardware. But even then the bigger problem would be that more users makes the I/O look less sequential and more random, and HDDs don't have a ton of IOPS no matter what you do. But for your use case none of this stuff is a problem.

>It needs a lot of extra hardware (RAM, CPU, maybe even a SSD buffer/cache) to perform equally well on even small-ish arrays.
see above, for something as un-demanding as streaming anime to one or two people at once, he'll be fine. Even if ZFS has poorer benchmark performance, that poorer performance will still be over twice what he'll ever use.

And I most definitely have seen bit rot; it ain't a theoretical never-happens thing. Anyway, the whole point of ZFS is to not get hung up on the strict layer separation md RAID sticks to. Those checksums in the filesystem only do you much good if the filesystem can say "Hey, RAID layer! This block you gave me is bad, give me the same thing again, but from the other side of the mirror / reconstructed from parity".

>Sequential is easy
Ok this puts my mind at ease. I might try FreeNAS this weekend then. I didn't want to go through this long drawn out hassle of installing a new OS, transferring data from one disk to another so they can be individually formatted to ZFS, setting up software raid, getting backups automated, only to find I'm getting sub 20MB/s transfers due to ZFS overhead.

>Is ZFS that bad?
Compared to the far more efficient mdadm RAID5/6, yes. It also scales less well with array size.

> Is ZFS prone to super slow sustained speeds?
Not always, exactly: if you throw a lot of hardware at it, it performs pretty okay. Which is actually what most ZFS users seem to do.

I just find the hardware difference involved silly.

It's of the order that if mdadm will run RAID6 stable and well on your array size with like 256-512MB RAM, then you're probably throwing 4GB at ZFS for only about equal throughput with less stable IO latencies.

Of course you can find hardware that runs ZFS well enough for media and so on, it's just a lot more hardware.

> I know from experience that raid 5 is good awfully slow for sustained writes (usually lucky to get 35MB/s with newer model 1TB drives)
I don't know what kind of trash RAID solution / hardware you tried, but that's just silly.

Even RAIDZ1 shouldn't be that bad on even a current-ish onboard x86.

Neither should RAID6, which is what I'd actually recommend using.

> their rebuild time is very long if the disks are more than 2TB
You should be able to rebuild a 10TB drive in about a day at full speed, or, if you're also using the array, in whatever time your rate limit works out to.
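The rate limit being the usual md sysctls (KB/s per device, numbers just examples):
sysctl dev.raid.speed_limit_min
sysctl -w dev.raid.speed_limit_max=200000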

> but is that enough for data integrity?
IMO RAID1 or RAID6 with plain old xfs or ext4 is easily enough.

But if you really need to be sure that not even 1 bit error pops up over time in 100s of terabytes of writes / stored data, you can add par2 files, checksums, use backup software that automatically adds erasure codes... or yep, use ZFS, typically with RAIDZ1-3.
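par2 is about as simple as this stuff gets (filenames made up):
par2 create -r5 movies.par2 *.mkv    # 5% recovery data
par2 verify movies.par2
par2 repair movies.par2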

Have a 2TB media server, but I need to start expanding the storage as well as start backing it up. Are larger size disks more of a risk or should I opt for multiple smaller disks? How much bigger should the backup drive be?

I've got a small home server with a 6-core / 4GB RAM RK3399, and a mixture of disks I accumulated over the years (256GB SSD, 1TB 2.5" HDD, 2TB 3.5" HDD; might get another larger HDD at some point).

Would it make sense to set up ZFS to manage these disks? I'm quite tempted to do it just for the fun of it.

However, I got a bit stuck on topologies, it seems like ZFS doesn't deal well with uneven disk sizes/speeds?

I'd store different stuff on there; part of it would be quite nice to keep safe, other bits I wouldn't care too much about if I lost them because one of the disks died.

How would you set up the topology for a system like that?

> Are larger size disks more of a risk or should I opt for multiple smaller disks?
No, not more of a risk per TB.

Risk on computer storage is also just managed with more redundancy (RAID/RAIDZ arrays with redundancy or backups).

Trying to pick the "most reliable" HDD / SSD doesn't really work, information is incomplete, plus drives aren't reliable enough anyhow.

> How much bigger should the backup drive be?
Big / numerous enough to create as many completely independent backups as the importance of the data warrants.

Since it's a "media server", I figure one backup and the backup or main copy on RAID5/6 or RAIDZ1/2 is probably safe enough.

Larger disks are becoming a bit risky since transfer speeds haven't increased much since disks were 500 GB.
You'd probably be fine getting NAS grade 3TB drives, and putting them in RAID6. Decent amount of fault tolerance, and allows you to expand as needed.

RAID isn't a replacement for backups, though. You should be OK backing up to a single disk the size of your dataset. 2 mirror backups if you want to be real safe.

If you want to be cool you could get an LTO-6 drive and back up to tape.

> RK3399
Actually I have no experience with how ZFS performs on such an ARM processor.

> Would it make sense to set up ZFS to manage these disks?
Eh, with asymmetric drive sizes, it gets a bit weird to do RAID or RAIDZ.

Figure you might be doing a RAID5/6 or RAIDZ1/2 array over 5x 1TB, of which two might be 1TB partitions from 2TB drives, and then another array over 2x 1TB (the leftover TB from the two 2TB drives assumed before).
All partitions involved in an individual array need to be of equal size.

Theoretically this would be the situation where it's more fun to go with snapraid and friends. Or a full cluster filesystem like Ceph or Gluster if you've got a good number of drives.

> it seems like ZFS doesn't deal well with uneven disk sizes/speeds?
It doesn't deal well with either.

And mdadm is not doing much better either. It doesn't have some of ZFS' weirdly extreme latencies in this scenario, but ultimately it's still limited by the slowest drive minus what buffers can possibly compensate for.

>How would you set up the topology for a system like that?
Explained above, basically.
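A concrete sketch of that layout (assuming 2x 2TB sda/sdb + 3x 1TB sdc/sdd/sde, all names made up):
parted -s /dev/sda mklabel gpt mkpart p1 0% 50% mkpart p2 50% 100%    # repeat for sdb
mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[ab]1 /dev/sd[cde]
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[ab]2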

Thanks, that fits with how I understood it so far. I might just keep it simple and have a bunch of individual ext4 drives.

snapraid looks interesting, will have a read.

>Larger disks are becoming a bit risky since transfer speeds haven't increased much since disks were 500 GB.
You can still rebuild in only slightly more than a day on the biggest HDD available [which I haven't seen being used in /hsg/].

Even these pose no terribly serious risk at all if you still have an extra backup, and/or you had RAID6 / RAIDZ2 and it only degraded by one drive.

In a home setting, the bigger fraction of the time is probably the time until people notice and can actually get the replacement drive. That will be at least 1 day for most, but even 2 weeks seems possible.

Even with that kind of time horizon, it is not a terrible risk.
Otherwise we'd probably all be operating storage clouds [that's how you get failover to trigger within a minute or so, and full redundancy restored in maybe under one hour].

BTW, it's actually not IMPOSSIBLE to tie a JBOD / RAID0 array of two 1TB drives as one virtual 2TB partition into an array of otherwise 2TB drives.
But of course that changes the real redundancy and latency / performance of the array somewhat, for the worse. You'd generally better avoid doing this except in an emergency. [Just replacing drives is better.]

So yea, if you do an array, probably just keep it straight, do it "regularly" at the size of the smallest real drive, and figure out how you want to use the remaining storage of any larger drives in the next step.

Or yea, try snapraid. Not a super fast solution comparatively, but it'll use your uneven drives more easily.
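A minimal snapraid setup is one config and two commands (paths made up):
# /etc/snapraid.conf
parity /mnt/parity/snapraid.parity
content /var/snapraid/content
data d1 /mnt/disk1
data d2 /mnt/disk2
Then snapraid sync after changes, snapraid scrub now and then.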

Should I get the managed version of this switch if it's just 5 dollars more? I don't really know much about networking and I don't know anything about VLANs, but I might in the future...
tp-link.com/us/products/details/cat-5582_TL-SG108.html

Attached: Honeyview_2018-08-28 11_59_20-Refurbished_ TP-Link TL-SG108 Unmanaged 8-Port 10_100_1000 Mbps Deskto (515x352, 25K)

It's the sustained load put on the rest of the array when rebuilding that causes other disks to fail. This is especially true when all the drives are the same type and age.

The easy rule that everyone should keep in mind is:
RAID = Uptime
Backups = Safety

antergos

> It's the sustained load put on the rest of the array when rebuilding that causes other disks to fail.
Full maymay imagined by people who think drive failures are this predictable.

The slight elevation in failure chance from each involved drive being read out to about 1/nth of the array size is silly as fuck and does not really show up in reality.

Drives don't fail NEARLY as precisely within the same amount of time, else we would literally be able to avoid these failures by having one "canary" drive. Run it a few hundred hours / TB longer before you put it into the array and voila, you now know when it's time to replace the other drives.

But this doesn't work, and neither will most drives fail even in the same year.

>However, bit rot tends to be a minor issue of the order of one bit error in a few hundred TB, and most just ignore it.
It's not; consumer grade disks have a 10^14 URE rating, enterprise class 10^15 and rarely 10^16. That works out to one bit error in every 12.5TB read for consumer grade disks.
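(The math: 10^14 bits / 8 = 1.25x10^13 bytes = 12.5TB read per expected bit error.)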

>b-but muh shitty NAS disks
They just have multiple RV sensors, they're not enterprise class disks with higher URE ratings.

I have 48GB of ram and when I open a folder full of 100k files, it quickly runs out of ram and gives me critical emails about ram usage.

I'm probably going to upgrade to 128gb soon.

I'm using openmediavault instead of freenas because my raid controller doesn't support JBOD, fucking hate the gui desu.

Started building my own.
Currently using OpenMediaVault FTP/Samba + docker plugin w/NextCloud + Syncthing. I'll add more stuff as soon as I learn how to use them.
Specs:
- Athlon Phenom x255 4GB RAM
- 320GB I had for the OS
- 2*4TB EXT4 for Shows and Movies
- 1*2TB EXT4 for Music+Photos from my main PC + backup from phones.

Currently I don't have a good PSU, just a generic one, nor a UPS. Which ones are good to have? The electric grid here goes down more often than not.

How do you guys manage using SSH keys for logins? I must be a brainlet because that shit drives me insane. It's so hard to maintain across multiple devices.
I just want all my devices to be able to access a server and not have to fuck around with a separate key for each device, but that's seemingly not allowed.

Just use username/password authentication instead, with the server authenticating against Active Directory.

>but that's seemingly not allowed.
What? You could also use yubico.com/product/yubikey-neo/ to store your key device independent.
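To be clear, one key per device IS the normal way, and it's less maintenance than it sounds; a rough sketch (hostname made up):
ssh-keygen -t ed25519        # once on each device
ssh-copy-id user@server      # appends the new pubkey to the server's ~/.ssh/authorized_keys
authorized_keys holds one public key per line, so every device can have its own key for the same account.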

You could start saving for a power bank/battery; even if you still gotta shut down your machines, it'll do them better than hard switch-offs.

Attached: 1519583236136.jpg (607x428, 66K)

Of course, that's the idea: a good UPS to hold the router + server. Is there any top-tier brand for those?

>Of course, that's the idea: a good UPS to hold the router + server. Is there any top-tier brand for those?

Cyberpower is good entry level. I prefer APC myself.

just owns an iXsystems FreeNAS. sips.

r8 my setup
It used to be on an EVGA board, but it wouldn't recognize anything past 1 DIMM per channel, so I had to switch the board to use 48GB.
Also the temps/fans are wrong, I have no idea what the correct lm-sensors configs for this board are. It shows two different mobo temp/fan controllers, one on ISA and one on I2C.

VLANs are pretty useful.

I use ZFS, with a lot of the reason being convenience. It's a filesystem, snapshotting, possibly a backup system (with zfs send/receive), RAID, checksumming, SSD and RAM caching, and LVM-like division of storage all built into one FS with minimal configuration needed. I've used everything from proper HW RAID to mdraid, and nothing compares to the sheer convenience of ZFS.
>It's of the order that if mdadm will run RAID6 stable and well on your array size with like 256-512MB RAM, then you're probably throwing 4GB at ZFS for only about equal throughput with less stable IO latencies
Sure, it's definitely more RAM hungry. But it's not like RAM is particularly expensive, especially if you're fine with getting slightly outdated hardware that takes DDR3.
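The snapshot + send/receive convenience I mentioned looks like this (pool / host names made up):
zfs snapshot tank/media@2018-08-28
zfs send tank/media@2018-08-28 | ssh backuphost zfs receive backup/media
zfs send -i tank/media@2018-08-28 tank/media@2018-09-04 | ssh backuphost zfs receive backup/media   # incremental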

Attached: file.png (1192x1067, 64K)

>it quickly runs out of ram
What is "it"? The file manager? ZFS?

>white space
What the fuck, I pasted it directly from the snipping tool. How the fuck does it get white space on the edge?

>plug in External HDD with NTFS formatting
>config TransmissionD to run on it
>runs fine
>turn on proFTPD with anonymous plaintext FTP
>the drive becomes inaccessible to Transmission
wtf

stop being a fag and just use IIS's ftp server

Does it have good linux support?

nah i just set up a dodgy sambashare instead

I second APC, dunno really about the American branch, but can't go wrong with Schneider Electric in Europe.

Attached: lain_computer.webm (500x355, 621K)

was lain some type of VR god or something? honestly I wish we had a remake.

reporting in.

Attached: ss.png (910x51, 16K)

Attached: traffic_m.png (500x146, 2K)

>was lain some type of nonsense?
Just watch it, faggot. Yes, it's good.

I already saw it. Why do you think I'd mention a remake if I hadn't seen it, you tool?

What is your hypervisor?

I have an old HP tower that I'm repurposing to use 3x 2TB drives in RAIDZ on Debian with ZFS. Will 8GB of RAM be enough for this, or would I be better off just using ext4? Also, should I be OK with using a USB stick for the OS, or should I be looking at using a smaller-size HDD?

>What is "it"?
The box

Larping, or a zoomer that doesn't like the old drawing style? It's not like your question indicates that you actually saw it.

Attached: 1477301783267.gif (972x1000, 1.09M)

it's very triggering when people don't keep all caps or all lowercase for machine names

Looking for a managed 24-port switch with PoE. Willing to buy used if necessary, $250 budget. Any recommendations?