/hsg/ Home Server General

Home server thread


NAS is how most people get into this. It's nice to have a /comfy/ home for all your data. Streaming your movies/shows around the house and to friends is good feels. Repurpose an old desktop, buy an SBC, or go with cheap used enterprise gear. Lots of options, and there's even a flowchart. Ask.

/hsg/ is about learning and expanding your horizons. Know all about NAS? Learn virtualization. Spun up some VMs? Learn about networking by standing up a pfsense box and configuring some VLANs. There's always more to learn and chances to grow. Think you're godtier already? Set up OpenStack and report back.

>What software should I run?
install gentoo. Or whatever flavor of *nix is best for the job or most comfy for you. Emby to replace Netflix, Nextcloud to replace Google, Ampache to replace Spotify, the list goes on and on. Look at the awesome-selfhosted list and ask.

>Datahoarding ok here?
YES - you are in good company. Shuck those easystores and flash IT mode on your H310. All datahoarding talk welcome.

>Do I need a rack and all that noisy enterprise gear?
No. An old laptop or rpi can be a server if you want.

>Links
github.com/Kickball/awesome-selfhosted
old.reddit.com/r/datahoarder
labgopher.com
reddit.com/r/homelab/wiki/index

Attached: server.png (596x430, 27K)

Other urls found in this thread:

reddit.com/r/homelab/wiki/index
veeam.com/blog/backup-replication-community-edition-features-description.html
kb.isc.org/docs/aa-00373
nmaggioni.xyz/2018/02/09/How-much-does-Plex-know-about-you/

I've been spinning up VMs, but I need something more permanent.
What's a good starter (specifics pls) for a server for home?

Got a used i5 w/ 8gb ram, is my home server now.
rate

>reddit.com/r/homelab/wiki/index
Some good shit.

bump !

Got a nifty Elitedesk mini for my home server. Works great, not even close to getting bottlenecked by RAM or CPU, and I don't need THAT much storage.

It does get quite a bit hot under Linux though, even after changing the thermal paste. Not sure what to do at this point.

do you goys use selinux

Attached: 77c63230.jpg (1280x720, 76K)

Sufficient to get started

I bought a NUC because I wanted something very low power but strong enough to handle some services. I have a pretty beefy HP server with 384GB of memory and two alright Xeons, but I have literally no reason to keep it running and it idles around 200-300W.

yes

my oldest home server is an i5-2500k with 8gb ram and it works pretty well still

Is Backblaze the best solution for online cloud backup? Unlimited @ $6/mo. What are the drawbacks?

have to use a shitty client to run the jobs

Got three shitboxes on Ubuntu server 18.04

5 core 1090t (because one core is broken)
Athlon ii 620
Fx8320

I set up a MySQL InnoDB cluster because I have some obsession with clustering shit. Despite taking some time for me to set up, I'm really impressed with the throughput. I measured around 40k records (about 20 string cols with a date here and there) per second. I'm not sure if that's good or not but I'm pretty impressed.

What other shit can be clustered that's fun to play around with on home servers? Can you cluster Tomcat?
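
If anyone wants to sanity-check that kind of insert rate on their own box, mysqlslap (ships with the stock MySQL client tools) can throw a comparable synthetic write load at it. Rough sketch only - the host and user are placeholders, and for a cluster you'd point it at your MySQL Router endpoint rather than a single node:

# ~20 varchar columns, write-only workload, 50 concurrent clients, averaged over 10 runs
mysqlslap \
  --host=cluster-router.lan --user=bench --password \
  --concurrency=50 --iterations=10 \
  --auto-generate-sql --auto-generate-sql-load-type=write \
  --auto-generate-sql-write-number=10000 \
  --number-char-cols=20 --number-int-cols=2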

Are these cheap ebay sas drives a bad idea?

Attached: 1544956115272.png (1129x412, 188K)

no.

btrfs.jpg

Is there a web interface for the filesystem which allows uploads? Owncloud does this, but it's very restrictive so I'd rather just use the interface.

Using a 1TB Seagate Barracuda as the storage, how bad is having the server on 24/7 for the drive?

If it's just as data storage and there's no read/write, it's fine.

I use it for Plex sometimes

Then it gets spin cycles sometimes. Leaving it on 24/7 shouldn't be measurably worse than turning it on only when you want plex.

horse shit/10

gb2reddit you faggot

it's run by retards who shuck desktop disks and then resell you the space

No, it's not a bad idea, although those EMC ones (or any pulled from a SAN) will be formatted with 520-byte sectors instead of 512, so you'll have to get them moved back to 512, which is a hassle.
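
The reformat itself is usually done with sg_format from sg3_utils - something like this, where /dev/sg3 is a placeholder for whatever the disk enumerates as on your HBA. It wipes the disk and runs for hours per drive:

# find the SCSI generic device and confirm the current block size
sg_scan -i
sg_readcap --long /dev/sg3
# low-level format back to 512-byte sectors (destroys all data)
sg_format --format --size=512 /dev/sg3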

Their MTBF is calculated for 6 to 8 hours of run time per day. So they'll last a couple years before shitting themselves.

Also it looks like those are actually SATA disks with a SAS interposer, not real SAS disks.

yo thread
i heard you liek home servers
i made a home server so i can make more servers inside my home server.

All depends what you want to do. What're these VMs serving up? HP Microservers get you ECC RAM in a small package (for based ZFS). If you're not opposed to rack mountables, check out some Supermicro stuff.

It's great as long as you're using Windows/macOS (no Linux client) and everything you want to back up is local (network shares are not allowed). If that meets your criteria it's top notch!

Nextcloud is superior imo. I use samba through my vpn for remote access with a GUI. Or a good sftp client works as well

>it's run by retards who shuck desktop disks and then resell you the space
What's the problem with that? Why should I, the consumer, care about what hardware they use as long as they can guarantee my storage remains online and accessible?

>network shares are not allowed
That's weird. My home server is freenas so everything is part of a smb network share. Guess I can't use it

>I dont understand what RV sensors are
>I dont know what UREs are
>I dont understand why enterprise class disks exist

>I dont understand what RV sensors are
>I dont know what UREs are
No actually I don't. What does that have to do with me simply uploading my data to them?

>I dont understand why enterprise class disks exist
I understand why they exist, for enterprises. But I'm just an end user using their storage space. If their shit dies then they better have a mirror copy of my data somewhere.

>they better have a mirror copy of my data somewhere.
Or what? You sue for damages?
Even if it's in the contract, if they don't have it and lose it, what are you going to do?
If you are using cheap shitty cloud storage it's likely you cannot afford legal fees.

He's just angry that you don't back up all your Chinese little girl cartoons using the highest premium grade server hardware like he does.

Attached: 1553083730208.jpg (1280x800, 63K)

>Or what? you sue for damages?
Or I write them a bad review and never use their services again, and also encourage others not to use their services which would hurt them in the long run.

Bad press goes a long way, and I'd have full proof of their fuck ups.

>highest premium grade server hardware like he does.
But I actually do use enterprise HDDs in my home server. Refurbished of course ;)

>spoon feed me because im too retarded to read

>i cant afford enterprise class disks

Attached: Screen Shot 2019-03-28 at 6.56.48 PM.png (3360x2100, 1.03M)

How do you prove they lost your data?
The only evidence is that you have no evidence?
Nice.

Veeam is offering a community edition of their software. It's hands down the best backup solution for Windows users, outperforming even paid software. And it's FREE.

veeam.com/blog/backup-replication-community-edition-features-description.html

Attached: meta-banner-veeam.png (1600x800, 26K)

>using the shitty community edition
>not just getting NFR licenses
>not being smart enough to crack it yourself to be a cloud connect provider

Attached: Screen Shot 2019-03-28 at 7.38.52 PM.png (3360x2100, 979K)

Thoughts on X8SIL-F with i5-650 for freenas?
Unlike the Xeons it has AES-NI and UG, and at under $50 for motherboard+CPU I'm not out much money if I want to upgrade to something more powerful or efficient later.

Just buy an X9 board now.

What's the advantage of spending $180 on X9 instead of $40 on X8?

I've been using Macrium's free version without issue - is it worth switching?

Not having something which is horseshit. Xeon E5-2600s are cheap: you can get E5-2643 v2s, which are 3.5GHz 6-core with 25MB of L3 cache, for $120 on eBay. Dual socket boards are cheap too. Compare that to your dual-core 3.2GHz with 4MB L3 cache.

It's just serving files, I don't need a powerful CPU.

HP a shit though, even their bios updates are paywalled

Put a GPU in it and play vidya while running VMs; pic related has an RTX 2080 Ti and 25 VMs running in the background.

They're easy to get around. Just open a chat conversation with support and larp that you're Australian. Aussie consumer protection laws require vendors to give software updates for free. I got new firmware for my tape drive that way. If you look on the ServeTheHome forums under the name muhfugen, I posted a link to the document you can direct them to on HPE's site.

Attached: Screen Shot 2019-03-28 at 11.19.05 PM.png (3360x2100, 743K)

anyone ever buy reds off ebay?

Ok so I want to make a build that functions both as a NAS and a GPU renderer.
How would I go about doing this? I also want this system to be as fast as possible when transferring data over the network.

>add GPU
>add disks
>add 10 Gbit/s card.
>install OS of your choice

What is the max speed of a typical motherboard Ethernet port? I'm looking to transfer files fast from my PC to the NAS and would like to eliminate bottlenecks to the point that only my storage drives (HDD, SSD over SATA 3) are the bottleneck.

ok thanks just wanted to make sure there was no weird catches

Standard is 1 Gbit/s and you need 10 Gbit/s to reach your goal.

Is it possible to instead use dual Ethernet ports? If the PC motherboard has 2x 1GbE and the NAS also has 2x 1GbE, how would this work from a Windows PC to the NAS?

What's some cool ass shit I can do with lots of GPU power in a FreeBSD server?

Attached: MV5BMjM5ODg3MTY4OV5BMl5BanBnXkFtZTgwNzE4Mzg0MjE@._V1_.jpg (500x375, 19K)

Depends. LACP is not a "just werks" solution for everything. Most likely it will only use one link per connection, so your transfer would be limited to 1 Gbit/s anyway. Apparently there is also SMB multichannel, which should give you more than 1 Gbit/s if you have two interfaces. Never used it though. But even 2 Gbit/s is only 250 MByte/s, so for SSDs it would still be a bottleneck.
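
If you want to try the multichannel route, it's a one-line smb.conf flag on recent Samba (4.4+, still marked experimental upstream), and both ends need multiple NICs with their own IPs. Roughly:

# add this to the [global] section of /etc/samba/smb.conf on the NAS:
#     server multi channel support = yes
# then restart and confirm the option took:
sudo systemctl restart smbd
testparm -s | grep -i "multi channel"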

Because I'm looking to do RAID 1 from a NAS to PC using a single SSD in the NAS and PC.
Also, does the speed/amount of RAM and CPU affect the read and write speeds of an HDD/SSD?

I want to run pfsense on an old Optiplex with a 4x Eth NIC so I can cache my downloads and also have my WiFi physically separate from my wired LAN, but I don't want my pfsense machine handling DNS or DHCP. Will my dusty RasPi3 work adequately for DNS (pihole) and DHCP?

I recently changed my domain name and fucked up my vsftpd. It's the only thing that no longer works.
>change domain name
>regenerate the cert (self signed) and make sure to put the new domain.org in the Common Name when asked, every other field is blank.
>make sure to apply all changes to the .conf file
>try to log in
>connect successful
>trust the new certificate
>type username and password
>error 503 LOGIN FAILED

I modified the conf file to include verbose logs, but I don't have more than this.
Any idea what could be wrong?
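
A failed login after you've already accepted the new cert usually points at auth/PAM or the user config rather than the cert itself, but to rule the cert out this is roughly how I'd regenerate and re-point it. Paths are guesses - adjust them to whatever your .conf actually uses:

# regenerate the self-signed cert for the new domain (CN must match domain.org)
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/vsftpd.key \
  -out /etc/ssl/certs/vsftpd.pem \
  -subj "/CN=domain.org"
# make sure vsftpd.conf points at the new files, e.g.:
#   ssl_enable=YES
#   rsa_cert_file=/etc/ssl/certs/vsftpd.pem
#   rsa_private_key_file=/etc/ssl/private/vsftpd.key
sudo systemctl restart vsftpd
# then watch the real error while you retry the login
sudo tail -f /var/log/vsftpd.log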

>RasPi3 work adequately for DNS (pihole) and DHCP?
As long as your network doesn't have 1000 hosts in it, it's more than enough.
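
If you'd rather not click through the web UI, Pi-hole's DHCP is just dnsmasq underneath, so a drop-in file like this works (values are example numbers for a 192.168.1.0/24 LAN with the Pi at .2 - adjust to yours, and remember to disable DHCP on the pfsense box):

sudo tee /etc/dnsmasq.d/02-homelab-dhcp.conf >/dev/null <<'EOF'
dhcp-range=192.168.1.100,192.168.1.200,255.255.255.0,12h
dhcp-option=option:router,192.168.1.1
dhcp-option=option:dns-server,192.168.1.2
dhcp-authoritative
EOF
sudo systemctl restart pihole-FTL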

>FTP
Just use SFTP faggot.

btw I'm going to do a Synology DS418 4 Bay, doing RAID 1 from PC to NAS, while being able to connect 2 or more hard drives to back up data from the hard drives within my PC to my NAS. So 1 bay of the NAS will always be occupied with 1 SSD in RAID 1. Would this work?
What I'm worried about is that using the standard 1GbE port will slow down the SSD RAID 1 array. While I could buy a 10GbE motherboard and NAS, that would be much more expensive.

I'm literally retarded, help me understand software RAID. Since you need an expensive controller that often only goes up to like 8 ports, can you use two controllers and have 16 drives in a hardware RAID5 array, for example? Or would I need to run RAID10 in software for that many?

Attached: 1553855278411.png (500x704, 275K)
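
To untangle it a bit: software RAID (mdadm on Linux, or ZFS) doesn't need a RAID controller at all. Any dumb HBA, or two of them, just has to expose the disks, and the array can span ports across both. A rough sketch, with the 16 device names as placeholders:

# 16 disks presented by one or more plain HBAs; mdadm doesn't care
# which controller each disk hangs off. RAID10 = 8 mirrored pairs, striped:
sudo mdadm --create /dev/md0 --level=10 --raid-devices=16 /dev/sd[b-q]
# or RAID6 if you'd rather trade IOPS for capacity:
# sudo mdadm --create /dev/md0 --level=6 --raid-devices=16 /dev/sd[b-q]
watch cat /proc/mdstat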

Don't listen to the angryman. I too use a NAS (in my case Ubuntu with ZFS) that I want to back up, so I use Crashplan. I highly recommend it - $10/mo and it's actually unlimited. I have 6TB up there now and no end in sight. A buddy has 11TB. I restore directories and individual files regularly and it works like a charm. Another option is B2 storage from Backblaze, but it's pay as you go.

Which OS for your first home server?
Thinking ubuntu or CentOS, which one?

>They're easy to get around. Just open a chat conversation with support and larp that you're Australian. Aussie consumer protection laws require vendors to give software updates for free. I got new firmware for my tape drive that way. If you look on the ServeTheHome forums under the name muhfugen, I posted a link to the document you can direct them to on HPE's site.

omg bixnood you were helpful. When I start these threads I usually ward you off with anime ladies, but now you are redeeming yourself in my mind. I'm still going to post with anime ladies tho.

Why not buy them from Best Buy as easystores? Buy enough to have redundancy with based ZFS and a couple spares for backup. There's so much savings from shucking that it more than offsets the reduced warranty you get.

TY Based OP for keeping /hsg/ the comfiest thread on Jow Forums

Attached: hsg.png (399x1058, 631K)

home server noob here again, need advice.
I could also install Debian... but I've never used Debian.

Best to use whatever you're most comfortable with. Most people don't like to experiment with their NAS - they want it to be rock solid and reliable. Just about any Linux distro can serve samba/nfs. I use Ubuntu because it's most comfy for me.
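
The NFS side really is only a few lines on any distro - a minimal sketch, with the export path and subnet as placeholders:

# Debian/Ubuntu example; /srv/tank and the subnet are placeholders
sudo apt install nfs-kernel-server
echo '/srv/tank 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
# mount from a client:
sudo mount -t nfs nas.lan:/srv/tank /mnt/tank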

thank you friend

Freenas. I spent probably 2 weeks re-reading the official guide/forums/blogs/etc, then another week testing all the functionality in a VM before I decided to use it on my real data.

I transitioned seamlessly without a problem. Works a charm. It's perfect for my use case.

Attached: FreeNAS_11_2_RELEASE_Social_Media_Graphic-compressor.png (1800x1800, 13K)

how do you girls monitor the health of your [headless] home servers? I'm mainly interested in setting up early warning systems for failing disks

Attached: 91e4c71a8bab74d3d74f309f0cb7bf03a868bd6f.jpg (900x963, 107K)

A bunch of cron job scripts. Freenas does this automatically with S.M.A.R.T scheduling. Netdata also gives you tons of alerts. If you don't use Freenas then it's easy enough to make your own scripts.

I have made a few for giving me the status of my zpools, HDD temps, UPS report, veeam backup report, etc. I think I have around 20 different alert systems.
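
The SMART piece is only a few lines of smartmontools if anyone wants a starting point - something like this (the mail address and disk list are placeholders, and it assumes smartmontools plus a working mail command on the box):

#!/bin/sh
# /etc/cron.daily/smart-report -- shout if any disk stops reporting healthy
ADDR="you@example.org"          # placeholder
DISKS="/dev/sda /dev/sdb /dev/sdc"
for d in $DISKS; do
    if ! smartctl -H "$d" | grep -q "PASSED"; then
        smartctl -a "$d" | mail -s "SMART FAILURE on $(hostname) $d" "$ADDR"
    fi
done
# reallocated/pending sectors are worth watching too:
# smartctl -A /dev/sda | egrep 'Reallocated_Sector|Current_Pending'

smartd from the same package can do roughly the same thing with a one-line DEVICESCAN entry in smartd.conf if you'd rather not roll your own.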

how bad did i fuck up /hsg/ ?

Attached: 2019-03-29-205323_1920x1080_scrot.png (1920x1080, 649K)

Those old PowerEdge servers are cheap for a reason - they produce a lot of heat, consume a lot of power, and barely crunch numbers faster than a desktop processor from 2012.

i know this sounds like coping, but electricity isn't really a problem, i plan to underclock it anyway, and performance is kinda irrelevant. i just wanted something expandable. mainly bought it to replace a raspberry pi, so my expectations aren't that high, and at the end of the day it was a good deal

i'm running debian so i guess i'll set up some cron jobs and monitor with smartmontools or something
do you manually check statuses and does your server ping you if any errors occur?

>they produce a lot of heat, consume a lot of power, and barely crunch numbers faster than a desktop processor from 2012.

Are you talking specifically about the 2x E5620's?

>i know this sounds like coping
You bought them for a reason so there's nothing to cope with. Personally I would have bought a pre-owned Threadripper or Ryzen processor and built something from scratch. It would be better value for money I think.
Yes.

It's a small network with maybe 12 devices at one time... I didn't think bandwidth in terms of 100BaseT vs. 1000BaseT was very important for DHCP and DNS services, as long as the latency is low enough. I can ping my Pi from any wired computer in less than 1ms.

I have 2x E5645's, the power consumption is only $8.41/yr. Temps are usually in the

Be aware that the perc 6/i can't handle drives larger than 2.2 TB

Pretty convenient i was planning to get 2TB ones. Thanks for the tip!

Is the Dell PE r310 any good? I'm running a modded PE 1950 III with 16 gigs of ram but it's starting to die and I got a chance to pick up an r310 for cheap locally ($200 for 8 gigs of ram, a decent xeon and no drives)

>pretend to be an aussie
I'ma fucking try this because HP's fucking warranty register is fucking broken and I need MSA 2040 firmware

how to create and maintain a folder structure on a nas over time?
i've thought of doing basically just
/mnt/pool
>movies/genre
>series/genre
>anime/genre
>music/genre
>photos/type
>books/type/genre
>manga/genre
>doujins/genre
>hentai/genre
and so on, but is there a better way?

Attached: 1550604169876.jpg (620x877, 237K)

Disk I/O could become a problem if you have a LOT of hosts.

>DHCP performance can be limited by disk I/O. Every lease issued by the DHCP server (every DHCP ACK) incurs a write to the dhcpd.leases file.
>kb.isc.org/docs/aa-00373

>8 gigs of ram
Everything below 32GB is absolute shit.

it doesn't matter how you do it. whichever works best for you. you should be using a dedicated program to manage most of that though.

happy panda x for manga/doujin
plex or some other media manager for movies/series
hydrus for photos

etc

I was looking at setting something up for virtualization; I want to run some sort of hypervisor. I've looked at running Qubes as a sort of desktop solution, but I want something else behind some sort of firewall/DMZ for homebrew IoT and other fuckery. I'm fine on the software side of what I want to do, but I have zero fucking clue how to figure out what hardware is ideal for this. I'm looking for something that supports hardware passthrough so I can segregate it efficiently, but I don't know what to google or how to evaluate what I'm looking at. Also, if GPU passthrough works in an acceptable way it mite b cool; I've thought about trying VR but don't really have an interest in building a computer just for that, and if I could piggyback a GPU onto something that already has a shitload of RAM and cores it might be neat.

depending on your needs and ambitions you might get away with nspawn or docker for this purpose
docker is what I'm using to run dual qbittorrent+openvpn containers for public/private trackers, and a lot of other things
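
For anyone curious about the rough shape of that kind of container, the docker flags are the interesting part (the image name below is a placeholder for whichever qbittorrent+VPN image you trust; the VPN creds/config live in the config volume):

docker run -d --name qbt-private \
  --cap-add=NET_ADMIN --device /dev/net/tun \
  -v /srv/torrents/private:/downloads \
  -v /srv/docker/qbt-private/config:/config \
  -p 8081:8080 \
  some-qbittorrent-openvpn-image:latest   # placeholder image

Run a second copy with a different name, config volume, and host port for the public trackers.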

Can't figure out what OS to use for my new home server. I want to have NAS and Plex for sure, with Linux for ssh / vpn / web hosting / etc.

ESXi? Unraid? SmartOS? Any recommendations?

99% chance I put Xen or another type 1 hypervisor on it. I've hated working with docker when I've had to, and I haven't fucked with nspawn yet but it looks cool. I'm looking for segregation between parts though; I want to configure what hardware I pass through to each VM and keep stuff nice and separated. I'm way behind on figuring out what I should look at for motherboards, and what's going on with IOMMU and VT-d though.
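
Once VT-d/AMD-Vi is enabled in the BIOS and the kernel is booted with intel_iommu=on (or amd_iommu=on), the usual first check is to dump the IOMMU groups and see whether the GPU and its audio function land in a group by themselves - devices that share a group generally have to be passed through together:

#!/bin/sh
# list every IOMMU group and the PCI devices in it
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done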

>plex
nmaggioni.xyz/2018/02/09/How-much-does-Plex-know-about-you/

unraid is paid garbage, you're literally paying for slackware + snapraid
just go for debian or any other stable headless gnu/linux and configure snapraid to your liking

this

I use a Debian host with containers and VMs. ZFS-on-Linux for data and backup arrays; it's easy enough to set up manually on most Linux distros and BSD, which is why I never looked at FreeNAS.
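
The backup-array side of that is basically just snapshots plus zfs send/recv, if anyone wants the rough shape of it (pool and snapshot names are placeholders):

# take a recursive snapshot of the data pool
zfs snapshot -r tank@snap1
# first full replication to the backup pool
zfs send -R tank@snap1 | zfs recv -F backup/tank
# later runs send only the delta between two snapshots
zfs snapshot -r tank@snap2
zfs send -R -i tank@snap1 tank@snap2 | zfs recv -F backup/tank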

So I've been thinking about building a home server for a while now. I'd mainly be using it to store my photography work (10TB+ and growing), backups of my day job (less than a few TB), and storing and streaming media, in up to 4K to at least two devices at once (a few TB again for the storage of that). I've no idea where to start when it comes to the hardware for this, so any advice would be helpful.

I currently have 20x 4TB drives in a 10 + 10 raid z2 configuration, and I am running out of space. So I want to build another server.

Do Threadripper and its motherboards really support ECC RAM? I haven't been able to find clear answers. If not I'll go Intel again.

Is 11 + 11 raidz3 a fine setup? I remember reading years ago that it was bad to have zpools that did not have an even number of drives, for some reason. This would have 3 for redundancy per pool, and 8 for storage. Probably 12TB drives this time.

I want a NAS. Should I go with an appliance or build my own? If so, what's the best PC case for fitting the most HDDs?

>Setup openstack and report back.
Challenge accepted, what would you want to do with it?

All the Ryzens support ECC, but the mobo maker has to enable it. Only a few makers bothered with it on AM4 boards; I imagine they'd be more likely to on TR4. Then again, all the TR4 mobos seem to be gamer-targeted; I don't think anyone's done a "workstation" one like you see on the low end of the Xeon market. I'd find a board you like and then go dig up its manual from the maker's website, and/or any in-depth reviews with BIOS screenshots that you find.

The stuff about how you have to have a certain number of drives in a vdev is voodoo. Ideally you want your vdevs to have the same number of drives though, since ZFS will stripe across all of them. Remember that parity RAID will give you the IOPS of one drive per vdev no matter how wide it is.
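
Concretely, the 11+11 raidz3 idea is just two raidz3 vdevs in one pool, and ZFS stripes writes across them. A sketch, with made-up disk names standing in for /dev/disk/by-id paths:

# two 11-disk raidz3 vdevs striped into one pool; DISK01..DISK22 are placeholders,
# ashift=12 for 4K-sector drives
zpool create -o ashift=12 tank \
    raidz3 DISK01 DISK02 DISK03 DISK04 DISK05 DISK06 DISK07 DISK08 DISK09 DISK10 DISK11 \
    raidz3 DISK12 DISK13 DISK14 DISK15 DISK16 DISK17 DISK18 DISK19 DISK20 DISK21 DISK22
zpool status tank

Per the IOPS point above, that pool has roughly the random IOPS of two disks, so it's a capacity layout, not a VM-storage layout.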

rule of thumb: build your own if you want (or anticipate wanting) more than four drives for any reason.

No Ryzen chip doesn't, as far as I know, though only (I think) Epyc is actually tested, verified, and guaranteed by AMD to fully support it.