/hsg/ - Home server general

There hasn't been an /hsg/ in the last few days, so here it is.

Attached: hsg.jpg (1600x900, 153K)

i need me one

I'm making a GitHub replacement on mine.

>deploy gogs
>you're done
It's not hard.
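If you want to try that route, here's a minimal sketch using the official gogs/gogs Docker image (host paths and ports here are just examples, adjust to taste):

# docker-compose.yml - minimal Gogs deploy, assuming Docker and Compose are installed
version: "3"
services:
  gogs:
    image: gogs/gogs          # official Gogs image from Docker Hub
    restart: always
    ports:
      - "10022:22"            # git over SSH
      - "10080:3000"          # web UI
    volumes:
      - /srv/gogs:/data       # Gogs keeps all its state under /data

Then docker-compose up -d, hit port 10080 in a browser, and the first-run installer walks you through the rest.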

I'm implementing a non-bloated one from scratch

so... git?

>not just using gitolite and calling it a day

There was one yesterday you retard

Have any of you ever done something useful and/or productive on your servers?

Define productive. I have a cozy system so I can download stuff from anywhere in the world to the server and watch/listen from whatever device I have wherever I am in the world. Is that productive?

Don't mind me, just larping...

1 - Dell PowerEdge R620 (2 total) - 2x Xeon E5-2650 v2 / 128GB RAM / 2x 600GB 10K SAS / QDR InfiniBand / LSI SAS

2 - Dell PowerVault MD3060e - 48x 3TB 7.2K SAS, 4x 800GB SSD SAS

3 - Dell PowerConnect X1018P switch (18 port gig-e managed + PoE)

4 - APU 1D4 - Untangle UTM (soon to be retired)

5 - Cisco SPA-112, soon to be retired

6 - Surfboard 6141, soon to be retired

7 - D-Link 8-port gigabit switch

8 - Dell Latitude 6240 slab - i7 2620 / 16GB RAM / 200GB SSD

9 - 220v step up transformer for MD3060

10 - Dell PowerEdge T410 - 2x Xeon X5660 / 64GB RAM / 6x 4TB SAS / 10x 1TB 7.2K laptop drives / 2x 500GB SSD / 2x 60GB SSD / PERC H700 (the 4TB drives in RAID 5 - Plex library), 2x H200s in IT mode (laptop drives + SSDs, tiered storage in Storage Spaces), QDR InfiniBand, Quadro P2000 (for Plex transcoding)

Not seen - 2x Ubiquiti UAP-AC-PRO-E-US, HDHomeRun Prime, and the IoT VLAN (lights, washer, dryer, for now)

I fell for the LackRack meme, I suppose. But it works well enough. I have 220V in the garage, just need to get it terminated.

Used for SharePoint, Exchange, Skype for Business, Team Foundation Server, Plex, System Center, and some other testing VMs.
Full Office 365 hybrid (Exchange, SharePoint, Skype for Business), Asterisk, Plex, some other stuff. Does that count?

Attached: IMG_20180607_094855.jpg (1024x1366, 567K)

there's that one guy who made that search engine...

>porn
>useful
>productive
kek

it helps productivity in the end if you find what you're looking for more quickly

this

I like it

Sup /hsg/, I've got a few questions as I'm pretty new to this. I want to start running a server, mainly to act as a cloud service for documents, files, and a password database, effectively to replace Google Drive. I also want to use it as a Netflix replacement, so I can connect to it from my laptop when I'm away from home and watch movies etc. (and also as a NAS, I guess, for when I'm at home). I was planning on using Seafile for the cloud part and Plex for the movies part, but was wondering what the hardware requirements are for something like this? Ideally as low a power consumption as possible, and cheap to buy as well.

Does anyone here run a FreeNAS setup? If you're connected wirelessly on both ends, what speeds are you getting?

I have Nextcloud (so nginx, PHP), 4 instances of Transmission, FlexGet, Emby, Tvheadend, Dovecot/Postfix, TT-RSS, MariaDB, Mopidy (MPD for Spotify), and NFS shares all running off an Intel Celeron N3150 @ 1.60GHz (4 cores, no SMT) with 8GB RAM and two 500GB SSDs in RAID 0. Some pages take a few seconds to load, but it's totally usable.

Attached: Untitled.png (675x424, 21K)

That's good to know, just from reading other posts of people with Xeons etc. All I want to do is stream a single 1080p film from my NAS over VPN to me somewhere in the world, and if a Celeron with 8GB RAM can happily do that, then perfect. Why Emby over Plex though?

>Why Emby over Plex though?
I don't remember why I switched. I've used both.
I think it had to do with Plex dropping support for their desktop apps unless you paid, so I went with NFS for local media (no transcoding) and Emby for remote browser-based transcoding.

When transcoding, I've maxed out this processor.
Have you tested wired? Or just run simple I/O benchmarks?

Me too nigga

I plan to test wired. I've got nothing set up currently; I was just wondering whether it actually introduces large latency and speed decreases or not.

>I was just wondering whether it actually introduces large latency and speed decreases or not
Yes, it will. The more you congest the network, the worse it gets. Wireless is shit for transferring a lot of data.

But just how shit is my current question. I mean, I have an AC1600 router and a 500N wireless card, so I can't see the connection being that bad, surely.

test it out yourself. it will be shit. connections will fail with large data amounts

If I'm only ever going to watch on my desktop or laptop, do I even need to bother with a CPU powerful enough to transcode? I was thinking of just getting either a NUC or an ODROID, running OpenVPN, Emby, and Nextcloud on that with a load of drives, and that should be OK, no?

You will need to transcode if your network can't handle the bandwidth of the encoding you want to watch.
Since my home has only 5 megabit upload, if I am somewhere else I have to transcode quite a bit to get it to stream. Usually I will instead just download the file and watch it later rather than deal with transcoding.
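The napkin math for that decision is simple; a hypothetical sketch (the bitrates are illustrative, not measurements):

# Decide whether a remote stream needs transcoding, given limited upload.
UPLOAD_MBPS = 5.0     # home upload link, as above
SAFETY = 0.8          # leave headroom for protocol overhead and other traffic

def needs_transcode(source_bitrate_mbps):
    # True if the file can't be direct-streamed within the upload budget
    return source_bitrate_mbps > UPLOAD_MBPS * SAFETY

for name, mbps in [("1080p Blu-ray remux", 30.0),
                   ("1080p web rip", 8.0),
                   ("720p encode", 3.5)]:
    print(name, "->", "transcode" if needs_transcode(mbps) else "direct stream")

On a 5 megabit link, anything much above ~4 Mbps has to be transcoded down (or downloaded ahead of time, as above).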

Good point, hadn't thought of that actually. In that case I'll look at a NUC, as they seem to be able to handle a single 1080p transcode, and the added usefulness of being able to play anywhere vs. just on my LAN outweighs the cost of upgrading, I imagine.

Why do server mobos from fucking 2012 still cost over $200 on eBay? They are just rotting away while the cunt sellers don't realize anyone can build a Ryzen rig that will outperform anything on those trashers, and with much lower power draw.

Attached: s-l1600.jpg (1000x477, 109K)

Because enterprises like to maintain their old infrastructure; upgrading the platform is too much hassle, so just replacing a mobo (even paying too much for it) is easier.

And that generally happens only when the mobo breaks.

I picked up a PowerEdge 850 for free. Wat do?

>Mobo dies
>Replace it for $200
>Or replace the whole system, for $2000, with new hardware that maybe isn't fully compatible with the rest of your old equipment

fill it up with mostly useless VMs until it cries

Maybe I'll fill it with TempleOS VMs for 4chan to remote into

are you suggesting enterprises buy G34 motherboards from fucking eBay?

Not big enterprises, but small-to-medium ones, and not from fucking eBay, but from local shops/resellers that deal with this kind of hardware, which are normally expensive and whose prices don't depreciate a lot over time. The eBay prices are just a reflection of that.

What's wrong with eBay if you have in-house refurbishing/recert?

We bought an old ThinkCentre desktop with a Core 2 Duo last year for $200 because it was available within an hour's notice and had Windows XP. Keeping old machines running when hardware fails can be a pain. Later we had to find Siemens S5 hardware, which we had to source through eBay as nobody else has that shit anymore.

>remote into
just hook up a KVM and invite us over :^)

Shit op, where is the fucking pasta? Human garbage kys

nobody reads the pasta anyways

so link the fucking pasta then you absolute mongolian

My thoughts exactly, plus we need more pasta.

>do the job that the nigger wasn't capable to do

That is not how you educate people

Perhaps they don't have the pasta? I don't know what is in it either. It's not that useful.

I want to set up a server, but I'm paranoid about security. I was planning on hardening SSH with key files, no root login, etc., but also only being able to access Plex and my cloud storage via VPN. Is this feasible, and would it work if I ran OpenVPN in a separate container from everything else?
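Feasible, yes. For the SSH part, something like this in /etc/ssh/sshd_config covers what you listed (a sketch; "youruser" is a placeholder, and you should check the options against your distro's sshd_config(5) before copying):

# key-only auth, no root login
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers youruser        # optional: only this account may log in

For the VPN-only part, the usual approach is to bind Plex/Nextcloud to the LAN or VPN subnet and forward only OpenVPN's port (UDP 1194 by default) on the router; running OpenVPN in its own container works fine as long as it can route to the other services.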

>implying he did that and isn't just LARPing

Someone's single

>Someone's single
If you go out, you might meet people.

I know it's scary, anon, but I believe in you.

Sup homos bixnood here, you ready to have a Pocky stick shoved down your dickhole while I go mike tyson on your ballsack? Afterwards when the pocky stick is all crushed up you get to try to piss it out. Anyway looks like you got a fine shithole thread here anyways so just keep larping it up as usual. Smell ya later losers!

Attached: 1503811157020.png (3360x2100, 831K)

that picture is very old, anon

here is a more recent pic you can use, and if you're going to larp, at least try to do a good job of it; I don't sound like that at all

Attached: Screen Shot 2018-06-12 at 7.30.29 PM.png (3360x2100, 443K)

>He runs vSphere

I'm no expert, but isn't the industry moving away from VMware?

No

Attached: Screen Shot 2018-06-12 at 7.47.03 PM.png (1682x1236, 468K)

Attached: Virtualization usage by company size in 2016 Spiceworks.png (624x449, 34K)


Attached: vmware-revenue-income-q4-f2018.jpg (640x404, 77K)

A single-year snapshot doesn't indicate anything, just that VMware is still dominant, not whether or not it's losing share.

Looks like "Other" is doing pretty well.
I remain confident in my choices.

I'm impressed you have so much defense of your position ready to go.

Attached: moo.png (640x360, 160K)

>Wired right into the breaker
What in the fuck....? How much power are you pulling?

See the charts above: revenue continues to increase. Even if they did lose market share in this period, it would be a negligible amount. I'm not sure why you really care. It's like asking if Intel lost market share to AMD; they did, but it was 0.5%, so who gives a fuck.

>I'm impressed you have so much defense of your position ready to go.
I just googled hypervisor market share; it's not like I have this shit prepped.

>lost market share
lost data center market share

>2018 and not using a server to efficiently cum
How do you even live?

Hi cunt, I'm getting into vSphere recently, running on shit-ass DL380 G6s though, like dual X5560s with 44GB of RAM, so shitboxes really.
Anyway, I've got 3 of them on ESXi, and every time I try to fuck around with the VCSA it's a guaranteed bad time. It'll work for a bit, but I reboot, wait 2 hours, and get all sorts of fucked errors on the login screen, broken pipes, etc. So is VCSA a pile of shit, or is my hardware just too slow? Also, is the Windows Server equivalent much better? I'm hoping to run some Kubernetes nodes on these fuckers and maybe play around with Windows Server clustering and VDI. Any tips on things like running HA with OS-level clustering too? Just turn that shit off and let VMware's DRS handle it? Run both?
Also, what are you doing with all those instances in that screenshot?
Two ADs I can understand, but why all the DHCP and DNS servers? Are they Win 2016 Nano Servers or something?

Attached: ohgodohfuckohgodohfuck.jpg (500x500, 46K)

>It'll work for a bit, but I reboot, wait 2 hours, and get all sorts of fucked errors on the login screen, broken pipes, etc. So is VCSA a pile of shit, or is my hardware just too slow?
VCSA is a piece of shit, but it should be up and running after 2 hours. Do you have it on flash or on HDDs?

>Any tips on things like running HA with OS-level clustering too? Just turn that shit off and let VMware's DRS handle it?
I don't have shared storage, so I just use Windows failover clusters, or keepalived for my Linux VMs.

>Two ADs I can understand, but why all the DHCP and DNS servers? Are they Win 2016 Nano Servers or something?
Redundancy; not sure why you wouldn't want DHCP or DNS to be redundant. I have 8 DNS servers for a reason: there are 4 redundant pairs.

AD1/2 handle internal DNS; they use DNS1/2 as forwarders.
DNS1/2 are in an isolated PVLAN (as is just about any internet-routable VM) and query the root servers directly.
DNS3/4 maintain copies of AD1/2's DNS zones and query the root servers via a VPN connection, for torrenting.
DNS5/6 are like 3/4, but for Tor.

They all run Server 2016 Core

Also did you reduce the resources of the VCSA from 4 vCPUs/10GB RAM?

>imagine being so autistic you feel the need to host a SharePoint forest for your home

Also, VMware DRS isn't for HA. And VMware HA protects against hosts dying, not against the VMs themselves fucking up.

>he doesn't have a SharePoint cluster either

Attached: Screen Shot 2018-06-12 at 8.10.29 PM.png (3360x2100, 562K)

Oh wow, nice man. How're you going for licensing on all those instances? I'm just hoping I'm bored within 180 days on the Windows Servers, and I've found vCenter and ESXi 6.5 licenses online for ((free)).
And yeah, right, I'll go ahead with the clustering then; vMotion is fucking glacial between those machines, unfortunately they're just 140GB 10K SAS drives...
Yeah, just 4 vCPUs / 16GB RAM

Attached: TCPLUS-0025_2500px.jpg (2500x2500, 3.15M)

>how're you going for licensing on all those instances
KMS emulator for Windows, keygen for VMware products, and I found a serial for like 2^32 licenses for NSX and Log Insight somewhere.

It should start up in about 15 minutes. Dunno; you can check the console, see what process is taking so much CPU during that 2-hour period, and then grep the logs for that process. Or just kill your old VCSA and try again with a new one. You can do so without data loss if you set up VCSA HA, make the new one primary, and then disable VCSA HA.

Thinking of getting 5x 6TB WD Reds. In a RAID-Z2 this would net 15.6TB. I currently sit on 10TB; it should be noted that it took me 20 years to get to 10TB worth of data. If I got 5x 8TB Reds, the net would be 20.8TB. However, I doubt I'd use all 20TB before a drive shat itself in 4 years. So it's either get the 6TB drives and hope I don't fill them before a drive fails (movie rips eat a lot of space), or go overboard with the 8TB drives and waste money, because by the time a drive shat itself there would be around one whole drive's worth of free space.
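For what it's worth, the napkin math behind those numbers (a rough sketch; actual ZFS usable space depends on recordsize, metadata, and TB-vs-TiB accounting, and the ~0.87 factor below is just reverse-engineered from the figures quoted above):

# Rough RAID-Z2 usable-capacity estimate.
def raidz2_usable_tb(drives, size_tb, overhead=0.87):
    data_drives = drives - 2       # RAID-Z2 spends two drives on parity
    return data_drives * size_tb * overhead

print(raidz2_usable_tb(5, 6))      # ~15.7, close to the 15.6TB quoted
print(raidz2_usable_tb(5, 8))      # ~20.9, close to the 20.8TB quoted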

>around one whole drive's worth of free space
You don't understand how RAID works, do you?

I do understand it; I'm just saying that after you subtract used space from total space, there would be almost one 8TB drive's worth of space left.

You're overthinking this.
Just buy what you need at the best price, because when you need more, the market will be very different.