Home Server And Network

Can we have a thread about taking control of your home network "infrastructure"?
Do people build their own firewalls and wireless access points?
Is it fun? Is it better or the same? Dos and don'ts?

Attached: 108c[1].jpg (743x768, 180K)

Other urls found in this thread:

ebay.com/itm/Cisco-Nexus-5020-Network-Switch-Fully-Managed-10-Gigabit-DCB-FCoE-N5K-C5020P-BF/123311213798?epid=74128909&hash=item1cb5ebcce6:g:Tx4AAOSwCH9bcwY8:rk:21:pf:0
ebay.com/itm/HP-A5800-24G-Switch-switch-managed-24-ports-4-SFP-Ports/173731642600?hash=item2873367ce8:g:EWcAAOSwn01cNkMb:rk:1:pf:0
kimbrer.com/dell-v60tn.html

yes, pfSense is good for a free home firewall

One specific question, I suppose: is it a bad idea to use a one-port router as a wireless access point as well?
You hear people say it is, but is that just for those modem/router/switch/wireless access point combos your ISP sends you, or is that also the case if you turn an old ITX system into a router?

Attached: story507-03[1].jpg (1200x800, 151K)

I use a Linux server for that. It has a cheap Atheros wireless card in AP mode, iptables for the firewall, and a bunch of other stuff like OpenVPN, Apache, r(u)torrent, Nextcloud for cloud storage/CalDAV/CardDAV, git, muchsync and so on.
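
For anyone wondering what "iptables for the firewall" boils down to on a box like that, the canonical NAT + forwarding core looks roughly like this. A sketch only; the interface names (eth0 as WAN, wlan0 as LAN) are assumptions:

```python
# Rough sketch of the classic Linux router/AP firewall core, driven from
# Python via subprocess. Assumes iptables is installed and that eth0 is the
# WAN port and wlan0 the wireless LAN -- adjust to your setup.
import subprocess

RULES = [
    # masquerade LAN traffic leaving through the WAN port
    ["iptables", "-t", "nat", "-A", "POSTROUTING", "-o", "eth0", "-j", "MASQUERADE"],
    # forward traffic from the wireless LAN out to the WAN
    ["iptables", "-A", "FORWARD", "-i", "wlan0", "-o", "eth0", "-j", "ACCEPT"],
    # let replies to established connections back in
    ["iptables", "-A", "FORWARD", "-i", "eth0", "-o", "wlan0",
     "-m", "state", "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
]

for rule in RULES:
    subprocess.run(rule, check=True)
```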

I think it's fun to tinker with, but it's only "worth it" if you consider it a hobby

Is it just an old machine you're using?
Pic semi related, Kabini supposedly doesn't have PSP, so people are buying old AMD hardware for all their tinfoil needs.

Attached: s-l1600.png (1200x1000, 777K)

OpenWrt as my main router.
Runs DHCP, adblock and Unbound.
Another Padavan-powered router upstairs as a 5 GHz AC repeater.

My home server is just a dual-core Skylake-U.
I run a VPN server, crypto nodes, torrents and some VMs on it.

Attached: lenovo-newifi3-d2_case.png (1000x800, 349K)

Good raspi uses for home network?

Blocking ads (Pi-hole), guest AP, WiFi splash page, collectd collector, any other shit you do on Linux, etc.
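
If anyone's curious what Pi-hole actually does, the core trick is tiny. A toy sketch only; the blocklist names are made up, and the real thing is a full resolver (pihole-FTL), not this:

```python
# Toy version of the DNS-sinkhole idea behind Pi-hole: blocked names resolve
# to a dead address, everything else passes through to a real resolver.
import socket

BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # made-up examples

def resolve(name: str) -> str:
    if name.rstrip(".").lower() in BLOCKLIST:
        return "0.0.0.0"               # sinkhole: the ad request goes nowhere
    return socket.gethostbyname(name)  # pass through to the system resolver

print(resolve("ads.example.com"))      # -> 0.0.0.0
```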

Anyone know anything about LibreCMC?

>not building your own router/firewall PCB tailored to your own needs...
>not hand-soldering every component and reflowing the BGA chips in an oven...
Plebeians.

Attached: 1352194018245.png (600x600, 258K)

Yeah, I have a DNS blocker and firewall running off a Raspberry Pi 2.

Literally what I did with a one-port router; works fine.

Not making your own fab

Attached: waf.jpg (2400x3000, 468K)

What NIC to get?

Attached: 2498260[1].jpg (640x480, 43K)

>Do people build their own firewalls and wireless access points?
In the '00s there were the m0n0wall and pfSense (still around) communities. They also had galleries where you could see the stuff people built and the other systems they had.

fuck i feel old.

>Do people build their own firewalls and wireless access points?
I'm not sure this is worth bothering with nowadays.

Routers / APs are pretty powerful off the shelf and run a bunch of sensible Linux distros. Usually, you don't need more than that.

That is a very difficult question. Do you have x4 PCIe slots available? You can get new x1 2.0 PCIe Realtek based gigabit NICs for $10. They have little to no memory and are considered low-end but they are fine for home use. They are the logical choice if you are limited to one or two x1 slots.

I am using 2x NC364T cards which I got for $25 each on eBay. These are quad gigabit cards with dual Intel 82571EB controllers. They were really expensive back in 2009. Yes, it's old technology. They are PCIe 1.1 x4 cards, which means you need to occupy one (or in my case two) "x16" slots (which aren't really x16, but they don't need to be). I am very happy with these.

If you can't find that particular card cheaply then IBM 46Y3512, IBM 49Y4242, SuperMicro AOC-SG-I2, Dell 1P8D1 and Fujitsu D2735 are nice Intel-based enterprise-class NICs you can pick up used pretty cheaply.

If you have a PCIe x4 slot for your NIC then you should absolutely look at used dual and quad Intel-based enterprise NICs. If you don't then you might as well buy a new Realtek-based card.

Get a Mellanox ConnectX-3 56 Gbps Infiniband card.

Attached: nic.jpg (3264x2448, 1.07M)

>56 Gbps

Attached: derp.png (689x479, 544K)

>look at used dual and quad Intel-based enterprise NICs
tyvm. I'll check ebay. There's an empty x16 slot in the board I'm looking at.

>crypto nodes
Care to elaborate?

i made my own VPN once...

It's all in the software.

Use any spare computer with pfSense

While I admire the technology, I am not at all convinced this is home server material, for two reasons:
a) Switches are not cheap, and cheap used switches are basically non-existent. I'd love to have 56 Gbps speeds on my LAN, but paying $1000+ for a switch isn't that tempting.
b) Mellanox 10+ Gbps cards tend to require an x8 slot, which could be a problem if you're (ab)using your old consumer-grade desktop as your new home firewall/server after upgrading the desktop - especially if you want several of these cards.

There are some really good deals on older Mellanox ConnectX-2 10 Gbit cards these days; you can get two of those plus a cable and transceivers for around $50. Again, you need an x8 slot (PCIe 2.0 in this case). If you want 10 gigabit between two machines then that's a good, cheap way to do it. The big problem with this is, again, that switches are EXPENSIVE, which limits you to connecting two machines unless you're willing to pay a fortune. Also, 10 gigabit Ethernet is available now (but those cards do cost $100+ each).

>all in the software
I thought tinfoilfags were all like "It has to be Piledriver or earlier!"

Attached: freundschafter.png (1280x720, 46K)

Valid and helpful comment.

That said, yeah, in a home network you'd probably use these to connect machines to each other directly over a short distance. A full >=10GbE network is probably too expensive at this point.

If you're just direct-connecting one or two machines where you really want the bandwidth, it's not too bad, since you don't need a switch for that. There are plenty of motherboards that support a dual PCIe 3.0 x8 configuration, enough for one of these cards plus a SAS HBA or a GPU in your desktop; or use the one PCIe 3.0 x16 slot for a dual-port FDR Infiniband card to connect to two machines, and use chipset-connected storage. A single 10 Gbps link isn't quite enough to get the most out of NVMe or more than a couple of SATA SSDs.
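
Rough numbers behind that last claim (the drive figures are ballpark, not measurements):

```python
# Back-of-envelope: can one 10 Gbps link keep up with local storage?
link = 10e9 / 8    # 10 Gbps in bytes/s: ~1.25 GB/s before protocol overhead
nvme = 3.0e9       # typical PCIe 3.0 x4 NVMe sequential read, ~3 GB/s
sata = 550e6       # one SATA SSD, ~550 MB/s

print(f"link covers {link / nvme:.0%} of one NVMe drive")  # ~42%
print(f"link is worth about {link / sata:.1f} SATA SSDs")  # ~2.3
```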

>good deals on older Mellanox ConnectX-2 10Gbit cards
You're not wrong. Are they certain to work as long as you have the PCIe lanes?

lol what the fuck is infiniband?

Attached: 1505049833432.jpg (684x600, 82K)

watch out for fake Intel NICs though, it's apparently a big business
fucking slanteyes man

Attached: 23413410.jpg (1920x1200, 176K)

Alternative to Ethernet. Normally has much lower latency and natively supports RDMA. You can also run IP over Infiniband for regular networking stuff.

You can use RDMA with things like SRP / iSER / NVMe-oF for block storage instead of regular iSCSI over Ethernet, so that data is copied directly into memory without hammering the CPU and slowing down transfers; otherwise, copying data into memory from the network card has to go through the CPU, which adds a lot of overhead and can significantly slow down 10 Gbps / 40 Gbps networking. Some NICs also support RDMA over Ethernet (RoCE / iWARP).

There are also implementations of NFS, SMB and v9fs which support RDMA for remote filesystems, if that's what you want instead of block storage. And some AMD / NVIDIA GPUs support RDMA for copying over the network directly into GPU memory.
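
You can get a feel for the "keep the CPU out of the copy path" idea without any Infiniband hardware. This is not RDMA, just the nearest everyday stdlib analogue on a plain TCP socket:

```python
# Not RDMA -- just the same zero-copy spirit on ordinary TCP. With
# socket.sendfile() the kernel ships the file from the page cache straight
# to the NIC, so payload bytes never bounce through a user-space
# read()+send() loop.
import socket

def serve_file(conn: socket.socket, path: str) -> None:
    with open(path, "rb") as f:
        conn.sendfile(f)  # kernel-side transfer, no Python-level buffering

# usage sketch: conn would be a socket returned by accept() on a listener
# serve_file(conn, "/srv/big.iso")
```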

Sounds nifty

Someone at uni had apparently received a fake loofah from a Chinese ebay seller.
What part of that makes sense? How much more money does that net them?

>NC364T
Is that a Hewlett Packard branded card you're just using in a machine you built?

>Are they certain to work
From eBay? Of course not. The technology itself is sound, though. These cards tend to come from retired servers and will work just fine if the seller's honest.

Older used enterprise Intel NICs are abundant in both the US and Europe. There's no good reason to buy a NIC from Asia.

I use two of those NICs in my home server. The seller had a ton of HP parts, looked to me like they bought a lot of old HP servers and sold them piece by piece.

I've used a standard PC as a firewall/server/NAS for about two decades now. Some people always scream "omg security" when I say that, but it's fine. I move my desktop's motherboard/CPU/RAM to the server box whenever I upgrade the desktop. That's good enough for my purposes.

Attached: hp-nc364t.png (1215x610, 168K)

They seem abundant. Even from credible sellers.

It's very fun and you'll learn a lot.

In my current apartment my ISP won't let me bridge their modem/router, so I can't have as much fun as I used to, but I have a home server where I ran ESXi with a pfSense VM and it was great.

I've switched to Proxmox now and am looking forward to moving to another flat.

Does anyone have experience with the SG-1100? Pic related

Attached: sg1100.png (1205x625, 268K)

I'm currently planning on upgrading my current servers to 10Gb and using them to host an at-home OpenStack cluster to experiment with.
I'll probably get the UBNT 10Gb switch, use a single Supermicro server as a pfSense router and have another one as the control node.
I'm also planning on building a large SAN device as dedicated storage for all of the VMs and containers.

>I am using 2xNC364T cards which I got for $25 each on E-Bay
Or just not be a faggot and spend the same amount for 10GbE cards

>Switches are EXPENSIVE
They're not; they're dirt cheap. Nexus 5010/5020s if you want a bunch of ports, or HP FlexFabric 5800s if you only need 4 or 8.

Bitcoin Core, monerod, geth, etc etc

>Nexus 5010/5020s
40 ports for $230 shipped

ebay.com/itm/Cisco-Nexus-5020-Network-Switch-Fully-Managed-10-Gigabit-DCB-FCoE-N5K-C5020P-BF/123311213798?epid=74128909&hash=item1cb5ebcce6:g:Tx4AAOSwCH9bcwY8:rk:21:pf:0

> FlexFabric 5800s
4 ports for under $100, and you can get 4-port line cards for them too
ebay.com/itm/HP-A5800-24G-Switch-switch-managed-24-ports-4-SFP-Ports/173731642600?hash=item2873367ce8:g:EWcAAOSwn01cNkMb:rk:1:pf:0

>Switches are EXPENSIVE

you can get gigabit switches for $10

he's talking about 10 gig, you tard

That's an adorable AP. Here's what I use.

Attached: Cisco 3800e.png (600x400, 955K)

>repeater

Attached: 1545686981110.jpg (433x469, 50K)

who /r610/ master race here?

>posts stock photos
you have jack shit, let alone a wlc or c9800

Attached: IMG_20190125_2132496.jpg (3996x2664, 2.11M)

>wlc

Attached: Screen Shot 2019-01-25 at 9.35.04 PM.png (3360x2100, 1.38M)

You're fucking adorable.

Attached: like a little babby.png (579x701, 708K)

just team 10 ports of them.
anyway, seriously, why is 10GbE at home so damn expensive?

no one cares what your employer owns

In my home lab pleb

>post 3600s
You know we throw that non-AC garbage away, right?

Oh wait, you installed that garbage module that runs like trash and overheats. KYS

>i dont understand how LACP works

>why is 10GbE at home so damn expensive?
it's not, pic related

>he is too retarded to know what a RM3000AC is
keep on larping you phone jockey

Attached: Screen Shot 2019-01-25 at 9.40.25 PM.png (3360x2100, 1.51M)

>get BTFO
>get upset
Stop posting trash.

>i wish i had enterprise class gear at home
>instead im stuck working 3rd shift helpdesk for $13/hr

its okay.

Attached: Selection_018.png (737x620, 173K)

>not having your own Cisco SDA lab at home
I see it's casual hour on Jow Forums again

Attached: cappng.png (870x226, 42K)

I work for a UTM vendor so I use that as my gateway device. Full DPI with various scanning. It's nice to have as it gives me a lot of control over the network. If I ever quit, I'll probably install pfSense on the box.

again, no one cares what you have at work, or in VIRL

And considering pic related, I'm betting on that being VIRL.

Attached: Screen Shot 2019-01-25 at 9.52.14 PM.png (2894x664, 153K)

>it must be work or virtual!
>there is no way a network architect has production gear in his home lab!

Attached: 1548206586447.jpg (736x592, 42K)

if it wasn't VIRL it would show up correctly in the Cisco coverage checker

Attached: Screen Shot 2019-01-25 at 9.54.52 PM.png (1286x286, 81K)

wish this was a regular general
both home servers and home networks interest me

Attached: 1444783200268.png (500x500, 93K)

LMAO

we can make it a general. just don't let the threads die.

stay btfo user

Wasn't there a home server general?

yes but i stopped showing up regularly so it died

There's no reason to run soho tier stuff at home at all unless it's a masturbatory 'fun for the sake of fun' thing for you.

We have the technology to revive it

it doesn't have enough people with actual equipment in there, and i eventually get tired of calling people retards and poorfags who larp that their raspis are servers

well, if you have a small house, yeah, you don't need one.

>I'd love to have 56 Gbps speeds
You won't. You'll be limited by both source and destination read/write speeds.

> paying $1000+ for a switch
Mellanox SX6036 - $350 USD. If you're running point-to-point, you don't even need a switch.

I run 56 and 40 Gbit Infiniband at home as cluster and storage interconnects. For reference, the environment is:

Storage server:
1x Dell R720XD (2x Xeon E5-2660 / 64GB / H310 in IT mode / 2x 9207-8e / dual port CX354 / 12x 3TB SAS / Debian 9.6 + OpenMediaVault)
4x Dell MD1200 (12x 3TB SAS)

Compute nodes:
2x R720XD (2x Xeon E5-2660v2 / 256GB / 2x H310 in IT mode / Quadro K5000 / 4x 500GB SATA SSD (MX500) / 12x 4TB SAS / dual port CX354 / Server 2016 DC)
2x R810 (4x Xeon E7-4878 / 256GB / 6x 600GB SAS / H700 / vSphere 6.5)
6x R420 (2x Xeon E5-2450L / 96GB / 4x 4TB SAS / 2x 500GB SSD (M.2 MX500s) / 9207-8i / Debian 9.6 w/ Ceph + OpenStack)

Yeah, my environment is overkill, but I'm less than $2500 into everything, and the R720s are the only things that stay on 24x7.

Gig-E is fine for most day-to-day use.

>just team 10 ports of them.
That's pointless, because you don't get 10x the bandwidth. The horrible secret behind all the lies, propaganda and excuses when it comes to LACP is that you only get to use ONE link per data stream. That means I can have two gigabit links between my firewall and my switch, and two between that switch and my NAS, and still only get gigabit speed on a single transfer. Two devices, on the other hand, could get one gigabit each, for a total of 2 Gbit of bandwidth. Link aggregation means load balancing, so 10x 1 Gbit will only give you a total of 10 Gbit/s if you have ten different things using the link at the same time (in reality less, because MAC-based load balancing isn't perfect).

Picture somewhat related: I use dual gigabit links between my firewall and my switch, and between that switch and a NAS, and I also have my desktop connected to the firewall with dual gigabit. It doesn't give me double gigabit speeds on a single transfer.

Attached: network-overview.jpg (1920x1080, 359K)
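
To see why a single stream can't spread out, here's roughly what a layer-2 transmit hash does. Illustrative only; the Linux bonding driver's actual xmit_hash_policy options are layer2, layer2+3 and layer3+4, and the byte-mixing differs:

```python
# Illustrative layer-2 style LACP transmit hash: every frame between the same
# pair of MACs lands on the same member link, so one flow never gets more
# than one link's worth of bandwidth.
def pick_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % n_links  # same MAC pair -> same link, every time

# desktop -> NAS hashes to one link no matter how many ports you team
print(pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 10))
```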

>raspis are servers
rpi + usb drive + SMB share = file server.

May be slow as shit, but it serves files...
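
In that spirit, here's the stdlib-only version, with HTTP standing in for SMB since Samba isn't a Python one-liner; the mount path and port are made-up placeholders:

```python
# "Slow as shit, but it serves files" -- a stdlib HTTP file server as a
# stand-in for the SMB share. /mnt/usb and port 8000 are placeholders.
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

handler = partial(SimpleHTTPRequestHandler, directory="/mnt/usb")
ThreadingHTTPServer(("0.0.0.0", 8000), handler).serve_forever()
```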

I prefer Tinker Boards. One runs openHAB for the IoT stuff on a separate physical LAN, and the other two are for prod and testing in JMRI.

I'm not going to dedicate even an R220 to JMRI, because it's overkill.

What should I read and study for networking and server basics?
Preferably something platform-agnostic.

it's not. i have servers, he has servers; if someone has equipment you'd find in a datacenter, then they have servers. your raspi will never be a server

Rackmount gear in the home is a literal meme; racks are a huge waste of space and rackmount servers are loud and obnoxious. Why haven't you just bought a Dell VRTX yet?

Attached: StorageReview-Dell-PowerEdge-VRTX-Lab.jpg (850x476, 99K)

>posting stock pictures
>blade servers are quiet
keep on larping user

Google it you sack of shit

blade servers don't have fans, idiot.

the enclosure uses blower fans on the rear of the blade midplane, and the chassis mainboard uses standard Dell 80mm 2U fans running at reduced PWM loads.

it's a quiet box.

>up to 3+kW power draw
>basically 1U sized blades
>quiet

>standard dell 80mm 2U fans
Also, it uses 60mm fans that draw a fucking 18 watts each, and there are six of them in the middle. That's over 100 watts just for the middle fans cooling the disks and PCIe slots - not even counting the fans for the blades.

kimbrer.com/dell-v60tn.html

bro i fucking own it, it's quiet.

That LAG guy is objectively wrong about how LAGs work.

With the blower fans and the PSU fans, you're looking at 250W just for the fucking fans. There's a reason it has 4x 1100 watt PSUs (or more).

>i fucking own it
>but i have to post stock photos over and over again in this thread
cool story bro

anyways im going to sleep, i'll laugh at you more tomorrow morning.

You can use a one-port board as a full-blown router. You just need a VLAN-capable switch to actually fan out the different networks.
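
A sketch of what that looks like on the router side, using pyroute2 (run as root; the trunk interface name, VLAN IDs and addressing are all made-up examples):

```python
# Router-on-a-stick sketch: tagged VLAN subinterfaces on the router's single
# port, with the VLAN-capable switch fanning them out to access ports.
# Assumes pyroute2 is installed (pip install pyroute2) and root privileges.
from pyroute2 import IPRoute

ipr = IPRoute()
trunk = ipr.link_lookup(ifname="eth0")[0]  # the one physical port

for vlan_id, gateway in [(10, "192.168.10.1"), (20, "192.168.20.1")]:
    name = f"eth0.{vlan_id}"
    ipr.link("add", ifname=name, kind="vlan", link=trunk, vlan_id=vlan_id)
    idx = ipr.link_lookup(ifname=name)[0]
    ipr.addr("add", index=idx, address=gateway, prefixlen=24)
    ipr.link("set", index=idx, state="up")  # this subinterface is VLAN N's gateway
```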

It's not *too* expensive.
I run a point-to-point 10GbE link between my desktop and storage server. The server is decently specced, with a nice ZFS setup and plenty of RAM for caching (plus SSD caching on top of it), so it's definitely worth it. Nothing else on my network really needs the speed, so everything else is just normal 1GbE.
If I feel the need to start getting more 10GbE devices, MikroTik makes some good 10GbE switches that are quiet enough to be appropriate for your room. There's the CRS305-1G-4S+IN, which is 4x 10G plus a 1G port (so you'd probably still use it in tandem with another switch for your 1GbE devices). They're not horribly expensive either, only about $150.

i literally just have an AT&T router, and i put together a regular ATX computer with an i3-6100, an NVMe M.2 drive, four 2TB hard drives and a GTX 760, and i have it attached to a switch for some other peripherals, like an rPi for my ARM programming.

I ssh into it; port 2222 is the only open port on my router, and it forwards to my server.

Not sure why any typical household would need some of the insane shit i'm seeing here, like 32-port switches and rackmounts.

I originally just wanted to build an ITX computer, but it's very hard to find a good minimal ITX case that also has a ton of drive bays.

Also: we're all IPv6 in this house. Link-local ftw.

Attached: nasa.webm (360x696, 137K)
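
For reference, the classic EUI-64 recipe that turns a MAC into a link-local address; a sketch of the old SLAAC behaviour (modern stacks often prefer stable-privacy addresses instead):

```python
# Derive the EUI-64 IPv6 link-local address from a MAC: flip the
# universal/local bit, splice in ff:fe, and prepend fe80::/64.
import ipaddress

def mac_to_link_local(mac: str) -> ipaddress.IPv6Address:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02  # flip the universal/local bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64)

print(mac_to_link_local("aa:bb:cc:dd:ee:ff"))  # fe80::a8bb:ccff:fedd:eeff
```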

general topology of my household

Attached: Photoshop_2019-01-25_21-29-49.png (1155x722, 65K)

Attached: Photoshop_2019-01-25_21-32-55.png (1158x718, 65K)

the Mellanox cards do have an x8 physical connector, but a single-port card can run at x4 PCIe 2.0 without much hassle.
also, you can get a 4-port 10G SFP+ MikroTik switch for £140 these days; still expensive, but achievable for most home labs.

10Gb RJ45 transceivers are still expensive as fuck though, wish they would come down in price.

Attached: 1528301378157.jpg (4608x3456, 3.64M)

get Intel 4-port GbE cards if you are looking for that NIC setup. they are slightly more expensive, but fuck me do they just werk. BSD, Loonix, wangblows: no issues.

10G Intel NICs are still overpriced though

I had the SG-1000 a while back, cute little device. Is this one ARM-based, and does it support the new encryption requirements?

>FCoE
What licenses do these come with, if any? Are they perpetual?

>just team 10 ports of them.
Thanks for the chuckle.

It was only bixnigger in there shitting the threads up so nobody actually cares to discuss the technology.

I'd rather have no /hsg/ than have bixnigger in there.

This can be expanded to every hobby and consumer activity available.

Nah. People regularly say stuff like "the consumer stuff can't NAT fast enough" and other objectively untrue things to justify why they do this.
I don't have a problem with people dicking around with this stuff for fun, but people pretending it's of some practical motivation annoy me.
>hrm yes, I need ASIC-accelerated packet engines for my 5 computers watching chink cartoons hurr

anything can be a server, user. quit being a gatekeeping neckbeard

Attached: 1547832778711.png (4312x4256, 359K)