/hsg/ - Home server general

I guess I will tripfag edition.

--> Quick Questions, Quick Replies

>Why would I want a NAS/homeserver?
If you ask why, then you don't need it.

>I want a NAS/HTPC/Plex what should I get?
RPi3 or Odroid XU4/HC1. The higher-end Odroid models have USB 3 and a USB bus separate from the Ethernet one.

>B-But muh ARM
Then check out onboard x86 boards like the J4105B-ITX, J4205B-ITX, or J4205-ITX. All of them have SATA and USB 3.

>What's the best [software] for doing [ask]?
Specify your question and elaborate. If you want help, contribute something from your side.

>Which disk is better for my homeserver?
The general opinion, give or take some details, is that WD Greens are enough if you disable head parking, and WD Reds are overpriced Greens. Toshiba and HGST are also pretty good.

---> FAQ & Tips

---> News
Western Digital to close its HDD plant in Malaysia
>tomshardware.com/news/western-digital-hdd-malaysia,37473.html
Intel Sends In A Final Batch Of DRM Feature Updates Targeting Linux 4.19
>phoronix.com/scan.php?page=news_item&px=Intel-DRM-Linux-4.19-Final
Updated Debian 9: 9.5 released
>debian.org/News/2018/20180714

---> Old Thread

Attached: wallhaven-416681.jpg (1732x1155, 365K)


Bumping with changelog:

>News Updated

We're trying to improve both pastas under the "FAQ" section with a buyer's guide of some sort. Whoever feels like it is free to write one and post it here so we can check it over.

So, if I wanted to do 10GbE P2P, all I need is a 10GbE NIC (Mellanox ConnectX-2) and a DAC (SFP+ direct-attach cable)? I'm only accessing my storage from a single device (my desktop), so this should be ideal.

I'm starting to realize 1GbE is going to be a bottleneck with my striped mirror RAID.

Attached: s-l1600.jpg (1600x1600, 123K)

It's SFP+, anon, but yeah, that's all you need. Don't put it on your default network. If you're running 192.x.x.x, use a separate range like 10.x.x.1 and .2 for the storage link. Also, 1Gb Ethernet will be a bottleneck for just about any regular SSD or spinning HDD over 1TB anyway.

Maybe it should be mentioned in the OP or something that the first thing you do is replace your NIC, since that seems like a pretty important step.

>If youre running a 192.x.x.x use a serperate one like 10.x.x.1 and .2 for the storage link
For what purpose? I watched a video and the guy didn't have any issue assigning it within the 192 range.

Attached: 10gb Peer to Peer With Your FreeNAS Server (1080p_60fps_H264-128kbit_AAC) (0001).png (1920x1080, 1.05M)

If you're using it as a point-to-point network, then my way is better performance-wise. I presume you're using your normal 1Gb network card to connect to a switch & router? If so, then my way is better. If you've got a switch and a NAS/SAN with 10Gb SFP+ ports, then just connect them up as usual on 192.x.x.x.
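
On Linux the actual setup is one command per side, something like this (a sketch; the interface name is an assumption, check yours with ip link):

  ip addr add 10.0.0.1/24 dev enp3s0   # desktop end of the DAC
  ip addr add 10.0.0.2/24 dev enp3s0   # NAS end

No gateway on either side, so nothing on 10.0.0.0/24 ever tries to route out through your normal LAN.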

>Tripfag
>Comfy
Please kill yourself, OP

Can I install pfSense on a Raspberry Pi, or should I use a PC with 2 NICs?

Can someone suggest some servers from eBay? An anon here suggested getting a bunch of enterprise HDDs and a server like an R-something-something.

So I'm trying to install plex media server on a headless server using the guide here:

smarthomebeginner.com/docker-home-media-server-2018-basic/#Basic_Docker_and_Docker_Compose_Primer

I've set up and secured SSH, and set up Portainer, Docker, and Watchtower, but with Plex, when I try to connect via IP:port it can't find the server. I've tried swapping the network mode between bridge and host, but that hasn't helped. Any suggestions?

Can you curl it from the same machine that runs the container?
If yes, then check whether it's listening on 0.0.0.0.
If no, then check whether you've used -p.
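
Quick sketch of what I mean (container name and port are assumptions, match them to your compose file):

  # on the docker host itself
  curl -I http://127.0.0.1:32400/web
  # see what the container actually published
  docker port plex
  docker logs plex | tail -n 20

If curl works locally but IP:port doesn't from your desktop, the problem is the -p mapping or a host firewall, not Plex itself.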

PowerEdge R710. Cheap and robust.

Thanks, but I realised it was because I spun the container up after the 4-minute claim token had expired, so the server never got tied to an account, basically. Thanks for the help though!

Ahh, Okay. No problem Dude.

If you are doing a short run, you can get away with a cable like in your pic, but if you want to do a run longer than 1m I would highly suggest getting some multimode SFP+ modules and fiber cables.
>For what purpose? I watched a video and the guy didn't have any issue assigning it within the 192 range.
It just has to be something that your computer won't try to route out onto your LAN. Assign the address on your NIC to something that is on a different subnet from your normal LAN, or from anything routable via your normal LAN's gateway. You can use a 192.168.*.* address if the third octet is different from your LAN's, assuming you are using a /24 subnet mask.
I don't think there is a RasPi pfSense port, but there is LEDE for the Pi, which can do firewalling and routing etc. You are not going to get very good performance though, because you will have to use a USB-to-Ethernet adapter for one of the ports, and the soldered Ethernet port already goes over a USB 2.0 bus, so you will be bottlenecked by that. You might get better performance with something like the Odroid C2, which has a gigabit NIC that isn't hanging off the USB bus, but you would be better off with an x86 machine with PCIe.

Attached: servers.jpg (2952x5248, 1.5M)

What rack is that? Would you recommend it?

That anon was probably recommending an R510, which is a pretty good option. Make sure to get one with an H310/H700/H710 RAID card, not an old PERC 6/i or similar, because those only support 2TB and smaller drives. R510s can take 8 or 12 3.5" HDDs. Mine only has the 8-drive backplane. It is a nice machine though; it uses 80W under normal load, which is decent.
I found it for someone last thread, but I can't seem to track down the Amazon link now. It is a decent rack, but pretty much any rack will do the job except one of those crap StarTech things. You are better off just looking on craigslist for a few weeks than ordering something new online.

It looks a lot like the same chink shit Norco rack I have. It was like $150 on Newegg; might be different now. Works okay, but you're right that if there are racks to be had on your local craigslist they'll probably be a better deal. There weren't on mine though, podunk midwestern area; I doubt there's many (or any) businesses around here that have rack servers at all.

>dell 2950
barfo
for the power costs alone you could replace that with a T30 and some RAM and have a usable, quiet, efficient server.

A lot of server racks are whitelabeled and sold under a lot of brands. A lot of the time you can get them for free if you call around to recyclers etc. and have a truck to haul them away in.
Where the fuck do you see a 2950, retard? No way I would be caught running that shit. From top to bottom, that is:
>R710
2x L5640
72GB DDR3 ECC
2x 240GB SSD 4x4TB WD RED
>R710
2x X5675
72GB DDR3 ECC
2x240GB SSD 4x4TB WD RED
>HP MSA88
12x 8TB WD RED/ shucked whites
>R510
1x L5640
58GB DDR3 ECC
2x 60GB SSDs
>R720
2x E5 2660
128GB DDR3 ECC
No HDDs (waiting for ebay order, which is why it is powered off)

And loud, and powerhungry.

>PowerEdge R710. Cheap and robust.
As a 2U rack mount, what's the fan noise like? Have they gotten quieter over the last decade?

Now that's a helpful post. Thank you user.

Twinax/DAC should be fine up to around 4m.
I would recommend sticking to DAC if possible, but if you go MM, make sure the SFP+ modules you get are compatible. Some brands are picky about what they get plugged into.
That being said, there isn't anything really wrong with doing MM unless you have cats. Just make sure you don't piano-wire the cable.

>loud
Everything newer is loud too, they are rack servers. If you want quiet, go tower server or re-purpose a workstation like a Z820.
>power hungry
My semi-loaded R710s use between 90W and 180W from low to high load. R720s and newer idle around 70W and get up to the same 180-200W under load depending on the spec. That's really only around a $30 difference in power costs a year. Because R720s and Sandy/Ivy/Haswell-era stuff usually cost three to four times as much for only a 10-15% performance improvement, the R710s are pretty great in terms of price to performance. I sold one of my R720s for $750 and got an R710 for $200 that benchmarks the same.
see above
I have had issues with DACs, especially the really cheap ones like that poster will probably buy, once you get over 1m. Might just be chink shit though. SFP compatibility matters too, and that is something I forgot to mention.

>Where the fuck do you see a 2950 retard?
ok fine. they looked like 2950s through the lens of your potato camera.

>they looked like 2950s through the lens of your potato camera.
Maybe if you have never seen a server before. They aren't even the same colour and they don't have the same drive trays.

>fan noise
It's like a regular desktop when idling. At 70% load it's like a small engine. Wouldn't recommend in the living room

Chinkshit is probably the problem. We try to run DAC in datacenters when possible; it's a lower-latency connection and less susceptible to breaking.
Panduit is a good brand to buy from, but probably expensive.

lol they look nothing alike
what a faggit

Attached: 2950 R710.jpg (795x285, 82K)

is this sarcasm?

>grey rail flaps and drive trays instead of black ones
>square, black LCD panel instead of round grey one
>VGA port is grey instead of blue and on the left of the LCD
>USB ports are on the left of the LCD
>IT FUCKING SAYS R710 INSTEAD OF 2950
way to derail the thread idiots

>look at these tiny details!
overall look is very similar, autist

>IT FUCKING SAYS R710 INSTEAD OF 2950
Can't tell because of your potato cam.
Of course you can tell the difference when I put 2 stock photos side by side.
But trying to figure it out from a blurry photo mostly blocked by your waifu pillow, going from memory alone, and knowing that /g/ has a habit of posting shit-tier stuff in 'home server' threads, you can excuse me for mistaking your barely useful R710 for a useless 2950.
So get fucked.

>barely useful
>15000 passmark
>25 active VMs/node with resources to spare for failover
>100w power usage
>barely useful
why are you here?

>ignored the rest of his post
damage control activated

>15000 passmark
>100w power usage
Why are you here? I thought we had rules against niggers.

Attached: why you always lying.png (610x465, 337K)

bixnood fuck off

This is near peak load with 35 VMs because I shut off one of my nodes for maintenance. Passmark might be slightly optimistic, but cool X5675s and X5680s can hit 13500-14000 no problem.

Attached: CramIt_IMG_20180718_10442184920180718_104505.jpg (2952x5248, 1.73M)

R720xd is superior in every way.

It will also cost you three times as much to hit similar performance. R720s are nice machines, and in the next two years they will be the go-to machines for homelabbers I am sure, but they just haven't reached the price threshold where they are worth it yet.

>131w
>less than 15000 passmarks
so you are now admitting you lied.
I guess that's an improvement.

35 VMs. what in the fuck are you doing with all those microscopic 2GB, processor overcommitted VMs?

>so you are now admitting you lied.
Internets r serious biznez.
>35 VMs. what in the fuck are you doing with all those microscopic 2GB, processor overcommitted VMs?
Most of them are small apps and services I host for retards from the wired. I think right now some of the more popular services are an image upload service, public tracker, IRC server, xmpp server, Gopherhole, a really cool BBS/MUD, an internet radio, stuff like that. I get a few thousand unique connections a month, but nothing insane. For sure a waste of power and space, but aren't most hobbies?

>It will also cost you three times
No?

R710 - ~$300
R720 - ~$450

>similar performance
All but the lowest-end E5s are faster than the fastest 56xx for media and virtualization.

>haven't reached the price threshold where they are worth it yet.
Quieter, denser, less power (CPU), and they take the same RAM the 710s do.

Nope, sorry, please try again...

>Internets r serious biznez.
so do you have a decent pipe for your bizniz?

It looks like there are a few R720s out there for around that price, but I consistently see $100-$200 R710s with dual 56xx procs, whereas ebay.com/itm/Dell-R720-8-Bay-SFF-Server-x2-2-20GHz-E5-2660-16-Cores-32GB-H310-SPS-4-Trays/192601504132 is the only R720 under $600 that is worth shit. I have an R720, and it uses the same amount of power with dual 2660s as my R710 with dual X5675s, and it only benches a bit above it in multithread, pretty much the same in singlethread. Maybe I was off on my timing, but R710s are not useless in 2018.
80/80 with a static from comcast. No data cap either for $60 a month. Not the best, but not awful. I don't have a failover link any more.

Cheers. Looking into this option now.

>Maybe I was off on my timing, but R710s are not useless in 2018.
I never said the R710 was useless. My exact statement was "R720xd is superior in every way"

Insights helpful. Cheers.

>80/80 with a static from comcast. No data cap either for $60 a month. Not the best, but not awful.
so a single dynamic IP? 2Mbps per VM?

Attached: there are levels of survival.jpg (252x200, 11K)

>so a single dynamic IP? 2Mbps per VM?
Are you retarded or illiterate? I said static. I only have a single static IP.
> 2Mbps per VM?
Sure, but that isn't really an issue at all. Some things, even at max load with hundreds of active connections, such as a public tracker, IRC server, or XMPP server, only take a few hundred Kb/s at most. Obviously other services such as an SFTP server, image upload service, etc. will take more. Right now I have everything QoS'd to around 5Mb/s per VM, and I haven't gotten any complaints, especially since I do it for free :^)
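
The cap itself is nothing fancy, just a couple of tc lines on the host bridge (interface name and rate here are examples, not my exact config):

  tc qdisc add dev vmbr0 root handle 1: htb default 10
  tc class add dev vmbr0 parent 1: classid 1:10 htb rate 5mbit ceil 5mbit

Most hypervisors can also rate-limit per virtual NIC in the VM config, which is the lazier way to do the same thing.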

>I said static.
I saw comcast and my vision went blurry from laughter

All ISPs suck; at least I don't have to use Comcast consumer, aka Xfinity, that would really suck. The only other option in Denver is CenturyLink, which only does DSL unless you drop $2500 to get a fiber line run, and then it's $150 a month for two years. No way I'm putting in that much commitment when what I have works pretty well.

>want to get started migrating my python scripts to be used with freenas
>ssh into freenas
>jexec freenas-jail bash
>doesn't work
>n-nani..?
>jls
>no jails running
>god dammit
>have to go into webgui and manually start jail even though i specifically checked that it should auto-start at boot
>jls again
>no jails running
>what?
>remember that i have to actually start the jail, then enter the shell for it to be started for some dumb reason
>jexec freenas-jail bash finally works
>ok time to work
>cd /mnt/freenas/tools/etc
>no directory found
>what the fuck?
>try again
>remember jails cant access anything outside the jail
>have to create alias to the storage path
>cd /alias/freenas/tools/etc
>ok lets do this
>python shit.py
>script doesnt work
>ugh
>test it on windows to make sure it can work with my smb share path at least
>cd /mnt/freenas/tools/share
>"CMD does not support UNC paths as current directories"
>FUCK
>google how to cd into UNC paths
>need to do some weird shit like pushd /unc/path to create temp virtual drive
>do that
>werks with smb share path
>test in jail
>doesnt work
>forgot that since jails cant access paths outside of jail, it needs to use the alias path
>finally works

God damn is this shit starting to get on my fucking nerves. Sadly I don't think there's a more convenient way to script it. It's just this path fuckery that's annoying.
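
For anyone else who hits this, the order of operations that finally worked boils down to this (jail and dataset names are made up, use your own):

  iocage start tools        # jail must be running first; the webGUI works too
  jls                       # confirm it actually shows up now
  iocage fstab -a tools /mnt/tank/tools /mnt/tools nullfs rw 0 0   # map host storage into the jail
  jexec tools /bin/sh       # now you can enter it

That's on an iocage-based FreeNAS; on the older warden-style jails the equivalent is the "Add Storage" button in the GUI.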

I'm trying to use the transmission-openvpn build with Mullvad to download some new stuff, but whenever I try to connect it fails, saying "RTNETLINK answers: Permission denied". From having a look around, it seems like it could be due to it trying to use IPv6, but I can't seem to force that off, as the compose file won't allow it. Any tips or help?

What's a good ultra-low-power SoC or appliance for a pfSense router that can handle a constant OpenVPN client connection and at least 15Mbps?
2 NICs are plenty, pref onboard, but I have access to a PCIe 4x dual-GbE Intel NIC if I need it.

I'm currently running an ASUS RT-N66U with DD-WRT and I miss traffic shaping.

Any opinion on RX300 S7?

PC Engines APU.

Thank you

Thank you

Page 8

And page 9

This is the fate of all shit subreddits. Maybe one day you retards will learn.

Learn what? There's simply not enough anons on /g/ that are into building their own servers.

Most normal people don't care about virtualization or hosting shit

Too many fucking generals, you mongoloid. And lack of participation in your thread doesn't mean there's a lack of people doing shit with servers.

Thanks for bumping my general :^)

>Too many fucking generals you mongoloid
why is that a problem?

>And lack of participation in your thread doesn't mean there's a lack of people doing shit with servers.
it kinda does. pc building threads always go fast, because /g/ is mostly full of /v/tards that want gaming rigs. PC building is also significantly easier to get into.

also this ain't my thread.

No worries. Someone has to kick this horse. At least you are not as much of a retard as the audio general's OP.

Why does FreeNAS recommend such high specs when pre-made NAS boxes have shitty ARM SoCs and 1GB of RAM?

FreeNAS uses ZFS, which wants a good amount of RAM for caching. A lot of NAS boxes use hardware RAID or lighter-weight software RAID.
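
Most of that RAM goes to the ARC. If you want to cap it on FreeNAS, it's a single loader tunable (the 4GB value below is just an example):

  # System -> Tunables, type "loader"
  vfs.zfs.arc_max=4294967296

That's also where the old ~1GB of RAM per TB of storage rule of thumb comes from, give or take.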

I bet when you were typing out this drivel you thought it would be really interesting, possibly humorous at parts.

Attached: mfw.jpg (462x462, 33K)

No, my rant wasn't meant to be humorous, but you sure as hell felt like responding to it, retardo.

Pre-made NAS boxes generally use Linux mdadm RAID, which is a lot more efficient than ZFS [and more stable in performance, although ZFS's fluctuations certainly don't matter for all uses].

ZFS has a few more features, though. Some of them [deduplication, for example] drive the already high base requirements even higher.

What absolute shit advice
>I want a NAS/HTPC/Plex what should I get?
>RPi3 or Odroid XU4/HC1. Odroid upper models has USB 3 and USB bus separated from the Ethernet one.

Why recommend something without Gigabit ethernet? Why something without SATA?

friendlyarm.com/index.php?route=product/product&path=85&product_id=222

friendlyarm.com/index.php?route=product/product&product_id=180

You're welcome

>Why recommend something without Gigabit ethernet? Why something without SATA?
Wat. The Odroids are in the recommendation.

And the RPi3 is cheap, available locally in more places, and despite its flaws still at least fast enough to handle most people's backups and/or stream some music or video without transcoding.

>Odroid XU4
No one should use an RPi for a server, not when there are so many better and cheaper options.

> when there are so many better and cheaper options
In a lot of places, something makes individual imports expensive - import processing fees, postal fees, whatever.

We've repeatedly had people in these threads who, mainly for this reason, decided on an RPi3 over an obviously faster Odroid HC1/2 or such.

I've recently moved my home server to a different machine (newer hardware, first-gen AMD FX to a Haswell i5) and I've experienced a few kernel panics. I think I've fixed the cause, but for some reason, when it happened it caused a network-wide DoS even though routing, DNS, DHCP, firewall, etc. are all done on separate hardware. Even with static IPs, nothing could connect as long as the machine was on the network. The moment I powered it off or disconnected it, everything started working again. Anyone ever heard of such a thing?

is there such a thing as a FOSS switch?

I'll be glad when they finally roll out 10GbE NICs that are consumer-friendly priced with regular RJ45 connectors. Bulk data transfers really hit a wall at old 1GbE speeds these days. NIC teaming is not really practical: you first need a switch that supports it, then you need two NICs in each workstation/server, and last but not least your operating system has to support teaming, and you double the cable requirements. Try to transfer 10+TB of data over 1GbE in one go: even at full gigabit line rate (~118MB/s after overhead) 10TB takes about 24 hours, and at a more realistic 55-60MB/s over SMB you're right around the 50-hour mark. If you get hit with a power failure or some other shit during that time, you're fucked. You can't babysit it either, because there's that thing called "life" that still goes on. That time only goes up as you expand the storage. Back in the early 00s 1GbE was fast; most stuff still only used 100-megabit links. But times have changed.

I doubt they'll start throwing 10G Ethernet ports on consumer-level gear, since consumer-level gear is so starved for PCIe lanes. HEDT stuff, yeah, I bet it'll be more common; some TR4 boards have it. But without consumer gear having it, the switches aren't gonna get any cheaper. When it does start showing up, I bet it'll be as two 10G ports tacked onto a 1G switch.

Is their OP too much of a retard?

Are network switch brands just a meme like everything else in tech, or should I actually look for something reputable? And if so, what are some "good" brands?

Could easily have been causing a broadcast storm of some kind.
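
Easy to check from any other box on the LAN next time it happens (interface name is an assumption):

  tcpdump -eni eth0 broadcast or multicast

A sick NIC or a panicked kernel spewing broadcast frames will light that up at thousands of packets a second, and every host on the segment has to process each one.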

What's a cheap and efficient way to create your own media/storage server to stream media to TVs or other devices around the house via WLAN/Ethernet?

Thought about simply buying something like a $200 Synology or QNAP plus adequate 10TB drives.
Alternatively, does it make sense to build your own NAS if you have never done so before and you're new to NAS? Building PCs is no problem for me; it's more about cost-effectiveness.

Cheap is just buying an Odroid HC2 (or even an RPi or such) plus a HDD.

Plus a Chinese $20-100 HTPC box if your TV can't play video streams on its own.

Or if you need six 10TB drives, you can build a low-end onboard-x86 machine (~$60-120 for the mainboard and CPU, plus maybe 512MB-2GB RAM / peripherals / case / PSU according to exact needs and taste), use the 4 SATA ports these boards tend to have, and add another 2-4 port (~$20) PCIe controller.

Above that, you build a more server/gaming-type machine for up to 20-ish drives.

If you need even more drives, basically get a full 4U rackmount storage server, like the Backblaze Storage Pod v6 (60 drives). Only like $10-15k with cost-effective per-TB drives, and I figure at that point you probably just have enough storage. Though of course you could even use easily available hardware and software to cluster these up...

i never understood why /hsg/ exists

like.......... you think shoving a raid array into your /bst/ is somehow impressive? like fucking wow you installed a hard drive. good job.

pls spoonfeed a retard such as myself.
what is the cheapest, smallest NAS solution that i can use to house 2 mirrored 2TB drives with 30+MB/s throughput?
i just wanna keep my family pictures safe in a corner and never have to worry about it.

Still a Rock64 or Odroid XU4/HC1 or such.

Yes, and the other drive[s] go in USB3+UAS docks/enclosures. Run Linux mdadm RAID over them as you normally would.
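
Minimal mdadm mirror sketch (device names are assumptions, check lsblk first; the config path is the Debian one):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  mkfs.ext4 /dev/md0
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # so it assembles on boot

A mirror has basically no parity math, so even a Pi-class board clears 30MB/s; the USB bus is what limits you.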

If 2 drives are housed in a dual-bay enclosure, will they appear over the USB3 connection as separate drives or as a single one?

I have an R710 with a solid 24 available cores. Planning to run a bunch of server VMs, including pfSense. Obviously, if I have a power outage, the UPS will force everything to shut down. This means getting to the hardware and plugging in a different router. Is there a recommended solution for this? I considered using a laptop as a dedicated pfSense box instead, but that comes with its own caveats. Ideally the dedicated firewall box would also be providing VPN access, so I don't want to depend on anything too weak. Such an edge case, I know, but I'd like to challenge myself to build robust solutions.

What GPUs are 2U? Is the 1080 Ti 2U in any variant?

just noticed that i can ping every STB/VoIP box on my ISP's 10/8 network
aren't those things supposed to be locked down?

I'm looking to do a 3- or 5-drive RAID-Z1 or RAID5 home NAS and torrent box. I'm debating using my old Phenom II ATX system vs buying something quieter, lower-power, and smaller. Are there any small SoCs that would do the job? Or am I looking at x86 in this territory?

I thought this was going to be a major issue when I virtualized my firewall, but honestly I just set it to autoboot with the server, and I have not had any issues.

I guess there isn't a perfect thread for this, and this one is the most appropriate, so here it goes.

It's long past time I started organizing my shit, so I have to shove all my stuff into an SCM, but I don't particularly trust remote hosts these days, especially with source code. So I'd like to host my own little Mercurial server. After researching it, I've been thinking of RhodeCode for the server and TortoiseHg for a client, since it seems to interface well with Git, which I'll eventually have to deal with. Anyone familiar with these able to say how well they work?

Additionally, I know that the purpose of an SCM is to keep a source tree's history faithful, but is it possible to insert a version between two already-committed ones? I ask because I intend to add all the versions of some Firefox addons, and it's possible I'll skip one by accident, or find more intermediate versions on the dev's site that aren't on AMO or some shit.

Attached: aXm6232xjU.jpg (650x650, 74K)

Wat

I just want a dedicated NAS machine to store git repos on and rsync my shit to. I was looking at a Synology unit, but apparently those are bad.

>I have to shove all my stuff in a SCM but I don't particularly trust remote hosts these days, especially with source code.
They can only change something without you noticing if they break SHA-1, and you can also sign commits with PGP.

>Additionally, I know that the purpose of SCM is to keep a source-code's history faithful, but is it possible to insert a version between two already committed ones?
It is possible to move commits around and insert new ones with an interactive rebase.

This is for git, but I'm sure it's similar with hg.
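
A sketch of the insert-a-missed-version case in git (commit hash and filename are made up; the break command needs git 2.20+):

  git rebase -i abc123            # abc123 = the commit just before the gap
  # put "break" as the first line of the todo list, save and quit
  git add addon-1.2.3.xpi
  git commit -m "addon 1.2.3"
  git rebase --continue

Every commit after the insertion point gets a new hash, which is the whole point of the history being "faithful": you can't slip one in without it showing.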