Home server general - /hsg/

Globglogabgalab edition!
youtube.com/watch?v=W1dRBWyf6z8

>Are you interested in getting better at Linux or BSD administration and configuration? Becoming a systemd expert? Or maybe you hate that shit and want a cozy little BSD machine to run services on and interact with? Or want to practice more advanced and complicated networking setups?

>chat
> discord.gg/9vZzCYz
> riot.im/app/#/room/#homeservergeneral:matrix.org

Attached: cad75e024d555ce2484d8f8286a26da6cdb963a77465d2546c81be352e3202ec.jpg (720x960, 84K)

Other urls found in this thread:

blog.ubuntu.com/2015/05/18/lxd-crushes-kvm-in-density-and-speed
workaround.org/ispmail
mailcow.email/
mailinabox.email/
flurdy.com/docs/postfix/
hardkernel.com/main/products/prdt_info.php?g_code=G151505170472
hardkernel.com/main/products/prdt_info.php?g_code=G149142327280
pastebin.com/SXuHp12J
twitter.com/NSFWRedditImage

Proxmox or VMware?

Is there any reason to upgrade from Debian 8 on my simple home server to something new and shiny if it werks?

VMware is supposed to have a better UI, but doesn't support things like containers and costs money when you go over 8 vCPUs.
I'd run Proxmox for the VMs and the easy usage.

Well, not at the moment, since Debian has an LTS program now and oldstable is still getting security updates. (You're installing those, right?) But I think that's slated to end some time after Buster gets released next year. And Debian LTS work in general is kind of a "Hey, companies, if you want this, you can do it, or pay to have it done; if you don't, well, upgrade to the current release, you've had a few years", so you should probably start thinking about upgrading.
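If you just want to make sure the security updates are actually landing, something like this is enough on jessie (a rough sketch; unattended-upgrades is the usual way to automate it):
apt-get update && apt-get upgrade           # pulls in pending security updates
apt-get install unattended-upgrades         # or let it happen automatically
dpkg-reconfigure -plow unattended-upgrades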

What do you use it for?
According to the release notes you're missing out on 15346 new packages, but you should still be able to update and install them via apt.
I see no reason to, unless you want to dodge a CVE or something.

I've seen this VMUG programme being mentioned a few times. Guess that would be the best option if money wasn't an issue.
I don't really need to run Windows or other operating systems, so would it be worth focusing on LXC on Proxmox? Never used LXC before.

If all you want to run is LXC, installing Proxmox is a bit overkill.
If you can get used to it, the CLI programs for LXC, KVM and libvirt are plenty to get started with containers. Fair warning though, I've had some problems with the LXC commands either taking a long time to run or not completing at all; might just be my setup though.
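For reference, the low-level LXC tools go roughly like this (container name and distro are just examples):
lxc-create -t download -n test -- -d debian -r stretch -a amd64   # build from the download template
lxc-start -n test
lxc-attach -n test      # shell inside the container
lxc-stop -n test && lxc-destroy -n test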

Well I would want the odd VM here and there, but most everything would be on Linux. I'm still extremely new to having my own homelab setup so I'd have the flexibility to go down the VM route and experiment with some other stuff if I went with Proxmox. I'm certain I'll go with either it or VMware.

Proxmox is easy and works well.

Attached: Screenshot-2018-5-12 proxmox1 - Proxmox Virtual Environment.png (220x740, 24K)

I'm a complete novice when it comes to storage (and most things to do with a homelab).
I see ZFS and Ceph mentioned a lot in relation to Proxmox. Is the Proxmox wiki the best place to begin to learn to use these?

I currently have one SSD and two HDDs in my server, hoping to run some VMs and containers, with the main storage users probably being a small Nextcloud instance and some torrent service. Any tips for how I should lay things out?

ZFS and Ceph aren't a must, though I recommend learning both of them.
The Proxmox wiki is probably a fine starting point, but reading the documentation on the official websites will give you more info.

Also, in case you didn't know: like with normal systems, keep the VM system disks on the SSD. You're going to love those IOPS with multiple systems running simultaneously.

>I see ZFS and Ceph mentioned a lot in relation to Proxmox.
If you only have one server you don't really need Ceph, and I'm not even sure if it's possible. ZFS is nice though and you should give it a try.

>Is the Proxmox wiki the best place to begin to learn to use these?
Yes. I've actually never looked somewhere else.

>Any tips for how I should lay things out?
SSD for system aka / and the two disks as storage for files and backups.

Thanks user. I know about the running VMs on SSDs at least. I work as a sysadmin with a large virtual infrastructure for a living, though I'm relatively new to it.

>ZFS
Want to sum ZFS up for me in one line? I'll be sure to read the documentation on it shortly.

Is anyone running a personal music streaming service?
I was thinking about putting mpd on a Pi and Transmission for remote downloads... any tips?

>ZFS
A filesystem with a volume manager, snapshots, checksums, compression and RAID integrated. Just read the Wikipedia article.
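To make it concrete, a quick sketch assuming you already have a pool called tank:
zfs create -o compression=lz4 tank/media    # dataset with transparent compression
zfs snapshot tank/media@2018-05-12          # instant, nearly free snapshot
zfs list -t snapshot                        # see what you've got
zpool scrub tank                            # walk every checksum, repair from redundancy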

Sounds good, thanks user. Reading briefly on the Proxmox wiki, it says it needs a minimum of 8GB of RAM, which may be a bit of a squeeze for me; I currently have 24GB.

>You're installing those, right?
I'm updating it once every couple of months.
Mostly torrenting, plus serving videos over Samba and music over DLNA; it's also a Git server (Gogs) for my projects.

Currently running a Surfboard SB6183 and an R7000 running Tomato. Thinking of switching to a pfSense box and Ubiquiti gear. I usually get solid speeds; would I get a decent speed bump switching? I don't do anything too "advanced", so much of pfSense would be wasted on me. Just curious about performance improvements and having something to tinker with.

LXC+Proxmox is tits. Yeah, you can manage the containers manually, but setting up resource limits by hand is a bitch, and with Proxmox you can just give a container additional storage on a mount point. I'm not sure how this is done, because I could never get anything like that just using lxc/lxd commands on unprivileged containers. Fucking magic.
Add in nice things like a breakdown of resources, built-in backup, a decent network config area, first-class support for KVM, and the all-in-one package feel, and it's just a good bit of software.

Cons: The interface is not at all intuitive and the docs are somewhat sparse. There's a bunch of cringey YouTube videos with step-by-step walkthroughs of simple shit using a synthetic voice. Watch a few. They're painful, but the education on how to actually navigate the web UI is vital.

Protip: Set up a ZFS pool for use as container storage. This makes snapshots and backups way more efficient. Ideally you'd make a zpool over a couple of raw disks, but you can make a file-backed pool if that isn't an option. I have two pools: one raid1 on two SSDs, and one raid1 on two 2TB spinning disks. The SSD pool is used for the base filesystems everywhere; the spinning-disk pool provides additional storage on a mount point to containers and VMs that need it. It's a very comfy setup.
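Pool creation itself is one command either way (device names and sizes here are placeholders):
zpool create ctpool mirror /dev/sdb /dev/sdc   # raid1 over two raw disks
truncate -s 100G /var/lib/ctpool.img           # or file-backed, if raw disks aren't an option
zpool create ctpool /var/lib/ctpool.img
Then add it as ZFS storage in the Proxmox web UI and point your containers at it.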

ZFS does a huge amount of error correction and all kinds of fun stuff.

It comes with substantial performance penalties compared to other filesystems, but gives you absolute integrity.

RAM is used for the ARC; the more you feed it, the better it will perform despite all the error correction and CoW fragmentation.
Expect to reboot your system rarely, as a reboot wipes the ARC and you'll have to warm the cache again by doing your regular stuff on the PC.
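If you're curious how warm it is, ZFS on Linux exposes the counters in procfs, and the module has a knob to cap it (the size here is just an example):
grep -wE 'size|hits|misses' /proc/spl/kstat/zfs/arcstats                # current ARC size and hit/miss counters
echo 'options zfs zfs_arc_max=4294967296' >> /etc/modprobe.d/zfs.conf   # cap ARC at 4GiB, applies after reboot/module reload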

Posted from my ZFS mirrored laptop.

Attached: 1525146698249.jpg (444x332, 17K)

Question: is there any reason to mess with containers (other than "it's interesting") if I'm just running stuff that I want for personal use? I'm not trying to learn some particular skill or software package to get a job or use it in my current one.

You can deploy a ton of services without using many resources compared to VMs.

You also get much closer-to-the-metal control, rather than shelling into everything and piping your scripts over SSH to VMs.

Containers give you isolation without VM overhead. You can have one be Debian and another CentOS, you get snapshots and independent rebooting, you get good security isolation from the host, and you get the ability to easily migrate between machines or even maintain mirrors and fail over. These are all features you typically get with traditional VMs like KVM, but the overhead is almost zero because large parts of the kernel are shared between the host and containers. Spinning up a new container, booting it, and getting a shell into it with LXC is two commands and 10 seconds.
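With LXD specifically, that literally looks like this (image alias and container name are just examples):
lxc launch ubuntu:18.04 test    # pull the image and boot the container
lxc exec test -- bash           # shell inside it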

I had no idea what I was doing but managed to configure my first home server this week. Thanks Jow Forumsentooman.

Attached: 1520143545699.png (198x280, 114K)

>I don't know what Photon is
>I don't know what a type 1 hypervisor is

Stop being a retard and just get the keygens

>kernels require large amounts of memory

Is there a mail package in Debian that just werks?
I'm tired of reading hundreds of config files from 5 different packages to set up a mail server.

I already set up Postfix/Dovecot, but SpamAssassin's SPF validation wasn't working. So I started working towards Courier, but that broke my Dovecot install and there are no guides on the internet for setting up Courier. Then I found postfixadmin and was able to set it up, but that didn't fix my Dovecot/Courier issue.

No wonder Microsoft makes shitloads from Exchange servers.

Did I say the kernel memory was the important part? No, I said the performance improvements are a result of sharing the kernel.
blog.ubuntu.com/2015/05/18/lxd-crushes-kvm-in-density-and-speed

If you used any normal virtualization and LXC you'd notice an immediate difference. Especially for network or CPU heavy workloads it just kicks traditional virtualization's ass. There is also memory usage benefit, it's just not as big a deal.

>blog.ubuntu.com/2015/05/18/lxd-crushes-kvm-in-density-and-speed
>The server with 16GB of RAM

This article was written by a retard; even in 2015 any hypervisor host would have far more than 16GB of RAM.

>was able to launch 37 KVM guests, and 536 identical LXD guests. Each guest was a full Ubuntu system that was able to respond on the network. While LXD cannot magically create additional CPU resources, it can use memory much more efficiently than KVM. For idle or low load workloads, this gives a density improvement of 1450%, or nearly 15 times more density than KVM.

So this test proved nothing at all because who the fuck launches a VM or container which does nothing.

>while KVM guests took 25 seconds to start.
Oh hey look, a shitty hypervisor doesn't work right. Even my Windows VMs don't take this long on ESXi, let alone Photon.

This article is bad and you should feel bad for citing it.

> I said the performance improvements are a result of sharing the kernel.
No, it's just not having to use Linux's shitty virtualization drivers.

I use Airsonic for my streaming. It’s in Java but it works fine and has mobile apps and a decent web interface.

What'd you do for hardware, ricky?

Used my extra PC (C2D E8500, 8GB RAM). Now it's running happily and cool on the other side of the room.

Is it worth it to buy an old OptiPlex and whack FreeNAS on it, or should I just get a Synology box?

Bamp

If you are too dumb to follow workaround.org/ispmail you should use something like mailcow.email/ or mailinabox.email/

>Totally BTFO that he's wrong.
>B-but the benchmark isn't a real world workload!
>Disregards the fact that a null test is a perfectly legitimate baseline to compare.
>He thinks that virtualizing the network stack has zero overhead.
>He thinks that using an order of magnitude less memory is a negligible savings.
>Shilling Photon when Alpine exists.
Nobody wants to use your proprietary crap. In the big boy world everyone uses KVM and containers. Your mom-and-pop SMB shop might have fallen for the VMware meme, but the real players are jumping that ship at an alarming rate.

Attached: 15145152817811.png (586x578, 37K)

>In the big boy world everyone uses KVM and containers.
No they don't.
VMware still rules.

can someone teach me how to use rtorrent + flood with freenas

i'll pay u $10 if u help me get it working

Did you try flurdy.com/docs/postfix/ ?
It has instructions for both Courier and Dovecot. I went with Dovecot and it works like a charm.
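If the SpamAssassin SPF check keeps fighting you, another route (just a sketch; package name per Debian) is to let Postfix check SPF itself with policyd-spf:
apt-get install postfix-policyd-spf-python
# in /etc/postfix/master.cf:
policyd-spf unix - n n - 0 spawn user=policyd-spf argv=/usr/bin/policyd-spf
# in main.cf, append to smtpd_recipient_restrictions after reject_unauth_destination:
#   check_policy_service unix:private/policyd-spf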

>elk
>single server
>virtualized backup
>virtualized nfs without dedicated nfs store
dude, fix your naming and your entire setup; it makes me want to clean it up for you

Should I phase out SSH passwords entirely? I'm thinking ed25519 keyfiles.

Odroid C2:
>Router
>VPN
>DNS/DHCP
>Torrent daemon
>couchpotato/sickrage
>personal wiki
>mailserver
>fileserver

Raspberry Pi 2:
>Steam bot
>plane tracking with rtl-sdr
>backup DNS/DHCP

Feels gudman

Except it feels bad when your router gets compromised and your mail and files are being stolen. One purpose per system if you can't put a hypervisor on it.

why are you tracking planes

There are only 3 ports visible from the outside. 22, 25, 443.
I'm not scared of some random Chinese dudes that automatically scan the internet for shit. And neither am I of the dedicated people trying to get into my shit.

Because I can. And if you send that data to certain websites, you get a free premium account.

Why would you need vmware to support containers? You would just run Docker in a VM.

> In the big boy world everyone uses KVM and containers

I've worked for some pretty large global companies. I have yet to see KVM in use. It's always VMware with a tiny number of Hyper-V installs.

I got a free DL380 G4 from work. I set up Ubuntu Server 18.04 on it, set up Samba, and I want to use it to store backups from my desktop. I've already done all the networking and shit and my desktop backs up there, but I don't want to keep it on all the time (power, noise, etc). This server doesn't have wake-on-alarm in the BIOS; how do I set it to turn on at a specific time once a week? It has Wake-on-LAN if that helps.

Attached: 1511502140226.jpg (563x563, 14K)

>it has wake on lan if that helps
Well, you've kinda already hit upon the solution there, haven't you? Have an RPi or something send it the magic wake-up packet on whatever schedule you choose (some routers can do this kind of thing too). Then have a script that waits for it to boot, SSHes in, runs whatever your backup tasks are, and shuts it down again, something like the sketch below.
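Rough sketch of the Pi end, assuming the wakeonlan package; the MAC, host and script names are made up:
# /etc/cron.d/weekly-backup -- Sundays at 03:00
0 3 * * 0  pi  /home/pi/wake-and-backup.sh
# wake-and-backup.sh
#!/bin/sh
wakeonlan aa:bb:cc:dd:ee:ff                 # magic packet to the DL380
sleep 180                                   # give it time to POST and boot
ssh backupbox /usr/local/bin/run-backup.sh  # your existing backup script
ssh backupbox sudo poweroff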

>Have an RPi or something send it the magic wakeup packet on whatever schedule you choose

Holy cow, I haven't had enough coffee today, thanks. I already have the startup/shutdown scripts written; I was just missing the turn-on element.

>>elk
How should I name the ELK server?
>>single server
>>virtualized backup
>>virtualized nfs without dedicated nfs store
What?

>home server general
>no pastebins for beginners
I really don't know anything about making a server. My current plan is to build a computer with 4 HDDs to run in RAID 5 and a small SSD for the OS. Not sure where I'd go from there other than connecting it to the router and allowing the other computers to connect to it.

I mostly just want a big network harddrive to put all my media and backup on.

A simple NAS should be fine for that. I use an ODROID as a cheap hardware platform for my NAS.

hardkernel.com/main/products/prdt_info.php?g_code=G151505170472
hardkernel.com/main/products/prdt_info.php?g_code=G149142327280

There are actually two pastebin links. Stupid OP did a shitty job. Here is one of them.

pastebin.com/SXuHp12J


Will you use it as a media server? Backup? Development? We need more info to help you.

I don't have anything comprehensive but some starting points:
Look into network attached storage (NAS) and the software/hardware requirements. Don't buy prepackaged units, because they're either overpriced 3x or shit.
If you think you need more capability, look into hypervisors (though you probably shouldn't run a NAS on one). Some were talked about in this thread; I like Proxmox. Running a single box with a dozen installed services is cancer without one.
You will inevitably get into networking while doing this and that's a major rabbit hole.

>You will use it as a media server? Backup? Development? We need more info to help you more.
Media server mainly, though I was thinking more general purpose (I guess file server?). Sorry, I haven't thought this through beyond building an auxiliary PC with lots of hard drive space and having a server to play around with and learn from.

That's why I wanted a pastebin. Maybe I'll get a better idea of what I want with a beginner guide. Haven't put any money in yet, just a vague plan. Sorry.

Thanks. A NAS sounds like exactly what I want. I'll look into those.

Ryzen 2400G
8GB RAM (future upgrade to 16GB)
M.2 cache module

Anything wrong with this as a home server?

>Steam bot
What does this do?

Look at FreeNAS for a dummy-friendly NAS system with Docker capability, or FreeBSD for a less easy setup with far more capability.

He mines Steam cards from multiple accounts.

>mine steam cards from multiple accounts
Interesting. Got a link to the application or the name of it?
Do you sell them for a lot of money, or use them to level up, or what?

Do you always want to use a virtual machine when setting up a home server?

Not him, but I've been curious about that setup for a few threads now.

I want to have the ability to create virtual machines, yes?

Is there any unencrypted layer 2/3 tunnel with SOCKS5 support that's faster than OpenVPN?

Yeah okay. I'll tell that to Facebook, Microsoft, Google, Amazon, IBM, and any of the dozens of IaaS or object store providers. I guess their strategy of using technology that actually scales is crap, and they'll just have to switch to VMware since some "pretty large global companies" use it.

You can do an unencrypted GRE tunnel but that's a general point-to-point tunnel and I think you're on your own to get some other socks proxy to send traffic through it. Easiest way to get a point-to-point tunnel with a socks proxy is ssh -D but that's going to be encrypted.
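For reference, a bare GRE tunnel on Linux is just this (addresses are placeholders, and you still need your own SOCKS proxy listening on the far end):
ip tunnel add gre1 mode gre local 198.51.100.5 remote 203.0.113.10 ttl 255
ip addr add 10.10.10.1/30 dev gre1
ip link set gre1 up
# mirror it on the remote side with local/remote swapped and 10.10.10.2, then route or proxy over it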

Why do you want it unencrypted? Is encryption actually a bottleneck for your use case? What is that use case, and how do you know that's the bottleneck?

I already have an encrypted proxy (Shadowsocks). It listens as a local SOCKS5 proxy, encrypts and sends data to the remote server, then decrypts it there.
On top of that I have an OpenVPN client and server running over that proxy. But OpenVPN is notoriously slow on a router CPU, even with no encryption and auth.

Considering that KVM is, what... "free", it makes sense that those companies use it, because it's one less thing they have to pay for. However, hospitality companies like MGM and Caesars, banks like ABN AMRO, AmEx, Wells Fargo, BoA, and plenty of other large multinational companies rely on VMware.

Isn't that mainly because of the enterprise support VMware provides? Not really applicable in a low-cost homelab environment.

Farming Steam trading cards from asset-flip games on multiple accounts.
You're looking for ArchiSteamFarm.

Attached: ASF.png (440x251, 10K)

>I'll tell that to Facebook, Microsoft, Google, Amazon, IBM
You mean those companies that have their own fucking datacenters in multiple regions around the world? Of course they'll be using their own in-house version of KVM that works best for their specific use case.
I'm talking about all the other thousands of international/multinational/national companies that exist

Nice, thank you, will take a look.

Bump

The bots need to be level 5 and up to get the cards, right? I'm having trouble setting it up on Docker, getting a lot of errors with the ASF file, even with the stock basic config.

FreeNAS or Debian with Samba for a comfy storage server that will be accessed mostly from Windows machines?

Yes. Go for FreeNAS installed on an 8GB USB drive.

I have no experience with Docker, you're on your own there.
The bots need to be activated; you need to put $5 on the account.
Right now I would say it's not worth getting into this. It was barely worth it back when I started 2 years ago.

>when your router gets compromised

You're doing it wrong, man

Yes, if you can enforce that all clients use keys.

If that's the case, you might also be interested in setting up SSH certificates.
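The whole switch is short (option names per OpenSSH; paths and names are examples):
ssh-keygen -t ed25519                   # on each client
ssh-copy-id user@server                 # install the public key
# then on the server in /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   ChallengeResponseAuthentication no
# certificates are roughly 'ssh-keygen -s ca_key -I name -n principal user_key.pub' plus TrustedUserCAKeys on the server
systemctl reload sshd                   # or 'service ssh reload' on Debian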

why is that dog's nose so long? does it have autism?

Microsoft uses Hyper-V internally...

Doesn't sound like VMware to me.

Will any RPi do well as a home server for serving files over LAN and as a test server for web shit? Are there any good alternatives?

An RPi will work for serving files, though speeds will not be the best since it only has a 100Mbit NIC. It works well enough for a test server.
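If you want a real number before buying, iperf3 takes a minute (the IP is an example):
iperf3 -s                        # on the Pi
iperf3 -c 192.168.0.15           # from your desktop; add -R to test the other direction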

I'm thinking of building myself a half decent, quiet home server.

I'm thinking the spec will be:

>Ryzen 2400G
>B350 motherboard
>8GB RAM
>Later on an M.2 SATA drive as a cache module

Thoughts on this build?

It'll be used for VMs, Plex, file serving, dev work.

More ram

8GB is what I can afford now, but I'd look at upgrading it as RAM prices come down.

Warning: you'll hit a wall fast at 8GB with VMs + dev work, especially if the dev OS runs Docker shit.

I know about the RPi Ethernet and USB sharing the same bus; I'm a bit afraid that'll drive me nuts. Hoping for recommendations on superior RPi clones.

Damn, yeah, I was going to have a couple of VMs and Dockers running. I'll have to see what I can do.

16GB minimum for VM(s) honestly, if you wanna get the best performance from it (i.e. not have it be a slideshow when you compile 'hello_world.c'). I don't think RAM prices are going anywhere, so you might need to just pay up, as I can't see them dropping anytime soon.

Synology+Docker

Thank you for that pastebin user.

Complete novice here. What's the advantage of running a home server? I have a couple of Vultr VPSes going and thought it might be fun to try to host my own toy projects out of my garage or something.

But the lawsuit?

Apple tier solution

Check the DDR3 RAM prices in your region; if you can get 16GB for a reasonable price, build a server with older-generation hardware, like some 4xxx-series Intel.

Look for Orange Pi on AliExpress and see if they get you a better resources-per-dollar ratio. Some of them have 10/100/1000 NICs and WiFi (if that interests you) and can go up to 2GB of RAM (if I'm not mistaken, there was a model with 4GB). One can be a more than capable NAS and test server for web stuff. Not much more than that, though; maybe on the top model you can get Docker with a low-resource container working.

I still can't see the problem. What should I change?

It just works™

I am trying to connect from Arch to my Windows PC using Samba, but having some problems.
>smbtree returns nothing
>smbtree -d3 shows "SPNEGO login failed: The attempted logon is invalid. This is either due to a bad username or authentication information." at one point, but never asks for any username/password
>smbclient -L 192.168.0.10 asks for authentication and when I put the correct one it gives me Connection to 192.168.0.10 failed (Error NT_STATUS_RESOURCE_NAME_NOT_FOUND)
>Failed to connect with SMB1 -- no workgroup available
I have checked that SMB sharing is enabled on the Windows computer, I'm using the correct server IP, correct Workgroup name, etc.
Any ideas?
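
Not sure it's your problem, but newer Windows has SMB1 disabled and older smbclient still defaults to it for browsing, so try forcing a newer dialect:
smbclient -L 192.168.0.10 -U youruser -m SMB3
# or persist it in /etc/samba/smb.conf under [global]:
#   client min protocol = SMB2
#   client max protocol = SMB3
smbtree may stay empty either way, since it relies on the old NetBIOS browsing that newer Windows doesn't do.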

Did you guys see the silent computer thread? So fucking awesome. I want