/hsg/

Didn't see a /hsg/.
I don't have the OP copypasta. Edition 2.

Last thread:
Who has the biggest rack? Post pictures.

Attached: hsg2.jpg (3968x2240, 1.13M)

bump

The LTO5 drive was the external type and sold for 300 USD (~1200 PLN), which is the cheapest price I've tracked in the last half a year.
The tapes themselves run up to 13 USD / 50 PLN each, and they're at least sealed.
Pretty good redundancy solution for a NAS.

What can I do with 300 bux?
Also, what happens if I don't use ECC?

On a ZFS filesystem without ECC you can hypothetically get the "scrub of death": scrubbing is a routine maintenance task in ZFS, and the worry is that bad RAM during a scrub spreads corruption instead of repairing it.
I have never found a thread where a user actually confirmed it happening, though.

No idea how significant ECC is for other setups.

OP's pic will be around 300 bux. Bare-bones server, no disks. It's a fun project.

What do you want to host?
I would say don't worry too much about ECC in general.

Haven't seen a /hsg/ in a while. Currently running an R710 with 18GB RAM and a Xeon L5520; nothing fancy, but it works for what I need. I use it for lab stuff only, learning AD at the minute, and I want to start learning Linux admin soon.

That's an awesome VM host. Post pics.

As user said. However, the Gen10 Microserver is fucking cancer.

Not at home, sorry, but it's just an R710 on top of a box. Using ESXi as the hypervisor. Also been playing with Veeam B&R.

Why tho?

I would like a torrent client with a web UI that lets me pick the download location per torrent: movies on one drive, OS images on another drive or folder, etc. Deluge does not have that feature. Can I get a couple of recommendations?

It's just an Intel shill that's in every thread. Gen10 MicroServers have an X3000-series Opteron proc.

You want a torrent box you can control over a web interface?
qBittorrent can do this.
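If you want to script it, qBittorrent's WebUI API can set up a save path per category; rough Python sketch below (host and credentials are placeholders, endpoints per the qBittorrent WebUI API v2 docs):

import requests

HOST = "http://192.168.1.10:8080"  # hypothetical WebUI address

s = requests.Session()
# log in; qBittorrent keeps the auth in a session cookie
r = s.post(f"{HOST}/api/v2/auth/login",
           data={"username": "admin", "password": "adminadmin"})
r.raise_for_status()

# one category per destination: torrents added with a category
# download into that category's save path
for category, save_path in [("movies", "/mnt/movies"),
                            ("isos", "/mnt/isos")]:
    s.post(f"{HOST}/api/v2/torrents/createCategory",
           data={"category": category, "savePath": save_path})

Then just assign a category when you add a torrent and it lands in the right place.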

What NAS is this?

Attached: .jpg (1500x1500, 194K)

>Dell Precision T3500, Intel Xeon X5650 6x 2.66GHz, 12GB, 2TB, FX 4800
For 79€. Should I?

what do you use your server for?

yes

Yes, I would.

>Who has the biggest rack? Post pictures.
Me! Me!
Eaton full height.
Picked it up on the side of the road.

Attached: Rack.jpg (4032x3024, 1.95M)

Nice, man. What's it doing?

What should I use to manage my containers?

What kind of containers?

LXC.
I was looking at Spacewalk and Foreman.

Why the fuck does Plex make you log in to their services if you use a (LAN-only) domain name? Why does it burn in subtitles 99% of the time with a shitty transcode? Why does Emby expect me to pay for basic functionality? Why does Jellyfin only have like 2 working clients? Kodi seems the most promising, but it's only a client. I just want to watch shit on my TV with some basic metadata fetching and a decent interface.

Options > Network > add your local IPs/subnets to the "List of IP addresses and networks that are allowed without auth" setting to avoid the login screen.

I have tried both specific IPs and IP/netmask. I'm still forwarded to plex.tv when I use domain.lan, but not when I use the IP.

On the Plex app for a TV? Log in once; browser-based, I never have to log in.

I don't like the fact that you have to authenticate __at all__ through their services, even if you only use plex on your local network w/o remote access to the server.

I didn't authenticate until my gf wanted to use Plex on her TV. You don't need to sign in if you just use the browser.

>You don't need to sign in if you just use the browser.
Local discovery works fine, it's the web client at my domain.lan address that is requiring me to authenticate.

Attached: Untitled.png (1124x656, 333K)

Until I got the power bill it was running 3 Intel-based Xserves, each with 32 gigs of RAM and lotsa SAS drives.
Now it sits cold, waiting for the solar panels to arrive.

Best cost / reliability for hard drives?

One of my 4x4TB ZFS drives just shit the bed, and it has just gone out of warranty. The server is currently shut down until I get a replacement.

Looking to get another 4TB drive, but I'm willing to start migrating to a larger drive array if it's worth it.

Is it 100% necessary to use one of those complicated machines? (Well, dunno about complicated, but different from normal PCs.)
Also, which OS is best for running a low-spec server for a big pool of uses? (Low-spec game servers, web server, maybe mail if I get a domain, and NAS.)
Pic for attention.

Attached: fell for the meem.png (1600x900, 1.39M)

I want a VM server. What specs should I go with if I want to run Proxmox and have several headless VMs?

How do I into /hsg/? I want to learn Linux admin, maybe run low-spec game servers, maybe stream media and stuff like that, for as cheap as possible.
Similar to

Asustor.

Old desktops from local charity shops, yard sales or being given away. Install a server OS. Job done.

OK, thanks

Memory and disk space are the most important.

Get an old PC. Look for 6+ SATA ports.
Get 4 old HDDs and fuck around with RAID arrays.
Get 2 small SSDs in RAID 1 (or just one) and put Debian on there.

Do remember: if cheap is what you're after, it's better to spend a little more on a small eco-friendly box than to run a free Pentium 4 24/7.
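Back-of-envelope math, if you want to see why (the wattages and the 0.25/kWh rate are just example numbers):

def yearly_cost(watts, price_per_kwh=0.25):
    # watts -> kWh per year -> currency
    return watts / 1000 * 24 * 365 * price_per_kwh

print(yearly_cost(150))  # old Pentium 4 box, ~150W: ~328.50/year
print(yearly_cost(15))   # small efficient box, ~15W: ~32.85/year

The efficient box pays for itself within a year or two.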

An SBC would be even better in that case.

What user said. Also, try to fit your machines on SSDs.

kek that's just great

I'm . Running some game servers is easy as long as the game has enough documentation and you know how to into port forwarding. If you don't know how to use Linux in general, check some capture-the-flag games to get a grip on basic CLI usage and the like. Also pick a babby's first distro, since those tend to not need as much config. For my first Minecraft server on an external PC, I just used Debian out of the box and only needed to install Java.

I plan on replacing my current shitty server with an entry-level enterprise-grade masheen. Anyone have experience with the HPE ProLiant ML30 (Gen10)? Is it good value for the money? Is the boxed CPU fan that comes with it sufficient, or should I get a replacement right away? Anything else to keep in mind? I also plan on putting 4 helium-filled WD 10TB drives in there; I hear those are decent? Should I go this route, or build my own server instead? I will primarily use the server for Plex, as a proxy server, a seedbox, a web server and occasionally a game server (Minecraft, Zomboid, etc.). Will probably run more on it when I feel bored.

Any way to find out what kind of drive is inside particular models of WD external drives?

Where does one find a Supermicro SC826? How much do they run?

I think I read that it's printed on the bottom of the box in small print. Otherwise shuck it and try to find it on the drive itself.

tfw retired a bunch of the town's old R730s today
tfw the disposal companies will just scrap them

You can't grab any of them and just shuck the drives? I'd love to get my hands on one.

No one cares enough to take responsibility, probably.
I've never seen used business servers being sold here in Nordland anyway.

Because any that are worth selling, or don't get destroyed as part of a secure destruction program, are sold in bulk to liquidation companies, who then sell them cheap to start-ups that need cheap infrastructure to get up and running.

Shit, I know it's >reddit, but you can sell that shit on r/homelabsales and people will buy it. Maybe the market doesn't exist in Europe, though.

Just fucking around. Want to get more into vuln/SIEM/container stuff and use more Linux wherever possible; been working with Windows and VMware too long.

Attached: Capture.png (2070x1210, 181K)

What box are you running?

Also want to move my VM storage off the box it's on. I'm doing NFS shares off a RAID 10 array on Windows Server, which was intended for backups and media but is now a catch-all for everything.

3 Dell R610s for ESXi. I have an R710 just for Plex with CentOS; even that uses NFS shares.

Is Proxmox the best virtualization solution?
When would you use a VM versus, say, LXC?
For example, if you want to run a low-spec game server, would LXC (on Proxmox) be enough?

Anyone have tips for sharing your files with a multitude of devices?
Currently running a Windows 7 file-sharing server, and while it works fine with Win 7 and later, most other OSes have issues.
98 doesn't support whatever encryption is used, XP sometimes works and sometimes doesn't, MacOS 9 doesn't connect, and OSX is a pain in the ass. The PS2 needs a special configuration and the PS3 is even worse. Every single Xbox works just fine, though.
Would I be better off running some sort of Linux or *BSD config? Do they allow running a number of completely different configs for different devices while sharing the same files? MAC-address-based configs would be good enough; this is not an internet-facing server.

Attached: 98pwderror.png (250x119, 2K)

Is your power bill stupidly high? I'm planning for when my fiancée and I build a house, and I'm designing a server room. I'm considering a Supermicro SC826, a couple of Dell R-series boxes, or rolling my own. Probably won't be for another couple of years.

Personally I haven't really used Proxmox much, but if you're only setting something up for a single purpose, it's better not to introduce a layer of abstraction like virtualization; just do your install directly onto the hardware. Otherwise a hypervisor is a hypervisor, choose whatever you want imo.

>Anyone have tips for sharing your files with a multitude of devices?
>Would I be better off running some sort of Linux or *BSD config?
Yes. Build a Linux "NAS" box. Probably put your drives in a RAID / SnapRAID / RAIDZ / ... array.

Then run Samba and/or NFS, plus probably SSH, to make the files accessible over the network. Augment with Syncthing, Jellyfin, etc. as needed. You can run these from Docker or install them normally.

If you want anything or everything accessible from the internet, maybe run WireGuard or another VPN on your router or the server to create a secure-enough path into your network from the outside.
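Once Samba is up, a quick sanity check from any machine with Python works too; minimal sketch using the third-party smbprotocol package (pip install smbprotocol), with a made-up host, share and credentials:

import smbclient  # provided by the smbprotocol package

smbclient.register_session("192.168.1.20", username="anon", password="hunter2")
for entry in smbclient.listdir(r"\\192.168.1.20\media"):
    print(entry)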

>Deluge does not have that feature.
Pretty sure Deluge can do that with labels. That said, its WebUI kinda sucks, yeah.

> Do they allow running a number of completely different configs for different devices while sharing the same files?
This is a yes, BTW.

You actually have multiple ways to do this on Linux.

But it may be a good idea to just take the lazy route of running containers on different ports for the individual devices, rather than the fancier options with automatic device detection / traffic marking and rerouting or who knows what.

Also... in the end, what do you need to support a PS2 or such for? If you want a media playback device, a $20-50 shipped Chinese HTPC thing or an ARM SBC will do it with less fuss, less power consumption and more capability.

Any compatibility issues with Samba I should know about? We're talking literally two dozen different client-side implementations, so every little bit counts.
Can you run multiple Samba instances on one machine and just redirect as needed, or do I need jails/VMs for that? My current hardware is complete crap, but if upgrading makes this work, then I am willing to do so.

No, but I also run these on a single PSU instead of two, basically 350-400W apiece, and I just spun up the third recently to fuck around; I'll probably power it off. I started with whitebox stuff but wanted to get more experience with more enterprisey pizza boxes. I used to run blade chassis for a living, so it's not too different, just a checkbox.

I don't need media playback, I need file transfer.
The PS2 can play and transfer games over SMB, and so can many other devices.
The only systems I use for media files are my main Win7 desktop and my Xbone with VLC, both of which are just fine with any standard SMB implementation; no transcoding required.

I've tried a bunch of different Linux distros and products like OpenMediaVault, FreeNAS, etc. with software and hardware RAID, but nothing comes close to the speed of Windows NFS for me. My network might be gay, but I don't know why that is otherwise.

Scrub of death is a meme; in order for bad RAM to silently corrupt checksummed data, it would have to create a hash collision.
The bigger issue would be data getting corrupted by bad memory BEFORE being checksummed, and since ZFS batches all of its writes in RAM before committing, that's a much more real possibility.
If you're just storing anime tiddies, then the impact of losing a file or two isn't crucial, so consider the value of your data. But since ZFS loves having fuckloads of RAM for caching, you'll probably end up on a server platform anyway, and RDIMMs are cheap.
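To make the "corrupted before checksummed" point concrete, a toy sketch (nothing ZFS-specific, just illustrating that a checksum only protects data from the moment it's computed):

import hashlib

data = bytearray(b"anime tiddies, pristine copy")
data[3] ^= 0x01  # bit flips in bad RAM *before* the write path
checksum = hashlib.sha256(data).hexdigest()  # the bad data gets checksummed

# every later scrub compares against that checksum and sees nothing wrong:
assert hashlib.sha256(data).hexdigest() == checksum  # passes, corruption kept

ECC prevents the flip from happening in the first place; the checksum can't help you after the fact.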

Speed is not an issue as long as it's at least 100 meg, since that's what most of the devices are limited to anyway. Latency is something I care more about, but even that is not a dealbreaker.

> Any compatibility issues with Samba I should know about?
Well, Samba supports almost everything Microsoft and Apple ever came up with, but it's not an entirely simple piece of software.

You'll just have to start trying things to see what works.

> Can you run multiple samba instances on one machine
Yes, but the caveat is that not every distro makes this easy (even with systemd and its ability to do parameterized starts of services).

You can of course copy and edit init scripts and all that, but if you go far with this, probably just use containerized Samba with docker-compose; it'll likely be easier and quicker.
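For the script route, the core trick is just pointing each instance at its own config; hypothetical Python sketch (the config paths are made up, but smbd's --configfile and --foreground flags are standard Samba options, and each config needs its own interfaces/ports/pid settings so the instances don't collide):

import subprocess

configs = [
    "/etc/samba/smb-modern.conf",  # SMB2/3 for current machines
    "/etc/samba/smb-legacy.conf",  # lowered security for 98/XP/PS2-era gear
]

# one smbd per client profile, each reading only its own config
procs = [subprocess.Popen(["smbd", "--foreground", f"--configfile={cfg}"])
         for cfg in configs]
for p in procs:
    p.wait()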

> I don't need media playback, I need file transfer.
Then I guess you won't need to run that many SMB instances anyhow. There weren't that many widely used settings on consumer devices, as far as I know...

Configuration issue. mdadm RAID runs faster than GbE networking on halfway-decent hardware, and Windows NFS/SMB is not faster than the equivalent Linux implementation.

Just try it and then try to optimize, I guess.

Hey, I've chosen OpenMediaVault for my SMB-share NAS since I don't have the hardware to run FreeNAS and don't want to buy a Synology/QNAP or whatever device.
I plan to have 2x6TB running in mirror, containing:
>music/anime/manga/doujin/photos/shadowplay archive
>sharex destination folder for my laptop/desktop because I hated trying to look for screenshots that were possibly taken on a different device
>motion surveillance recordings capturing up to a week then deleting
>some important text documents + keepass database that I have on it for redundancy outside of my several other backups for said files
What filesystem should I use, and is there anything I should look out for when setting all this up?
I do plan to encrypt these files as well.

>Then I guess you won't need to run that many SMB instances anyhow. There weren't that many widely used settings on consumer devices, as far as I know...
You'd be surprised.
Some require auth, some don't accept any at all. Some require specific protocol versions.
Some idiotic implementations require everything in the root folder of the share, while most won't accept the root folder at all; instead they require a subfolder inside the root before they can access anything.
If all I needed was a standard SMB instance, then my current Win7 shitbox would suffice.

Attached: 1465315072043.jpg (500x362, 13K)

Forgot to mention that some systems require very, very specific folder structures too, which is a big reason why it'd be absolutely lovely to run multiple instances of Samba on the same set of files.

Okay. Well, Samba supports most of this in a single instance anyhow, and you can pretty easily run multiple instances of it with extra init scripts / in Docker / in VMs for whatever can't be done in one running instance.

Still, don't expect this to take just an hour. It may well be more annoying with that many devices with apparently exotic needs.

> very very specific folder structures too! Which is a big reason why it'd be absolutely lovely to run multiple instances of Samba on the same set of files.
Yeah, you definitely can, at pretty much every level actually.

Linux with the right tools (plural... there's definitely more than one way) can simulate other filesystem hierarchies overlaid on or derived from the real one; docker[-compose] can; Samba itself also can. The last two will likely be the most relevant things to try first.
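The crudest of those tools is a plain bind mount: mirror the real folders into whatever hierarchy a picky client expects, then export that tree as its own share. Hedged sketch (needs root; all paths are invented):

import os
import subprocess

# e.g. a console client that insists on finding a DVD/ subfolder
mappings = [
    ("/srv/media/games/ps2", "/srv/shares/ps2/DVD"),
    ("/srv/media/music",     "/srv/shares/htpc/Music"),
]

for real, fake in mappings:
    os.makedirs(fake, exist_ok=True)
    subprocess.run(["mount", "--bind", real, fake], check=True)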

So, what kind of hardware would I be looking at for this setup then?
The system is idle 99% of the time, and the rest is one or two machines reading/writing to it.
I've not used Linux since Arch had an actual install wizard back in 2009, so Docker is a new thing for me.
The current system is an Athlon II 160u, chosen for power consumption, but I can get just about anything that fits in the same socket. RAM is 16GB because I got it cheap one time, so that is no issue.
Install time is also not an issue.

OK, so on a single PSU, is your server idle most of the time? Do you have all the bays populated? Are you running unRAID?

Also, for a home datahoarder, would you recommend a whitebox build or enterprise hardware?

No, VMs are running constantly. I have them separated into test (mess-around stuff on a segregated VLAN) and prod (2 hosts, stuff that is always running to augment my home network). I have like 2 bays populated just for the ESXi install in a mirror. Actual storage for VMs is on a Windows "NAS" offering NFS shares to the ESXi hosts; that storage is hardware RAID 10, as in I bought a RAID card, it's not software-defined. That data gets backed up to a QNAP NAS I used to run as my production storage, and that's RAID 5.

I'd recommend whatever you can afford. Pick something you're comfortable with that ideally has a good forum or userbase for when you have questions.

Best way to learn enough about AD to BS my way through a sysadmin interview in less than 2 weeks? I've set up several Linux servers for home use before, but I now realize the industry standard is leaning towards hypervisor VMs or containers, so I'm crash-coursing my way into that. It's difficult making the jump from home use to enterprise setups, and there's so much you don't really encounter until you work directly with in-use systems that I'm afraid I'm going to fumble through the interview.

Attached: 700.jpg (200x313, 10K)

>So, what kinda hardware would I be looking for this setup then?
Probably some x86_64 box 10 years old or newer, and preferably power-efficient. Hard to tell.

I don't think you need to prepare new hardware in advance; just try it on your current setup. It's not Windows: you generally won't have to reinstall much of anything if you switch out the machine later on, and the configurations can certainly be moved too.

Don't lie, faggot, they'll know. Set up AD at home and learn; pirate CBT Nuggets and figure it out.

I mean, I installed Windows Server while at tech school and used some basic AD functionality and services, but firewall rules prevented anything but local stuff like DNS, file serving and PXE boot. I focused more on doing these things on Linux systems like Fedora/CentOS and Ubuntu/Debian, because I was naive and thought I could easily find a Linux admin role in my city. As I've gone through my 2 years in an on-site tech support role, however, I've realized AD and GP knowledge is important to almost every organization, and the way I set things up on bare metal while learning isn't exactly how it's done or should be done in production. I'm basically hoping I can convince them I have a strong base of knowledge of how things should be done despite not having implemented them directly, because I'm coming from a large organization, have a strong willingness to learn, and can do so quickly. I don't even mind if I'm not making the big bucks; I just want to be challenged in a technical manner and stop replacing phone cords and moving desktops every 3 months because administration has zero long-term planning ability.

Maybe I worded my question wrong: what specifically in system administration should I narrow my focus onto to prepare for the interview? I currently have my Net+ cert and am 90% done studying for the Sec+, so that's some of my knowledge background when it comes to networking and standard protocols for implementation.

>I don't think you need to start preparing new hardware in advance, just try it on your current setup?
Not only would I rather keep my current data available for now, but I prefer to build new systems on separate hardware, purely because I can. The current system is an A II 160u/990XA, and it looks like my test system will be an M4A87TD/FX-6100; I'll probably just disable cores till I know what I can run it on. I already have all the hardware, I just needed the motivation to start fucking around with the software.
Thx guys!
Suppose I'll be back in a few weeks to complain about how this shit did/didn't work.

Just say all that; you're overthinking it. Just don't lie. You can oversell yourself in areas, but don't go too hard and you'll be fine.

>M4A87TD
Hm. No clue if that makes more sense than some modern onboard Intel 5005, or a 200GE, or even a Ryzen, in terms of the combined cost and power consumption over time.

But probably go for it, yeah.

Literally the only reason I will use it is that it has been lying in a closet unused for the past 3 years. And all my other AM3 mobos are in use.

No reason not to try it then.

Does any anon have experience with the Dell PowerEdge R720? I would like to throw an old GTX 970 in one for transcoding and VMs, but I still need to get the processors and RAM. Do you think it would be worth it to get 2 Xeon E5-2690 v2s (10 cores @ 3.0GHz)? They run for about 300 a piece, it looks like.

I think it'd be worth considering dropping the transcoding requirement entirely, saving yourself both the initial cost and the ongoing power cost, by just fixing the probably few playback devices that can't handle whatever modern video format you're using.

Can someone explain to me the pros of having a home server over just having a lot of storage in your PC?

I'm sorry; by transcoding, I mean using HandBrake to compress Blu-ray rips with H.265. Currently I am using a 2009 Apple Xserve with no GPU and 2 quad-core Xeons @ 2.26GHz, and it takes about 2 hours for a 30-minute clip. I got the R720 chassis for free from work; I'd be willing to invest, but I'm not trying to break the bank, you know?

One big reason would be that your PC isn't very power-efficient, and a server runs 24/7.

With hardware as cheap as it is and typical power costs (and maybe some air conditioning amplifying them), this can cost you more than building a dedicated machine.
Another reason might be that your main machine is Windows and you want to use the more trustworthy / more flexible BSD/Linux as a host.

Well, there are many more possible reasons; maybe you've also just run out of drive bays in your main PC...

GPU encoding quality is generally pretty bad compared to software encoding for H.264 and H.265. If you want speed and quality with H.265, you may want to look into something like Ripbot264, which allows distributed encoding among multiple systems (up to 16). I've been messing around with it recently with 4 different PCs (16 cores total) and it works pretty well.
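If you stick with plain CPU encoding, the batch loop itself is also trivial to script; minimal sketch driving HandBrakeCLI from Python (the -i/-o/-e/-q flags are standard HandBrakeCLI options, but the paths and the RF value are just examples):

import pathlib
import subprocess

SRC = pathlib.Path("/mnt/rips")     # hypothetical rip folder
DST = pathlib.Path("/mnt/encoded")

for rip in SRC.glob("*.mkv"):
    subprocess.run([
        "HandBrakeCLI",
        "-i", str(rip),
        "-o", str(DST / rip.name),
        "-e", "x265",  # software H.265 encode
        "-q", "22",    # constant quality, RF 22
    ], check=True)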

With all my home server shit running, I'm beginning to see some overall LAN slowness. Any way to investigate the root cause or lessen the effects?

Interesting, I'll check out Ripbot. T.Hanks

Do you have any monitoring set up? Log into your firewall and see if you have excess ingress or egress.

Setting that up now (deciding between Splunk, ELK, Graylog and Netdata). On a Ubiquiti FW, though I'm switching to pfSense soon. Ubnt is shit desu; the "pretty" DPI stuff is not worth it. Thanks for the heads up.

Are Windows Storage Spaces a meme, or are they actually reliable?
I'm using Windows Server 2019 Datacenter.

Attached: 1472996780291.png (534x550, 481K)

Meme. I've had to rebuild a Storage Spaces array and it sucked shit. Granted, this was 2012 R2 Storage Spaces, and I've read it's much better lately, but that was a bad time. If you're going to do software RAID, choose something you either have enterprise support for or that has a good user community.

Did a drive fail?