Is it better to have a single powerful server, workstation, development, gaming, HTPC, NAS machine or to have many less-powerful machines serving discrete functions on the same network?

Attached: 8047r-tf-441x400.png (441x400, 322K)

I'll throw all my HDDs and routing services onto a low-power, low-budget, low-carb PC.
Then I'll replace my main Kaveri with a current Ryzen in a few months.
I sometimes do kernel module development and usually have to reboot many times.
I don't like stressing my disks with needless shutdowns and power-ons.

This isn't a question anymore because the answer is objectively virtualization on one workstation.

Can you elaborate? Sounds interesting.

>tfw you will never have the chance to buy a coolermaster atc-410 case

Attached: cmacs410_pspc.jpg (412x407, 28K)

Not him, but I guess he means setting up a small army of VMs on one powerful machine to act as multiple servers. That's exactly what I'm doing now on my always-on Windows 7 workstation.

Forgot to mention that I have a separate HTPC and a hardware firewall.

I would say virtualization (with redundancy), with each function isolated. For most compliance requirements, any node must perform only a single function, so that if it's compromised the breach is limited to that one function and node.

I do the latter, but for a specific reason: I'm cheap as fuck. It costs less to run my 5 servers in the winter than it does to run central heat for the whole condo when I'm only ever in one room.

The answer is having multiple virtualization hosts at home

Attached: Screen Shot 2018-05-24 at 2.28.09 PM.png (3360x2100, 460K)

Gee, it's not like there are thousands of different scenarios that are optimally served by different configurations.

nice tabs faglord

For 24/7 services, yes. But a 1000-core gaming/dev rig that has to be kept on overnight just to keep a torrent VM running is a waste.

Kill yourself

>not having a gf to laugh at incels with

Attached: Screen Shot 2018-05-24 at 3.19.45 PM.png (3360x2100, 476K)

I don't think you have a gf, enjoy your 2.59 TB of anime

>2.59 TB
newfag

Attached: Screen Shot 2018-05-24 at 3.28.11 PM.png (728x908, 49K)

>not mining with your free resources

you should have a server with a shitload of storage in some kind of hardware-redundant pooling (raid1/5+, zfs, etc) and as much processing power as you need to do whatever shit it is you want to do. that server should be running a virtualization host, and you should have a bunch of VMs or containers for all of your shit. you should ALSO be running a local rolling or incremental backup of your data and/or images, and an offsite full backup on some interval, like every 1 or 2 weeks.

the services running on this server can range from something like freenas with an smb share if you're entry level, to a full production-worthy (ok, lab-worthy) network environment: routing, firewall, authentication, dns, vpn, etc. check out the awesome-selfhosted list on github for some ideas for services to run.

you should have a workstation that suits your needs. an *220 or something if you're a NEET, a "gaming computer" if you're a child, a consumer laptop if you're a normie (but you have a home server, so you're not a normie), a modern ultrabook or business laptop if you actually do work for a living, etc...

you can use your old/unused workstations as servers. laptops make neat lab servers because they have low power consumption and integrated battery backup. for a fun project, use them as members of a virtualization or containerization cluster managed by your main server. try to build a system that tolerates nodes joining and leaving arbitrarily. try building a network with linux, bsd, and windows hosts all cooperating. try using a configuration management tool to set up your network. lots of fun to be had.

CPU mining doesn't cover the electricity costs even in third world countries.

>He is actually proud of this
Go watch your 3 years worth of anime and come back when you're 18.

Not everyone needs a shitload of storage. Combining NAS and firewall in a single device is bad practice even if they're virtualized. Virtualizing a single local-only SMB share is a waste of setup time and hardware resources.

The second option, because the development/gaming/HTPC workstation doesn't need to be on 24/7, while the server/NAS may need to be (or be more convenient to keep on).

Jow Forums keeps their "work"stations on 24/7 just for posting in uptime threads

Attached: hdd uptime.png (203x50, 3K)

Anime website.

Attached: 1525452470919.png (469x704, 550K)

That can easily waste as much in electricity costs over a year or two as it would have cost to set up a vastly more efficient NAS/server/torrent box.

Well, it's your money, and prices of electricity vary I guess.

I've got a single main server that runs 24/7 (close to that, anyway). It handles everything: torrents, media streaming, file sharing, remote access. I've got a small NAS that's just for full backups of the main server's data; it stays shut down when it ain't in use. Finally, I've got an HP MicroServer running FreeNAS. It stores a second copy of core data (movies, pictures, music, porn), stuff that took me ages to acquire and I'd prefer not to have to redo it all if something happened. My pictures, well, you can't redo them. I keep it shut down as well.

Not a weeb, it's mostly movies.

>18
i'm old enough to have gray hair in my beard

>i dont know what hyperconverged infrastructure is

>hyperconverged infrastructure

Yeah, as if that's relevant for home use.

>tfw this is my boot drive

Attached: 2018-05-23-1527119030_964x500.png (964x500, 80K)

>bad block total is nonzero

Once those start appearing, it's usually not long till it's lights out. You better not store anything important on the drive.

>that gray haired boomer who browses Jow Forums and PUA crap and prides himself on hoarding movies

>he doesnt have a scale out file server at home

Attached: Screen Shot 2018-05-24 at 6.02.10 PM.png (3360x2100, 835K)

nothing that fsck can't take care of.
But yeah, everything is backed up, no worries. I'm just wearing this one out.

>boomer
millennial

>that 30 year old boomer who can't even recognize overused memes

>seriously acknowledging bixnood

Attached: 1519350818636.jpg (660x868, 65K)

Attached: Tripcode.png (2366x1536, 587K)

Wow, you bought a bunch of disks and set up some virtual machines! That's super rad! Are you the hacker they call 4-chan?

Attached: how to hack computer .png (567x331, 31K)

>I wish i had cases of disks

Pretty sure there are people on Jow Forums with more disks. Your pathetic shilling of your setup gets kinda annoying, nobody except yourself thinks it's impressive.

I have 3 cases of disks

Attached: Backup Disks.jpg (2448x2448, 1.61M)

you should have a file server which is fast enough to saturate a gigabit network. in this server you should have at least a 2-drive raid-1, using free open source software raid such as mdadm. cpu and ram requirements are extremely modest; a 1.8ghz core2 with 1gb of ram is more than enough. the file server is the rock of the network: it's reliable, stable, dependable, trustworthy. you set it up once and let it store all your files for years and years
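a sketch of what building that mdadm raid-1 looks like; /dev/sdb and /dev/sdc are placeholder device names (double-check with lsblk first, this wipes them), /srv/files is just an example mount point, and everything here needs root:

```shell
# create the 2-drive raid-1 and put a filesystem on it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mkdir -p /srv/files
mount /dev/md0 /srv/files

# make the array and the mount survive reboots
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo '/dev/md0 /srv/files ext4 defaults 0 2' >> /etc/fstab

# keep an eye on sync/rebuild progress and array health
cat /proc/mdstat
```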

then you should have a desktop. it should have an ssd, a gpu, a fast cpu, and lots of ram. it does not need hard drives

Not bad.

>you should have a file server which is fast enough to saturate a gigabit network

Attached: 10GbE (S2D SOFS Rebuild).png (3360x2100, 533K)

30-year-olds aren't boomers. "Boomers" refers to babies born just after WW2. A millennial is anyone born after 1980, so the oldest millennials are now 38.

Get educated.

>virtualization
This actually isn't the correct solution for the average home gamer.

Cloud services like AWS have a fixed infrastructure cost and need to maximize occupancy on their machines despite tenancy being malleable, with tenants requesting and forfeiting resources essentially at random.
The only solution to this is VMs, but VMs come in many different flavors, which is why you have Amazon EC2, Lambda, Heroku, containerized image deployments on Kubernetes...

Anyways, VMs are the solution to malleable resource usage at scale.

But for people at home? The only conceivable benefit a VM brings is isolation between services: if one service goes down, it shouldn't be able to take the other services on the same machine down with it.

But why wouldn't you just run multiple machines, each sized exactly for the job it's meant to run?
Otherwise there is nothing a VM solution brings to the table.

Conceivable arguments can be made for having fewer power supplies overall, and thus wasting less electricity at conversion, but there's no reason one power supply can't power multiple machines.
Rackmounts exist for this exact purpose.

In the end the only good reason to use virtualization at home is because you want to.

Easy snapshots
Testing new software and painlessly switching back to the old one without having to redo the installation
Size of one machine vs many

There are plenty of reasons; in the end, only purists feel the need to always circumvent the slight virtualization overhead.
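The snapshot workflow is a few one-liners on a libvirt host; "devbox" is a hypothetical domain name here:

```shell
# checkpoint the VM before experimenting
virsh snapshot-create-as devbox pre-upgrade
# ...install and test the new software inside the VM...

# didn't like it? painless rollback, no reinstall
virsh snapshot-revert devbox pre-upgrade

# happy with it? drop the checkpoint and move on
virsh snapshot-delete devbox pre-upgrade
```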

I suppose an argument can be made for "automatically scaling up the resource allocation towards heavier processes", but that honestly sounds like a "you fucked up" kind of problem.

>A millennial is anyone born after 1980
'82-'04 according to Strauss & Howe, who coined the term

>But why wouldn't you just run multiple machines that are sized exactly to run the job it was meant to run?
yes, why don't I have two racks full of machines for my 89 VMs? that makes far more sense than just having 2 hypervisor hosts

Good job. So you have a 10-gigabit network? How much does such a setup cost? (Presumably, if your computers don't have it onboard, you'd need at least two 10G PCIe cards and a 10G switch.)

I've been thinking of upgrading past gigabit and wondering what the cheapest option would be

>easy snapshots
Deep Freeze has been a thing for a while and it doesn't rely on VMs.
>size of one machine vs many
On the contrary I think you'll find that scaling up is usually more expensive compared to scaling out.
sounds like Kubernetes was made for you

I've seen some studies saying the millennial generation goes back to 1977. Then others class 1977-1985 babies as Xennials. If you're under 40, you're likely more in line with millennials. I've never seen a study or article calling 30-year-olds "boomers" though; that's crazy. Even their parents are likely Gen X.

>you should have at least a 2 drive raid-1
RAID does not protect against filesystem failure, ransomware, or accidentally deleting 93MB of RAR files. For a home server, I'd recommend using the second drive for online daily backups and the third drive for offline manual backups, and only doing RAID1 if you can afford 4+ drives. (Plus, hardware RAID limits your hardware choices, and software RAID is kinda shit.)

>then you should have a desktop. it should have an ssd, a gpu, fast cpu, lots of ram.
Not necessarily; if you don't play vidya or edit video, then chances are a docked laptop is more flexible.

That's kinda messed up. I was born in '83, but I'm the same generation as someone born in '03 who isn't even 18 yet?

I've been in the workforce for 17 years, and I'm the same generation as a 14-year-old?

I think the late '70s to early '90s should be its own generation.

Honestly, it should be split between "relatively young people who experienced life before 9/11" and "those who didn't".

>RAID does not protect against file system failure, ransomware, or accidentally 93mb of RAR files.
No shit; you do not need to point this out every time someone recommends or mentions raid.

raid protects against drive failure. drive failure is common enough that you *should* protect against it.

backups do not protect from drive failure. sure, everything you backed up should be OK; but everything since your last backup will not be

>people STILL fall for the "XX year old boomer" bait

Jow Forums is really fucking stupid for a "technology" board.

>yes, why dont i have a two racks filled full of machines for my 89 VMs

Well, why don't you? That would give you a WAY bigger Jow Forums e-peen. It's not like you're doing anything constructive with those VMs.

Both.

>drive failure is common enough that you *should* protect against it.

I've used dozens of drives over the last two decades and maybe one of them died. I've fucked up by accidentally deleting something I shouldn't have deleted, mishandling scripts, etc. way more times. That's why I think regular backups must always come before RAID.

>sure, everything you backed up should be OK; but everything since your last backup will not be

This is an issue for servers and corporations, not so much for home users. Automatic daily backup, plus maybe manual backup when you e.g. finish some big important chunk of work, is generally enough.

>How much does such a setup cost (presumably if your computers don't have it onboard, you would need at least 2x 10G PCI cards and a 10G switch)
Cisco Catalyst 3750Es can be had for $100, Nexus 5010/5020s can be had sometimes for under $200, NICs are dirt cheap

ebay.com/itm/Cisco-UCS-6120xp-flashed-to-Nexus-5010-5-0-3-20-usable-10Gb-SFP-ports/202288591088?epid=80554362&hash=item2f1956f0f0:g:7soAAOSwJWla0iRu

>I've seen some studies stating the millennial generation goes back to 1977
They're wrong.

en.wikipedia.org/wiki/Strauss–Howe_generational_theory#Generational_archetypes_and_turnings

Also, if you want something between the Catalyst and the Nexus, there's the HP 5800, which has 4 10GbE ports and can accept an expansion card for 4 more.

ebay.com/itm/HP-H3C-S5800-32C-A5800-24G-JC100A-24Port-10-100-1000-Switch-Good-Condition/222899247100?hash=item33e5d497fc:g:PNIAAOSwspdaprhW:sc:UPSGround!60645!US!-1

And NICs are under $20

ebay.com/itm/671798-001-COMPATIBLE-10GB-MELLANOX-CONNECTX-2-PCIe-10GBe-ETHERNET-NIC/350983607686?epid=1604121398&hash=item51b840d586:g:LCIAAOSwiDpa~GUh

>On the contrary I think you'll find that scaling up is usually more expensive compared to scaling out.
No, actual physical size. Running most of your needs in one mATX case has its advantages in tiny apartments. Many devices need many shelves and cause a lot of cable clutter.

For a business, virtualization is definitely the way to go. It also lends itself to cloud environments if your organization is allergic to CapEx or has a business need for the scalability IaaS / PaaS can facilitate.

For home use, whatever the fuck fits your use case and resources is fine.

I have a NAS with plenty of RAM, so aside from general storage I also use it as a hypervisor to run a few VMs with microservices like my DHCP / DNS / test web server / etc., but, there's no reason you couldn't just install Linux on some old junker machine to accomplish the exact same thing.

1 virtualization PC = a single point of failure for everything.
2 virtualization PCs = now you've got something.
Many single-purpose servers = power, upgrade, and maintenance time/cost issues.

What if I just want to play gaymes?

Bump although this thread is dead

Nice, you're like me. Fuck RAID. Mirror onto bare metal, put it in a safe.

Starting to get a little expensive with my Hi-Res DSD addiction. :/

For Linux users who still want to game, virtualization can be a nice thing if you add stuff like Looking Glass into the mess.

Docker + Rancher for clustering. You really don't need VMs unless you want to play games on them (PCI passthrough and such; GPUs in Docker work badly) or you want to run more than just Linux (like Linux + Open/NetBSD for services and Windows for gaming). And Docker is much more useful because of the automation (docker-compose and Dockerfiles), tons of available images, and the devops tooling. Manually deploying and installing stuff in 2018 is degeneracy.
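To make the automation point concrete, a minimal docker-compose.yml sketch; the images and host paths here are illustrative, not a recommendation:

```yaml
version: "3"
services:
  transmission:                     # the torrent box, as a disposable container
    image: linuxserver/transmission
    ports:
      - "9091:9091"                 # web UI
    volumes:
      - /srv/downloads:/downloads
  samba:                            # file sharing off the same host
    image: dperson/samba
    ports:
      - "445:445"
    volumes:
      - /srv/files:/share
```

`docker-compose up -d` brings the whole stack up reproducibly; no manual installs, and the file itself is your documentation.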

Recommend me something to deploy OS images using PXE, i.e. auto-installing Debian, CentOS, or Arch on one of my ThinkPads without hassling with the installer, and then auto-configuring it with Ansible.
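dnsmasq in proxy-DHCP mode is one lightweight way to do the PXE half, since it doesn't fight your existing router's DHCP. A sketch, assuming a 192.168.1.0/24 LAN and a pxelinux setup under /srv/tftp (both placeholders):

```conf
# /etc/dnsmasq.conf -- PXE via proxy DHCP alongside the router's own DHCP
port=0                         # disable DNS; we only want PXE/TFTP here
dhcp-range=192.168.1.0,proxy   # adjust to your subnet
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp
pxe-service=x86PC,"Network install",pxelinux
```

Point pxelinux at a Debian netboot image with a preseed file (or kickstart for CentOS) for the unattended install, then run Ansible against the freshly installed host, or have it `ansible-pull` on first boot.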