Alright, SysAdmins of Jow Forums. Let's settle this one once and for all.
Debian NetInstall vs. CentOS Minimal for servers: which side are you on, and why?

My personal preference is Debian: it's not Red Hat, the networking scripts are easier, it's rock stable, it has less overhead on system resources, it respects the GNU ideology by default, and you don't need to "yum install epel-release" just to install packages like nano or htop.


It's really a matter of which one you have the most experience with.

Found the brainlet

nano penis

CentOS/RHEL, because whatever org you're working for probably already has a support contract with them.
Also, Amazon Linux is based on it, so if you're in the cloud, you'll probably need to know it.

rhel if asked to
debian when I can choose
centos when it was rhel but they decided to stop paying for this machine

Debian for my infrastructure because it can be updated in-place and I will update it, CentOS for customers because 10 years of support and they never update it.
> you don't need to "yum install epel-release" just to install packages like nano or htop
Who cares. Sooner or later, orchestration tools will be doing all the work anyway.
> Centos/RHEL because whatever org you're working for probably already has a support contract with them.
Haha, I remember that one story about the website for one really big oil megacorp (company lived about three years off the advance payment alone).
> some complex structure, balancers, dbservers, frontend, backend - no Nginx, tho, they don't trust it, Apache FTW.
> Two months in, admin tells "fuck it" and starts deploying everything via Ansible. Smart move BTW.
> Their IT subcompany suddenly comes up with the idea of using SLES for OS
> It was not defined in the contract and that company I worked with was ready to do anything
> OK then, everything is rewired for SLES; I remember the admin complained about how he had to download some packages from shady Chinese repos
> Crunches before the release
> During one of the demo events one of their IT workers asks "what made you choose SLES? We have a RHEL support contract, BTW".

Personally? Debian

Usually? CentOS

People cry when you put Debian on a box and they talk up how much they "know linux" and then shit themselves when yum doesn't work.

It's easier to just avoid the whole conversation if you know both. Plus, there's something to be said for being able to get Red Hat support if you ever need it; not that I do, but I might not always be here, and all that.

Debian, without a doubt.

Ubuntu Server or CentOS. The former if I want things to be easier for whoever is going to interact with the server. The latter if I want things rock stable because I don't plan on upgrading the server for a decade. So Postfix on CentOS for example.

>Ubuntu Server
Get a load of this faggot.

Debian is trash. If you're going to use a distro that just copy-pastes whatever Red Hat does, you should just go with Ubuntu for better support, or CentOS.

>Ubuntu for better support
Copy pasting shit written by Indians on the Ubuntu forums and stackoverflow is not "support".

You aren't even a real sysadmin are you?
You're some help desk jockey who's letting his bulldog mouth overload that puppy dog ass.

>You aren't even a real sysadmin are you?
Most likely more than you are by any metric, whether it's user count, host count, uptime, or infrastructure.

RHEL

I pay for support which costs substantially less than downtime.

Oh, we are gonna try dick waving huh?
Let's see, hmm....
Cloudability, agile development, scalability, dark web, blast processing, megaport

Do I win the bullshit buzzwords contest now?
Faggot.

The fact that those are the buzzwords you think of is how I know you're a junior. Get back to me when you've got people trying to sell you on "converged infrastructure", "X defined Y", "unified computing", or any of the other things you see when you're not a startup webhost.

>be me
>Advanced networking and system integration student
>intern work time to finish degree
>"aight boys, imma gonna setup some servers, maybe even some routers, do trunks and what not"
>we want you to develop a CI/CD pipeline, and that includes module tests and chaos tests
>thefuckisthat.jpeg

why is life so shitty, bros? all i wanted was to do some server stuff... not this devops bullshit

It's hyper-converged infrastructure, the least you could do is get it right.

Not that it matters, since only a retard would use a non-stable distro like Ubuntu anyway. Have fun with your Amazon cancer and snapd AIDS, Mr. "I'm so pro you can suck my nuts".

I hope whoever hired you isn't paying you in real money.

Sounds like you got meme'd into working Cloud DevOps.

>It's hyper-converged infrastructure, the least you could do is get it right.
Lol you retard, hyper-converged infrastructure =/= converged infrastructure. They're two different concepts.

>They're two different concepts.
No shit, and one is actually relevant to this decade.
Nice job.

user you're fucking retarded if you think that organizations don't use converged infrastructure and just hyper-converged. Not that you even know the difference because you literally thought converged infrastructure wasn't a thing a minute ago.

>you literally thought converged infrastructure wasn't a thing a minute ago
Ok

> when you've got people trying to sell you on "converged infrastructure"
>It's hyper-converged infrastructure

When were you last an admin? I am dead serious.
I want to know when the last time was that someone tried to sell you on "converged infrastructure", because if it was this decade then we shouldn't even be talking to each other.

>because if it was this decade then we shouldn't even be talking to each other.
You're right, because you're clearly someone that got hired in a junior position this year and hasn't been allowed to touch any hardware.

OK, you're entitled to your opinion, here's mine: You're an out of touch jackass who uses Ubuntu for production and is about a decade behind the industry.

Cool, you're a moron who follows worst practices, opinion discarded.

Have a nice day, watch out for those Gateway salesmen, I hear they are really persuasive.

>the battle of the ages: neckbeard #492638 vs. neckbeard #379473


Arch + GNU/HURD

I especially like how he never once even tries to defend his use of Ubuntu in production.

>Listen here kid, when you're old like me you'll come to appreciate the taste of Canonical's cock as it massages your tonsils

It's always so funny to watch these people get fired, it usually happens right after a patch window.

centos LMFAO

We are truly in an information society. Now more than ever, moving
vast amounts of information quickly across great distances is one of
our most pressing needs. From small one-person entrepreneurial
efforts, to the largest of corporations, more and more professional
people are discovering that the only way to be successful in the '90s
and beyond is to realize that technology is advancing at a break-neck
pace---and they must somehow keep up. Likewise, researchers from all
corners of the earth are finding that their work thrives in a
networked environment. Immediate access to the work of colleagues
and a ``virtual'' library of millions of volumes and thousands of
papers affords them the ability to incorporate a body of knowledge
heretofore unthinkable. Work groups can now conduct interactive
conferences with each other, paying no heed to physical
location---the possibilities are endless.

You have at your fingertips the ability to talk in ``real-time'' with
someone in Japan, send a 2,000-word short story to a group of people
who will critique it for the sheer pleasure of doing so, see if a
Macintosh sitting in a lab in Canada is turned on, and find out if
someone happens to be sitting in front of their computer (logged on)
in Australia, all inside of thirty minutes. No airline (or tardis,
for that matter) could ever match that travel itinerary.

The largest problem people face when first using a network is
grasping all that's available. Even seasoned users find themselves
surprised when they discover a new service or feature that they'd
never known even existed. Once acquainted with the terminology and
sufficiently comfortable with making occasional mistakes, the
learning process will drastically speed up.

Ok cocksuckers.
I am a collegefag who was given the chance to work on some small projects for several local companies, and my sysadmin knowledge isn't far beyond "running Debian on a virtual machine and googling problems".

Where can I learn more about this stuff?
What is the best distro, and the best way to host it, so that it won't bite me in the ass big time?

Getting where you want to go can often be one of the more difficult
aspects of using networks. The variety of ways that places are named
will probably leave a blank stare on your face at first. Don't fret;
there is a method to this apparent madness.

If someone were to ask for a home address, they would probably expect
a street, apartment, city, state, and zip code. That's all the
information the post office needs to deliver mail in a reasonably
speedy fashion. Likewise, computer addresses have a structure to
them. The general form is:

a person's email address on a computer: user@somewhere.domain
a computer's name: somewhere.domain

The user portion is usually the person's account name on the
system, though it doesn't have to be. somewhere.domain tells
you the name of a system or location, and what kind of organization it
is. The trailing domain is often one of the following:

com
Usually a company or other commercial institution or organization,
like Convex Computers (convex.com).

edu
An educational institution, e.g. New York University, named nyu.edu.

gov
A government site; for example, NASA is nasa.gov.

mil
A military site, like the Air Force (af.mil).

net
Gateways and other administrative hosts for a network (it does not
mean all of the hosts in a network). {The Matrix, 111. One such
gateway is near.net.}

org
This is a domain reserved for private organizations, who don't
comfortably fit in the other classes of domains. One example is the
Electronic Frontier Foundation named eff.org.

Each country also has its own top-level domain. For example, the
us domain includes each of the fifty states. Other countries
represented with domains include:

au Australia
ca Canada
fr France
uk The United Kingdom. These also have sub-domains of things like
ac.uk for academic sites and co.uk for commercial ones.

FQDN (Fully Qualified Domain Name)

The proper terminology for a site's domain name (somewhere.domain
above) is its Fully Qualified Domain Name (FQDN). It is usually
selected to give a clear indication of the site's organization or
sponsoring agent. For example, the Massachusetts Institute of
Technology's FQDN is mit.edu; similarly, Apple Computer's domain name
is apple.com. While such obvious names are usually the norm, there
are the occasional exceptions that are ambiguous enough to
mislead---like vt.edu, which on first impulse one might surmise is an
educational institution of some sort in Vermont; not so. It's
actually the domain name for Virginia Tech. In most cases it's
relatively easy to glean the meaning of a domain name---such
confusion is far from the norm.
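The address structure described above can be sketched in a few lines of code. This is a minimal illustration, not a full address parser; the example address is made up from the Air Force domain mentioned earlier in the text.

```python
# Split a user@somewhere.domain address into the pieces described above:
# the user portion, the FQDN, and the trailing top-level domain.

def parse_address(address):
    """Return (user, fqdn, top_level_domain) for an address string."""
    user, _, fqdn = address.partition("@")   # everything after '@' is the FQDN
    tld = fqdn.rsplit(".", 1)[-1]            # trailing label, e.g. 'mil'
    return user, fqdn, tld

user, fqdn, tld = parse_address("jsmith@hq.af.mil")
print(user, fqdn, tld)  # jsmith hq.af.mil mil
```

Real validation is far more involved (quoting rules, internationalized domains), but for gleaning the organization type from the trailing domain, this is all the structure there is.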

I used to just hobby run a personal server, but I just woke up one day as the sysadmin for a couple of important servers.

How do I not suck?

FreeBSD

Every single machine on the Internet has a unique address, {At least
one address, possibly two or even three---but we won't go into
that.} called its Internet number or IP Address. It's actually a
32-bit number, but is most commonly represented as four numbers
joined by periods (.), like 147.31.254.130. This is sometimes also
called a dotted quad; there are literally thousands of different
possible dotted quads. The ARPAnet (the mother to today's Internet)
originally only had the capacity to have up to 256 systems on it
because of the way each system was addressed. In the early eighties,
it became clear that things would fast outgrow such a small limit;
the 32-bit addressing method was born, freeing thousands of host
numbers.

Each piece of an Internet address (like 192) is called an ``octet,''
representing one of four sets of eight bits. The first two or three
pieces (e.g. 192.55.239) represent the network that a system is on,
called its subnet. For example, all of the computers for Wesleyan
University are in the subnet 129.133. They can have numbers like
129.133.10.10, 129.133.230.19, up to 65 thousand possible
combinations (possible computers).
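The dotted quad is just a 32-bit number written as four 8-bit octets, so converting between the two forms is simple bit shifting. A small sketch, using the Wesleyan 129.133 prefix from the text as the example:

```python
# Convert between dotted-quad notation and the underlying 32-bit integer.

def quad_to_int(quad):
    """Pack a dotted quad like '129.133.10.10' into one 32-bit integer."""
    value = 0
    for octet in quad.split("."):
        value = (value << 8) | int(octet)  # each octet is 8 bits
    return value

def int_to_quad(value):
    """Unpack a 32-bit integer back into dotted-quad notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

addr = quad_to_int("129.133.10.10")
print(int_to_quad(addr))               # 129.133.10.10
print(int_to_quad(addr & 0xFFFF0000))  # 129.133.0.0 -- the two-octet network prefix
```

Masking off the low octets, as in the last line, is exactly how the network portion (the first two or three pieces) is separated from the host portion.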

IP addresses and domain names aren't assigned arbitrarily---that
would lead to unbelievable confusion. An application must be filed
with the Network Information Center (NIC), either electronically (to
[email protected]) or via regular mail.

Ok, computers can be referred to by either their FQDN or their
Internet address. How can one user be expected to remember them all?

They aren't. The Internet is designed so that one can use either
method. Since humans find it much more natural to deal with words
than numbers in most cases, the FQDN for each host is mapped to its
Internet number. Each domain is served by a computer within that
domain, which provides all of the necessary information to go from a
domain name to an IP address, and vice-versa. For example, when
someone refers to foosun.bar.com, the resolver knows that it should
ask the system foovax.bar.com about systems in bar.com. It asks what
Internet address foosun.bar.com has; if the name foosun.bar.com
really exists, foovax will send back its number. All of this
``magic'' happens behind the scenes.
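That behind-the-scenes "magic" is ordinary name resolution, and most languages expose it directly. A minimal Python sketch; foosun.bar.com from the text is fictional, so the lookup below uses localhost instead:

```python
# Ask the system resolver to map a name to an Internet address,
# just as the text describes asking foovax about foosun.bar.com.
import socket

def resolve(fqdn):
    """Return the IPv4 address the resolver reports for a name."""
    return socket.gethostbyname(fqdn)

print(resolve("localhost"))  # 127.0.0.1 on most systems
```

Under the hood this walks the same chain the text describes: the resolver asks the name servers responsible for the domain, and they answer with the number.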

Rarely will a user have to remember the Internet number of a site
(although often you'll catch yourself remembering an apparently
obscure number, simply because you've accessed the system
frequently). However, you will remember a substantial number of
FQDNs. It will eventually reach a point when you are able to make a
reasonably accurate guess at what domain name a certain college,
university, or company might have, given just their name.

The Internet is a large ``network of networks.'' There is no
one network known as The Internet; rather, regional nets like SuraNet,
PrepNet, NearNet, et al., are all inter-connected
(nay, ``inter-networked'') together into one great living thing,
communicating at amazing speeds with the TCP/IP protocol. All
activity takes place in ``real-time.''

The UUCP network is a loose association of systems all communicating
with the UUCP protocol. (UUCP stands for `Unix-to-Unix Copy
Program'.) It's based on two systems connecting to each other at
specified intervals, called polling, and executing any work
scheduled for either of them. Historically most UUCP was done with
Unix equipment, although the software's since been implemented on
other platforms (e.g. VMS). For example, the system oregano
polls the system basil once every two hours. If there's any
mail waiting for oregano, basil will send it at that time;
likewise, oregano will at that time send any jobs waiting for
basil.
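The store-and-forward polling described above can be modeled as two job queues that are swapped whenever a connection is made. This is a toy sketch of the idea, not real UUCP; the node names come from the text, everything else is invented for illustration.

```python
# Toy model of UUCP polling: each system queues work for its neighbors,
# and a poll exchanges everything waiting in both directions.
from collections import defaultdict

class UUCPNode:
    def __init__(self, name):
        self.name = name
        self.outbound = defaultdict(list)  # neighbor name -> queued jobs
        self.received = []

    def queue(self, neighbor, job):
        """Store a job until the next poll reaches that neighbor."""
        self.outbound[neighbor].append(job)

    def poll(self, other):
        """Connect to a neighbor and hand over all queued work, both ways."""
        other.received.extend(self.outbound.pop(other.name, []))
        self.received.extend(other.outbound.pop(self.name, []))

basil, oregano = UUCPNode("basil"), UUCPNode("oregano")
basil.queue("oregano", "mail for oregano")
oregano.queue("basil", "news batch")
oregano.poll(basil)  # the two-hourly poll from the text
print(basil.received, oregano.received)
```

Nothing moves between polls, which is why the text calls the network store-and-forward: latency is governed by the polling schedule, not the link speed.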

BITNET (the ``Because It's Time Network'') is comprised of systems
connected by point-to-point links, all running the NJE protocol.
It's continued to grow, but has found itself suffering at the hands of
the falling costs of Internet connections. Also, a number of mail
gateways are in place to reach users on other networks.

The actual connections between the various networks take a variety of
forms. The most prevalent for Internet links are 56k leased lines
(dedicated telephone lines carrying 56kilobit-per-second connections)
and T1 links (special phone lines with 1Mbps connections). Also
installed are T3 links, acting as backbones between major locations
to carry a massive 45Mbps load of traffic.

These links are paid for by each institution to a local carrier (for
example, Bell Atlantic owns PrepNet, the main provider in
Pennsylvania). Also available are SLIP connections, which carry
Internet traffic (packets) over high-speed modems.

UUCP links are made with modems (for the most part), that run from
1200 baud all the way up to as high as 38.4Kbps. As was mentioned in
The Networks, the connections are of the store-and-forward
variety. Also in use are Internet-based UUCP links (as if things
weren't already confusing enough!). The systems do their UUCP traffic
over TCP/IP connections, which give the UUCP-based network some
blindingly fast ``hops,'' resulting in better connectivity for the
network as a whole. UUCP connections first became popular in the
1970's, and have remained in wide-spread use ever since. Only with
UUCP can Joe Smith correspond with someone across the country or
around the world, for the price of a local telephone call.

BITNET links mostly take the form of 9600bps modems connected from site
to site. Often places have three or more links going; the majority,
however, look to ``upstream'' sites for their sole link to the network.

>googling problems

That's pretty much just about every technical job in existence.

I tend to prefer Debian because it's simple (simpler?) for Linux.

If I'm going to use Unix I prefer Solaris (pre-Oracle) or Irix.

Depends, when I'm working with a more """classic""" stack (Apache/PHP or Java) I tend to use CentOS but when I use """newer""" stuff (Flask/Django, NodeJS) I use either Ubuntu LTS or Debian.

i would say Debian. it's less bloated than Ubuntu. You only need like 2 repos, unlike Ubuntu, which has like 10... CentOS is obviously king when it comes to infrastructure, but shit like EPEL is fucking annoying. CentOS is easier to break than Debian, in other words...

however... openSUSE is pretty damn good to sysadmin because of YaST: quick, fast config when you need it. It may be a bit slower than the other OSes, but it's built for fucking sysadmins. i love it. Sure, you need to know how to configure shit manually, but YaST just makes it faster when you're under pressure or lazy.

>Debian NetInstall vs CentOS Minimal
This is an argument of situation, not use. Both can do everything the other can... but let's suppose you're following some sort of industry standard and your clients/peers are all using Red Hat, know Red Hat, and expect Red Hat. If that's the case, then the best choice going forward would probably be CentOS.
For personal projects and servers no one else will ever see, Debian's my choice. Small, quick, simple, and it never breaks.


If you are doing any kind of real work with real customers, you run Red Hat so you can deflect any bullshit to them and save your SLAs. This is the reality in the real world, where things don't run on ideology.

Your buzzwords are just as generic if not more so you stupid faggot

>converged infra is not relevant in current year
As in UCSM? In what world?

Why not Red Hat?
And why is Debian supposedly so bad?


also,
>Debian, not Red Hat
my uni only supports CentOS

bonus: i told my sysadmin i use arch and he visibly cringed
but he smiled when i said it was total shit and i'm changing distro as soon as i finish this next paper

The CentOS/RHEL scheme of backporting shit to the 3.10 kernel is really getting on my nerves. It's a pain in the ass to install anything from outside the regular channels. I use Debian on my media server at home.

Fedora Atomic. Come at me Jow Forums.

gentoo with i3.