A system based on a single AMD EPYC 7401P processor delivers more cores, more memory capacity, and more PCIe® 3.0 Lanes than a system using 2 Intel Xeon Silver 4114 processors.

>A system based on a single AMD EPYC 7401P processor delivers more cores, more memory capacity, and more PCIe® 3.0 Lanes than a system using 2 Intel Xeon Silver 4114 processors.

Attached: 36C3F4F3-DF7B-4BB5-8658-27324280EAB7.png (636x900, 411K)

>$1100 processor that has the Passmark of a dual E5-2670 system that goes for $600.

Because enterprises buy second-hand servers, brainlet

check the price of a Xeon Silver 4114.
Protip: $750.
Each.

>buying EPYC when it has no AVX512 capability, no low-latency external system fabric, no large guaranteed supported infrastructure (especially after that Opteron mess), and no way to perform live VM migrations from existing Intel systems
This is why my company awarded several new contracts to Intel.


This is fucking gold.
Might save this pasta.

>intel continues to have massive security holes and performance hits
this is why several companies didn't award intel new contracts

Attached: intel_perf.gif (712x795, 151K)

I guess you could say this is truly epyc

No, CCP Games are still using Skylake-X and Broadwell-E for the foreseeable future. It's simply not worth the risk to migrate their servers to a new cluster that has untested/unproven hardware. Those security holes did not have that big of an impact on workloads after in-engine optimizations were applied two or three months ago.

EPYC still has no selling point to win over the majority of companies that cannot afford downtime. It's far less costly to add more Intel servers and retire unsupported hardware incrementally than it is to do the same with EPYC hardware.

>It's simply not worth the risk to migrate their servers to a new cluster that has untested/unproven hardware.
I hear this kind of line a lot in my work, even when things get so bad that machines are regularly failing and work can't get done. People are so fixated on the idea of maintaining their existing support infrastructure that the actual goal of having well-supported hardware (having things that actually work) becomes secondary to them.

>People are so fixated on the idea of maintaining their existing support infrastructure that the actual goal of having well-supported hardware (having things that actually work) becomes secondary to them
It's called TCO, fucknuts
This is why you're not responsible for purchasing and licensing.

>still using means they will renew contracts and buy new intel stuff

Attached: shilltel.jpg (900x1152, 180K)

It's not like they have a choice if they want to maintain their availability rate. They'll face significant downtime (greater than 5 minutes) by transitioning to a new platform that does not support live migration between hardware vendors, which no uptime-dependent company in their right mind is willing to accept.

>he thinks the performance and efficiency improvements over time can't be worth more than even days of downtime
typical intel brainlet

why do you think datacenters throw out old inefficient shit and buy new stuff that costs double-digit multiples of what a fucking game host loses in downtime swapping server infrastructure?

Larger companies operate test systems separate from the main cluster(s). They run all the same software they would in a live environment, and if it passes the test they slowly merge it into the main cluster(s) and monitor for issues. If it proves itself they can just keep adding more. Clusters are built to tolerate losing or adding hardware without downtime.
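Rough sketch of what that roll-in phase can look like. add_node/health_ok/drain_node are made-up stand-ins for whatever your orchestration layer actually exposes:

[code]
# Hypothetical phased roll-in of tested nodes into a production cluster.
# add_node / health_ok / drain_node are stand-ins, not a real API.
import time

SOAK_SECONDS = 24 * 3600              # watch each new node for a day

def roll_in(tested_nodes, cluster):
    for node in tested_nodes:
        cluster.add_node(node)            # join the cluster, start taking load
        time.sleep(SOAK_SECONDS)          # soak period before the next node
        if not cluster.health_ok(node):   # error rates, latency, ECC logs...
            cluster.drain_node(node)      # pull it back out, no downtime
            return node                   # halt the rollout, report the culprit
    return None                           # every node merged cleanly
[/code]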

Which is exactly why almost all companies whose revenue is directly related to their uptime will NEVER touch AMD, because the day that they have to migrate their existing infrastructure to a non-Intel platform is going to cost them much more in terms of lost revenue than the theoretical reduced TCO of moving to a non-Intel platform. Basic fucking math, really.
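Back-of-envelope version of that math, with completely made-up numbers (plug in your own):

[code]
# Hypothetical break-even: one-time migration downtime vs. claimed TCO savings.
revenue_per_min = 50_000       # lost revenue per minute of downtime ($)
downtime_min = 30              # cold-migration window, no live migration
migration_cost = revenue_per_min * downtime_min

tco_today = 4_000_000          # 5-year TCO if you stay put ($)
savings_pct = 0.10             # claimed TCO reduction on the new platform
tco_savings = tco_today * savings_pct

print(f"one-time migration cost: ${migration_cost:,}")    # $1,500,000
print(f"5-year TCO savings:      ${tco_savings:,.0f}")    # $400,000
# With these numbers the switch never pays for itself; shrink the
# downtime window or grow the fleet and the answer flips.
[/code]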

"I have never worked in a large IT environment" the post

Any company that's actually a tech company will know what's worth their while and do the cost analysis. People far smarter than you are already doing this. AMD has already formed partnerships with such companies (MS, Baidu, etc.).

you made a claim and i called you out with factual examples. your turn to prove me wrong now.

>People far smarter than you are already doing this
Funnily enough, I am one of those people. Evaluation, risk management, and deployment management of all systems (hardware, software, or cloud) are literally in my job description.
What facts? The only thing written in your post is a generic assumption of how IT infrastructures are managed. There are no facts or examples.


How much does Intel pay you when you convince management to buy more Intel servers?
I want to get paid too.

Companies look at the bigger picture. If they can save as much as 10% over the next several years, they will. Money already spent on hardware is a sunk cost to them. People who don't work in this environment don't understand how this shit works. Being some shitty admin in a server backroom is not worth an opinion.

Intel doesn't pay me shit because we buy all of our equipment from vendors or from OEMs like Dell or Lenovo.
Funny story though: Dell has a Ryzen Pro version of their Latitude laptops. They won't let us get a handful of those models unless we go all in and order at least 100+ of them. They want us to buy hundreds of untested laptops just to find out whether they're worth ordering in the first place.

Needless to say, we ordered over a hundred of the 8th Gen Core i7 Latitudes this year.

Slap intel on your car and ask for money

Attached: ad.jpg (1440x810, 131K)

>assumption
hmm i wonder how all those old xeons ended up on ebay if companies don't dump their old hardware in favour of new and more efficient stuff

Attached: 1491840184423.jpg (564x663, 50K)

OEMs can be cunts, yeah. They probably (definitely) get arm-wrestled by Intel too.

>every company does what we do!

Attached: 1511624853041.jpg (1200x1000, 174K)

That's only a fraction of it. A lot of it gets recycled (goes back to the seller for a small discount on newer hardware), sold in large contract auctions (never sees an online listing), or dumped in landfill. eBay and the like are the tip of the iceberg.

and all because they'll still save more money by going with the new and more efficient hardware, even if it means reduced uptime

Probably because a datacenter went out of business. Not even Google and Amazon dumped their flawed Sandy Bridge Xeons all at once because it takes time to size what you need to replace faulty systems, order the replacement, test & deploy the replacement, and then remove the old systems so you can sell it to a recycling company, who then puts their stuff up on eBay for extra cash.

Any company that cannot afford downtime does what we do. Do you know how long it takes to perform a large migration of hundreds of virtual machines from a cluster of 20 servers? You would need to be able to perform a live migration to keep your business afloat during the transition, because the lost revenue could exceed what the replacement equipment is worth PER SECOND of downtime.
You cannot perform a live migration of VMs from Intel servers to AMD servers. You will absolutely need to take systems offline to install EPYC servers, and that puts more load on the remaining infrastructure.
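For reference, this is roughly what a live migration looks like through libvirt (hostnames and VM name made up). Xeon to Xeon this just works; start the guest with a host-model CPU on Intel and point the destination at an EPYC box, and QEMU refuses the migration because the guest's CPU feature set can't be reproduced there:

[code]
# Sketch: live-migrating a running VM between two same-vendor hosts
# via the libvirt Python bindings. Names are hypothetical.
import libvirt

src = libvirt.open("qemu+ssh://xeon-node-07/system")
dst = libvirt.open("qemu+ssh://xeon-node-12/system")

dom = src.lookupByName("billing-vm-03")
# VIR_MIGRATE_LIVE keeps the guest running while memory is copied over.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
[/code]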

Is a shiny new platform from a different company, with no genuine benefit, that forces you to lose revenue just to deploy it, worth jumping ship for? No, not in a million years.

The companies buying EPYC right now can afford to because they're EXPANDING their capacity, not replacing it. Most companies don't have that luxury, because their business doesn't need its server capacity expanded, only refreshed over the lifespan of the equipment.

I'm a pleb and even I know how this shit works. I guess it comes with experience and age. All these iToddlers with their 'But Intel already won!' are kinda funny. Also, this takes years to happen. AMD won't just suddenly be in every cluster. It might take a few iterations of EPYC to get a foothold. New builds might get there faster, of course.

See my comment about new builds. Like you said, you retire older equipment as it becomes less viable to run.

AMD has been working on securing contracts. The size of the contracts wasn't exactly revealed, but you bet your ass big companies are building test systems with Epyc. Minor investments to worm their way into hardware that's superior on paper.

>even if it means reduced uptime
That's almost never the case. The TCO reduction would have to be insane for "new and efficient" hardware to be worth purchasing; the vendors would literally need to pay YOU, the customer, for the downtime caused by the transition.
And see my post about how it's impossible to perform live migrations between Intel and AMD systems.

WTF I'm buying server chips now.
Intelel BTFO.
Gimme 2 of them fampai, how much? 10k? Gimme 4 instead.

Depends on the setup. Software bias aside, you would not add AMD to a cluster, because Intel shit works only with Intel shit. You would add it as a separate cluster. Anyone who thinks you can just put AMD hardware next to Intel hardware is a moron.

But there are people who know their shit who will be talking with upper management about this anyhow. Not your typical small company IT guy.

No, it doesn't depend on any "setup" (what the fuck does that even mean? Hardware? Network? Fault tolerance/Load balancing? Application support?)
> Anyone who thinks you can just put AMD hardware next to Intel hardware is a moron.
You either literally have no idea what you're talking about or are talking in such vague and overgeneralized concepts that it still makes you look like you have no idea what you're talking about.
>But there are people who know their shit who will be talking with upper management about this anyhow
I'm guessing that you've never worked in a large IT environment before? Who is this "upper management"? The customer? The infrastructure department? The service provider? The vendor management team?
Who? You write as if you've never dealt with an IT department that didn't consist of a handful of guys in a small closet or basement.

>what is sampling
Large tech companies would already have been working on test systems months before launch under NDA anyhow.

Yeah, I'm guessing. Large-company infrastructure is not my strongest point. Software-wise I was referring to applications that rely on specific hardware instruction sets and compiler code to perform well (Intel compiler binaries, AVX, etc.). But again, I am only guessing from what I know and have read over the years. Not first-hand experience.

>Not first-hand experience.
Don't talk about shit you don't know. I live and breathe this shit at work. They underpay me to do all of the budgeting, testing, contacting, bullshitting, negotiating, cursing, receiving, returning, more cursing, more negotiating, more bullshitting, even more negotiating, pre-deployment testing, deploying, last-minute cursing, managing, and finally planning end-of-life replacements. I've been drinking more in this job out of stress than I ever did in college.

Fuck this job is going to kill me

I kinda understand. I was the IT admin for a small company with several offices dotted around the country, connected over a WAN, for a while. It was horrible and the travel sucked. Management were useless too, so I pretty much had to make all the decisions concerning IT and 'do it myself'. I was also learning on the job at the same time, which did not help. There was a consultant who would come in occasionally, who had been their go-to guy prior to my employment, but even he was just stumbling through the mire.