Intel are done for

Brutal.
semiaccurate.com/2018/11/09/amds-rome-is-indeed-a-monster/

Attached: AMD_Rome_closeup[1].jpg (1124x1076, 201K)

>intel has no chance on servers
unless AMD magically convinces 90% of companies worldwide to throw out perfectly working servers and gimp themselves on shitty single-blade early adopter AMD servers

Nobody said it would be instant. But anyone replacing EoL Intel servers, adding extra grunt, or doing new builds, plus new startups, will be eyeing EPYC.

AMD will most definitely take a substantial bite out of Intel's market share.

Charlie Demerjian writes like a pajeet, the article is barely readable.

Long term, Intel's only chance is per-core software licensing.

>AWS, Google, Baidu (or whatever the fuck), Oracle all adopt EPYC
>AHHAHA NOBODY IS ADOPTING SINGLE RACK FOR TWICE THE POWER LMAO

>to throw out a perfectly working servers and gimp themselves on shitty single blade early adopter amd servers
That's exactly what Super 8 will do.

It's all about new servers; nobody cares about existing servers. When those 90% do upgrade, do you think they're going to go to Intel or AMD?

Charlie also confirmed 8c CCX.

He's wrong, as usual lol

Pretty sure Google wasn't on the slides; it was Azure.

>youtube.com/watch?v=DaUy880vtRM
starts at 9:00

Poor kike.

>someone found a CPU from ~753 BC and thinks it's going to beat a CPU from 2018
lmao are you retarded

did you see how charlie looks? he looks poor and fat and stupid, no wonder he likes amd lol

Intel

I was wondering why Zen adoption on servers was rather poor, but then it hit me: they were probably also waitfagging for 7nm like the rest of us...

*o

Attached: epyc suicide.png (1070x601, 816K)

>the whole stagnation of performance in the CPU market was caused by Intel being unrivaled
>AMD has a really good platform now
>people want to see Intel dead now, creating another era of stagnation because this time AMD is unrivaled

I don't get it.

Zen 2 will be 14nm IO plus 8 or 16 core 7nm chiplet design. Zen 3 will be 12nm IO and 7nm chiplets. Zen 4 will be 7nm IO and 5nm chiplets.

You are truly a shallow NPC.

It's a running gag about Jews. We want competition, not a monopoly. If Intel and AMD each held about 50% market share, imagine the savings both consumers and professionals could make buying new hardware.

Anyhow Intel can't even produce a cost effective CPU for the desktop right now so it's moot.

>no AVX-512
into the trash

>If Intel and AMD each held about 50% market share, imagine the savings both consumers and professionals could make buying new hardware

Competition means faster advancement in consumer technology. While this sounds good, it basically means you'll have to upgrade your hardware more often, thus spending more money.

The stalled CPU market had its charm: sure, you had high initial costs to get a good Intel CPU, but you could keep it for 5-6 years without feeling any restrictions.

EPYC does not need it. It gets similar floating-point results by other means (full-width 256-bit AVX2 across far more cores).

>Advancing technology/speed is bad
Only if you live in a Philip K. Dick book. Sensible non-NPCs upgrade when they feel the need to upgrade. For everybody else there's Applel.

It makes me sick

Milan.
You can also enjoy 350W Cascade Lake-AP housefires from Intel.

Oh, sweet summer child. You seem not to have been around when your hardware was outdated the moment you bought it.

I was. I'm 53. I remember Athlons and Cyrix. I upgraded when I needed to. I just ignore the tech wars during those periods. A new line needs at least a 50% uplift in certain loads over the previous tech before I consider upgrading. For loads that require lots of threading I am already beginning to see that over my current build with a 4770K. Not so much in single threading but then I don't game much. Zen 2 or perhaps Zen 2+ might be the tipping point. I shall watch this space.

>i7 7700k

>what are smartphones?

Smartphones have become a keeping-up-with-the-Joneses thing. Apple thrives on that.

>intel will die in your lifetime

Attached: 1540134880758.jpg (1588x952, 237K)

What's with those SATA cables?

>90% of the companies worldwide to throw out a perfectly working servers
You know they do that every year, right? Or at least every two years.

i want AMD gf

Attached: 1539391622864.jpg (757x627, 87K)

>Intel has no chance in servers
>The competition can't compete.

INTEL IS FINISHED

I know if you've got an enterprise machine you'd hold onto it for over a decade, but a lot of companies turn over their hardware pretty quickly. Sometimes it's left up to regional or department-specific quotas and scheduling, but it's typically every 3-5 years, and for some it's every tick and tock.

That's why you'll see Xeons and ECC flood the used market; as always it starts off cheap, then supply and demand does its thing.

if intel dies x86 dies too, and that's a good thing

>Charlie also confirmed 8c CCX.
Because no one is ever wrong on the internet.

You know, RISC-V is awesome beyond its praise, but it's no silver bullet and has its own flaws.

>>>/reddit/

NONONONONO

Attached: 1487297768743.jpg (960x878, 125K)

THAT'S how ryzena looks?

The only thing awesome about it is being a truly open architecture. The design itself is a rewarmed 80's RISC.

>smartphone
>new tech

Every year it's the same shit; there have been no major advancements in smartphones over the last 3 years. An S7 is just as capable as an S9, an iPhone 6 is just as capable as an iPhone XR, a Pixel is just as good as a Pixel 3.

>loyalty to a brand
I never understood that. You've got to be fucking retarded.

Remains to be seen for products that far out in the future. The master I/O die setup may simply be the most economical choice at this time to release a product on time while being cost and supply effective.
It is always more performance effective to put as much processing on a single die as possible. I expect this design style to be a one-off approach (excluding a "Zen2+") and AMD will go back to putting more things on a single die when 7nm matures/EUV is used for more and more layers.

>AVX-512
Barely works on Intel's designs; provides single-digit percentage benefits due to thermal density limitations (the heavy downclocking under AVX-512 loads).
I called native AVX-256 last year and dozens of shills shit on me. It was the -lowest- hanging fruit for the entire market, from games to hobbyists to servers to HPC.

Zen's design requires every core in a CCX to be linked to every slice of L3$. The level 3 is also the data crossbar for the entire CCX. The cores have (cycle delay) native priority to the closest L3 slice simply because of distance, but every L3 slice must also be linked to every core.

An 8 core CCX would require the L3 cache to be 4 times as complex. This is both an insane engineering feat and a power-hungry mess of routed wires, ripe for mistakes/hardware bugs/litho defects.

So either we do not have native 8-core CCXs, or Zen 2 has a novel, redesigned level 3 cache.
Which one makes more sense?
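The "4 times as complex" estimate above can be sketched numerically. This is a toy model, assuming (as the post does) a fully connected core-to-L3-slice fabric with one slice per core; it is not a statement about AMD's actual wiring:

```python
# Toy model of a fully connected CCX: every core links to every L3 slice,
# and Zen pairs one L3 slice with each core, so links = cores * slices.
def ccx_l3_links(cores: int) -> int:
    slices = cores  # one L3 slice per core, as in Zen 1's 4c CCX
    return cores * slices

links_4c = ccx_l3_links(4)   # 4 * 4 = 16 links
links_8c = ccx_l3_links(8)   # 8 * 8 = 64 links
print(links_8c // links_4c)  # 4: the crossbar grows quadratically with cores
```

The quadratic growth is the whole argument: doubling the core count quadruples the point-to-point wiring under this topology.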

I am kinda doubtful of Charlie and SemiAccurate as a source.
From what I remember, he was barred from certain events, as he is known to be on AMD's payroll and has apparently more than once spread false information to discredit AMD's competition.
If I remember correctly, he was the one that started the woodscrew meme.

>and AMD will go back to putting more things on a single die when 7nm matures/EUV is used for more and more layers.
Why would they do that when cost per mm^2 only goes up going forward?
>Zen's design requires every core in a CCX to be linked to every slice of L3$.
That's not a requirement, but a design choice.

I don't get it. The market for datacenter servers is growing at a ridiculous rate, yet people are talking about Intel/AMD competition as if the market were stagnant or shrinking. Is there something I'm missing, or are people just shilling for the sake of it?

hmmmm....let's see. I buy a product and like it. I want the company to keep making good products, so I support them by buying their products and not their competitor's. If I don't support them by buying their products, then they go out of business and I'm left with only their competitor as an option, which sucks. It's pretty easy to understand.

Or maybe it's disloyalty to a brand with proven anticompetitive practices, used to keep its mature, dysfunctional product from falling out of relevance.

AMD managed to put 2x the compute of Intel on a single socket.
Basically Intel needs a dual socket of their best CPUs to even be comparable to 1 (one) AMD CPU.

That's a huge deal.

A single EPYC 2 was ~10% faster than two top-of-the-line $15,000 Xeons, and it's still likely just a 180W chip. So AMD did it at half the power consumption of the Intel platform too.
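A quick sanity check of the "half the power" claim using the thread's own figures. The 205 W per-Xeon TDP is my assumption (top-bin Skylake-SP), not something stated in the demo, and the EPYC TDP is the thread's speculation:

```python
# Perf-per-watt sketch: one ~180 W EPYC 2 sample vs. two 205 W Xeons,
# with the demo's ~10% single-socket-vs-dual-socket speedup.
epyc_w = 180              # claimed EPYC 2 TDP (speculative)
xeon_w = 2 * 205          # assumed dual Xeon Platinum TDP (205 W each)
speedup = 1.10            # "~10% faster" from the demo

power_ratio = epyc_w / xeon_w          # ~0.44: a bit better than "half"
perf_per_watt = speedup / power_ratio  # ~2.5x the Intel platform's perf/W
```

Under these assumed TDPs the "half the power" line holds up, and the perf/W gap is the more striking number.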

>CPU makers are lifestyle brands, and I choose which CPU to buy based on which brand identity I want to associate myself with
(You)

Attached: npcmeme.jpg (1400x1400, 211K)

Attached: 1539106095263.png (992x1043, 614K)

Correction: it was no more than 7.5% faster, and on a prototype. We do not yet know the final speed or power consumption. Based on what we have seen, it will most likely be 10% faster as a final product; it may even surprise us by being faster still, although I doubt it.
Cascade Fail is shaping up to be a joke: Intel bluster trying to claw back its customers on hopes and dreams.

>Built dual Xeon Harpertown 45nm 8-core beast in 2008
>Got 10 years out of it and upgraded to TR1950x when current AAA titles started lagging

I'm probably going to step up to the 2990WX and that will be the final step for my 10-year build. It is absurd how much power the 1950X has in multi-core applications; it just plows through Photoshop and Lightroom effortlessly. I can stack as many layers as I want and run all the programs I want simultaneously and barely touch 50% resource usage.

>buy Slot A Athlon 1000 brand new in 2000
>Palomino core with double L2 drops literally two months later and AMD drops further dev of the Slot A platform

reeeeeee

>a-monster/
waiting for the boomers to show up

EUV reduces mask count and production complexity. 7nm is expensive as all hell this very moment. I was responding to a user talking about Zen 4. Get your head straight.

>but a design choice
The L3 would need to be radically different, or else the way the cores access the L3 would need to be radically different.
You're suggesting AMD has thrown away the entire philosophy responsible for their staggering comeback.
I ain't buying it.

>if intel dies x86 dies too
No, the government would mandate it become open, or, less likely, AMD becomes the sole owner of it and chooses a second source just like Intel did almost 35 years ago.

The 480 prototypes were held together with woodscrews and were just for demo; that's not a meme.

DELID DIS

Attached: 1489122339460.png (1070x601, 495K)

>Why would they do that when cost per mm^2 only goes up going forward?
I don't think that it will happen with high-core-count CPUs, but it will for desktop and laptop processors with up to 8 cores.
Instead of having a chiplet and a small IO chip, it could make sense to have them on the same die (fewer manufacturing steps, smaller area for laptop APUs). On the other hand, AMD can't reuse rejected server chiplets that way.

Don't forget that chip was a prototype.

Those companies are always, ALWAYS expanding their shit, so while they won't just dump their working servers, the next ones to roll into the room will not be Intel boxes.

lol goodnight Intelfags.

My friend who works at the New York Google office says they're fielding AMDs for their farms.

I said, he said performance was a key, intel doesnt have a present solution to grab the server market.

*sip*

Fuck you goy you'll pay for this

If Google are in then Intel are in trouble.

Wrong... Google already has data on Sapphire Rapids and Copper Lake, which are new innovative server architectures far more powerful than Skylake

Not really.
The other platforms sadly don't have an "open enough" computer platform tied to them.
To sell an x86 CPU, it must go inside an IBM PC clone with legacy boot and the standard VGA/VESA/keyboard/mouse stuff.
ARM/MIPS/RISC-V etc.? They can ship inside a completely proprietary, incompatible, unusable hell where the NSA-OS is the only thing that boots on it.
This is why, for example, you can install Linux on almost any x86 box, while most ARM platforms can't get even close to that.

AMD is going to give Amazon some epyc service, that's a fuckload of money right there.

Yeah, and AMD has a contract with Microsoft to provide CPUs to some of Microsoft's new datacenters. Woo hoo!

>I said, he said
what did he mean by this

Charlie is always hyperbolic as fuck, but he was dead-on with this story (albeit behind a paywall) many, many months ago.

And Intel is rather fucked for the next 1.5 years at least for high core count new server sales. Cascade Lake AP is a weird offering that looks like a dual socket platform but will be functionally more like an overpriced but gimped quad socket. In order to get 12 DDR channels per socket, this is going to need a new socket and new motherboard PCBs with extremely high pin and trace densities that will be harder to manufacture than 4S systems in a lot of ways, and Intel is very unlikely to price these things so low that the 4S Xeon Gold/Plat part sales get cannibalized.

I believe that Intel's 14nm shortage stems from them finding out six months ago that Rome would be up to 64c and then diverting all possible production capacity to Cascade Lake XCC (28c @ ~690 mm^2) for stockpiling, not from people constantly buying out everything being fabbed.
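For scale on why 12 DDR channels per socket is such a routing headache, a peak-bandwidth sketch using the standard channels × MT/s × 8-byte-bus formula. DDR4-2933 as Cascade Lake-AP's supported speed is my assumption, as is the DDR4-2666/6-channel Skylake-SP baseline:

```python
# Theoretical peak DDR4 bandwidth: channels * transfers/s * 8-byte bus width.
def ddr4_peak_gbs(channels: int, mts: int) -> float:
    return channels * mts * 8 / 1000  # GB/s, decimal units

skylake_sp = ddr4_peak_gbs(6, 2666)      # ~128 GB/s per socket
cascade_ap = ddr4_peak_gbs(12, 2933)     # ~282 GB/s per socket
print(round(cascade_ap / skylake_sp, 2)) # ~2.2x the bandwidth, and roughly
                                         # that many more pins/traces to route
```

The bandwidth doubling is exactly what makes the socket and board PCB so dense: every extra channel is another ~288 pins' worth of DIMM wiring.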

>Charlie also confirmed 8c CCX.

>He's wrong, as usual lol

Did the presentation go either way on this point? 8c CCXs (vs. 2 * 4c) makes a lot more sense with the architecture presented thus far, since it would allow the 7nm compute dies to be completely free of coherence directories.

He was also dead-on about 10nm, the state of AMD's GPU team ever since Kepler, and, from recent memory, early Naples info about the 4 dies.

>it's not 2.1x faster than single socket Intels, that's best case!
>muh 4x throughput (c-ray doesn't use the new AVX lanes, or AVX at all)

t. Intel fanboy on Twitter trying to present himself as neutral, not seeing the forest for the trees (as if 60% faster worst-case than Intel's single socket solution is any less humiliating)

Meanwhile you can drop a 64-core EPYC straight into a Naples motherboard. You won't get every performance increase, of course, but you still save a bucket of money.

Funnier thing is that AMD will be on Milan by the time Intel comes up with Ice Lake Server, which still won't touch Rome, much less Milan, since it's only 48 cores.

You could, but I don't understand who would actually do this. The enterprise market that wants 2x64c boxes will also want mobos rated for higher-clocked ECC DDR4 and PCIe 4.0 for new NICs and SSDs. It is unlikely that somebody built a bunch of Naples stuff and then subsequently decided they actually needed AVX throughput or something.

It seems to me that Rome is about expanding the market for AMD more than just serving existing Naples workload customers better.

We're already here *crack* *sip*

Someone will appreciate the backwards compatibility; if it lets AMD keep a customer, that's good enough for AMD.

AMDrones pretend they're hot shit when they were irrelevant for the past 10 years and their recent success is them piggybacking off TSMC's superior fabrication which was the result of Apple and Nvidia dumping money into them for years before they even bothered to resurface in the CPU space again.

AMD's irrelevance is what placed Intel in a state of deep slumber but AMDrones want you to believe that OEM and vendor bribing is what killed AMD, not because of the fact that the Core2Duo, Core2Quad and the first two generations of i7s were literally superior to whatever AMD coughed up during that era.

But Zen was made on Glofo you kike

Nobody buys high-speed ECC, because there isn't any.
Stop assuming things for HPC based on your gaymen instincts.
Also, nobody is going to rush PCIe 4.0 into their servers just because gaymen say so. I've seen upgrades to infrastructures that couldn't tax PCIe 3.0, and even then the CPU overhead was huge and not worth it.

>Just Wait™
all of my keks

Attached: 1541538120178_0.jpg (818x693, 91K)

AMD got fucked mostly by not being able to grab Opteron marketshare. Xeons were strictly inferior until Nehalem and not distinctly better until Sandy Bridge.

>heh your superior products are no match for our plodding mediocrity

It's a bold strategy Cotton let's see how it plays out for them

Attached: 1499957247669.png (653x726, 84K)

High-speed DDR4 ECC is finally coming out, and it will be highly desirable on 48+c platforms to prevent starvation. 100/200 GbE PCIe 4.0 NICs have already been selling all year in anticipation of servers getting support later, and it is not hard to bottleneck enterprise U.2 SSDs on x4 PCIe 3.0 connections.

Just because your shop doesn't need more I/O and memory bandwidth doesn't mean that plenty of other customers don't want it.
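The x4 PCIe 3.0 SSD bottleneck is easy to check with link math. A rough sketch that ignores TLP/protocol overhead; `pcie_gbits` is just an illustrative helper, not a real library call:

```python
# Usable PCIe line rate: GT/s per lane * 128b/130b encoding * lane count.
def pcie_gbits(gen: int, lanes: int) -> float:
    gt_per_s = {3: 8.0, 4: 16.0}[gen]  # PCIe 4.0 doubles the signaling rate
    return gt_per_s * (128 / 130) * lanes

x4_gen3 = pcie_gbits(3, 4)    # ~31.5 Gbit/s (~3.9 GB/s): the U.2 SSD ceiling
x16_gen3 = pcie_gbits(3, 16)  # ~126 Gbit/s: barely fits a 100 GbE NIC
x16_gen4 = pcie_gbits(4, 16)  # ~252 Gbit/s: headroom for 200 GbE
```

So a fast NVMe drive really does run into the ~3.9 GB/s wall on x4 gen3, and a 100 GbE NIC needs either x16 gen3 or x8 gen4, which is the whole pull toward PCIe 4.0 servers.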

Nigga, the Phenom II X4 and X6 went toe to toe with the newer i5 and i7s at half the price

>Wrong... Google already has data on Sapphire Rapids and Copper Lake, which are new innovative server architectures far more powerful than Skylake

It's Cooper Lake, dummy. And no, even if the people who got hired this summer to start designing Cooper Lake don't fuck up, it is unlikely that they get anything out the door before 2021. Ice Lake, Sapphire Rapids, and beyond are even more speculative and entirely contingent on Intel somehow getting their 10 nm fabrication back on track.

>H is for Honorable company
>H is for Higher IPC
>H is for Hitler
>H is for Holocaust part deux

>Just because your shop doesn't need more I/O
The biggest particle accelerator is now a "shop"...
CERN's readout still has Opterons and Xeons with Mellanox, Nallatech and custom cards (which I worked on) at 40Gbps.
There's a limit in every readout, and that's the CPU overhead of PCIe. The CPU is saturated long before you run out of PCIe or RAM bandwidth.
Have you ever seen how much CPU horsepower is required for just 4 cards (10Gbps * 12 channels * 4 ports per card)?
They'll gladly upgrade those Xeons, but they need to change the motherboards too.
If they had bought EPYCs in 2017, they would be able to just upgrade the CPUs during LS2 (Long Shutdown 2) in 2020.
Look how motherfucking convenient it is to just upgrade your CPU capabilities and add functionality (e.g. add data processing to your DAQ infrastructure). But by your gaymen logic, they should upgrade because of the PCIe 4.0 meme or higher ECC frequencies, which are out of spec, idiot, and nobody is going to put their signature on a system running outside of JEDEC.

>H is for Huge (but we can still get away with it due to chiplets)