35% better single core performance on Intel

35% better single core performance on Intel.

Attached: Intel-Xeon-9242-AMD-Epyc-64C_15E7E51D4C164015A6C5E9DBF6A424A2.jpg (1360x907, 118K)

Other urls found in this thread:

ir.amd.com/news-releases/news-release-details/amd-announces-next-generation-leadership-products-computex-2019
semiaccurate.com/2019/04/23/a-long-look-at-the-intel-cascade-lake-9200-line/
youtube.com/watch?v=TmtpHWQbBH4
software.intel.com/en-us/articles/intel-cpu-outperforms-nvidia-gpu-on-resnet-50-deep-learning-inference
wccftech.com/intel-replies-to-amds-demo-platinum-9242-based-48-core-2s-beats-amds-64-core-2s/
blogs.nvidia.com/blog/2019/05/21/intel-inference-nvidia-gpus/
intel.com/content/www/us/en/architecture-and-technology/engineering-new-protections-into-hardware.html

you heard it boys.
Intel is back on track... just stuck on 2017q2

That doesn't mean 35% better performance, brainlet. Stop falling for Intel's skewed bullshit designed to make idiots think they are good. Intel is desperately trying to cling to their glory days and will lie and bullshit to people to keep them believing that they are at all relevant.

Why doesn't it? If 48 of one kind of core can do the same thing, in the same time that 64 of another kind of core can, doesn't that mean that the 48 cores must each be working 64/48=1.33* as fast?
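
A quick sanity check of that arithmetic, as a throwaway Python sketch (it assumes perfect scaling across cores, which real workloads rarely achieve):

# If 2x48 cores finish the same job in the same time as 2x64 cores,
# the implied per-core throughput ratio is the inverse core-count ratio.
intel_cores = 2 * 48    # dual Xeon Platinum 9242
amd_cores = 2 * 64      # dual 64-core Rome
per_core_ratio = amd_cores / intel_cores
print(f"implied per-core speedup: {per_core_ratio:.2f}x")   # ~1.33x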

ir.amd.com/news-releases/news-release-details/amd-announces-next-generation-leadership-products-computex-2019
>Testing by AMD Performance Labs as of 5/23/2019. AMD “Zen2” CPU-based system scored an estimated 15% higher than previous generation AMD “Zen” based system using estimated SPECint®_rate_base2006 results.


single thread specint2006 score of xeon 8180 = 51.09, per ghz 51.09/3.8=13.4447
single thread specint2006 score of epyc 7601 = 38.13, per ghz 38.13/3.2=11.9156

115% of 11.9156 = 13.7029

Israel still has higher frequency and therefore wins, again. Yes!!!
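
The per-GHz arithmetic above, spelled out as a small Python sketch (the SPEC scores and clocks are taken at face value from the post, not independently verified):

# Single-thread SPECint2006 per GHz, using the numbers quoted above.
xeon_8180_score, xeon_8180_ghz = 51.09, 3.8
epyc_7601_score, epyc_7601_ghz = 38.13, 3.2
intel_per_ghz = xeon_8180_score / xeon_8180_ghz     # ~13.44
zen1_per_ghz = epyc_7601_score / epyc_7601_ghz      # ~11.92
zen2_per_ghz_est = zen1_per_ghz * 1.15              # AMD's claimed ~15% uplift
print(f"{intel_per_ghz:.4f} {zen1_per_ghz:.4f} {zen2_per_ghz_est:.4f}")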

>60% higher base clock
>60% higher boost clock
>35% better single core performance
RIP Intel. It was nice knowing you.

One system is drawing 800W, the other around 450W
Guess which is which.

AMD isn't desperate enough to clock their Rome to the moon to compete.
But if Intel wants, I'm sure AMD can come out with a 400W Rome SKU; how does 64 cores at 3.3GHz sound?

that doesn't translate to 35% better single core performance you actual retard
the difference in performance is so small that it's not worth going with intel
you either go with AMD's epyc or you go with intel to have that 0.1% better performance and regret it later on when you see the electricity bills
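
To put a rough number on the power-bill point, a back-of-the-envelope sketch using the 800W vs 450W wall-draw figures quoted above; the electricity rate and 24/7 utilization are placeholder assumptions:

# Yearly running-cost delta between the two 2S systems.
intel_watts, amd_watts = 800, 450       # figures quoted upthread
price_per_kwh = 0.10                    # assumed rate; varies by region
hours_per_year = 24 * 365
delta_kwh = (intel_watts - amd_watts) * hours_per_year / 1000
print(f"extra energy: {delta_kwh:.0f} kWh/year")                    # ~3066 kWh
print(f"extra cost:   ${delta_kwh * price_per_kwh:.0f}/year/node")  # ~$307, before cooling overhead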

How'd you come up with that number then?

semiaccurate.com/2019/04/23/a-long-look-at-the-intel-cascade-lake-9200-line/

Attached: 3C92D893-B9DA-4EBE-B98E-02109D89BF0A.jpg (1022x1967, 1.02M)

i pulled it out of my ass
it's actually 3% better but the electricity costs of intel's 14nm aren't worth it

The respective prices of each chip are like $10,000 vs $3,000 or some such, too, right?

>3%
And that number?

calculate it

Intel announced Cascade Lake X is coming out this fall. Let's see how successful that will be compared to AMD's unannounced Matisse.

Show your work.

*Let's see how intel's unannounced Cascade Lake X is gonna compare to AMD's announced Matisse
Fixed that for you

oy gevalt shut it down

The X series was one of the first things they covered during the Computex presentation.

You forgot
>300% higher power consumption
>99% chance for housefires

Yeah but what is the price and TDP of both
enterprise and researchers won't care too much about single core if all their tasks can be parallelized

$4200 for the AMD, so yes, in terms of price/performance/TDP, AMD is practically breaking a pool cue over Intel's head.

Price isn't that important; TAM and TCO differences for these SKUs are night and day

Intel is so rich right now they could just bomb AMD HQ and just pay off the cops.

Intel absolutely blown the fuck out.
Threw my savings into AMD

>muh upfront costs
>muh power bill
The last resort of poorfag amdrones

>*NEW* Throttling doesn't matter!

I think it's funny that the Jews aren't scared of designing and building miniaturized furnaces. Really makes me think.

Corporations and companies don't fucking care about having the best of the best
they care about price to performance in certain price ranges and electricity usage
not even google and microsoft who have hundreds of billions care about intel's xeon cpus simply because they cost way too much and draw way too much power
buying overpriced and inferior products doesn't make you smarter or better
This is the last resort for intel shills
what are you gonna do now that intel has been BTFO in the mainstream desktop market? are you gonna say that performance doesn't matter and it's all about the architecture and the balance of cores, clock speed and cache?

>Pricing for this family of processors is not expected to be disclosed. Intel has stated that as they are selling these chips as part of barebones servers to OEMs that they will unlikely partition out the list pricing of the parts, and expect OEMs to cost them appropriately. Given that the new high-end Intel Xeon Platinum 8280L, with 28 cores and support for 4.5 TB of memory, runs just shy of ~$18k, we might see the top Xeon Platinum 9282 be anywhere from $25k to $50k, based on Intel margins, OEM margins, and markup.
Oy fucking vey
I was giving Intel the benefit of the doubt and assuming that they would reduce the price per core at least a little to compete with Rome, but apparently that's giving them way too much credit. Evidently the die sizes are so embarrassingly high that Intel hasn't even announced them, so I'll bet that yields are absolutely abysmal too.

Being generous, the only instance I can conceive (contrive?) where Intel might possibly be a better deal is a case where licensed software is being run that's priced on a ridiculously expensive per-core basis, but that doesn't sound like a viable model to support a $100bn corporation.

Attached: AP7_678x452.jpg (678x502, 42K)
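
To make the per-core-licensing scenario concrete, a contrived Python sketch; every price in it is a hypothetical placeholder (the $4200 Rome figure comes from upthread, the rest is invented):

# When does buying fewer, faster cores pay off under per-core licensing?
intel_cores, amd_cores = 2 * 48, 2 * 64
intel_hw_price = 50_000      # guessed, loosely based on the range quoted above
amd_hw_price = 2 * 4_200     # two of the CPUs quoted upthread, board excluded
license_per_core = 3_000     # hypothetical per-core software license
intel_total = intel_hw_price + intel_cores * license_per_core
amd_total = amd_hw_price + amd_cores * license_per_core
print(f"Intel: ${intel_total:,}  AMD: ${amd_total:,}")
# Only once the per-core license dwarfs the hardware delta does the
# lower core count start to look attractive.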

Data centers care about exactly that.

Now post prices and power consumption for each system OP.

delid dis

Attached: 1540209531358.png (552x661, 288K)

Tell me more about this world where companies and institutions have infinite budgets

>include youtube url for reference
>mangle it by converting it to uppercase for typographical reasons
Does anyone know the actual video?

Attached: mio-13.jpg (250x300, 47K)

Nvm, found it: youtube.com/watch?v=TmtpHWQbBH4

Attached: o80teahslzp21.png (582x253, 25K)

>2x 28c xeon 8280: 9.5-10 ns/day
>2x 48c xeon 9242: 19.5-20 ns/day
So basically, they're trying to say that the 9242 is almost exactly twice as fast as the 8280, despite having only 70% more cores and running at a lower frequency, using the same architecture? That doesn't sound exactly right.

Attached: biribiri-14.jpg (220x202, 16K)
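
For what it's worth, here is what those ns/day figures imply per core and per GHz, as a quick Python sketch; the base clocks used (9242 at roughly 2.3 GHz, 8280 at roughly 2.7 GHz) are my assumption of which clocks apply:

# Implied per-core (and per-GHz) speedup of the 9242 over the 8280
# from the quoted NAMD numbers.
ns_day_9242, cores_9242, ghz_9242 = 20.0, 2 * 48, 2.3
ns_day_8280, cores_8280, ghz_8280 = 10.0, 2 * 28, 2.7
per_core = (ns_day_9242 / cores_9242) / (ns_day_8280 / cores_8280)
per_core_per_ghz = per_core * (ghz_8280 / ghz_9242)
print(f"per core: {per_core:.2f}x, per core per GHz: {per_core_per_ghz:.2f}x")
# ~1.17x per core, ~1.37x per core per GHz -- on the same microarchitecture.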

>That doesn't sound exactly right.
I guess what could conceivably justify it is if NAMD is a primarily memory-bound workload, since the 9242 has twice the memory channels of the 8280. On the other hand, that would mean that AMD does just as well with 2/3 the memory channels.
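
A rough per-core bandwidth comparison of the three 2S platforms, assuming DDR4-2933 on every channel; the per-socket channel counts (8280: 6, 9242: 12, Rome: 8) are my understanding of the platforms, so treat them as assumptions:

# Peak memory bandwidth per core; 2933 MT/s * 8 bytes ~= 23.5 GB/s per channel.
gbps_per_channel = 2933 * 8 / 1000
systems = {
    "2x Xeon 8280 (56c)":  (2 * 6,  56),
    "2x Xeon 9242 (96c)":  (2 * 12, 96),
    "2x Epyc Rome (128c)": (2 * 8,  128),
}
for name, (channels, cores) in systems.items():
    bw = channels * gbps_per_channel
    print(f"{name}: {bw:.0f} GB/s total, {bw / cores:.2f} GB/s per core")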

It is the superior architecture and some zombieload magic.

>35% better single core performance on Intel.
>96 cores vs 128 cores

Attached: Girls.png (449x401, 490K)

>the only thing that matters is core counts
Jow Forums in a nutshell

Again, it's also running at a lower frequency, and using the same architecture. Please enlighten me what else might matter. The only thing I can think of is the memory channels mentioned earlier.

According to this graphic, AMD can play videos of protein folding faster than Intel can simulate protein folding...?

>but that doesn't sound like a viable model to support a $100bn corporation.
Pretty much IBM's mainframes in a nutshell.

>at least triple the cost and double the power consumption for 1.5% more performance
Imagine intel thinking this is a favourable benchmark in any way

Is this protein folding? Any reason to not do this on a GPU instead of on a CPU?

AMD used AVX2-compiled NAMD, Intel used AVX-512.

Without even looking at the test I can guarantee it's using some meme AVX extension. Irrelevant on servers, and supercomputers have GPUs for that. Maybe it would be worth it for some video-converting machine, though it still depends on the price.

software.intel.com/en-us/articles/intel-cpu-outperforms-nvidia-gpu-on-resnet-50-deep-learning-inference

I'm not even sure that would help, seeing as how Cascade Lake has 2x512-bit FPUs, whereas Zen 2 has 4x256-bit FPUs, so they should be roughly equal in that way. I guess it's true that Zen's FPUs are a bit less symmetric and a bit behind on FMA specifically, so perhaps.

Still doesn't explain why the 9242 would be twice as fast as the 8280, though, since it too has AVX-512.
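
To put numbers on the FPU comparison above, a rough DP-FLOPs-per-cycle sketch; the pipe counts and widths, and especially how many Zen 2 pipes can issue FMA, are my assumptions:

# Rough peak double-precision FLOPs per core per cycle (FMA = 2 FLOPs).
def peak_dp_flops(fma_pipes, pipe_bits):
    lanes = pipe_bits // 64            # DP lanes per pipe
    return fma_pipes * lanes * 2       # two FLOPs per FMA lane

cascade_lake  = peak_dp_flops(fma_pipes=2, pipe_bits=512)   # 32
zen2_all_fma  = peak_dp_flops(fma_pipes=4, pipe_bits=256)   # 32, if all four pipes did FMA
zen2_half_fma = peak_dp_flops(fma_pipes=2, pipe_bits=256)   # 16, if only two pipes do FMA
print(cascade_lake, zen2_all_fma, zen2_half_fma)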

Zen 2 can only do 1 256-bit load and 1 256-bit store per cycle though, assuming all they did was widen the load/store units. It's still good for speculative execution though, I'd imagine.

>Still doesn't explain why the 9242 would be twice as fast as the 8280, though, since it too has AVX-512.
If amd only (presumably) compiled it with avx2, the 8280s would never be fed true 512-bit vectors, despite that ability being available.

wccftech.com/intel-replies-to-amds-demo-platinum-9242-based-48-core-2s-beats-amds-64-core-2s/
>The company also tells me that AMD was not using the correct NAMD optimizations during the Computex 2019 demo, which is to be expected considering it is a first party benchmark designed to showcase something in the best light possible and you should always take first party benches with a grain of salt.

I think you're missing the point of the two earlier anons that 9242 and 8280 are both Intel chips.

blogs.nvidia.com/blog/2019/05/21/intel-inference-nvidia-gpus/

Attached: 3994AA6C-5607-418F-AA09-C79CC6B0A4F8.jpg (1282x1461, 794K)

Attached: F2A36ADC-8984-457B-A83C-752CFF039B48.jpg (1290x1765, 643K)

>correct NAMD optimizations
I've seen the exact quote from that article in several others, so it seems like a lot of news outlets are quoting Intel press material verbatim without actually understanding what they are talking about.

NAMD is not some kind of instruction-set. It's a software application called NAnoscale Molecular Dynamics. There's no "correct" way to optimize it. Perhaps they mean AMD didn't compile it with Intel's compiler using special flags or something like that.

I'm not sure I understand what the point is. If the NAMD binary doesn't have any avx512 instructions then neither the 9242 nor the 8280 can use them. There isn't some kind of in silico recompiler to merge two 256-bit operations into a 512-bit one.

>Entire servers get hacked because of Zombieload

Yikes

Attached: 1446194965237.jpg (251x242, 16K)

MDS is fixed at the hardware level with Cascade Lake

intel.com/content/www/us/en/architecture-and-technology/engineering-new-protections-into-hardware.html

>buy another, goy, we promise it won't have the same issues as all our other products for the last 11 years
>it even comes with a 0.5% performance increase!
>yours for only the low low price of $1000

damn that's a lot of nanoseconds! This is a lot considering you have to take picosecond and femtosecond sim timesteps in molecular dynamics
nope it's not folding. Folding occurs over timescales of seconds, not nanoseconds. It's just a protein wobbling around
GPUs are worthless for molecular dynamics. They can speed some calculations up, but you can't run everything on the GPU
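
For scale, turning those ns/day numbers into integration steps, assuming a typical 2 fs timestep (the timestep is an assumption; 1-2 fs is common in NAMD runs):

# 1 ns = 1e6 fs, so 20 ns/day at a 2 fs timestep is about 10 million steps/day.
ns_per_day = 20
timestep_fs = 2
steps_per_day = ns_per_day * 1e6 / timestep_fs
print(f"{steps_per_day:.2e} integration steps per day")   # ~1.00e+07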

Zen 2 has a more accurate branch predictor now, apparently. But because of double standards it must be free of any future security problems, unlike Intel.

>Price isn't that important
>both take a single motherboard, so you can have the same number of them in the same space
>one costs three times less for the same size and performance server farm

before or after security mitigations?

>800W housefire barely matching 350W part
lmao, typical Intel, I'm pretty sure it's a similar case in the comparison with AMD

From AMD's presentation at Computex. Clearly Rome has improved since the original demo. And if you extrapolate the results (as Intel is currently doing) it should beat the 56-core Cascade Lake abomination

Attached: Screenshot_20190530_093745.png (576x346, 96K)

You don't have to redo your branch predictor to improve it, so your post doesn't make any sense; it's just another iteration of an implementation proven to be more secure than Intel's. Anyway, these exploits are caused by a problem in speculative execution; it's related to the branch predictor, but it's not the branch predictor itself that's broken.

>AMD can only improve it in a way that's completely free of security holes, because double standards
>speculative store buffers aren't part of the branch prediction machinery, because autism
k

>If amd only (presumably) compiled it with avx2
True enough, I guess that could be a reasonable explanation. Though that being said, it is my understanding that NAMD is also very memory-heavy, so the additional memory channels may well be a partial explanation as well.

NAMD is a high-latency and memory-heavy test, and it runs perfectly on GPUs. Being like that, it has no problems pulling 100% of core resources and scales near-perfectly with clock speed.

Not very difficult to figure out why Intel chose it.