THREADRIPPER 2990WX CINEBENCH SCORE

Test setup:
AMD:

AMD Ryzen™ Threadripper™ 2990WX
X399 motherboard (socket sTR4)
GeForce GTX 1080 graphics card (driver 24.21.13.9793)
4 x 8 GB DDR4-3200
Windows 10 x64 Pro (RS3)
Samsung 850 Pro SSD
score: 5,099

Intel:

Gigabyte X299 AORUS Gaming 9
Intel Core i9-7980XE
GeForce GTX 1080 graphics card (driver 24.21.13.9793)
4 x 8 GB DDR4-3200
Windows 10 x64 Pro (RS3)
Samsung 850 Pro SSD
score: 3,335

guru3d.com/news-story/whoops-amd-had-ryzen-threadripper-2990wx-cinebench-score-online,2.html

Attached: kikeripper.jpg (1166x925, 174K)

Underwhelming

That seems lower than expected, doesn't it?
It has 77% more cores and a higher base clock, but only a 53% higher score.

They cost the same, though.

Yeah, maybe it was running with low clocks
>Gasming
kek

Attached: Screenshot_20180805_184702.png (1142x175, 67K)

Also, I don't think CB scales that well. Intel could only pull off 7K+ on it because of that 5GHz housefire.

Not only do they cost the same, they also have roughly the same power draw

>double the cores
>same power draw
>can be air cooled
INTEL IS FINISHED AND BANKRUPT

That's a pretty brutal score. For comparison, here's the top of CPU-Monkey's Cinebench R15 multi-core ranking chart.

Attached: cpu-monkey-cb-r15-mc.jpg (1920x1080, 238K)

More threads/cores don't scale perfectly. Throwing 2x as many cores at a problem won't give you 2x performance unless the problem scales linearly with the number of cores. Also, there are other variables, like instructions per clock, that can make a major difference between AMD and Intel core for core.
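
To put a rough number on "won't give you 2x": Amdahl's law is the usual back-of-envelope model. The parallel fraction $p$ here is illustrative, not measured from Cinebench, and this ignores the clock and IPC differences between the two chips:

$$S(n) = \frac{1}{(1-p) + p/n}$$

With $p = 0.98$, going from 18 to 32 cores gives $S(32)/S(18) \approx 1.47$, already short of the 1.78x you'd naively expect from the core-count ratio alone.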

>It has 77% more cores and a higher base clock
But the same all-core turbo of 3.4GHz out of the box, so unless you're thermal throttling, the base clock doesn't really matter.

>score: 5,099

I get 500-600 with my 4670K

This is gonna be a nice upgrade famalam

Attached: 1422633703212.jpg (324x282, 30K)

NOOOOOOOOO JUST YOU WAIT FOR INTEL 34 CORES YOU ANTI-SEMITIC AMD FANBOY

Attached: 1506977173618.jpg (882x758, 324K)

delid

A FUCKING CHILLER

at this point an overclocked threadripper will do a better job cooling down an intel chip than a chiller

I wonder how it scales with memory clocks.

intel BTFO with no recovery

B T F O
T
F
O

AMD Ryzen Threadripper 2990WX
32 Cores (100% of 32)

Intel Core i9-7980XE
18 Cores (56.25% of 32)

100% - 56.25% = 43.75%

So... for 43.75% more cores (and a lower price) you're getting 53% more performance

It would STILL be slower, LOOOOL

Cinebench does scale really well though, that's the point of Cinebench.

The previous leaked score was 5700, wasn't it?

Correction, 78% more cores for 53% performance.

You're hardly ever going to get a 1:1 performance increase from an increase in cores.

I know, I'm just fixing his math. We're comparing an 18-core CPU to a 32-core CPU; single-threaded is irrelevant.

Lower than der8auer's 7601 quad-channel results? With what I would assume are higher clocks?

rely meks me fink

>cinebench does scale really well though, thats the point of cinebench.
It breaks on really high core counts.

Sure those aren't just NUMA faults? I know a lot of the mega-core CPUs have like 4 NUMA nodes on the CPU, and then dual-socket motherboards on top of that.

Cinebench is pretty much NUMA-insensitive, see the 1950X results.
It just breaks on really high core counts, see the STH 4x8180 results.
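
If you want to poke at the NUMA claim yourself on Linux, here's a minimal numactl-style wrapper using libnuma; pinning to node 0 is an arbitrary choice of mine, the idea is to compare a pinned run of the benchmark against a free run:

```c
/* Minimal sketch of a numactl-style wrapper to test NUMA sensitivity:
 * pin the child benchmark to one node, compare against an unpinned run.
 * Node 0 is an arbitrary choice; link with -lnuma. */
#include <numa.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <benchmark> [args...]\n", argv[0]);
        return 1;
    }
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }
    numa_run_on_node(0);   /* restrict to node 0's CPUs */
    numa_set_preferred(0); /* prefer node 0's local memory */
    execvp(argv[1], &argv[1]);
    perror("execvp");      /* only reached if exec failed */
    return 1;
}
```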

He got 5750 with quad-channel RAM and an overclocked EPYC at 3.8GHz. Stock clocks on 2990WX are lower (only 3.0GHz base, not sure if/how XFR/PBO works on TR2)

Maybe it's simply because there's not enough work for that many threads. It's plenty for 8/16, but 224 threads is something else entirely.

Which is the point.
Also I don't know the actual 32C boost clocks for 2990WX.

5700 at 3.8 with slower RAM means this must have been running at, what, 3.2-3.4 all-core?

For a 3.0 base / 4.2 boost CPU, that XFR range is shit; it's like TR1.

all core turbo is 3.4GHz on the 2990WX

You can achieve aggressive all-core boosts in a 250W TDP envelope.

*can't

Who is this CPU even marketed to exactly?

Attached: main3.jpg (800x549, 126K)

Cinebench begins to scale negatively after 20 cores, or so someone said. Anyhow, there are other performance tests out there. I expect it to show its stuff when officially launched. The 4.2GHz boost clocks on single/dual-threaded loads will help in apps that use them. A bit of tweaking may even see an all-core boost up to 4GHz (it will run hot and thirsty, though).

Anyone who needs the extra grunt in 'non-gaming' applications that don't rely heavily on single-core clock speeds.

"profeccionals"

Gentoo users. AMD wants them to keep using Gentoo but the 32-core TR allows them to have a life as well.

enthusiasts, production folks, etc.

I just picked out an analogy from someone's comment there.
(My washing machine can clean my clothes faster than yours without using as much power.)

I'll add.
(My washing machine can clean MORE of my clothes at the same time than yours for a little more power usage but cost less to purchase.)

Anyone who needs to run more than 2 visual studio sessions at once.

Cinebench is a joke. The image is rendered before the cores have had to spin up to full speed. They need a better renderer.

>had time to sping up to full speed

ah fuck it. spin up across all threads even.

I've been wanting to install gentoo, but I made a decision to wait till I can afford a threadripper.

Linus Sebastian.

Currently running an FX-8120 @ 4GHz.

>Correction
>Intel can't even do math.
Explains why AMD is kicking your ass.

Attached: Nl8Wj9y.png (635x523, 52K)

Now compare with Xeon

Are you retarded?

Enthusiasts and "professionals".

Okay.
See

don't worry about it

b-but muh industrial-tier cooling...

Intel is pretty much fucked if true

I'll buy one and just play league of legends

You can shove it up your ass for all AMD cares; as long as you're buying, they win.

For people asking "who would buy this": workstation users have been asking AMD and Intel to release proper CPUs for workstations. All their server chips are clocked low to save power in server racks, and the motherboard options are lousy. The 32-core Threadripper is probably about as close to an ideal solution as we will get.

yeah, but the 5GHz 28-core Intel got 7,334. AMD BTFO.

Yeah about that.

scan.co.uk/products/intel-xeon-platinum-8176-s3647-skylake-sp-28-cores-56-threads-21ghz-28ghz-turbo-385mb-cache-165w-ret

Attached: Muh-5Ghz.jpg (1412x1412, 139K)

>leaked intel cpu shows it beats amd
Fake and saged
>leaked amd cpu shows it beats intel
OH YEAH LET ME SUCK AMD COCKS AND POST THE CRYING INTEL MEMES.

I mean, I get that AMD is better these days. But Jesus, I fear to look at you guys sometimes.

S-Stop staring at my t-threads you disgusting Jew.

And all it took was a nuclear reactor to power the thing and a cooling solution colder than Satan's taint to keep it from igniting the atmosphere.

78% more cores:
(32 - 18)/18 ≈ 0.78 (percent change formula)
53% more performance:
(5099 - 3335)/3335 ≈ 0.53
Sure, but things like OS overhead exist, plus CPUs always run best single-threaded because of power usage, the need for locks to keep threads from interfering, etc. Also, it's hard to utilize cores efficiently because the OS may not be willing to hand over 100% of them.
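
A per-core sanity check on the same numbers (my own arithmetic, not from the leak):

$$\frac{5099}{32} \approx 159 \text{ pts/core}, \qquad \frac{3335}{18} \approx 185 \text{ pts/core}$$

So each 2990WX core delivers roughly 86% of what a 7980XE core does at these clocks, which is exactly the scaling loss being argued about.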

Look at other CPUs on there, e.g. 8180 vs. 8168:
17% more cores / 1% less clock speed (but higher turbos) for ~9% higher performance. It's not a totally linear thing, which is why in terms of raw performance clock speed is always better (i.e., 2x cores means a MAX of 2x performance with perfect utilization, while 2x clock speed means an average of 2x performance at any utilization, discounting possible bottlenecks).

Yeah, certainly not Windows' fault that it needs like 59k cycles to spawn a new thread; BSD and Linux do it 10 times faster.
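
The 59k-cycles figure is the poster's claim, not something I've verified. If you want to measure thread-creation cost yourself, here's a rough pthread sketch (runs as-is on Linux/BSD; on Windows you'd need winpthreads or a CreateThread port). Note it times spawn+join, not pure spawn:

```c
/* Rough micro-benchmark: average cost of spawning and joining a
 * no-op thread. Compile with: cc -O2 spawn.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static void *noop(void *arg) { return arg; }

int main(void)
{
    enum { N = 10000 };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        pthread_t t;
        pthread_create(&t, NULL, noop, NULL);
        pthread_join(t, NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (t1.tv_nsec - t0.tv_nsec);
    printf("avg spawn+join: %.0f ns\n", ns / N);
    return 0;
}
```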

Blender is better, but the malloc overhead on Windows makes it pointless

People are tired of being ripped off by Intlel. Deal with it jew.

Time for wine.

Attached: 1509739685833.png (772x598, 674K)

>lower scores than expected
>still beat all Intel processors
holy fuck intel is truly dying

Lower all-core boost clock (3.4GHz). Of course you can manually set it to 3.5GHz or 4GHz with overclocking and proper cooling.

Something that has 3.3 times the power density of a fucking nuclear reactor isn't practical. Unless you're an arsonist.

>50% faster at 30% more power

How is this bad? These are clocked low, so they've still got a lot of room for OC if you want to burn down your house.

Do the math the other way and it will look even more in AMD's favor (i.e., % FEWER cores).

>5099

Bullshit, but if true holy shit Intel is dead

Ah, the processor that's on sale in a parallel universe where Intel did manage not to create a housefire process that doesn't work.

It's for professionals on a lower tier who aren't big enough to justify the highest-end stuff. For most VFX/CG freelancers and very small studios it's far more reasonable to use Threadrippers and 1080 Tis over Epycs and Quadros, simply because the unit cost is far too high and it just doesn't matter for small work. A Hollywood studio needs 256 cores or 20 GPUs with ECC memory; a dude at home doing $800 contracts does not.

It also has an interesting capability to be very versatile for a niche market of 3D hobbyists. Thanks to its very respectable 4GHz+ 4-core turbo, a person can feasibly have one PC that they both game on and do 3D hobby projects on for a very reasonable price, while Epycs and Xeons are way too expensive for a hobbyist and the low clocks would hurt entertainment uses like gaming.

>BY ROYAL APPOINTMENT
>By Appointment To Her Majesty The Queen High Performance Personal Computers & IT Hardware Scan Computers International Ltd Bolton

wtf?

Me

>gaming on a latency monster known as Zen, especially with 4 dies

It does not hinder performance at all for your average player. Maybe if you're playing CSGO at a tournament level, but otherwise, no.

Unless you're some faggot that desperately needs muh extra 5 fps in Quake at 240p, games run perfectly fine on Ryzen

Companies get a royal warrant if they supply the royal family's estate with products for a certain number of years.
So in other words, the servers sitting at Buckingham Palace were bought from these guys.

Neat, cool badge to put on your website.

Oh buhu now I can only get 128fps instead of 131

>what is thread affinity

Even Intel shit benefits

how do i know you game on a 1070?

This. For Intel you wanna allocate the threads closest to the memory controller; this will give you the best latency for gaming.

For AMD you wanna allocate a single die.

Jesus fuck, people, learn how your fucking hardware works.

In this case the two left columns should be used for gaming: lowest inter-core and memory latency, and 3 channels (2 for Skylake-X) is more than enough

Attached: Skylake-SP-28-Core-Die-Mesh.jpg (922x768, 178K)

And in the case of old games, just allocate a max of 4 cores on both AMD and Intel; that way you're avoiding any engine compatibility horseshit and cross-CCX talk.
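
If you'd rather do that programmatically than through Task Manager, here's a minimal Win32 sketch. The 0xFF mask assumes SMT siblings are adjacent logical CPUs (so CPUs 0-7 = the first 4 cores); verify your layout with Coreinfo first:

```c
/* Minimal sketch: pin a process to its first 4 cores, per the post
 * above. Assumes SMT siblings are adjacent logical CPUs (0+1 = core 0,
 * etc.), which holds on typical Windows layouts. Win32 only. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR mask = 0xFF; /* logical CPUs 0-7 = 4 cores with SMT */

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                (unsigned long)GetLastError());
        return 1;
    }
    printf("pinned to logical CPUs 0-7\n");
    /* ...launch or continue running the game from this process... */
    return 0;
}
```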

2 CCXs (8 cores)

Attached: 2ccx.png (594x94, 8K)

continued

Attached: 1ccx.png (602x95, 8K)

50213 interrupts after 10 seconds of running. We divide this by 10 to get the interrupts per second, which is 5021.3, then multiply by the interrupt-to-process latency of 2.512253 microseconds, resulting in ~12614.78 microseconds, or ~12.61 milliseconds, accumulated per second, compared to ~2.77 milliseconds on one CCX.
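
Spelled out with the same numbers:

$$\frac{50213 \text{ interrupts}}{10 \text{ s}} \times 2.512253\,\mu\text{s} \approx 5021.3/\text{s} \times 2.512253\,\mu\text{s} \approx 12615\,\mu\text{s} \approx 12.61 \text{ ms of latency per second}$$

versus roughly 2.77 ms per second when confined to one CCX.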

KIKETEL BTFOREVER

I don't get it

Zen CPUs vastly contribute to input lag.

Do they?

If I had one I could run Discord, Spotify and Slack at the same time, instead of having one computer for each Electron app I'm using.

Based and redpilled

Attached: 1413913323099.png (1000x1000, 162K)