Dm/g/ - Dick Measuring General

Poast em

CINEBENCH R20: 9820

Attached: CINEBENCH R20 20190309.png (358x312, 18K)

What CPU?

>What CPU?
2x E5-2699v4 ES
ya, I know, engineering sample

Attached: meh.jpg (505x311, 59K)

I feel like Cinebench overvalues SMT/Hyperthreading far too much.

It shows near-perfect scaling, when in reality that wouldn't happen outside of niche use cases.
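
If you actually want to measure the SMT-scaling claim, here's a rough sketch of the idea (my own toy kernel, nothing to do with Cinebench's renderer; assumes gcc with OpenMP): run the same fixed chunk of FP work once with threads = physical cores and once with threads = logical cores, then compare times. Tile renderers scale almost perfectly with SMT; most other workloads see maybe 10-30%.

// smt_scale.c - crude SMT scaling probe (illustrative sketch, not a real benchmark)
// build: gcc -O2 -fopenmp smt_scale.c -o smt_scale -lm
// usage: compare e.g. ./smt_scale 8 (physical cores) vs ./smt_scale 16 (with SMT)
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int threads = (argc > 1) ? atoi(argv[1]) : omp_get_max_threads();
    omp_set_num_threads(threads);

    const long total = 200000000L;   /* fixed total work, split across threads */
    double sum = 0.0;
    double t0 = omp_get_wtime();

    #pragma omp parallel for reduction(+:sum) schedule(static)
    for (long i = 0; i < total; i++)
        sum += sin((double)i) * cos((double)i);   /* pure FP work, no memory pressure */

    double t1 = omp_get_wtime();
    printf("%d threads: %.2f s (checksum %f)\n", threads, t1 - t0, sum);
    return 0;
}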

Don't you know that Cinebitch does not count anymore? They are disgusting jew shills now.

>2x E5-2699v4 ES
>ya I know engineering sample
Enjoy your Spectre/Meltdown/L1TF (a.k.a. Foreshadow), since ES chips run beta microcode

Seems legit

Attached: memebench.png (1646x798, 2.09M)

What are you running, a 486?
Because if not, you're just as vulnerable.
I got microcode updates because Broadwell-E is basically the same core as Haswell-E.
Tick-tock, motherfucker.

like 5k on my threadripper

this
version 20 a shit

Attached: 1549489201254.png (1920x1040, 88K)

where in the christ do I get it without the microjew store

I think Techpowerup has a copy that isn't tied to the store.

>inb4 poozen
at least I don't have buggy silicon

Attached: 2019-03-10 00_12_45-.png (694x1381, 209K)

instead you have a literal piece of shit

Attached: 1523048238227.gif (402x308, 12K)

Honestly, big fucking difference that would make in a network-isolated render machine.

t. anti-Jew AMD user

5GHz all-core is pretty nice

Attached: lol2.jpg (1644x795, 376K)

The 1700 is on par with the 9900K if they run the same clocks, like 4.0GHz. Shame Zen struggles past 4.1GHz.

I guess, but even at stock without OCing, the 9900K hits 4.7GHz all-core turbo.

And with OCing, 5GHz is generally not that difficult; 5.1 or 5.2GHz can get dicey though, and whether your chip can handle it at a safe voltage comes down to the silicon lottery.
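
Rough napkin math, assuming the score scales roughly linearly with core clock (memory and uncore held constant): a 4.0GHz run should land around 4.0/5.0 = 80% of a 5GHz all-core score. That's why the 4GHz request below is the fair way to compare IPC against a 1700/2700X instead of eyeballing raw numbers.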

suuuuuuuuuuure

Attached: 1526987022092.jpg (450x599, 86K)

Mind clocking it to 4GHz all-core for a comparison against the R7 1700 & R7 2700X?

Sure, give me a minute.

Attached: 4ghz.jpg (1645x795, 413K)

thanks m8

Attached: 1529600703357.jpg (750x597, 34K)

What are your scores? I'm curious myself.

Just have an old-ass desktop, I don't even wanna test it man. Gonna upgrade this summer tho.
I wanted to compare the 9900K and the 2700X; Intel wins with ease this time around. Wonder how far a wider FPU on Zen 2 will get them.

Attached: 2019-03-07 07_14_38.png (802x1195, 167K)

Can do 4.1 under safe voltages (~1.33-1.35V) and 4.2 if I push it a little bit. But at 4GHz I see no difference in real usage scenarios, and I can do that at 1.26V. Feels pretty good for 130€, and it's dead silent on my trusty 212 meme cooler with a custom fan curve, even while rendering.

Attached: 2600 4ghz 1.26v.png (354x1050, 71K)

>4GHz 1.26V
You sure it's stable? Did you do some sort of stress test? Would be quite impressive if it's stable like that.

And it reaches water-boiling temperatures without an adequate cooling solution, lmao

So would a 2700X, what's your point?

So what? Anyone with a 2700X or 9700K/9900K is running an NH-D15, a CLC, or custom water.

You're a retard if you're using anything less on the current top-end CPUs from either company.

Cinebench is now an AVX2 benchmark, m8; Zen 2 should even it out tho

Until they change it to AVX-512 when Intel brings that to the mainstream desktop lineup next year
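
For what it's worth, "an AVX2 benchmark" mostly just means the renderer picks an AVX2 code path at runtime when the CPU reports support, something like this sketch (gcc builtins; obviously not Maxon's actual dispatcher):

// runtime ISA dispatch sketch - illustrative only, not Cinebench's real code
// build: gcc -O2 dispatch.c -o dispatch
#include <stdio.h>

static void render_scalar(void) { puts("using scalar path"); }
static void render_avx2(void)   { puts("using AVX2 path");   }

int main(void)
{
    __builtin_cpu_init();                 /* init gcc's CPU feature detection */
    void (*render)(void) = render_scalar;

    if (__builtin_cpu_supports("avx2"))
        render = render_avx2;             /* prefer the wide path if available */
    if (__builtin_cpu_supports("avx512f"))
        puts("AVX-512F reported, a future build could dispatch to it");

    render();
    return 0;
}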

Source? Never thought they'd do that

AVX-512 is going to make Intel chips even more of a housefire than they already are.

I can't wait until AMD implements AVX so you tards can complain about your temps.

AMD can do better, since they have years of experience in making MOAR COARS without huge hotspots like Kiketel housefires

Attached: Piledriver-Die.jpg (800x982, 479K)

No source, just rumors.

I did a 3-hour run of IBT set to maximum memory, and a few extra hours of Prime95 small FFT after that, and it didn't crash. I also leave the computer encoding shit almost every night, and so far it has not crashed.

And I mean, if a 2600 non-X can do 4.2 without obscenely high voltages, it's a pretty good chip.
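
If anyone wants a quick smoke test before committing hours to IBT/Prime95, something like this will at least load every thread with FP work while you watch temps, volts and clocks (my own sketch, assumes gcc with OpenMP; it is NOT a substitute for the real stress tools):

// allcore_burn.c - keep every logical core busy with FP math for N seconds
// build: gcc -O2 -fopenmp allcore_burn.c -o allcore_burn -lm
// usage: ./allcore_burn 300   (run for 300 seconds)
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <omp.h>

int main(int argc, char **argv)
{
    double seconds = (argc > 1) ? atof(argv[1]) : 60.0;
    double end = omp_get_wtime() + seconds;
    double total = 0.0;

    #pragma omp parallel reduction(+:total)
    {
        double x = 1.0 + omp_get_thread_num();
        while (omp_get_wtime() < end)
            x = sqrt(x) + sin(x);    /* pointless but heavy FP work */
        total += x;                  /* keeps the loop from being optimized away */
    }

    printf("done: %d threads, checksum %f\n", omp_get_max_threads(), total);
    return 0;
}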

Nah, it's been confirmed in roadmaps for 2020. The i3-8121U already has AVX-512 support, the first consumer CPU to have it.

Attached: SmartSelect_20190310-155810_Samsung Internet.jpg (2517x865, 483K)

But AMD already implements AVX...

Yes and no.
AMD currently splits 256-bit AVX instructions into two 128-bit ops, because Zen's FP pipelines are 128 bits wide.

Intel has native 256-bit AVX pipelines, so it doesn't have to give up throughput and add latency by splitting the instructions.

Ryzen 3000 should fix this.
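
To make it concrete: the same instruction runs on both, the difference is how the hardware executes it. A 256-bit FMA like the one below is a single 256-bit op on Skylake-class cores, while Zen/Zen+ crack it into two 128-bit ops because their FP units are 128 bits wide; Zen 2 is supposed to widen the FP datapath to 256 bits. Plain intrinsics sketch, nothing vendor-specific:

// one 256-bit fused multiply-add over 8 floats at once
// build: gcc -O2 -mavx2 -mfma fma256.c -o fma256
#include <stdio.h>
#include <immintrin.h>

int main(void)
{
    __m256 a = _mm256_set1_ps(2.0f);
    __m256 b = _mm256_set1_ps(3.0f);
    __m256 c = _mm256_set1_ps(1.0f);

    c = _mm256_fmadd_ps(a, b, c);   /* 8-wide FMA: 2*3 + 1 = 7 in every lane */

    float out[8];
    _mm256_storeu_ps(out, c);
    printf("%f\n", out[0]);         /* 7.000000 */
    return 0;
}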

Not reusing a CPU design for 20 years also helps

Attached: 1524930805021.png (884x408, 376K)

Attached: cb20.jpg (909x872, 250K)

>not a single CPU implements the full feature set
this is peak Intel

Its primary use is scientific-research-type shit.

Further, a lot of its instructions aren't going to be useful unless you have a very particular need.
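
Easy to check what your own chip reports, since AVX-512 is a pile of separately flagged subsets (F, CD, BW, DQ, VL, ER, PF, IFMA, VBMI, ...). Quick sketch with gcc builtins (needs a reasonably recent gcc):

// list which AVX-512 subsets this CPU reports
// build: gcc -O2 avx512_check.c -o avx512_check
#include <stdio.h>

#define SHOW(feat) printf("%-12s %s\n", feat, \
                          __builtin_cpu_supports(feat) ? "yes" : "no")

int main(void)
{
    __builtin_cpu_init();       /* init gcc's CPU feature detection */
    SHOW("avx512f");
    SHOW("avx512cd");
    SHOW("avx512bw");
    SHOW("avx512dq");
    SHOW("avx512vl");
    SHOW("avx512er");
    SHOW("avx512pf");
    SHOW("avx512ifma");
    SHOW("avx512vbmi");
    return 0;
}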

Why is your wallpaper a picture of a poor person's room?