Wccftech runs some bullshit about zen 2 getting btfo'd in geekbench

>wccftech runs some bullshit about zen 2 getting btfo'd in geekbench

In reality, with the same RAM speed Intel gets crushed in single-core score too, and AMD actually outperforms it in multi-core somehow, despite both chips having the same thread count.


AMD is 'leaking' all of these low ram speed benchmarks because they're sandbagging.

Attached: brain dabblige.jpg (2202x858, 389K)

Other urls found in this thread:

browser.geekbench.com/v4/cpu/11823322
hardwareluxx.de/community/f13/die-ultimative-hardwareluxx-samsung-8gb-b-die-liste-alle-hersteller-17-06-19-a-1161530.html
newegg.com/g-skill-16gb-288-pin-ddr4-sdram/p/N82E16820232217?item=N82E16820232217

And how does RAM speed scale for both systems? Geekbench is a RAM-speed-dependent benchmark, and as we can see clearly, AMD nearly matches Intel already... at 0.7 GHz less clock speed. The 3800X is already clocked at 4.5 stock. The 3800X's higher multi-core score almost completely proves the IPC gain.


Zen 2 is going to be the next sandy bridge in value

Attached: objective truth.jpg (1976x867, 381K)

also Geekbench dumbly lists RAM clock speed at half of the effective rate, so 1600mhz is actually 3200.

You're actually taking wccftech seriously?

Sandy bridge didn’t require bleeding edge expensive ram and 16 phase 15w tdp motherboards with a fan. It also beat AMD in single thread/IPC rather than just brute forcing muh coars.

IDF was out in full force today, and there were two threads about the same article.

Just correcting the record so any impressionable summer newfags learn not to be clueless

>wcctftech doesn’t matter!
>geek bench doesn’t matter!

and let's ignore the other benchmark the same guy ran that you conveniently left out

browser.geekbench.com/v4/cpu/11823322

die shill

>it's the i9 vs r7 comparison all over again despite the r7 being made to compete with the i7

>market segments don’t matter!

i9 competes with threadripper, dingus

took you long enough.

Everyone in the class, please ask me: how does a CPU perform dramatically worse at the exact same speeds?

Did the person benchmarking the chip run a whole bench with the CPU thermal throttling? Haha, honestly, maybe. But what really happened is that these runs are from before/after the Spectre/ZombieLoad mitigations. We know this because the multi-core score is dramatically worse on the slowest run.


multi-core difference: 38%

single core: 7%

Thermal throttling would scale roughly equally across all performance; Spectre and ZombieLoad mitigations hurt SMT performance specifically.

Attached: intel security mitigation performance,.jpg (2202x858, 409K)
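The percentage gaps quoted above can be sketched like this. The scores below are made-up placeholders picked only to reproduce the ratios in the post (the real per-run numbers are in the screenshots, not the text):

```python
def pct_diff(fast: float, slow: float) -> float:
    """Percentage gap between two scores, relative to the faster run."""
    return (fast - slow) / fast * 100

# Hypothetical placeholder scores illustrating the shape of the claim:
# a ~38% multi-core gap alongside only a ~7% single-core gap points at
# an SMT-side regression (mitigations), not thermal throttling.
print(round(pct_diff(33000, 20460), 1))  # multi-core: 38.0
print(round(pct_diff(6000, 5580), 1))    # single-core: 7.0
```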

How can people defend Zen 2 when it needs 5000 MHz RAM to compete with Intel's 12-year-old arch?
The DDR4 alone will cost more than most Ryzen systems.

No

>still shitposting this hard
it requires 3733mhz exactly lmao

how do people defend Intel when they need a 5000 MHz CPU to compete with a budget CPU company?
the electricity alone will cost more than most Ryzen systems

>But what really happened is these are before/after SPECTRE/Zombie load mitigations. We know this because the multi-core score is dramatically worse on the slowest run.
cool non-sequitur

it's a BIOS update to support the RAM, shill

>shill calling others a shill

And that's the point of my post, retard.
R9 competes with i9 and R7 competes with i7.

I mean the R7 matches the i9's performance, and at the same price the R9 btfos the i9, at least in multi-core.

intel is getting crushed rn

It can try.

>my paper cpu totally btfo of intel just look at these official AMD slides from computex

Oh dear

>bleeding edge expensive ram
You only need standard mid-range RAM.

>16 phase 15w tdp motherboards with a fan
That is the high-end. Those mobos are not required at all.

>It also beat AMD in single thread/IPC
AMD matches Intel in single-thread and destroys it in IPC, all while giving you much better multi-thread performance for cheaper.

Dumbfuck.

>every image in this thread is an actual verified benchmark result

>mentions computex


you've coped your brain into another dimension friend

It's an AMD marketing team campaign under the guise of a ''''''leak'''''' with intentionally crippled RAM and no objective way of extrapolating to reasonable speeds, so tribe members and the press gossip endlessly and futilely about it and generate free mindshare.

How does AMD fudge the Intel bench though, dumbass?

numale-bench n-of-1s in completely different environments with hundreds of different factors that can explain the discrepancies

i'll wait for actual controlled 3rd party benches with some semblance of empirical methodology

>verified

Ahahahha imagine ACTUALLY believing this

Intel doesn't need fast ddr4.
Gains from 2400 to 4000mhz are minimal.
Ryzen requires it due to how Infinity Fabric works.
Isn't a 5000mhz kit like $400?

>shill cant read

Intcels are illiterate

When are we getting bench leaks that aren't made on shit RAM?

How many times do we have to tell you that the real premium tier is 3600MHz? Even then, that's just ~1 ns of latency difference from 3200MHz, which now costs as much as 2666MHz.

Fuck off

>We know this because the multi-core score is dramatically worse on the slowest run.
Look at the dates on the tests; they're nowhere near when ZombieLoad was announced and patched.

Stop lying

Why are all the intel benchmarks gimped?

Attached: kjhsdbgf.jpg (854x841, 158K)

Because AMD good, Intel bad.

Anything showing Intel winning is just using dirty tricks to beat AMD, so it shouldn't be trusted.

>oh, it's the "I'll try to replicate the same test with different settings" guy
Easy: you're running Geekbench on a version of Windows without the mitigation patches.

I just don't understand. That's my own test, and every other test I've seen in this thread has been dog-shit slow by comparison. Yeah, I'm using 3600 RAM, but so what? All I did was turn XMP on, without any overclocking.

I'm not gimping a gaming computer and never will. Suck my dick

Attached: Fuck npc's.jpg (464x356, 71K)

Attached: fdfdf.jpg (856x816, 153K)

Geekbench says the Apple A12 has literally twice the IPC of all these poverty chips.

Yeah, but some fantasy land bullshit security vulnerability could POSSIBLY look at the hentai you watch and the lolis you collect.

Gotta love the AMDshills actually claiming they give a fuck about security.

Their only crutch is to hope Intel users patch so they can feel superior. Newsflash: people who own home computers don't give a shit about patching. It's pretty much all for data centers.

It's a pity they make good chips; literally the most evil, greedy company in the world.

>Windows without the mitigation patches.

It's the latest version.

>geekbench

Attached: 1552766527841.png (942x810, 425K)

>geekbench

Attached: 1505861874103.png (907x460, 38K)

Macs are notorious for their weak cooling. That guy doesn't have a clue.

Technically Geekbench is correct, because that's the physical clock speed, but DDR RAM has always been advertised at double the speed because of the double data rate. So 3200 MHz DDR RAM does in fact run at 1600 MHz, but it can move as much data as SDR RAM at 3200 MHz would. Speccy, among other programs, displays RAM the same way as Geekbench.
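The double-data-rate relationship above is a simple factor of two; a minimal sketch (function name is mine, not from any real tool's API):

```python
def reported_clock_mhz(advertised_mts: int) -> float:
    """Physical I/O clock a tool like Geekbench or Speccy would report
    for a DDR kit: half the advertised transfer rate, since DDR moves
    data on both clock edges."""
    return advertised_mts / 2

print(reported_clock_mhz(3200))  # 1600.0 -> shows up as "1600 MHz"
```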

>posting, 2 days ago, that an 8k 9900k at stock is real
>getting called out and deleting the article is ok
>posting another shitty benchmarks with 6k this time
>gets called out
>probably gonna delete the article again

i see a pattern

The guy who wrote Geekbench also insists it's good for cross-platform comparisons, yet Linux x86 returns 15-25% higher scores than the same Windows run.

Maybe Linux is 15-25% better than Windows.

Their chips are the same as everyone else's.
Do you know what they do differently?
They have fewer cores with higher power consumption.
Also, Geekbench is recognized by iOS and runs different system calls.
>Linus Torvalds, who is the creator and principal developer of the Linux kernel, criticized Geekbench versions 3 and earlier for showing partiality for ARM64 devices and being a bad performance measure of x86-based systems
Check your facts next time

Attached: Toaster.png (426x426, 142K)

Yeah, a 9900K easily gets well into the 6000s even on stock turbo boost.

AMD people are gonna be disappointed when it's the 3900X, not the 3800X, that competes with the 9900K.

Thought experiment for you. AMD knows the performance of their chips. If the 3800X demolished the 9900K, why wouldn't AMD have had them competing against each other during their public presentations?

>inturd doesn't understand price brackets.

That is why the 3800X competes with the 9700K.

>in reality with the same ram speed intel gets ass raped in single core score too
that's due to AMD tying their fabric to RAM frequency.
They've put themselves into a bit of a pickle there, with Ryzen 1+2 having shit IMCs.

The processors love fast RAM, but they also hate it.

Now on Ryzen 3, the IMCs appear to have improved dramatically (they did get to dedicate a new design to IO, after all), although we have no benchmarks yet.
However, they've put a break at 3733 MHz RAM frequency, after which Infinity Fabric dramatically slows down to half speed.
So while they have managed to run fast RAM, in excess of 5000 MHz on air according to their own presentation slides — once more, Ryzen loves fast RAM but also hates it.
They'll track Intel for midrange RAM, and suddenly fall off when you get into the enthusiast range.

I'm still curious to learn how they now handle dual-rank and 2-DIMM-per-channel configs. Of course nobody is going to test 4x16 (and later 4x32) on a consumer platform.
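The 1:1-then-2:1 fabric behavior described above can be sketched like this. This is a toy model of the divider scheme as posters in this thread describe it, not AMD's actual firmware logic:

```python
FCLK_1TO1_LIMIT_MTS = 3733  # divider threshold quoted in the thread

def fclk_mhz(ram_mts: float) -> float:
    """Effective Infinity Fabric clock under the described 1:1 / 2:1
    scheme: fabric tracks the physical memory clock (half the MT/s
    rate) up to the threshold, then drops to half of that."""
    mclk = ram_mts / 2  # physical memory clock
    return mclk if ram_mts <= FCLK_1TO1_LIMIT_MTS else mclk / 2

print(fclk_mhz(3733))  # 1866.5 -> 1:1 mode, fabric tracks RAM
print(fclk_mhz(4000))  # 1000.0 -> 2:1 mode, fabric falls off a cliff
```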

The OP is talking about the 3800X taking on the 9900K. Stop gaslighting as if you think you're fooling me.

That's why they're being compared. You'd better pray AMD doesn't get to within 10% of Intel in gaming, because it's going to be a massacre; look at the damage Zen already did while being 10-15% slower in gaming on average (and only at silly resolutions and low settings). If that number gets any lower while AMD has more threads than its Intel counterparts, it's over.

Why is it that Ryzen 3000 switches to 2:1 mode above the 3733 MHz threshold?

Because AMD randomly decided that was the greatest idea they ever had.
Probably had trouble getting it to keep up on all CPUs.
There is a screenshot circulating from an X570 BIOS which shows a separate setting for Infinity Fabric clock speed, so it's
- pay to play (upgrade your mainboard)
- silicon lottery

My sandy bridge i7 cost 250 euro new and the board cost 60.
OP, you're nuts if you think a processor, motherboard, and 3200mhz RAM that cost 800 euro are good value.

Laptops can be good value, gayming desktop pc's are luxury components and are now priced as such

your sandy bridge i7 is btfo'd by a Zen+ 2600

> 3200mhz ram that costs 800
you're kidding, right?

Zen proc plus mobo plus ram = 800

3700x is $330 I think
16 gb 3200 mhz w CL 14 b die that can OC to 3600 mhz CL 16 is $100
Any motherboard that supports it. You don't need the top of the line x570 so that'll be like $150 max

That's like $580 at most. Fuck off, Intel shill.
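Sanity-checking that arithmetic (prices are the poster's estimates, not current retail):

```python
# Build-cost estimate quoted in the post above:
cpu_3700x = 330      # "3700x is $330 I think"
ram_16gb_bdie = 100  # 16 GB 3200 CL14 B-die
mobo = 150           # "any motherboard that supports it", X570 not required

total = cpu_3700x + ram_16gb_bdie + mobo
print(total)  # 580
```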

>6400 scores now appearing for the Ryzen 5 3600

I didn't know you had insider info. Fuck off. Don't spew your conspiracy theories as if they're facts.

Attached: 1561291834088.png (1121x844, 715K)

People still arguing about this shit on their 2-core ThinkPads.

checked
everyone btfo

reminder to report the ban evader who keeps spamming geekbench results in every thread on Jow Forums

That doesn't prove anything you said about AMD just making up the limit for 1:1 scaling

>16 gb 3200 mhz w CL 14 b die that can OC to 3600 mhz CL 16 is $100

Any suggestions for that which isn't RGB LED shit? I've found this list, but sifting through it is a pain in the ass:

hardwareluxx.de/community/f13/die-ultimative-hardwareluxx-samsung-8gb-b-die-liste-alle-hersteller-17-06-19-a-1161530.html

Attached: children faggots rainbow.jpg (1200x1462, 236K)

>thermal throttling would scale equally across all performance,
No??? Where the hell did you get that idea? Thermal throttling would hit multi-core much harder. In fact, it should barely hit single-core at all.

>1c at 5ghz will be ~25w
>8c at 5ghz will be ~200w
Thus the CPU will thermal throttle to somewhere between the base clock and boost clock

It indeed is a trash benchmark

It's not, in real-life workloads, except on 32-core Threadrippers.

newegg.com/g-skill-16gb-288-pin-ddr4-sdram/p/N82E16820232217?item=N82E16820232217

I got these on a sale. Worked with no tweaking on my X470 board and Ryzen 2600. Gonna upgrade to a 3700X or 3800X when they drop.

>1c at 5ghz will be ~25w
>8c at 5ghz will be ~200w
Not exactly true because of uncore power consumption

More like
>1c @ 5GHz = 35.5W
>8c @ 5GHz = 200W

With all the I/O, uncore is definitely at least 12 W. Might be more like 15-20 even.
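The corrected numbers above (1c = 35.5 W, 8c = 200 W) pin down a simple linear model; a sketch fitting those two data points, which happens to land right on the 12 W uncore floor mentioned:

```python
# Fit uncore + per-core power to the two quoted data points:
#   1c @ 5 GHz = 35.5 W,  8c @ 5 GHz = 200 W
per_core_w = (200 - 35.5) / (8 - 1)  # 23.5 W per active core
uncore_w = 35.5 - per_core_w         # 12.0 W fixed floor

def package_watts(active_cores: int) -> float:
    """Toy package-power estimate: fixed uncore plus per-core cost."""
    return uncore_w + active_cores * per_core_w

print(package_watts(1))  # 35.5
print(package_watts(8))  # 200.0
```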

Are those sticks better than 3600 CL17?

indeed. point stands.

Attached: 54263978_1007699022774017_2530309025283702784_n.png (640x960, 713K)

Those can OC up to that, probably at CL16. It's up to you what you want to buy. Not sure how timings will affect the Ryzen 3000 series.

Thanks user

Looks like I won't find anything below 130€. But at least it's good to know what to look for in the future.