So let's recap

AMD now has

>best IPC
>on-par or better single-threaded performance
>best multi-threaded performance
>best price-to-performance ratio

Also remember that if it weren't for AMD you'd still be buying $300 4-core/4-thread CPUs. But Zen 2 sucks because it can't do 5GHz, right, shills?

Attached: 2019-01-09-image-34.jpg (2048x1445, 150K)

Those are irrelevant; AMD beats the 9900K in all scenarios while using 40% of the power. That's impressive.

>But Zen 2 sucks because it can't do 5GHz, right shills?
Those shills are also unduly assuming that TSMC 7nm HPC has the same voltage scaling as GF 14nm LPP, and that Zen 2 therefore won't have any overclocking headroom. There is no good reason to assume the voltage scaling will be identical, and we should wait to see how Ryzen 3000 overclocks before entertaining silly arguments about clock speed.

>gain the upper hand on AVX2 floating point loaded code
>must mean better IPC and performance all around

SOPA MACACO can't read.

>Shills actually trying to argue Zen 2 doesn't have better IPC when Ryzen 3000 CPUs beat the 9900K in Cinebench R20 single-threaded at lower clocks.
Lmao

Attached: inteliskill.jpg (2053x1025, 220K)

>on par or better single threaded performance
According to what, their selected benchmarks?
That PUBG gameplay and Strange Brigade demo were a handpicked embarrassment.
I still think it won't beat the 9900K, except maybe in a few titles on that 12-core CPU.

>what is xfr
Ever since they introduced that, old-school overclocking has been pretty much irrelevant. Every chip boosts itself to its limit based on the cooler's efficiency. If a chip's XFR range is specified at X GHz, you won't be able to overclock much further either; 100 MHz tops.
Out of the currently announced lineup, not one CPU will reach 5GHz on air. Kinda sucks desu, because this way AMD's offering will just be "better" instead of the total massacre amdronedtv hyped up for months.
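The boost behaviour described in that post can be sketched as a toy model (the MHz-per-degree slope and all clock figures below are invented for illustration; real boost algorithms also respect current, voltage, and socket power limits):

```python
# Toy model of opportunistic boost (XFR-style): the chip clocks itself
# as high as thermal headroom allows, capped by a firmware ceiling.
# All figures here are made up for illustration.

def boost_clock_mhz(base_mhz, ceiling_mhz, thermal_headroom_c, mhz_per_degree=25):
    """Effective clock = base + thermally earned bump, never above the ceiling."""
    earned = thermal_headroom_c * mhz_per_degree
    return min(base_mhz + earned, ceiling_mhz)

# A better cooler buys extra clock only until the ceiling is hit, which is
# why manual OC beyond the boost range typically gains ~100 MHz at most.
print(boost_clock_mhz(3700, 4300, 10))  # modest cooler -> 3950
print(boost_clock_mhz(3700, 4300, 30))  # huge cooler   -> 4300 (ceiling-limited)
```

Past the ceiling, extra cooling buys nothing in this model, which is exactly the "100 MHz tops" complaint.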

>The newest version of the industry standard Cinebench benchmark isn't representative of actual performance even though the whole industry uses Cinebench for benchmarking
lol

Meanwhile, at Intel...

Attached: Fine print.jpg (1200x255, 231K)

Oh, they announced what the XFR range for those CPUs is? Please point me to the announcement, I must have missed it.

holy fuck you're dumb

XFR has always been an additional 100MHz bump

>$1.200
wait what

Attached: 1412324895681.png (1334x750, 2.06M)

anandtech.com/show/14419/amd-confirms-zen-4-epyc-codename-and-elaborates-frontier-cpu

Excellent argument.

Yes, that's why the XFR2 frequency of the 2700X is 150MHz higher than its boost clock.

Not to mention that XFR2 was heavily changed vs XFR1, so you can't even assume what XFR3 will do, and you can't assume that manual overclocking won't be able to achieve higher clocks than XFR on Ryzen 3000, precisely because we don't know the voltage scaling in the relevant range yet.

GF 14nm LPP and 12nm having horrible voltage scaling at the top end is *why* XFR works better than manual overclocking on Zen and Zen+. It *might* be the case that this also holds true on TSMC 7nm HPC, but it also might be the case that manual overclocking can achieve higher frequencies with much beefier cooling.
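The voltage-scaling point can be made concrete with the first-order CMOS dynamic-power relation P ≈ C·V²·f; the specific voltage/frequency points below are invented for illustration, not measured Zen figures:

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f (capacitance held equal).
# If the last few hundred MHz demand a steep voltage increase, power blows
# up quadratically with V, which is why a node with poor top-end voltage
# scaling favors opportunistic boost over fixed all-core overclocks.

def relative_power(v, f_ghz, v_ref, f_ref_ghz):
    """Power at (v, f) relative to a reference operating point."""
    return (v / v_ref) ** 2 * (f_ghz / f_ref_ghz)

# Illustrative: ~7% more clock that costs ~15% more voltage...
print(round(relative_power(1.45, 4.5, 1.26, 4.2), 2))  # ~1.42x the power
```

Whether 7nm's curve bends that hard at the top, and at what frequency, is exactly the unknown being argued about.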

>may not reflect all publicly available security updates

HAHAHAHA HOLY SHIT WHERE WERE YOU FOR THE {{{SECOND}}} HOLOCAUST

SHUT IT DOWN

I just wish they made a new Threadripper without the shit-tier latency issues their flagship had.

>inb4 they stop making them altogether and you can only buy 9k ebyg chips that you can't OC and run at 3ghz instead of giving us a refresh

>>$1.200
>wait what
It's the price you pay for the best.

just a mainstream AM4 CPU demolishing an HEDT Intel part in every metric possible
so nothing to see here, goyim

>It's the price you pay for the best.
that's $499, starting July 7th

2950X is top tier even in gaymen

youtube.com/watch?v=rhA_DbxfBsg

Jeez, AMDbros, I sure wish we had a CPU that could clock to 5GHz. Oh well, the wait continues.

Attached: image.jpg (320x252, 75K)

Again, Cinebench R20 is an AVX2 FP workload. The application-specific IPC you get from it isn't representative of IPC in general. On AVX512 integer workloads, for example, it has a great IPC of zero.
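For what it's worth, the per-clock figure people derive from a fixed-workload benchmark comes from this relation (scores and clocks below are hypothetical, not AMD's numbers):

```python
# For a fixed-work benchmark, score ~ IPC * clock, so score/clock isolates
# per-clock throughput -- valid only for that specific workload.

def ipc_ratio(score_a, clock_a_ghz, score_b, clock_b_ghz):
    """Relative per-clock performance of A vs B on the same workload."""
    return (score_a / clock_a_ghz) / (score_b / clock_b_ghz)

# A slightly higher score at a lower clock implies a per-clock advantage:
print(round(ipc_ratio(510, 4.6, 505, 5.0), 3))  # 1.098 -> ~10% higher per clock
```

Which is also why the number only means "per-clock advantage in this workload", not IPC in general.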

>that's why the XFR2 frequency of the 2700x is 150mhz higher than its boost clock

My own experience, and what I've seen just now with a quick Google search, suggest that my claim is closer to the truth:
tomshardware.com/reviews/amd-ryzen-7-2700x-review,5571-11.html

Attached: 1441664673049.png (501x484, 486K)

>May not reflect all publicly available security updates
>no product or component can be absolutely secure
>performance tests may have been optimized for performance only on intel microprocessors

Maximum damage control

Attached: image.jpg (600x583, 311K)

>AMD told me so, they would never lie, please forget about the Bulldozer era
Jow Forums is retarded for taking any vendor's cherrypicked numbers seriously. Wait for the real benchmarks to come out and show the reality.

Attached: 1547680177227.jpg (1200x1200, 381K)

>no product or component can be absolutely secure
>no product or component can be absolutely secure
>no product or component can be absolutely secure
>no product or component can be absolutely secure
kek literal cope

>Cat talking about benchmarking
Snowflake, stop posting while Steve is in Taiwan.

It won't beat the 9900K in Blizzard's shitty MMOs though, because they use the Intel compiler.

>Getting hyped before the product hits the market and is tested by consumers

Attached: 6785888.gif (449x321, 2.68M)

Considering AVX was the only field Intel held an advantage over AMD in, how is that bad? AMD's IPC leadership claim is still valid.
That's the HEDT platform. Those parts cost a lot more because of higher margins, though they also offer more memory channels and PCIe lanes.

LOL at the first two posts desperately trying to justify their AMD systems via cognitive bias and horse blinders.

It's well-known that AMD hardware is subpar compared to Intel + Nvidia hardware. This is also further compounded by the fact that Intel + Nvidia have better drivers and a better relationship with game developers. Intel + Nvidia is well-supported and well optimized.

AMD is the budget-bin bargain thrift-store 2nd-rate bottom-bucket off-brand no-name poor-people choice. AMD tried to make up for this with Mantle, but Mantle failed.

Why would you go to McDonald's for a hamburger when you can go to Five Guys or In-N-Out?

Every single reviewer will do it differently. Some will say the 9900K still has better single-core, others will say the 3800X has better single-core; different JEDEC speeds, different coolers, different metrics. It will be a complete clusterfuck.

All the tech reviewers are AMDrones now

There will be some reviewers using a delidded + lapped 9900K with a massive watercooler and claiming it's superior to the 3800X

I remember when I was 15

AMD's IPC claims were calculated using SPECint; FP performance should be even better.

In-N-Out tastes like shit and the fries are cardboard.

AMD beat Intel in PUBG, which has the biggest performance gap between AMD and Intel. Intel is fucked.

That's obviously JayztwoIncels

>Considering AVX was the only field intel held advantage over AMD on
lol

and what they showed isn't even testing all of AVX

That idiot has always slandered Ryzen.

>Intelshill reduced to MUH AVX.
Literal BUY MMX Please.

WTF is this? 1995?

Have a source for that?

Which part? It's in one of the footnotes from the fifty-odd PowerPoint slides; people were talking about it on Twitter.

Not really

>Linus
Effectively declared himself to be on AMD's side, all the AMDrones who were shitting on him for sponsoring Intel are the same people embracing his shift in alignment
>Hardware Unboxed
Revealed his true AMD colors, not even hiding it anymore
>GN
Still peddling the 16-core meme, didn't even show up at Intel's keynote despite being invited
>Jayz, Paul
Not even at Computex
>Bitwit
No videos yet
>HardwareCucks
The slav is a full blown AMDrone now, sends his pajeet to do Intel videos

Please post a link to the slides.

Because people are clueless and think that PUBG and Cinebench are the same kind of workload. One fully stresses the core's execution resources, and without significantly widening the core you'll only get a moderate increase there; the other stresses the memory subsystem, which AMD massively improved.

>AVX512
Why would you run your CPU at 1.6GHz?

>dronedronedronedronedrone
easy there piednoel

I'm not denying it; it still won't beat Intel in Blizzard games without spoofing the CPUID.

Jayztwocents is an AMD hater.

PUBG is CPU-bound as fuck. It will use 4 threads and nothing else; the GPU keeps dropping to 0% utilization waiting for the CPU to deliver.

Do we know what RAM they used? They could've used some insanely fast RAM for all we know. I'm waiting for independent benchmarks.

PUBG, and pretty much any awfully optimized game, depends on memory and cache latencies more than on clock speed, core count, or a wide execution pipeline.

You realize there was this one leaked engineering sample that had single-channel memory, right? And it still BTFO'd.

Intel is in the "lies and bribes" phase now.

AMD always uses the officially supported memory speed for their stuff, so 3200; B-die or not, I dunno. But it should be way less relevant than with Zen 1 due to the bigger cache and 2.5x faster IF links.

In Cinebench, right? An entirely different workload from crappily optimized Unreal 4-based games.

Jow Forums doesn't embrace Linus and nobody cares which company he shills for, he's still a sellout whore.

The GN retard was the one who claimed Ryzen was vulnerable to Meltdown, so he's a fucking retard.

except it doesn't suffer from the same performance penalties as the Zen1/Zen+ chips.

I believe the IF speed is now independent of the memory speed. That would explain why a single-channel system could maintain that kind of score.

JEDEC ram most likely

Here's a clock for clock comparison in geekbench subtests for zen2 and zen+

twitter.com/Cat_Merc/status/1133337499558928385

As you can see, quite a lot of improvement for alpha/beta firmware.
Cinebench is just one very specific benchmark; other benchmarks stress different parts of the CPU.

>JEDEC RAM
you're tech illiterate. JEDEC is a standard. Now fuck off back to /v/ and play your games, inturd.

>muh consumer low/mid tier brackers

INTEL IS STILL FASTER ON THE HIGH END BITCH

Are you fucking retarded? Are you the same retard who claims pissmark is more valid than cinebench too?

>jedec.org/document_search?search_api_views_fulltext=jesd79-4 ddr4

fuck off back to /v/, Inturd.

kek'd
What he said.
JEDEC defines the standardized RAM timings:
2133/2400/2666/2933, etc.
Meaning literally every stick of RAM sold on the market, or else it wouldn't work with your CPU.
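For reference, the JESD79-4 speed grades and the peak bandwidth they imply on a standard 64-bit channel (module labels like PC4-25600 are these figures, sometimes rounded):

```python
# JEDEC DDR4 speed grades (JESD79-4). DDR4-xxxx means xxxx mega-transfers
# per second; each transfer moves 8 bytes over a 64-bit channel.

JEDEC_DDR4_MT_S = [1600, 1866, 2133, 2400, 2666, 2933, 3200]

def peak_bandwidth_mb_s(mt_per_s):
    # 64-bit channel = 8 bytes per transfer
    return mt_per_s * 8

for rate in JEDEC_DDR4_MT_S:
    mb_s = peak_bandwidth_mb_s(rate)
    print(f"DDR4-{rate}: {mb_s} MB/s peak per channel ({mb_s / 1000:.1f} GB/s)")
```

So "JEDEC RAM" just means a module running one of these standard bins instead of an XMP overclock profile.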

>Intelshill is /v/ermin
Now we know.

en.wikipedia.org/wiki/DDR4_SDRAM#JEDEC_standard_DDR4_module

You just know the inturd is steaming right now lol

>mentioning Bulldozer on Jow Forums
DELET

Attached: ayyy.jpg (552x661, 71K)

cope

Attached: intcel_cope.png (1200x901, 60K)

Geekbench isn't even a real CPU benchmark; no reason to put so much effort into analyzing its results.

Subtests are important here, and very relevant.
What is not relevant is the aggregate score, which everyone seems to eat up like an idiot.

That's much more interesting than any of the very poor marketing AMD has put out so far, despite toddlerbench being biased against x86.

As expected, a 15% increase in semi-synthetic IPC in Cinebench could well translate into 20%+ in mixed workloads like games and compiles.

OSX only runs on Intel CPUs so what does any of it matter?

Attached: 2152_full.jpg (540x404, 38K)

Geekbench is a shitty and biased benchmark.
Other than that, Zen 2 does indeed have a 15% IPC increase.

false

Fool.

unironically kill yourself

In fact, Ryzen can run OSX. But if you have the brains to choose a CPU for yourself, why can't you choose a proper OS?

realworldtech.com/forum/?threadid=184972&curpostid=184998

I am always surprised that the brand wars here are even more retarded than the console wars on /v/.
I have to believe you are paid to post here, because this kind of effort on behalf of companies that routinely feed you shit, lie to you, and cheat you is just beyond any stupidity I want to believe you are capable of.
You fucking faggots.

Attached: 1545272154505.jpg (1200x1688, 297K)

Kys retard.

Attached: 2013-08-07-edne-gp-jedecamdcapture.jpg (595x445, 47K)

> on behalf of companies that routinely feed you shit, lie, and cheat you
Intel shills are literal cuckolds.

So do AMD, Intel, and any other brand you suck the dick of. Don't be so eager to miss the point, idiot.

>on behalf of companies that routinely feed you shit, lie, and cheat you
You're the actual stupid one, though,

being so emotionally immature that you perceive slight advertising exaggerations as personal betrayals.

You are just a customer who purchases a product based on price and performance, pathetic manchild.

Attached: 1506969837843.jpg (590x590, 67K)

en.wikipedia.org/wiki/DDR4_SDRAM
Look on the right side of the screen

>developer:JEDEC

OY VEEEYYY
You're the only dumb fuck around here.
All DDR4 is JEDEC

WHATABOUT AMD

AMD IS THE CHEETAH AND JOOS TOO

WHATABOUT AMD GOY

>JEDEC is a standard
>developer:JEDEC
Kys retard.

>up to 15%
So still below Skylake then, lmao

IMAGINE BEING AT COMPUTERS!

Cope

I'll be here in 30 years watching Jow Forums see massive IPC increases over decades and still saying
>Still lower than Skylake IPC

Actually it's both, retard.
It's a standard, and JEDEC is the body that developed it.


en.wikipedia.org/wiki/DDR4_SDRAM
Scroll down to here >
en.wikipedia.org/wiki/DDR4_SDRAM#JEDEC_standard_DDR4_module

BOOM.
Nice goalposts, though.
The point being, they didn't use any "special JEDEC RAM".
They used basic-bitch consumer RAM, and this is just an example of their IPC improvements not being dependent on RAM speed the way Zen 1 and Zen+ were.

>So still below skylake then, lmao
Do these shills not realise that they are shitting on the newer CPUs that they're always trying to shill?
>hurr, muh 5 year old chip from incel is better than my 1 year old chip from incel. Incel wins again!