Tell me if my math is wrong

Tell me if my math is wrong.

Overclocked to 5.1GHz the 9900K gets 2208 in CB. Stock all-core boost on the 9900K runs around 4.7GHz. Engineering samples usually clock a lot lower than final silicon.

The difference between 4.2GHz and 5.1GHz is 0.9%, so add that to the 4.2GHz score on the 2700X to bring it up a little bit to 1896. If we add in the expected 15% IPC gains on Zen 2, we are looking at a Cinebench score of 2180. So slightly under the overclocked 9900K. That's assuming Zen 2 can hit 5.1GHz?
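For anyone running the numbers at home, the naive linear-scaling version of that arithmetic looks like this (a rough sketch: the 1879 and 2208 scores come from the attached chart, the 15% IPC figure is only the rumor, and real chips don't scale perfectly with clocks):

```python
# Naive linear scaling: assume Cinebench score scales 1:1 with
# frequency and IPC. Ignores memory speed, boost behavior, etc.
score_2700x = 1879        # 2700X @ 4.2 GHz (attached chart)
clock_ratio = 5.1 / 4.2   # ~1.214, i.e. a ~21.4% uplift, not 0.9%
ipc_gain = 1.15           # rumored Zen 2 IPC uplift (assumption)

projected = score_2700x * clock_ratio * ipc_gain
print(round(projected))   # ~2624, vs 2208 for the 5.1 GHz 9900K
```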

Now tell me scaling does not work that way.

Attached: OC_Cinebench-multi.png (1327x1222, 65K)

Other urls found in this thread:

tomshardware.com/reviews/intel-core-i9-9900k-9th-gen-cpu,5847-11.html
youtube.com/watch?v=zi82xR2nT0E
twitter.com/SFWRedditGifs

None of that matters. AMD still haven't found a way to interconnect their cores, so it will still run games and any other latency sensitive applications like shit, despite their 0.00001 nanometers and 10 gorillion cores.

/thread

At 447.38 pts per GHz for the Ryzen, it would have to be running at ~4.55GHz to score the same as the stock 9900K. It'll be interesting to see it run at 5GHz.
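That 447.38 figure is just the 2700X result divided by its clock; the required-clock arithmetic can be sketched like this (the ~2040 stock 9900K score is an assumed figure, not given in this thread):

```python
# Points-per-GHz model built from the 2700X result (1879 pts @ 4.2 GHz).
pts_per_ghz = 1879 / 4.2        # ~447.38
stock_9900k_score = 2040        # assumed stock 9900K score (not in thread)

required_clock = stock_9900k_score / pts_per_ghz
print(round(required_clock, 2))  # ~4.56 GHz, in line with the ~4.55 above
```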

>The difference between 4.2Ghz and 5.1Ghz is 0.9%
what?

fpbp

t. intel shill

I'm of the opinion that software has some catching up to do with bleeding-edge hardware like Ryzen's separate compute clusters.

>being this retarded
It's 8 vs 8. They scored the same, therefore they have the same performance per core, regardless of memory speed or interconnect. Only difference is Ryzen 3 isn't gonna burn your house down.

The more controversy a post creates, the more truth it holds.

>Overclocked to 5.1Ghz the 9900K
burns your fucking house down

>they have the same performance per core

Attached: untitled-12.png (682x908, 52K)

Your math is wrong.

Scaling does not work that way.

If the AMD CPU was running at 4.6GHz like people are claiming, that means basically no IPC gain.
You have two possible scenarios:
>the CPU was running at 4GHz or lower, meaning the IPC improved a lot
>the CPU was running at 4.6GHz, meaning no IPC gain and just improved frequencies
Hope it's the first one.

>Multi-core doesn't matter!
>Productivity doesn't matter!
>Price/performance doesn't matter!
>Performance per watt doesn't matter!
>Power usage doesn't matter!
>Temperatures don't matter!
>Soldered dies don't matter!
>Stutters don't matter!
>Streaming doesn't matter!
>Data centers don't matter!
>Locked CPUs don't matter!
>OEMs don't matter!
>Hyperscalers don't matter!
>Upgradeability doesn't matter!
>Anti-competitive business practices don't matter!
>Locked platform features don't matter!
>Synthetic loads don't matter!
>PCI-e lanes don't matter!
>Burnt pins don't matter!
>Heat doesn't matter!
>1771w cooler doesn't matter!
>Server space doesn't matter!
>ECC support doesn't matter!
>Free RAID doesn't matter!
>NVMe RAID doesn't matter!
>StoreMI doesn't matter!
>IPC doesn't matter!
>7nm doesn't matter!
>HEDT doesn't matter!
>Stock coolers don't matter!
>Security doesn't matter!
>Games don't ALWAYS matter!
>Enterprise doesn't matter!
>Hyperthreading doesn't matter!
>VMware doesn't matter!
>MySQL doesn't matter!
>Unix doesn't matter!
>Linux doesn't matter!
>Wafer yields don't matter!
>Benchmarks after full patches don't matter!
>Asian markets don't matter!
>Own fabrics don't matter!
>Chipset lithography doesn't matter!
>Cray doesn't matter!
>Cisco doesn't matter!
>HPE doesn't matter!
>AZURE doesn't matter!
>5nm doesn't matter!
>TDP doesn't matter!
>10nm doesn't matter!
>Cache doesn't matter!
>IGPU doesn't matter!
>PCI-Express 4.0 doesn't matter!
>*NEW* Amazon sales don't matter!
>*NEW* Prime95 AVX doesn't matter!
>*NEW* Custom Foundry Business doesn't matter!
>*NEW* Performance doesn't matter!

Attached: 1539296460877.jpg (474x415, 37K)

That's a big list.

He's talking about the test they did in their CES presentation, which had the 8 core sample they used and a 9900k at stock scoring within 20 points of each other.

Trying to predict the performance in real life of pieces of silicon is quite a foolish endeavor.
Which means that both you and AMDrones can be quite wrong about it.

based

based and kekpilled

Engineering samples usually clock a lot lower than final silicon.

Can't wait for you to eat your words.

OP here. I was putting in the wrong calculation for 4.2 to 5.1. It's a 19.35% difference, I believe (4200 vs 5100). So 1879 + 19.35% = 2243 or thereabouts, plus 15% IPC = 2579.

The chip they used wasn't an engineering sample, it was a qualification sample, meaning the clocks are pretty much going to be the same for the final product.

Going by that calculation, 5100 divided by 2579 = 1.97xxx; times that by the 2057 the sample scored gives 4067 (≈4GHz).

Am I wrong?

Source: My ass.

it's reported everywhere you dumass

Awfully sorry, I must be blind, but can you kindly point out where the Ryzen 5 3600X is on that list mate?

Attached: 1362335281589.jpg (1600x1000, 167K)

They said something like 13% IPC improvement over Zen+.
Taking clock speed into account, it's probably running 4.0GHz.
Or 4.2GHz with 10% IPC improvement.

What we don't know is how high the final all core boost will be.
I hope at least 4.6 GHz which should bring it into the 2300+ range. Almost 2600 would be extremely high for expected IPC and expected clocks, unless they pull out a 5GHz all core.
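Those 2300+ and almost-2600 figures fall out of the same linear model; here's a sketch using the thread's own inputs (13-15% IPC is still a rumor, and perfectly linear scaling is an assumption):

```python
# Project Zen 2 Cinebench scores from the 2700X baseline
# (1879 pts @ 4.2 GHz) across rumored IPC uplifts and clocks.
base_pts_per_ghz = 1879 / 4.2

for ipc in (1.13, 1.15):
    for clock_ghz in (4.6, 5.0):
        score = base_pts_per_ghz * ipc * clock_ghz
        print(f"IPC +{(ipc - 1) * 100:.0f}%, {clock_ghz} GHz -> {round(score)}")
# 13% IPC @ 4.6 GHz gives ~2325; 15% IPC @ 5.0 GHz gives ~2572.
```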

:)

Attached: lmao.png (644x105, 14K)

5.1/4.2 ≈ 1.214
The difference is ~21%, not 0.9%.
Results are 7.8% apart.

It was running at 4.6 GHz boost, meaning close to no IPC gain. Ryzen ST performance is still TRASH even after the node shrink and new arch.
Also Navi was so trash that they had to delay it even further.

Attached: 1489607623535.jpg (584x720, 209K)

Attached: ayy.png (252x255, 91K)

I forgot to add that the 4.2GHz 1879 score comes from the 2700X. I was also using 5.1GHz instead of stock speeds. With the stock all-core speed of 4.7GHz on the 9900K, it works out to 1.95xx times 2057, which still lands around the 4GHz mark for the sample.

>The chip they used wasn't an engineering sample, it was a qualification sample
>it's reported everywhere you dumass
>Su lied about something easily verifiable when she said the processor was an early silicon ES and all these reports have photos of the QS processor to prove it.
If it's reported everywhere, then provide some sauce shill.

OK, let me redo that based on the 2700X score. It comes in at about 4.1GHz using 13% instead of 15% IPC.
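Done consistently under the same linear model, the sample's implied clock works out like this (all inputs from earlier in the thread; linear scaling is still an assumption):

```python
# Back out the demo sample's clock from its Cinebench score.
base_pts_per_ghz = 1879 / 4.2   # 2700X baseline
sample_score = 2057             # CES demo result
ipc_gain = 1.13                 # AMD's quoted ~13% IPC uplift

implied_clock = sample_score / (base_pts_per_ghz * ipc_gain)
print(round(implied_clock, 2))  # ~4.07 GHz
```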

The whole point of an IO die is to solve the interconnect problems, you fucking moron.

>4.6GHz vs 4.7GHz
>still wins
So, either there are considerable IPC improvements, or Ryzen always had better IPC than Intel, pick one.

If it was competitive in games and single-thread performance, AMD would have shown it off. But they didn't. Matching Intel in a multithreaded bench is not hard; AMD's SMT is better than Intel's HT. So it's just a Ryzen 2700 +200MHz and minus a few watts of power.

AMD didn't show single-threaded performance because it's 2019.

Too bad 99% of software is still in 2005.

Clock for clock, AMD was beating Broadwell HEDT in Cinebench when the 1800X launched originally.
Not much has changed in per-clock performance for Intel since then. The latest Lake refresh is almost within margin of error of Broadwell when running at the same fixed clocks and memory.

The Zen 2 ES wasn't running at 4.6GHz on all cores. It's going to be hilarious when the SKUs are actually unveiled. Zen 2 has a not-insignificant uplift over Zen+ even in CB. Stuff like POV-Ray will look really good this time around.

A midrange 8-core Ryzen 5, probably running closer to 4.4-4.5GHz, beating a 9900K at its 4.6GHz all-core stock boost at half the power. The 12-core and 16-core parts are going to tear Intel a new asshole.

You suck at math

Yeah, I know. I should have done the following calculation: take the 2700X OC score in CB + 13% IPC, then divide the 9900K all-core stock 4600MHz by that and multiply by the sample score = 4.5GHz or thereabouts. Just like Adored predicted.

using windows, it is like you want to be retarded.

Wrong. That engineering sample at 70w must have, at minimum, tied an 8700k in single core to beat the 9900k in multicore.

At below final clocks. Only a few hundred more would be enough to give the non-x version a 9900k beating single core score.

And then the 95w x version will clock even higher than that.

And then there will be 12-core and 16-core versions with similar single-core performance and 2950X-beating multicore performance.

American math, holy shit

Attached: Happy New Year.jpg (1296x929, 113K)

>It was running on 2666Mhz DDR4 and it still beat the 9900K idiot.

This, but AMD CPU also cost less so you get more for your money.

I'm happy with my i9 9900K so far though.

>that CES power usage
>3600x
It'd be the 3600, user.

>The difference between 4.2Ghz and 5.1Ghz is 0.9%
4.2 is 82% of 5.1

2700x uses more power than 9900k ?

in single core performance you fucktard?

>8/16 part matches the 9900K at half the power draw (65w TDP confirmed) on 2666Mhz DDR4.
>Space and traces clearly visible on CPU
>Thinks there won't be 12 core parts at least.
>Thinks this is an R7 CPU being demoed
>Thinks an 8/16 Zen 2 part is gonna be priced higher than a 2700X
>Intards think TDP = power consumption
Intel retards are really cancerous

Attached: amd.jpg (1024x576, 72K)

Attached: LUL.png (1302x839, 840K)

I was expecting ALL AMD parts to have three dies, with the sub-16-core parts just using progressively more defective chiplets, down to something like two chiplets with only 2 cores each enabled for the ultra-cheap quad core.
But I guess AMD can't make the chiplets that bad.

You're quite dumb if you think they won't use the other space in the substrate for more 8 cores.

OP here. My math really does suck. But after a little fiddling with the numbers to more accurately reflect the actual numbers shown, it looks like it's a 4.4-4.6GHz 65W part, which aligns with the AdoredTV speculation.

Are you high?

Attached: 1479574775361.png (653x726, 42K)

9900K beaten by a Ryzen 5 using 30W less. I guess CES was great after all.

I'll be looking forward to Computex even more now.

Attached: Happy New Year2.jpg (1301x1010, 137K)

A /v/tard with no technical knowledge speaks and then samefags.

Yes. Intel has 14nm++++++

Adoredtv is getting cocky even after getting BTFO at CES.

Attached: 324.png (573x164, 18K)

Shopped

Fake news (shooped)

He's a madman. No one can stop him now.

Attached: 22.png (812x794, 71K)

He was only 'BTFO' on speculating that Navi would be announced instead of Vega 2. But we don't know that his leaker was wrong, only that AMD changed what they wanted to show.

He was also only 'BTFO' on speculating that CES would be the event to announce Ryzen desktop 3000. He was wrong and mobile chips were announced instead.

Doesn't mean his speculated specs are wrong.

Reposting

Tom's review of the 9900k showed 137W for package power during cinebench

tomshardware.com/reviews/intel-core-i9-9900k-9th-gen-cpu,5847-11.html

>We measured 137W (232W) during the Cinebench test


So system power minus CPU is probably about 43W on average. We know the systems were almost the same except for the motherboard, for which we can probably assume power draw was about the same, since AMD was interested in CPU vs CPU. So this new chip is probably closer to 95W TDP again. Not sure how he's calculating 65W.
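The arithmetic behind that estimate, as a sketch; the wall-power readings here are assumed figures for illustration (only the 137W package number comes from Tom's):

```python
# Estimate the demo CPU's package power from system wall readings.
intel_cpu_pkg = 137   # 9900K package power in Cinebench (Tom's Hardware)
intel_wall = 180      # assumed wall reading for the Intel demo system
amd_wall = 134        # assumed wall reading for the AMD demo system

overhead = intel_wall - intel_cpu_pkg   # rest-of-system draw: 43 W
amd_cpu_est = amd_wall - overhead       # ~91 W -> closer to a 95 W TDP part
print(overhead, amd_cpu_est)
```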

I bet he's inventing everything, but his common sense did get some shit right, like the fact AMD would actually show Zen 2 instead of just laptop crap.

He's all in now. His reputation is already ruined so might as well go full retard and collect more shekels from his patreons.

Attached: 2.png (463x168, 17K)

>Trying your hardest to downplay the fact that the 9900K is going to be at least matched in multi-threaded applications by a processor that will be no more than half its price, with significantly lower power consumption
>M-MUH VIDYA
I only game casually on the side at 60Hz in GPU-bound scenarios, kike, I don't care about your 10% faster e-sport gay men, and even if I did, I still wouldn't buy your overpriced dogshit on principle.

Attached: STOP.gif (326x198, 224K)

>again

Attached: Thinking Ren.png (290x290, 173K)

Attached: ayymd.png (900x620, 51K)

I don't understand how people can say things like "RIP reputation". He did always say he could be wrong and to take everything with large piles of salt. His channel is all about analysis and speculation, so I don't know how people can be all like "This is what will happen" or "Ryzen will LAUNCH at CES". I don't understand these people.

His speculation was wrong about what was going to be announced, but I hardly think he should slice his guts open because of it. His speculated Ryzen specs and Navi specs haven't been disproven; in fact, CES showed some truth to the 3600(X) specs.

He didn't say it was speculation. He said he got huge leaks and anyone saying otherwise is a fucking cunt, then he got BTFO and now he's doubling down.

Why does everyone say Ryzen latency is shit? I use Ableton on my 2600X and it's perfectly fine.

>Overclocked to 5.1Ghz the 9900K

Attached: 1543241975159.png (1327x1222, 69K)

youtube.com/watch?v=zi82xR2nT0E

C'mon mate, he did a fucking sheet with details and even put TBA, and nothing got announced. The Navi stuff was 100% wrong too. He fucked up big time.
He also told his drones to shit on Tim from HU, and that came back to bite him in the ass too.

delid this

Which could still be true. Just not announced as the lineup at CES. He said to take it all with a pinch of salt in his first video. Yet fanboys took it as gospel. He is not at fault here. Fanboys are (from both sides) and retarded tech tubers hyping it as real.

Based Tim.

>He didn't say it was speculation
Correct. He said it was from a source, and so he speculated things from it, which he said to take with massive grains of salt, aka don't take it as the word of god.
>he's doubling down
He has no reason not to, considering the main meat of his leaks were specs, which have been tentatively supported with the Ryzen5/9900K test result.

The people who believed that AMD would launch desktop Ryzen 3000 at CES were the ones really BTFO by all this.

I like how he trashes Nvidia's comments.

Attached: 1546961725525.png (1024x576, 252K)

I think he is overclocking via PBO/XFR.

>acts like nothing happened
>here's a new 10min+ vid for extra ads shekels
>his peons already forgot about his bullshit claims a month ago
>rinse and repeat

I'm not disputing that a central part of his information was about timing, or Navi, which were both wrong. I'm just highlighting that people are getting swept up in 'everything or nothing' states of mind.

>adored is full of shit theres no way his leaks are real! Nobody needs 16 cores on a desktop!
>HAH! the chip was only 8c/16t! ADORED BTFO FOREVER
>I mean, everyone KNEW it was going to be 16 cores! but theres no way it can run at those clockspeeds!
>it only ran at 4.6ghz! theres no way an engineering sample can improve EVER EVER

Attached: 1527022013095.jpg (267x297, 24K)

I think he fucked up with the TBA part (and possibly the $ one). If the sheet only showed the specs, I think it would have been safer to show just the specs and leave it at that.

>he said to take with massive grains of salt, aka don't take it as the word of god.
He literally told people not to buy new CPUs because Zen 2 comes after CES. He even blamed a guy working in retail, saying he downplayed his leak to sell his stock of old CPUs.
This is not "grain of salt". Stop defending this piece of shit.

>One CCX
>interconnect latencies

We badly need to introduce IQ tests as a prereq to posting.

Attached: index.png (212x238, 9K)

this is the dumbest shit. thanks for the laugh.

I can't wait 6 more months amd ...

Just buy the i9 9900k. Looks like the best Ryzen 3k cpu barely beats it.

Okay, whenever they're ready to drop the price to 300

I'm actually curious what chiplet-to-chiplet latency will be like, though. If the Windows scheduler is too retarded to keep a game's threads in one CCX (8C/16T should be more than enough for a game), that sort of latency could matter. Maybe Windows won't be retarded, or AMD can deliver something to help with that shit; it would be tedious if we had to fuck with affinity on 16C CPUs because of scheduler retardation.