Why is Intel dying?

notebookcheck.net/10-core-Intel-Core-i9-10900X-disappoints-in-Geekbench-scores-10-less-than-the-AMD-Ryzen-9-3900X-in-multicore-tasks-and-falls-up-to-2-3x-behind-the-Threadripper-Zen-2-series.434985.0.html

Why is Intel dying?

Attached: comment_E9bsE98vlhFeUrsu3PgtLsOTugm8cUH5.jpg (684x794, 209K)

It's clock starved compared to the smaller 8700k/9900k and cache starved compared to 3900X

10900k shows 5200 single thread
9900k shows 6300 (it's clocked higher)
3900X shows 5900 (it has a lot of cache)

I don't fucking care about multithread because the end user is going to choose how many cores he needs; the dumb user will burn his home down (regardless of red/blue).

And at the end of the day the 10900k is a 10 core part and I just don't give a fuck. the 8700k was fine.

We all knew the 10th gen was going to suck balls but my god, the excuses are going to be awesome to read

simple, more cores = less clock speed

Regardless, I don't ever want another CPU pulling 250W. Fuck that.

is that going to be an HEDT chip? the 3950x is already beating Intel's 18 core part, and this is their best response?

Attached: 1542935171741.png (1349x757, 972K)

Why didn't Intel add more cache?

Cache is HOT and takes up exorbitant amounts of die space.

They have low yields (cobalt in their process cracking and shit) and larger cache would fuck yields by an order of magnitude.

>Worse clocks
>Worse IPC
>More expensive than last gen equivalents
Intel's very own Bulldozer is finally here, by the looks of it.

intel is sandbagging, except 10-15% better results than this

This, and more cache = higher latency. More cache doesn't always = better performance; it depends on the task. Generally all you need is enough cache with the lowest latency you can get.

I really don't understand why it is so difficult for the community to understand what is going on with Ryzen and gaming performance. The reason is simple.

You have all seen the AMD diagrams that show the Infinity Fabric. They clearly show interconnects between each chiplet and the IO die and list it at 32 bytes/cycle.

You know that with the 3000 series of chips, the Infinity Fabric tops out at roughly 1800 MHz.

Doing the maths: 32 bytes × 1800 MHz ≈ 57.6 GB/s

The theoretical maximum bandwidth of dual channel 3600 MT/s RAM is also ~57.6 GB/s. With latency overheads, you measure about 51 GB/s in Aida64.

All that is great if you run Cinebench, Blender, Handbrake etc. The CPU gets all the data the RAM can supply. The processed output of the CPU ends up in the L3 cache, from where it goes out to the monitor, storage or a memory address.

When you run a game, Fire Strike or Time Spy, the CPU has to process the instructions that are passed to the GPU. A 2080 Ti at max fps needs about 15 GB/s of instructions, textures etc. to render its many frames per second. The GPU gets these instructions mostly from the L3 cache (game cache).

If the GPU is taking 15 GB/s out of the ~57 GB/s total bandwidth, that only leaves a max theoretical bandwidth of ~42 GB/s (before latency overheads) for the cores to fetch the data they need to process the next instructions they pass to the GPU.
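Rough sketch of that arithmetic in Python, if you want to check it yourself (the 1800 MHz fabric clock, dual channel DDR4-3600 and the 15 GB/s GPU feed are the figures assumed in this post, not measurements):
# Infinity Fabric bandwidth budget, Zen 2 (back-of-the-envelope)
if_bytes_per_cycle = 32                                  # per AMD's Infinity Fabric diagrams
fclk_mhz = 1800                                          # rough max fabric clock on 3000 series
if_bw_gbs = if_bytes_per_cycle * fclk_mhz * 1e6 / 1e9    # ~57.6 GB/s over the fabric
dram_bw_gbs = 2 * 8 * 3600 * 1e6 / 1e9                   # dual channel DDR4-3600: ~57.6 GB/s
gpu_feed_gbs = 15                                        # what the post assumes a 2080 Ti pulls
print(if_bw_gbs, dram_bw_gbs, if_bw_gbs - gpu_feed_gbs)  # 57.6 57.6 42.6 -> ~42 GB/s left for the cores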

Who the fuck are making these suicidal executive decisions causing Intel to crash with no survivors? This is on the level of when Stephen Elop crashed Nokia.

Attached: Uuuuuh.jpg (707x717, 37K)

Reduced memory bandwidth starves the CPU, and the number of instructions available to render frames drops.

Intel doesn't have the same limitations. On a 9th gen CPU the cache multiplier determines the ringbus bandwidth. The ring also transfers data at 32 bytes/cycle, but the cache is clocked at around 4200 MHz. That works out to a max theoretical bandwidth of ~134 GB/s.

The bandwidth of the L3 caches on both Intel and AMD is roughly the same, but AMD clocks the cache at core frequency while Intel uses the cache multiplier. (Ever wonder why AMD chips don't overclock to 5 GHz? It's because the cache won't run that fast within the power and temp envelope of the Ryzen chip.)

On Intel, dual channel 3600 RAM still tops out at about the same 50-ish GB/s and the GPU still wants its 15 GB/s, but it runs over a pipe that can carry ~134 GB/s. The CPU keeps getting data from RAM at the maximum the RAM can supply, and as a result the CPU can process more instructions for the GPU.
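Same maths for the Intel side, again just a sketch (the 4200 MHz ring/cache clock is the figure assumed above):
# Skylake ringbus bandwidth vs DRAM feed (back-of-the-envelope)
ring_bytes_per_cycle = 32
cache_clock_mhz = 4200
ring_bw_gbs = ring_bytes_per_cycle * cache_clock_mhz * 1e6 / 1e9   # ~134.4 GB/s over the ring
dram_bw_gbs = 2 * 8 * 3600 * 1e6 / 1e9                             # still ~57.6 GB/s from DDR4-3600
gpu_feed_gbs = 15
print(ring_bw_gbs, dram_bw_gbs - gpu_feed_gbs)                     # the ring isn't the bottleneck; DRAM is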

>Lowest clocked, low tier HEDT CPU from next lineup
>Gotta shill Gaymd still
I've been waiting for this thread since last night, it took a while, OP.

I'm doing my part, are you doing your part OP?

Attached: Screenshot_20190919-011058.png (1553x2048, 2.7M)

> source: Geekbench
> where stats are easily modified to say whatever they like
really convincing data. weeeeew. nobody that isn't retarded takes geekbench scores seriously anymore.

>benchmarks does not matter

Intel already had a Bulldozer, it was called NetBurst.

>and Intel uses the cache multiplier
Yet AMD were de EBIL when they did that to get Athlon to 1GHz first.

In other words, you're comically asshurt. Congratulations, may you be the first of many Incels drowning in your tears.

Stfu you pathetic AMDrones, you KNOW that the consumer parts will SHIT all over POOzen

Attached: 1562884870148.jpg (1280x720, 296K)

You look exactly like this, user.

Attached: 1544667387109.png (813x1402, 324K)

>Stfu
Like you're in any position to make such demands, loser.

>I don't fucking care about multithread

too bad that quite literally everybody else cares

>waaa, waaa, incel, incel!
You pathetic dweebs, imagine fanboying over a multi-million corporation. Sad.

They've actually done it before: 128MB eDRAM cache.
Adding more cache now wouldn't benefit Intel as much as AMD. Intel's memory latency and memory support are already pretty good, so the improvement wouldn't be that big.

Attached: 14_intel_core_i7-5775c.png (407x403, 42K)

>incel

Attached: 1524525634387.png (550x550, 88K)

I just looked up some info on that chip and damn, the fps difference between eDRAM enabled and disabled is huge in scenarios where the CPU is bottlenecked.
One dude tested his CPU with an overclocked core and eDRAM, and basically the CPU at 3.7 GHz with eDRAM enabled was faster than at 4.2 GHz with it disabled.
Moar cache is definitely better.

>-Intel-Core-i9-10900X-disappoints-in-Geekbench-scores-10-less-than-the-AMD-Ryzen-9-3900X-in-multicore-tasks-and-falls-up-to-2-3x-behind-the-Threadripper-Zen-2-series
We still have userbenchmark

Attached: 1491958654249.png (653x726, 84K)

sir ples buy intel

>low-tier HEDT cpu
>geekbench

>fps
>

Gaming is the only thing Intel has left, and even then it usually comes with more stutters and housefires.

nonono, delid this post goym you make me go bankrupt

Cascade Lake-X is just YET ANOTHER recycling of Skylake. It's quite literally just Skylake-X with some minor uncore changes and another + on the manufacturing process.

But the trade-off was that Broadwell couldn't overclock for shit. Even Broadwell-E parts overclocked better than the 5775C, since they didn't have the L4 cache, while Haswell could clock a good 500-600 MHz higher on average. Unless you needed a better iGPU, it was a pointless CPU.

>But the trade-off was that Broadwell couldn't overclock for shit.
It was mostly due to the initial 14nm process. Even with the eDRAM disabled they usually topped out at 4.2-4.3 GHz.
Yeah, it was a pretty nice mobile part that was quite pointless on desktop (considering Skylake released a few months later), but it's a fun CPU in retrospect. Due to IPC improvements it stays pretty close to Haswell, and due to the eDRAM it was better in gaming than the 6700k or even 7700k in some cases.
Generally it was a fun idea in a kind of useless product.
Note that this was back in the days of DDR3, when most people ran something like 1600 or even 1333. 35-40 ns eDRAM with 50-60 GB/s (asymmetrical write) was quite a big improvement.

Is it still 14++++++ nm?

10nm never ever user

>Intel 10nm
>producing 10 core chips

Attached: 1568069457341.png (1496x1042, 560K)

It will be 10nm as soon as you can get a cute gf(male)

And that's a good thing.

Why are they even bothering with HEDT anymore?

Nokia is a great parallel actually. A company that had tons of great technologies and developments marred by a bloated and incompetent management echelon that keeps wasting money on ventures and projects that ultimately go nowhere.

Because that's planned for Tiger Lake.

i want to nakadashi jahy

A 3900X consumes 400W at load. After using the 8320 at over 200W, never again. Ideally it should stop around 100W, whether or not you're a power user, gaymer, whatever. Moar cores.

>A 3900X consumes 400W
I can't find anywhere claiming this much

>A 3900X consumes 400W at load
Lying shill detected

Attached: file.png (650x450, 55K)

>400 watts

my god intel fans are so desperate..

KILL YOURSELF INTELSHITTER

Attached: 1562666530056.png (1302x688, 73K)

an EPS12V cable can only produce ~200W

I like how 8700k with 6 cores consumes more power than 3900x with 12 cores. In a 100% multithreaded test.

That's actually a Threadripper, sorry, I'm old. But the 3900X exceeded the safety limits of EPS12V nonetheless. Wprime95 and R15, not some gay Tom's bench.

There won't be another Core comeback, ice lake doesn't look too impressive.

kiss my ass kiddies

Attached: Screenshot 2019-09-19 at 12.07.52 PM.png (1366x768, 209K)

kek i mean why even bother lying

Quite shocking honestly.. Intel is truly a housefire

That's what happens when you spend 5 years juicing the same architecture on the same process node.

It's cock starved, same as Intel customers who just can't live without butt fucking

Ultra pozzed

> modified geekbench scores, used to manipulate share market price and consumer opinion, is fine by me
based retard AMDrone. cope and dilate harder, faggot.

Attached: 1537859620696.png (488x440, 214K)

Mhm

Attached: cab4fb77098f4586e66acc70182ef9e30e35e4dc2f96cf719b42363aad9cd7d8.jpg (640x402, 105K)

Just stop. You are far too old and senile to be shilling on the internet. This is easily one of the most pathetic posts I have seen on this board.

>> modified geekbench scores, used to manipulate share market price and consumer opinion, is fine by me
>based retard AMDrone. cope and dilate harder, faggot.

Attached: 1506977173618.jpg (882x758, 324K)

Zen will never open the gaming performance door, its cores are too weak and scared to do so. Intel is brave, strong and isn't afraid to show off its power consumption like a gym chad shows off his body odour.

FAke
This has to be fake
Kysss

>except

Attached: 1530424737238.jpg (682x900, 70K)

>3900x actually consumes less power than 8700k, 9700k, 9900k, 9900KS
the worst attempt at shilling ever. Just give up man

on behalf of the intcel (tm) corporation, thank you for your service to israel

No shit you can't, the 3900X at stock has a peak power limit of 142W. Obviously you won't find one pulling 400W.

Oy vey

>17% less cores than 3900X
>only 10% less performance
damn, looks like another win for Intel. I'm surprised AMD is still in business after all these years

And for only twice the price. What a deal!

Just delid and delap it bro

delid dis you stupid goy

You don't know the price for a CPU that isn't even out yet idiot

Sir please do the needful and buy each and everything from intel.

Attached: 23008.jpg (1200x689, 163K)

Do more needfuls and buy all intels

It can't be cheaper than 3900X which has an MSRP around the same as the eight core 9900k. It will probably be competing in price with Ryzen 3950X, if not higher.

a-at least we still have diversity and inclusion intel bros...

Attached: 1563063007492.png (1571x901, 1.09M)

>10900X

You can't go past 4 digits without it sounding retarded.

>ten-nine-hundred-ecks
what sounds retarded about that?

It's right there on the paper, it maxes out the 4+4 pin EPS12V connector

But burn it up man

Where's that picture of the Intel wojak being crushed by overpixelation?

They should switch to Roman numerals to keep the numbers small. So it would be an X900X

Yah ok dude.
Let's say I want to go Intel. Recommend me a good chiller.

Attached: 1550692917242.gif (640x360, 1.85M)

> sandbagging
> literally still using 14nm
lmao how do you sandbag 14nm shit?

lmao the 3900X already beats the 9900K in some games, what are you talking about?

> core count performance scales perfectly
the absolute state of Intel shitters

>some games
Intel holds the performance crown in 90% of games and you literally can not prove me wrong.

Literally margin of error.

Attached: 3900x-vs-9900k-gaming-adoredtv-1.jpg (1152x768, 106K)

>6% faster is margin of error
>AMDoredTV

Yeah 3900X is 6% faster in one game, anything around 5% is margin of error. Also cope.

How the fuck is it losing in Division 2
It literally boots up with "AMD Ryzen" screen
Fucking Ubisoft

>>some games
>Intel holds the performance crown in 90% of games and you literally can not prove me wrong.

Attached: 1560228829446.png (519x543, 151K)

Have sex.

Dilate.

You mean webm?

Keep in mind this is just another Skylake refresh.
The first REAL improved Intel chip will be desktop Ice Lake on the Sunny Cove architecture in late 2020 or early 2021.
Sunny Cove has a fuck ton of improvements Intel should have been doing this whole time, like doubling the cache, widening the pipelines, etc.