Intel

>By entering GPU market
again? they already failed horribly making discrete GPUs a while ago

>failed horribly
not to be that guy but there's a reason why they failed: their gpu was way ahead of its time in terms of architecture
youtube.com/watch?v=um-1fAVU1OQ

i don't know who's worse, arm shills or amd shills
says the nervously shaking man for the 500th time
that's because amd designed zen for the data center, not the consumer market. evidence of this, and saying it will cause massive raging from Jow Forums because Jow Forums hates gaming but loves jerking off to cartoon porn with their 12 core processors, is zen's regression in gaming performance. intel designed with the consumer market in mind because they had zero competition in the data center and knew data center customers would keep buying their shit regardless, since, you know, they had nowhere else to go.

i've said this before, and will say it again:
>zen2 is a wonderful workstation architecture. that was the primary purpose amd designed it for. amd knows where the money is, and that money is in the data center. intel has stagnated hard and data centers are looking to replace aging xeon servers with something fresh, and will buy hundreds of new processors at 5 figures a piece. server rooms. cloud infrastructure. internet infrastructure. it quite literally scales well for that environment. it's why a lower clocked, equal thread count 3700x and 3800x are within the same ballpark (marginal back and forth swings) of performance as a 9900k at an all core clock of 4.7ghz in those kinds of workloads, while using less power at lower clocks. but they, and the 3900x, outright lose to the 9900k in gaming because they were not designed with gaming in mind. they were designed with workstation-type environments in mind. they were designed for epyc.

>I really don't understand why it is so difficult for the community to understand what is going on with Ryzen and gaming performance. The reason is simple. You have all seen the AMD diagrams that show the Infinity Fabric. They clearly show interconnects between each chiplet and the IO die and list them at 32 bytes/cycle. You know that with the 3000 series of chips, the Infinity Fabric tops out at roughly 1800MHz. Doing the maths: 32 bytes x 1800MHz ≈ 57.6GB/s. The theoretical maximum of dual channel 3600 MT/s RAM is ~57GB/s. With latency overheads, you can test that at about 51GB/s in Aida64.
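Quick sanity check on those figures, a back-of-envelope sketch in Python. The 32 bytes/cycle, ~1800MHz fabric clock and dual channel DDR4-3600 numbers are the ones quoted above; the function names are just for illustration.

```python
# Back-of-envelope check of the Infinity Fabric and RAM bandwidth figures quoted above.

def fabric_bandwidth_gbs(bytes_per_cycle=32, clock_mhz=1800):
    """Theoretical Infinity Fabric link bandwidth in GB/s."""
    return bytes_per_cycle * clock_mhz * 1e6 / 1e9

def dram_bandwidth_gbs(channels=2, bus_bytes=8, transfer_rate_mts=3600):
    """Theoretical DDR4 bandwidth in GB/s (64-bit bus per channel)."""
    return channels * bus_bytes * transfer_rate_mts * 1e6 / 1e9

print(fabric_bandwidth_gbs())   # 57.6 GB/s
print(dram_bandwidth_gbs())     # 57.6 GB/s; the post measures ~51 GB/s in Aida64 after overheads
```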

>All that is great if you run cinebench, blender, handbrake etc. The CPU gets all the data the RAM can supply. The processed output of the CPU ends up in the L3 cache, where it is output to the monitor, storage or a memory address. When you run a game, firestrike or timespy, the CPU has to process the instructions that are passed to the GPU. A 2080ti at max fps needs about 15GB/s of instructions, textures etc to render its many frames per second. The GPU obtains these instructions mostly from the L3 cache (game cache). If the GPU is taking 15GB/s from the ~57GB/s total bandwidth, that only leaves a max theoretical bandwidth of ~42GB/s (before latency overheads) available for the cores to obtain data to process for the next instructions they have to pass to the GPU.
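Same arithmetic for the headroom claim. Note the 15GB/s figure for a 2080ti is the poster's estimate, not a published spec.

```python
# Headroom left for the CPU cores once the GPU's feed is subtracted, per the post's numbers.
fabric_total_gbs = 57.6   # Infinity Fabric ceiling from the calculation above
measured_ram_gbs = 51.0   # the post's Aida64 figure after latency overheads
gpu_feed_gbs = 15.0       # the poster's estimate for a 2080ti at max fps

print(fabric_total_gbs - gpu_feed_gbs)   # 42.6 -> the ~42GB/s theoretical headroom
print(measured_ram_gbs - gpu_feed_gbs)   # 36.0 -> what is left after real-world overheads
```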

>Reduced memory bandwidth starves the CPU and the number of instructions to render frames is reduced. Intel doesn't have the same limitation. On a 9th gen CPU the cache multiplier determines the ringbus bandwidth. The ring also transfers data at 32 bytes/cycle, but the cache is clocked at around 4200MHz. That calculates to a max theoretical bandwidth of ~134GB/s. The bandwidth of the L3 caches on both Intel and AMD is roughly the same; AMD clocks the cache at CPU frequency and Intel uses the cache multiplier. (Ever wonder why AMD chips don't overclock to 5GHz? It's because the cache won't run that fast within the power and temp envelope of the Ryzen chip.)
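And the ring bus side of the comparison, using the same 32 bytes/cycle at the quoted ~4.2GHz cache clock (which works out to ~134GB/s, not the 144 sometimes quoted):

```python
def ring_bandwidth_gbs(bytes_per_cycle=32, cache_clock_mhz=4200):
    """Ring bus bandwidth when the cache/uncore runs at ~4.2GHz, per the post."""
    return bytes_per_cycle * cache_clock_mhz * 1e6 / 1e9

print(ring_bandwidth_gbs())   # 134.4 GB/s, versus ~57.6 GB/s for the Infinity Fabric link
```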

>The Intel dual channel 3600 RAM still tops out at about the same 50ish GB/s and the GPU still wants its 15GB/s, but it can run over a pipe that can carry ~134GB/s. The CPU keeps getting data from RAM at the maximum the RAM can supply, and as a result the CPU can process more instructions for the GPU.
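If you want the whole argument in one place: a simplified model (just the post's reasoning as arithmetic, not a real performance model) is that the cores can only be fed at whichever is smaller, the RAM bandwidth or the interconnect bandwidth left over after the GPU takes its share.

```python
def cpu_feed_gbs(ram_gbs, interconnect_gbs, gpu_feed_gbs=15.0):
    """Rough cap on how fast the cores can be fed, per the post's reasoning."""
    return min(ram_gbs, interconnect_gbs - gpu_feed_gbs)

print(cpu_feed_gbs(ram_gbs=51.0, interconnect_gbs=57.6))    # Ryzen 3000: 42.6 GB/s (fabric-limited)
print(cpu_feed_gbs(ram_gbs=51.0, interconnect_gbs=134.4))   # 9th gen Intel: 51.0 GB/s (RAM-limited)
```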

intel is fine in the consumer market. thanks to amd's inability to hit higher clocks, intel still stays relevant. the only thing hurting them is lack of thread count. if they gave the 9600k and 9700k smt there would be near zero reason to buy amd, unless you believe the mitigations that also affect amd reduce performance down to core 2 quad levels. intel just needs to stop being stingy with thread and core count. phoronix.com/scan.php?page=article&item=amd-zen2-spectre&num=1
>AMD Zen 2 processors feature hardware-based mitigations for Spectre V2 and Spectre V4 SSBD while remaining immune to the likes of Meltdown and Zombieload. Here are some benchmarks looking at toggling the CPU speculative execution mitigations across various Intel and AMD processors.
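If you want to see which of those mitigations are actually active on your own box (the same knobs phoronix toggles for that article), recent Linux kernels report them under sysfs. A minimal sketch, assuming a Linux system:

```python
from pathlib import Path

# The kernel exposes one file per known vulnerability (meltdown, spectre_v1,
# spectre_v2, spec_store_bypass, mds, ...) describing the active mitigation.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```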

in the data center, though, intel is really fucked. they're gonna have to suck hard dick to keep contracts going, to the point of giving xeons away for near free.

>Ever wonder why AMD chips don't overclock to 5GHz? It's because the cache won't run that fast within the power and temp envelope of the Ryzen chip
huh so that explains why. always thought it was a failure of process manufacturing. it's actually architecture. but i don't blame amd; if their goal was data center first then it makes sense. you don't need 5ghz in the data center, let alone 4ghz.

>5 figures a piece
do you mean tens of thousands of dollars per cpu, or for a new server as a whole? that makes sense for a whole rackmount's worth but seems steep per cpu.

Literally just buy intel buy intel buy intel buy intel don't buy amd or amd stocks buy intel buy intel
ty


Underrated
