150W TDP out of the box

How will a mainstream 16/32 be viable for Ryzen 2? You didn't think about that, did you?

Attached: AMD-Zen-3-640x418.jpg (640x418, 14K)

*sigh*
AMD btfo!
*sigh*

I could see the Ryzen platform recertified for maybe 125W or so, but not 150W.
You're also an idiot if you think a 75W 8c demo automatically means 150W 16c parts; that math only works if the IO die consumes zero power.

System power, you lugnut. By your logic the 9900K is a 250W TDP part.

>16/32

double what was shown at CES, brainlet

intel shills on suicide watch

Attached: ryzen 3000.png (1170x1266, 1.51M)

That's not how it fucking works.

There would be zero room for overclocking

>don't talk to me or my son ever again

TDP isn't power consumption; the 9900K can actually use 250W for the chip alone.

Also, AMD clearly flubbed it. The 9900K in an all-core stress test like Cinebench easily hits 180W for the package alone, so that ~180W reading was chip power, not wall power. That means their 125W Ryzen figure was also for the package, i.e. the chip, not the entire system. For most desktops the motherboard would likely pull another 20W, plus maybe 10-40W for odds and ends like the SSD (up to 7W), GPU idle, fans, etc.
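Rough napkin math on how package power relates to wall power. Everything besides the 125W package figure from the post above is an assumption for a generic desktop, not a measurement:

# Back-of-the-envelope: package power vs. wall/system power.
# Component figures below are guesses for a typical desktop build.
package_w = 125          # package figure quoted above (assumption, not measured)
motherboard_w = 20       # VRM losses, chipset, etc.
ssd_w = 7                # NVMe SSD under load
gpu_idle_w = 15          # discrete GPU sitting at idle
fans_misc_w = 10         # fans, RAM, USB devices
psu_efficiency = 0.90    # roughly a Gold-rated PSU at this load

dc_total = package_w + motherboard_w + ssd_w + gpu_idle_w + fans_misc_w
wall_draw = dc_total / psu_efficiency
print(f"DC side: {dc_total} W, at the wall: {wall_draw:.0f} W")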

Attached: aHR0cDovL21lZGlhLmJlc3RvZm1pY3JvLmNvbS9DLzkvODA1MjU3L29yaWdpbmFsL2ltYWdlMDA3LnBuZw==.png (1112x833, 74K)

wait for 10nm

Attached: 1506977173618.jpg (882x758, 324K)

I want this much power in my life.

>wait
what? 2015 was 4 years ago

Attached: Intel roadmap Working-on-7nm-and-5-nm-Manufacturing-Technologies-3.jpg (1688x795, 342K)

>More is better

AMD confirmed that the Zen 2 system was running 2666MHz memory. HAHAHAHAHAHA OH WOW INTEL IS FUCKING DEAD

ok quick question.

how badly will a 16c/32t chip be held back by dual-channel memory?

Probably not much at all; the 2990WX isn't significantly memory-bottlenecked on quad channel even with its NUMA architecture.
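For a rough sense of scale, here's the peak-bandwidth arithmetic, assuming dual-channel DDR4-3200 (the exact kit and real-world efficiency will differ):

# Peak theoretical DRAM bandwidth per core on dual-channel DDR4.
# DDR4-3200 assumed; swap in 2666 for the CES demo config.
transfers_per_s = 3200e6   # MT/s
bytes_per_transfer = 8     # 64-bit channel
channels = 2
cores = 16

peak_bw = transfers_per_s * bytes_per_transfer * channels  # bytes/s
print(f"Peak bandwidth: {peak_bw / 1e9:.1f} GB/s")
print(f"Per core if all {cores} cores stream at once: {peak_bw / cores / 1e9:.1f} GB/s")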

thanks, might pick one up. I'm a PhD biologist and was intrigued by Lisa's reference to molecular interaction simulation.

I went to /sci/ a year ago to ask how to modify a protein for simple constitutive activation without a substrate and everyone got triggered. Perhaps I'll be able to simulate it now.

AMD's 32-core wasn't held back by quad channel, even though early reviewers blamed a lack of bandwidth. The real culprit was the NUMA scheduler behavior on Windows.

It will be perfectly fine with dual channel. Even going from single to dual channel yields maybe a 10% benefit, and dual to quad will yield very little for the foreseeable future of gaming/everyday use.

How come every time I read about computer architecture, hitting main memory is the huge scary bottleneck and increasing processor cache is a sure way to improve IPC, but when it comes to system building, RAM speed and memory channels have no great impact?

Is it RAM latency that matters, i.e. if we could improve latency by 3x, would that speed things up?

I'm more worried about how the discrete memory controller will affect memory latency.

Diminishing returns.

If you run a system with 1 GB of RAM and an HDD, it will be very slow in any modern scenario. That's because the HDD becomes the backing store, i.e. the pagefile, and you're limited by HDD speed, ~100 MB/s.

Now upgrade that to 16 GB of dual-channel 2133MHz memory and you have over 30 GB/s of theoretical bandwidth instead of ~100 MB/s.
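Quick sanity check on those numbers (theoretical peaks, assuming dual-channel DDR4-2133 and ~100 MB/s sequential HDD throughput):

# Paging to an HDD vs. keeping the working set in dual-channel DDR4-2133.
hdd_bw = 100e6              # ~100 MB/s sequential; random pagefile access is far worse
ram_bw = 2133e6 * 8 * 2     # MT/s * 8 bytes per transfer * 2 channels
print(f"RAM: {ram_bw / 1e9:.1f} GB/s peak")
print(f"HDD: {hdd_bw / 1e6:.0f} MB/s")
print(f"Ratio: ~{ram_bw / hdd_bw:.0f}x")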

Diminishing returns hit every system in life, nature, the universe, etc.

For example, economists have found that the income at which people's happiness stagnates is around $80K/y. At that point almost all of your whims are covered, whether it's housing, hobbies, family, health, or travel. Someone down at $30K/y still has all of those things to worry about. Someone above $1M/y has a few more things to be happy about, like the possibility of never working again, or dining at a really high-end place. But those do very little for our underlying insecurities.

Diminishing returns hit HDD vs SSD as well, mainly due to the seek latency on an HDD.
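To put the latency question above in numbers: average access time is roughly hit_time + miss_rate * miss_penalty, so a big DRAM latency win barely moves the average when the cache absorbs most accesses. The figures below are illustrative assumptions, not measurements of any real CPU:

# Average memory access time (AMAT) sketch: hit_time + miss_rate * miss_penalty.
cache_hit_ns = 4          # assumed last-level cache hit latency
dram_ns = 80              # assumed DRAM miss penalty
miss_rate = 0.02          # assumed LLC miss rate for a typical workload

amat = cache_hit_ns + miss_rate * dram_ns
amat_fast_dram = cache_hit_ns + miss_rate * (dram_ns / 3)   # "what if DRAM latency were 3x better"
print(f"AMAT: {amat:.2f} ns")
print(f"AMAT with 3x faster DRAM: {amat_fast_dram:.2f} ns")
# With a 2% miss rate, a 3x DRAM latency improvement shaves only ~1 ns off the
# average access, which is why hit rate (cache size) usually matters more than raw DRAM speed.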