Why are we stuck at 5 GHz? You could get 5 GHz even with a good 2500K, and that's a 9-year-old CPU already.

Attached: cpu market.png (898x614, 64K)

Thermodynamics

Heat rises much faster than linearly with clock speed: dynamic power goes as P ∝ C·V²·f, and higher clocks need higher voltage, so power grows roughly with the cube of frequency.
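
A quick back-of-the-envelope sketch in Python. The linear voltage/frequency relationship here is an idealized assumption; real V/f curves are chip-specific:

# Rough dynamic-power model: P = C * V^2 * f.
# Assumption: required voltage rises linearly with frequency
# (an idealization; real V/f curves come from binning).
C = 1e-9                      # effective switched capacitance, illustrative value
V_BASE, F_BASE = 1.0, 4.0e9   # hypothetical baseline: 1.0 V at 4 GHz

def dynamic_power(f_hz):
    v = V_BASE * (f_hz / F_BASE)  # assumed linear V/f scaling
    return C * v * v * f_hz

for ghz in (4.0, 4.5, 5.0, 6.0):
    f = ghz * 1e9
    print(f"{ghz:.1f} GHz -> {dynamic_power(f) / dynamic_power(F_BASE):.2f}x baseline power")

Going from 4.0 to 5.0 GHz under these assumptions roughly doubles power draw for a 25% clock increase.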

The bottlenecks are still RAM, storage, and video hardware. We're reaching a point of diminishing returns, so pushing clocks further just isn't worth it right now.

Because Dennard scaling is over. Shrinking transistors used to let you raise clocks at constant power density; once leakage current killed that around the mid-2000s, frequency flatlined.

How about 7 nm? Can we get over 5 GHz with that?

In theory, sure, but in practice it's better to just cram as many transistors as you can onto the die and run the chip at efficient power levels.

Because going from 4 cores to 6 cores gives you a more meaningful performance increase than going from 4.5 GHz to 5 GHz.
This is literally the same shit that's been happening since the megahertz wars of the 2000s. Frequency isn't everything.
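
Back-of-the-envelope comparison via Amdahl's law, assuming a hypothetical workload that is 90% parallelizable:

# Amdahl's law: speedup = 1 / ((1 - p) + p / n), p = parallel fraction.
# p = 0.9 is an assumed, illustrative workload mix.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9
four, six = amdahl_speedup(p, 4), amdahl_speedup(p, 6)
print(f"4 -> 6 cores:   {six / four:.2f}x")  # ~1.30x
print(f"4.5 -> 5.0 GHz: {5.0 / 4.5:.2f}x")   # ~1.11x, assuming perfect frequency scaling

Under those assumptions the core bump wins, and it doesn't come with the cubic power penalty.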

The problem is that most software is still single-core shit because of lazy programmers.

Light travels at a finite speed, and we're not far from the limit at which it's possible to get a signal from one end of the chip to the other within one clock cycle. You can paper over this a bit by having multiple clock domains. E.g., the cores in AMD's tiny chiplets can each run at high speed, while communication within, and between, the larger I/O die runs on a different, slower clock. But that gets you less than you'd think, and it comes with its own drawbacks.

Light in a vacuum travels just under a foot in a nanosecond, which is one cycle at 1 GHz. So if you want to run your chip at 10 GHz, nothing in that clock domain can be very far from anything else: light covers only a bit more than an inch per cycle. Remember that you need some slack to let signals settle each cycle and to allow for manufacturing imprecision, and that electrical signals traveling in a conductor move somewhat slower than light in a vacuum.
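
The arithmetic, as a quick Python sketch. Vacuum light speed is the hard upper bound; real on-chip signals are slower:

C_VACUUM = 299_792_458.0  # speed of light in m/s

def max_distance_per_cycle(freq_hz):
    """Farthest a signal could possibly travel in one clock cycle."""
    return C_VACUUM / freq_hz

for ghz in (1, 5, 10):
    d_m = max_distance_per_cycle(ghz * 1e9)
    print(f"{ghz} GHz: {d_m * 100:.1f} cm ({d_m / 0.0254:.1f} in) per cycle")

That prints about 11.8 inches at 1 GHz (the "just under a foot") and about 1.2 inches at 10 GHz, before any settling margin is subtracted.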

>The problem is that most software is still single-core shit because of lazy programmers.

t. someone who doesn't know how to program

Attached: CyowSGh.jpg (768x960, 72K)

If the speed of light is the maximum speed, why don't we just make light go faster?

Europoor Discovers MHz Myth: The Thread

Why tf is that graph not being updated anymore? Are they trying to hide something?

Do you people even know what parallel computing is? By the way, AMD will btfo intlel in the coming years.

Asynchronous circuits will make a comeback, just you wait.

It was true a decade ago.

Have fun using decade-old software and hardware, then.

t. plays Dwarf Fortress all day every day and thinks companies offering more than 2 cores are detrimental to his niche use case

It is being updated, though:
cpubenchmark.net/market_share.html

Attached: moar.jpg (800x554, 128K)

>The problem is that most software is still single-core shit because of lazy programmers.
No, most shit is single-core because there's no reason to parallelize everything. No one is going to write a calculator that uses 8 threads; there's absolutely no reason to. If you're talking about games, that's simply not true anymore thanks to shit like DX12 and Vulkan.

Not to mention there are literally hundreds of small processes running in the background that only run for a fraction of a second to a few seconds. Multithreading those doesn't help at all, but having extra CPUs/cores does. This is why workstations have had multiple CPUs since literally the 486 days. But most of /g/ has no idea.
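
A minimal sketch of why, assuming thread spawn/join cost dwarfs a sub-millisecond task (exact numbers vary by OS and interpreter):

import threading, time

def tiny_task():
    # Stand-in for a short background job, ~50 microseconds of busy work.
    end = time.perf_counter() + 50e-6
    while time.perf_counter() < end:
        pass

# Run the task inline.
t0 = time.perf_counter()
tiny_task()
inline = time.perf_counter() - t0

# Run the same task on a freshly spawned thread.
t0 = time.perf_counter()
th = threading.Thread(target=tiny_task)
th.start()
th.join()
threaded = time.perf_counter() - t0

print(f"inline:   {inline * 1e6:.0f} us")
print(f"threaded: {threaded * 1e6:.0f} us  (spawn/join overhead included)")
# Extra cores still pay off here: the OS scheduler spreads many *separate*
# short-lived processes across them without any program changes.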

Because going faster will rip the space-time continuum and you'll end up fucking your dog.

oh fuck, I didn't even consider that
does the Pentagon know?

makes sense, donnie darko foresaw this in the underrated sci-fi/comedy movie "The Salvation", starring Nickelback and Danish actor Mads Mikkelsen as Jon Jensen.

I'm not a robot

Attached: two cats.jpg (2043x1560, 854K)

P = I²R