Can we actually speed up single thread performance with 5nm?

or is it just another 50 cores?

Attached: 1534076267857.png (656x286, 187K)

You can push clocks higher, potentially, if that's what you're asking.

lol shill thread about this every week. Face it: TSMC's 5nm is behind Intel's 10nm in maturity, yield AND performance.

show them Chaim!

Attached: 1528342425631.jpg (626x657, 81K)

for Applefags, yes

>can we speed up single threaded performance

Yeah, by improving IPC and writing better software, instead of running headlong into the physical limitations of the semiconductor material, which is what Intel did.

50 cores @ low clocks with well-written, optimized code > 2 cores @ 20GHz running shoddily written, poorly optimized code. Those 50 cores @ low clocks would end up using something like 1/16th the power of the 2 cores @ 20GHz, which means you can add more of them to the system and scale out within the same power/heat envelope as the competing config.

But how would that help gaming?

The issue is that moar cores can nearly double aggregate perf/W by dropping clocks, while the same number of cores in a shrunk design might clock 15-25% faster if you're lucky. A lot of clock limits come from things that don't magically get better with smaller transistors, such as wire capacitance and resistance, which waste power and slow signal propagation.
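To put rough numbers on that (a back-of-the-envelope sketch in Python; the cubic power scaling and every constant are assumptions for illustration, not measurements):

# Dynamic power is roughly P ~ C * V^2 * f, and near the top of the
# frequency curve voltage has to rise along with clock, so power grows
# roughly with f^3. Every figure below is made up for illustration.

def relative_power(freq_ghz, base_ghz=3.0):
    # Per-core dynamic power relative to one core at base_ghz (V ~ f assumed).
    return (freq_ghz / base_ghz) ** 3

def perf_per_watt(cores, freq_ghz):
    throughput = cores * freq_ghz              # idealized: perf ~ cores * clock
    power = cores * relative_power(freq_ghz)   # sum of per-core power
    return throughput / power

print(perf_per_watt(50, 2.0))  # many slow cores: ~6.8
print(perf_per_watt(2, 5.0))   # few fast cores:  ~1.1

Under those (generous) assumptions the slow-and-wide config comes out around 6x better in perf/W, which is the whole pitch.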

Until suddenly 49 cores are waiting on one core to finish some entirely serial code
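That's Amdahl's law. To put a number on it (a minimal sketch; the 95% parallel figure is just an example):

# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction of the work and n the core count.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 50, 1000):
    print(n, round(amdahl_speedup(0.95, n), 2))
# -> 2 1.9, 8 5.93, 50 14.49, 1000 19.63
# Even with 95% of the work parallel, the ceiling is 1 / 0.05 = 20x, forever.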

I want to see DDR5 and mobos in 2019.

Attached: google.com.jpg (228x221, 12K)

Nice1 Shlomo!

>I want to pay more in 2019.

With a die shrink? To an extent. In general, though, no matter how fast you make the processor itself you'll still be limited by how long it takes to access memory.

Also when it comes to number crunching there's a good argument for replacing IEEE floats with something else that would be faster and more accurate.
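To illustrate the accuracy half of that claim (plain Python; the post doesn't name the replacement format, so none is assumed here):

# Rounding error accumulates in naive IEEE-754 summation.
import math

xs = [0.1] * 10_000_000
naive = sum(xs)           # left-to-right summation, error compounds
accurate = math.fsum(xs)  # correctly rounded sum of the same values

print(naive)               # ~999999.9998, visibly drifted off 1000000
print(accurate)            # ~1000000.0000000002, since 0.1 isn't exactly representable
print(naive == 1_000_000)  # False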

>Build a PC
>Gamerbro tries to help
>Nah dude you want this graphics card here
>Nah dude RGB RAM
>Dual Xeon build with integrated graphics and an Intel NUC Hackintosh
>hfw

With a shrink alone, no, but a shrink generally decreases power draw, allowing you to increase the clock and/or expand the core (like AMD making the FPU 2x wider in Zen 2) without turning the CPU into an expensive-to-run space heater.

Looks like you're in the wrong timeline Stein.

I wouldn't expect it on consumer platforms until 2020 at the earliest.
Which is good for AMD, because that's as far as they promised to keep supporting AM4.

moar's law has a limit

>we
Do you work in the semiconductor industry?

>behind Intel's 10nm
Intel's what?

Cores don't matter
Transistor process doesn't matter
IPC doesn't matter
Power usage doesn't matter
DDR5 doesn't matter
FPU doesn't matter

Attached: 1539398556114.png (500x600, 197K)

Ah yes, Intel's 10nm being slower than their own 14nm CPUs lmfao

*COPE*

Why isn't the transistor density the only relevant term used by tech publications?
Why do they keep perpetuating "10nm" when it means absolutely nothing to the consumer and doesn't refer to anything physical to begin with?
What stops Intel from just saying "introducing 1nm chips in 2019" and then going "it's just marketing lol"?

IBM/GloFo/Samsung/TSMC have been staying true to agreed-upon definitions based on the half-pitch scaling that the tool makers outline. Intel had better geometric scaling than everyone in the industry, denser SRAM, even got to FinFET before anyone else, so they were standing above everyone. Then they fucked up. Now the rest of the industry is ahead of them even in SRAM density.
If Intel tried to pull some bullshit like that then everyone would cry foul, and it'd only further sour them in the eyes of investors, who still want a major shake-up of management. The guy at the top is going to be out in a couple of years.

Attached: 1514939307626s.jpg (224x250, 8K)

True, the only difference is that TSMC has a real process while Intel has a fantasy. But yeah, other than that you're completely right, chap.

I'd say it's too frequent a cause of confusion for consumers. The EU should just tell them what counts as acceptable marketing and what doesn't.

ASML establishing node definitions is probably the only thing keeping foundries in check. Back when we were still making planar gates on the high end, there was still a ton of nuance across the various processes. TSMC's 28nm HKMG process produced 33nm-wide gates. GlobalFoundries had a 28nm node that created 25nm-wide gates. This is due to the difference between "gate first" and "gate last" approaches to how these structures are actually patterned and built. Despite this, they are both 28nm-class nodes fitting perfectly in line with ASML's definitions.

The consumer doesn't need to know anything about process nodes at all; this is something only enthusiasts care about. The problem is that most enthusiasts themselves don't bother to learn anything beyond simple pop-sci review tier.

This fanboy shit is ridiculous. I don't mind either of them if they make a better CPU; my only dealbreaker is the price, so when the time to upgrade comes I usually end up buying AMD because it's in my price range.
So if they figure out whatever those nanometers even mean and can bring out a reasonable CPU with reasonable power consumption at a decent price, I'll buy it, but until they tick all those boxes they're irrelevant to me.
My dream CPU would be something that doesn't require a big fan, or a fan at all; 7W would be fine. Miniaturize it, and if users need more graphics they can just use PCIe slots or USB-C.

>Intel's 10nm

They make 10nm?

They're trying really hard lul

>8 Zen cores in a 72mm^2 chiplet

Attached: 1533736321233.gif (494x428, 2.9M)

5GHz is about the limit for production (overclocking is limited to around 5.3GHz).

>5nm
>vs 10nm
TSMC's 7nm is comparable to Intel's 10nm. 5nm would be a fair bit ahead of Intel.

Do you think the Chinese government steals R&D data from the Americans and hands it to TSMC?

No. TSMC is in Taiwan. The CPC wishes they had any control over it.

>My dream CPU would be something that doesn't require a big fan, or a fan at all; 7W would be fine
Intel got it

That's not how it works.

Every chip maker uses the same machinery made by ASML in The Netherlands.
They also get a lot of aid from ASML to set up their processes as part of the sale.
So ASML knows what everybody is doing, and if there are similarities they'll be obvious to them.

TSMC also isn't doing anything cutting-edge; they're actually using older machinery, a conservative strategy that happens to be paying off because everybody else is having trouble with the latest-generation machinery.

>falling for such obvious bait

>5nm

And checked

Don't the Japanese also make the same machines?

It's not about nm, but "performance per space".
It's said that for every extra instruction per cycle, you have to double the size of the CPU.
Like, a CPU that executes 5 instructions per cycle is literally twice as big as one that executes 4, and one that executes 6 is twice as big as the one that does 5, and so forth.
So they choose to make two CPUs that execute 4 instructions per cycle each, instead of one that does 5, because 4 + 4 = 8 (as long as the programmer can take advantage of it).
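Taking that doubling rule at face value (it's a rule of thumb; real cores don't scale this cleanly), the arithmetic works out like this:

# Rule of thumb: each extra instruction per cycle doubles core area.

def core_area(width, base_width=4, base_area=1.0):
    # Area doubles for every unit of issue width beyond the baseline.
    return base_area * 2 ** (width - base_width)

one_wide   = {"area": core_area(5),     "peak_ipc": 5}  # one 5-wide core
two_narrow = {"area": 2 * core_area(4), "peak_ipc": 8}  # two 4-wide cores

print(one_wide)    # {'area': 2.0, 'peak_ipc': 5}
print(two_narrow)  # {'area': 2.0, 'peak_ipc': 8}

Same silicon budget, 60% more peak throughput, but only if the workload actually splits across cores.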

The thing that will really speed things up is faster memory. On a large scale there's only so much that can be realistically done by "optimizing" code.
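You can see the memory wall even from Python (timings are machine-dependent and interpreter overhead blunts the effect; this is an illustration, not a benchmark):

# Same total work, different access pattern: random order defeats the caches.
import random
import time

N = 10_000_000
data = list(range(N))
order = list(range(N))
random.shuffle(order)

t0 = time.perf_counter()
s_seq = sum(data[i] for i in range(N))  # sequential, cache-friendly
t1 = time.perf_counter()
s_rnd = sum(data[i] for i in order)     # random walk, cache-hostile
t2 = time.perf_counter()

print(s_seq == s_rnd)    # True: identical work was done
print(t1 - t0, t2 - t1)  # the random-order pass runs noticeably slower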

I think Nikon and Canon still make a few, yes, but they are nowhere close in resolution or speed.
ASML's market share is well above 95%.

ASML is definitely the only one making EUV machinery, which I think is pretty much a requirement to go below 7nm.
EUV is an engineering nightmare already decades in the making; nobody will copy it any time soon.

No need, we're going to speed things up with graphene (THz possible).

Let's all laugh at the Intel fags.

Attached: 1541348227681.jpg (595x801, 264K)

delid dis

Why don't we just build gigantic cores?

Because in the same space you could fit many, many, many, many cores with an aggregate theoretical speed a lot higher than the giant core's.
The problem is that the logic gets exponentially more complex with more simultaneous instructions, because you have to reorder and manage them so the instructions don't conflict, while keeping up the illusion to the program that all its instructions ran in order at the other end of the pipe.

There is another wacky solution to this problem, which is what Intel tried with the Itanium: make each instruction VERY complex, so even if you're only executing like 2 instructions per cycle, each one is worth 10 x86 instructions.

But good fucking luck writing a compiler that can deal with this shit well.

Alright then, why don't we build a hundred tiny cores and link them in serial? It would bring some nice yields.

You would need a big interposer in the middle to let them communicate quickly.
And maybe make it in an H shape, just to be sure people understand the chip is actually Huge.

They pay me so I show up. If AMD paid me, I'd work for them instead.