Intel is going to catch up t-

eetasia.com/news/article/18100502-tsmc-to-start-5nm-production-in-april
>intel is going to catch up t-

Attached: tsmc.png (1827x1147, 2.28M)

Other urls found in this thread:

semiwiki.com/forum/content/7544-7nm-5nm-3nm-logic-current-projected-processes.html
youtube.com/watch?v=gg1t_nGaDLs
youtube.com/watch?v=ZzL9K3LpF8o

Irrelevant.

TSMC's 5nm is a generation behind Intel's 10nm. It may be called 5nm, but that's just a marketing gimmick.

Yeah, Intel is irrelevant. They need to pull an AMD and spin off their fabs to some chinks or arabs with more money than sense. The future is fabless.

cope

intel 7nm EUV is better

AMD is going to gas those Jews.

Intel 7nm doesn't exist.

See Intel 10nm is trash

>Intel
>7nm
What

>Intel
>EUV
Nigga...

Attached: free him 2.png (266x243, 87K)

>we are proud to announce our Intel Core 7nm processors Lagoon Lake for Q3-2031

Can someone give me a quick rundown? Who is TSMC? They make AMD's chips? Who makes Intel's? Why is Intel so far behind?

Intel makes their own chips. They're far behind because they like to spend time sabotaging competition instead of actually improving their products.

taiwan company of semiconductor for other companies aka foundry, founded by ex Texas Instruments worker PhD MIT and the taiwan government.

Yes: AMD, Nvidia, Apple, Qualcomm, Bitcoin ASICs...

Intel own Fabs themselves

Intel want best node process using old machines, just fuck up and now full panic

it's actually 3 generations behind intel's 14nm+++++++++++

cringe intel shill post

mommy give me zen2 mommy

Which shithole are you from to write like that?

Intel was supposed to have 10nm products in stores in like 2016 or something but they've had trouble actually making it work. Latest forecast seems to be 2020 in stores ROFL

He types like an Aliexpress chink

>Quick rundown.
Rundown, yeah. Quick, no.

>Who is TSMC? Whose chips do they make?
TSMC: Taiwan Semiconductor Manufacturing Company. They are exclusively a foundry: they fab chips for anyone who will pay, and because they don't sell designs of their own, they avoid competing with their customers.

>Who makes Intel's chips?
Intel obviously designs, but also fabs their own chips.

>Why is Intel so far behind?
This gets more complicated. Far behind based on what metric?

>Geometric Scaling:
For most of recent history, Intel has been the only company to deliver scaling in pitch that corresponds to the named node. Note that technology nodes are traditionally named after the DRAM half-pitch and NOT the minimum channel length. Also note that deviations away from physical sizes implied by the technology node have LARGE consequences on transistor operation. Current technology operates on atomic length scales, which has a huge influence on every aspect of design, fabrication, and characterization. This means that comparison of technology nodes that operate on physically different dimensions, but are given the same node title, is somewhat misleading.

Attached: Goy Box.gif (267x200, 168K)
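To put rough numbers on the node-naming point above: logic density tracks the product of contacted poly pitch (CPP) and minimum metal pitch (MMP), not the name on the slide. A back-of-envelope Python sketch; the pitch values are approximate figures that have circulated publicly, so treat them as illustrative rather than official specs.

# Rough standard-cell area proxy: a cell is ~ (tracks x MMP) tall and
# ~ (transistor pitches x CPP) wide, so relative density ~ 1 / (CPP * MMP).
nodes = {
    "Intel 14nm": (70, 52),  # (CPP nm, MMP nm), approximate public figures
    "TSMC 16nm":  (90, 64),
    "Intel 10nm": (54, 36),
    "TSMC 7nm":   (57, 40),
}

base = 70 * 52  # normalize everything to Intel 14nm
for name, (cpp, mmp) in nodes.items():
    print(f"{name}: ~{base / (cpp * mmp):.2f}x Intel 14nm density")

# The names say 14 vs 16 and 10 vs 7, but the computed densities pair up
# across vendors, which is exactly why same-name comparisons mislead.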

you have to be stupid to believe this, but in just a year people will know who has the tech, and who's trying to damage control.

>Performance:
Intel has had a long time to optimize their 14 nm node. To their credit, the fact that they've been able to squeeze so much performance out of it is extremely impressive. That said, dimensional scaling typically brings a few benefits, chief among them being reductions in device power dissipation (ideally). To a lesser extent, you get areal device density, but because device densities are so high now, most chips feature complex power and clock gating that only turns on the parts of the die that are needed, as they're needed. Anyway, with reductions in device power dissipation, you can begin to reduce your operating voltage or run your clocks incrementally higher. Note that due to the circuitry required to multiply your base clock, lowering operating voltage and increasing clock speed are fundamentally at odds. The industry has adopted a more-or-less stay-the-course attitude towards operating voltage, for various reasons, while attempting to return to incrementally increased clock speeds. Thus, whoever can dimensionally scale first, regardless of whether it's true pitch scaling, will still likely have a performance advantage from being able to run cooler and/or at slightly higher clock frequencies. Beyond this, performance begins to bleed into uArch optimizations, which is a whole other area.
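First-order numbers for that voltage/frequency trade-off, as a Python sketch of the textbook dynamic power relation P = a*C*V^2*f. The 0.7x capacitance shrink, the 0.15 activity factor, and the sample voltage are illustrative assumptions, not anyone's process data.

def dyn_power(c, v, f, a=0.15):
    # Textbook CMOS dynamic power: activity * capacitance * V^2 * frequency.
    return a * c * v * v * f

c_old, c_new = 1.0, 0.7  # a full-node shrink cuts switched capacitance ~0.7x
p_old = dyn_power(c_old, v=1.00, f=4.0e9)

# Spend the shrink on power at the same clock:
print(dyn_power(c_new, v=1.00, f=4.0e9) / p_old)  # ~0.70x power
# Or drop voltage too and pocket even more:
print(dyn_power(c_new, v=0.85, f=4.0e9) / p_old)  # ~0.51x power

Either way the scaled part runs cooler at the same clock, or trades the headroom back for frequency, which is the advantage described above.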

>Yield:
This is what has crippled Intel's 10 nm node, and it has only recently gotten back on track thanks to internal restructuring and a hefty diversion of resources. They fucked up tremendously with their 10 nm technology group. Their 7 nm looked healthier than their 10 nm for a while. Think about that.
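For a feel of why yield is the killer, here's a sketch using the simplest (Poisson) die-yield model; real fabs fit Murphy or negative-binomial models, and the defect densities below are made up for illustration.

import math

def poisson_yield(die_area_mm2, d0_per_cm2):
    # Fraction of good die: Y = exp(-A * D0), with A converted to cm^2.
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

# A ~100 mm^2 mobile-class die vs. a ~700 mm^2 server-class die:
for d0 in (0.1, 0.5, 2.0):  # defects/cm^2: mature, struggling, broken
    print(f"D0={d0}: small die {poisson_yield(100, d0):.0%}, "
          f"big die {poisson_yield(700, d0):.0%}")

At D0 = 0.5 the big die is already down around 3% good parts; a node that can't drive defect density down simply can't ship large die, which is roughly the 10 nm story.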

inb4 shilling
>I give a shit which company makes a buck.
I care about the state of my field, not autistic fanboy dick-riding and e-stating contests.

This is super exciting news. Bringing EUV online at the 5nm node is pretty big; I believe Samsung is still banking on EUV integration for their 7nm lines.
Even with partial EUV on just the critical layers at 7nm, the cost reduction will be tremendous.
Flat out, without EUV no one would be scaling at all; immersion multipatterning at these pitches is totally broken tech that will never yield, and the complexity is a total fucking mess.
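Rough arithmetic on why EUV on the critical layers pays off, as a toy Python sketch. The pass counts and the 3x-cost-per-EUV-exposure figure are illustrative guesses, not any fab's actual flow or pricing.

# One tight-pitch metal layer: LE^4-style quad patterning needs ~4
# litho+etch cycles (plus extra masks and overlay steps); EUV needs one.
DUV_PASSES = 4
EUV_PASSES = 1
EUV_COST_MULT = 3.0  # assume one EUV exposure costs ~3x one DUV pass

duv_cost = DUV_PASSES * 1.0
euv_cost = EUV_PASSES * EUV_COST_MULT
print(f"relative layer cost: DUV multipatterning {duv_cost:.1f}, EUV {euv_cost:.1f}")

# Even charging 3x per exposure, EUV wins on cost, and it also removes the
# overlay error stack-up that murders yield on multipatterned layers.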

This is going to pump fresh lifeblood into the GPU market, the CPU market for AMD, and the plethora of ARM SoC vendors who make high-performance chips.

Attached: 18.jpg (720x405, 140K)

Attached: Slide5.jpg (1208x680, 91K)

would you rather buy Intel or AMD stock?

semiwiki.com/forum/content/7544-7nm-5nm-3nm-logic-current-projected-processes.html

Attached: Slide6.jpg (1229x691, 89K)

Look towards the equipment manufacturers/infrastructure businesses, i.e., the technology enablers that don't get as much of the glory as Intel/TSMC/Samsung. They're going to be continually pressed as we move closer to the physical limits of scaling, and every company will need what the equipment suppliers can offer. Do your own DD, but an example is a company like Applied Materials.

>This could be a [GAA?] FinFET [lolwut?] or Horizontal Nano Sheet (HNS), we believe HNS provide a better scaling path to the required dimensions.
>Getting enough drive current with fuck all inversion volume.
>Stacking GAA channels is easier than established vertical integration technology.
shiggy diggy doo

Intel's 7nm isn't going to be anywhere in sight in 2020.

>Risk production

Nonexistent line, total projection based on the author's guesstimations.

At least he is an engineer, not a stock analyst or a web dev.

>arabs
You mean the eternal enemies of the Jews?
Never gonna happen.

Keked really hard lads.

really?

Attached: Intel vs TSMC.jpg (1054x919, 103K)

RIP Global Foundries FinFET lines. We'll never know what could have been.
Maybe we'll eventually see a 7nm FDX emerge from it in a few years.

Poor fuckers.

Attached: Screaming Geometrically.gif (500x280, 1.08M)

intel shills are sweating

It's just copypasta bait at this point. The best way to get replies in a thread is to post something controversial that makes autists mad.

Lol Intel are gonna be 4 generations behind in fabrication when 3nm kicks off in 202x

fake news.

mainstream gates are at most 300nm.

I think AMD fans are jumping the gun.
7nm isn't here yet and we haven't seen any real world tests.
screaming about how Intel is finished, but not realizing AMD has yet to take back any meaningful market share.
Especially in the server space, they only took 2% when offering products at 1/4 the cost of Intel rigs.
I'm not saying AMD can't win here, but they are still a long way from major adoption.

-a guy who's used a 2600k since launch

IT'S OVER INTLEL FABS ARE FINISHED

Attached: 1525353398975.png (1066x600, 429K)

That's the old Intel 10nm, they gutted it

Attached: 1535561310892.png (731x918, 65K)

Vega 7nm first

A change as big as AMD taking majority share of the server space will literally take several years.

>start thread about AMD/Intel/process nodes
>JIDF appears in mere seconds
hello Chaim, you really showed those stupid goyim.

Attached: 1492617271997.jpg (1000x1000, 119K)

shut up goy

>I think AMD fans are jumping the gun.
>7nm isn't here yet and we haven't seen any real world tests.
>screaming about how Intel is finished, but not realizing AMD has yet to take back any meaningful market share.
>Especially in the server space, they only took 2% when offering products at 1/4 the cost of Intel rigs.
>I'm not saying AMD can't win here, but they are still a long way from major adoption.
>-a guy who's used a 2600k since launch
Mate, the difference between 16, 14, and 12 nm is profound.
It would not matter if Intel had 3nm right now; their CPU design is outdated, power-hungry crap that scales terribly past 4-6 cores.

>jumping the gun
>buy a power-hungry, security-hole-riddled, ancient CPU design instead.

q1 2019

Jew fears the mighty chinkman.

it's not fair intelbros....

Attached: 1534326485713.png (1824x1026, 431K)

>TSMC's 5nm is a generation behind Intel's 10nm.
False. It's probably about two or three generations ahead of Intel's 10nm since Intel's 10nm fucking sucks ass compared to its 14nm.

TSMC's 5nm will prolly be comparable to incel's 7nm, so they'll have a substantial lead since intel won't launch 10nm before may 2019

(NEW!) 5NM DOESN'T MATTER!

You know 16nm, 14nm, and 12nm are all the same, right?
They were renamed for marketing purposes, but are all the same node.
16nm wasn't much of a jump from the last one either.
I don't believe these 7nm 5.0 GHz base clock at 50W claims.

This. It's thinner than the shit crust on your ass hairs.

>Intel 7nm
youtube.com/watch?v=gg1t_nGaDLs

Lol just like Intel's "14nm" stayed the same despite going through lots of changes for 4+ years now

TSMC will literally start producing 5nm chips in 2021

Nobody said 50W

>For most of recent history, Intel has been the only company to deliver scaling in pitch that corresponds to the named node.
Actually they stopped doing that around 32nm.

SHUT UP YOU STUPID GOY

>intel 7nm 2020
AHAHAHAHAHAHAHAHAAHH RIGHT

Intel is 20 years behind transistor technology

Is there any chip currently in stores being made with EUV?

Not yet, but some are taped out.

Attached: intel just.jpg (921x865, 147K)

tsmc is already manufacturing 7nm for companies other than AMD
and there are engineering samples for AMD on 7nm going around too
it also takes years for corporate to start switching their servers en masse; we'll see about that last point when Intel and AMD release their FY2018 results

Do we have any solid proof Rome will pack 64 cores?

Not straight from AMD, but CanardPC, the French outlet that published the earliest legit leaked Ryzen review, published a snippet saying AMD's Rome had up to 64 cores.
They also leaked that an Intel chip had a Radeon GPU a full year before that part was revealed. I can't remember them ever being wrong about something like this.
So odds are that AMD's EPYC2 lineup will be topped by a 64c/128t SKU.

delete this

Tfw advanced ovens kill the kikes

>arabs
>eternal enemies of the Jews
lol no

I'll take things that don't exist for $100.

>I care about the state of my field
Can you give me a quick rundown of ARM/RISC vs x86/CISC if you know your stuff? Afaik there is nothing stopping companies from designing high-performance ARM chips, and there isn't anything inherently preventing RISC from being used for high-performance applications (at least in non-server applications; I have no idea what matters in servers). Wondering if we will see ARM notebooks and what improvements they may offer.

Not him, but literally the only reasons are backwards compatibility and licensing/business concerns.
Intel CPUs actually run a RISC-like microinstruction set internally, and they need additional die space (which implies more power consumption and increased temps) to convert x86 code to that internal reduced instruction set.

The only possible performance advantage of using x86 that I can think about is more efficient usage of cache and RAM.
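A toy model of that load/store split, as a Python sketch. This is a cartoon of the conversion described above, not Intel's actual decoder or µop encoding.

# Cartoon decoder: a non-MOV instruction with a memory source gets split
# into a separate load µop plus a register-register ALU µop.
def decode(instr):
    op, dst, src = instr.replace(",", "").split()
    if src.startswith("[") and op != "mov":
        return [f"load tmp0 {src}",   # fetch the memory operand first
                f"{op} {dst} tmp0"]   # then a pure register-register op
    return [f"{op} {dst} {src}"]      # register/immediate source: one µop

print(decode("add rax, [rbx+8]"))  # ['load tmp0 [rbx+8]', 'add rax tmp0']
print(decode("add rax, rcx"))      # ['add rax rcx']

# In real front ends the pair can stay fused as a single µop and just be
# dispatched twice, which is the µop fusion mentioned further down.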

Buy both; no matter what happens, you win. I did and I'm loving it. It ain't like AMD or Intel will just close up shop anytime soon. Hang on to that stock and let the good times roll for a good 15 years. The payout at the end of it will be worth it.

o ye of little faith, who do you think saved their 10 nm?

By regression. Their 10nm has the same performance as 14nm++

It was originally supposed to be better, but due to competition from AMD, they were forced to scrap their research and go back a few steps.

The performance gains on Intel's iterated 14 nm node are extremely impressive, likely more so than would have been achieved using the initial pass of traditional feature size scaling alone absent the subsequent uArch optimizations.

In terms of D, every major player that I'm aware of works in parallel on subsequent nodes, ideally taking learning forward from the previous node. If that methodology breaks down, you have issues. Given the complexity in transistor design and fabrication at these scales, not keeping your baseline processes is shooting yourself in the foot.

That said, these companies handle the real R (prior to ramp) in a distinctly separate way from the D (optimization) that's done ramping up to high volume (the domain of yield/production groups at TSMC/Intel/Samsung). A good example is FinFET technology. It first broke onto the scene at the very start of the 2000s, and wasn't put into a mainstream product by any major player until a decade later. The next likely example is GAA nano(wire/sheet) technology, which has seen highly visible R&D over the last several years and will likely follow a similar timeline.

We're not here to discuss the merits of 14nm, but rather the non-existence of the 5nm or 7nm or whatever that Intel fanboys want to believe exists.

Intel can't even bring together a proper 10nm right now; they have to resort to downgrading their initial plans in order to speed up mass manufacturing, for nothing more than PR gains.

What's the physical size limit?

Debatable. Some materials scientists a few years back presented the idea of quantum junctions. They think they can shift the orbits of subatomic particles and measure the changes to perform logic. If they can actually accomplish that, then the nanometer scale will be as irrelevant as the micron scale is today.

Carbon nanotubes have demonstrated 0.4 nm capabilities. So that's atomic size.

Atomic manufacturing is slated to come to fruition ~10-20 years from now. We shall see.

The point made earlier was that straight node-for-node comparison is meaningless when nobody uses ITRS-like baselines (be it Intel, Samsung or TSMC) for a node. That's compounded by the differences in the way companies handle their uArch optimizations. So, holistically, better comparisons are Intel's 14 nm vs. (TSMC's and Samsung's) 10 nm, or Intel's 10 nm vs. (TSMC's and Samsung's) 7 nm.

>Intel can't even bring together a proper 10 nm right now; they have to resort to downgrading their initial plans in order to speed up [the ramp to high volume], for nothing more than PR gains.
PR gains are somewhat incidental compared to having acceptable yields and functioning die. Those are both process-related, and given how far 14 nm is into its optimization cycle, it's no surprise that process improvement alone can no longer outperform the previous node. 10 nm would need simultaneous process and uArch optimization. Good luck there (they're fucked Jim).

As said, debatable, but regardless of the mechanism you use for logical operations (charge or spin, etc.), you're fundamentally limited to manipulating (fabrication) on the atomic scale. We can move individual atoms around just fine, but it is a cumbersome and lengthy process, so from a consumer perspective, devices that operate on large-enough scales to be manufactured in high volume are preferred. Current fin dimensions are approximately ~8 nm x ~40 nm x channel length (nm). Scaling below this is resulting in increasing operational issues due to a combination of reasons (random process variation and its effect on device behavior, fundamental device physics, etc.). I imagine that the transition to GAA devices will likely keep similar feature sizes, but I'll have to dig up the IMEC and IBM papers to check on their dimensions again.
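To make "atomic length scales" concrete, a trivial Python calculation; the 8 nm fin width is from the post above, and 0.543 nm is the standard silicon lattice constant.

SI_LATTICE_NM = 0.543  # silicon cubic lattice constant
fin_width_nm = 8.0
cells = fin_width_nm / SI_LATTICE_NM
print(f"an {fin_width_nm:.0f} nm fin is only ~{cells:.0f} Si unit cells wide")

# ~15 unit cells across: losing even a couple of atomic layers to etch or
# oxidation variation is a percent-level change in channel width, which is
# why random process variation dominates device behavior at these dimensions.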

The problem with carbon nanotubes and other 2D materials is their lack of charge density/inversion volume, meaning they'd have very poor drive current, which could make fanout infeasible. That's one reason newer research is looking into stacked nano(wire/sheet)s. Researchers have looked into stacked 2D materials, but there are other issues as well. One is that 2D materials aren't very amenable to high-volume manufacturing (they're too random to control during fabrication). Another is that a lot of the fundamental device research on 2D materials isn't anywhere near as fleshed out as it is for new geometries with existing materials. A final issue is that 2D materials have nowhere near the tolerance or robustness of traditional materials when it comes to fabrication, making them much more difficult to work with and also impacting device behavior.
TL;DR: They're a meme. A Nobel Prize-winning meme, but a meme nonetheless. At least from an industry point of view.

Ramiel remake a shit.

>The performance gains on Intel's iterated 14 nm node are extremely impressive
Not nearly as impressive when you realise that they achieved this by adding more cores and burning more power.

The real metric should always be performance-per-watt, as Intel shills all too clearly and repeatedly pointed out every year before Ryzen's arrival.

In that respect, the improvement is marginal and falls within expectations, hardly "impressive"

>Remakes are shit.
Fine, fine. A debate unto itself. But where do you stand on the most important issue?

>performance-per-watt
Agreed. But then you get into the issue of "Performance doing what? For what task? Is that task optimized?" Personally, I've been over it all for a long time now. We're at a state in hardware that is so far beyond what the average consumer will ever need that it's becoming irrelevant who is on what node, ramping up to what, launching what. I've felt that way for at least a decade. People are so caught up in the race to the bottom that they've forgotten that nature is there waiting for us, with her smug grin, ready to smack us with a healthy dose of reality when she says "What will you do now?" Exciting and terrifying all at once.

Attached: Are You Stupid.jpg (620x508, 79K)

The difference between the "public" x86 instructions and the internal instructions isn't actually all that great. Yes, there are quite a few instructions that are microcoded, but those are mostly just the instructions that no one actually uses anyway (like LOOP, or the BCD instructions).

For instructions that are actually used, the greatest difference between the internal operations and the x86 instructions is really just that the fetching/storing of memory operands in non-MOV instructions is split off into separate internal instructions, making it a load/store architecture. But even then, in modern designs, the µops themselves actually still encode the complete x86 instruction; it's just dispatched twice (what Intel calls "µop fusion").

The complexity of the decoders isn't so much for "conversion" between x86 instructions and internal operations; it's simply the cost of decoding x86 at all.

>there isn't something inherently preventing RISC from being used for high performance applications
That is certainly true, there just isn't enough interest in it, because 1) there isn't all that much to gain from it and 2) backwards compatibility. AMD had in fact initially announced their K12 ARM implementation as a sister architecture to Zen, but that seems to have been put on ice for now.
>Wondering if we will see ARM notebooks and what improvements they may offer.
There already are such laptops. For example:
youtube.com/watch?v=ZzL9K3LpF8o

>Remakes are shit
That's not what I said. I only said that Ramiel remake a shit.
>But where do you stand on the most important issue?
Rei.

Attached: 1348537909962.jpg (2280x3740, 1.97M)

>The real metric should always be performance-per-watt
That depends on the application. I can definitely say for my own home computer usage that I don't care very much about performance-per-watt at all (within reason, of course). If I would care about anything, I would care about absolute performance (though in reality, my i5 2400 is more than enough for all my needs).

Why not both?

Attached: Evagirldansen.gif (544x384, 451K)

Are we just fucked and have to wait for new materials once we hit 3nm?

Because polygamy is degenerate.

>GF 3LP
RIP. ;_;

INTEL HAS KELLER

AHHHHHH