7NM COMING THIS YEAR

AMD GOES 7NM IN 2018 INTEL 10NM IS DELAYED ABORTED GARBAGE FAILURE FOR ALL ETERNITY

Attached: GF-7nm.png (1356x615, 57K)

>this year
They just released Ryzen+, it's going to be next year

That table is from an article that is almost a year old.

the absolute state of Jow Forums

hothardware.com/news/amd-7nm-zen-2-cpus-sampling-this-year-for-2019-volume-launch

>8C 16T
>16MB L3
JUST

>sampling
what's wrong with it?

So what does 7nm give in terms of benefits?
It requires less power and can fit in a phone?

you can basically choose between higher performance (at the same power as 14nm) or lower power usage (at the same clockspeeds as 14nm)

Thanks

Smaller chips = Lower power usage
Lower power usage = More room for improved clockspeeds
Improved clockspeeds = faster performance
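That chain is just first-order CMOS switching power, P ≈ α·C·V²·f: a shrink lowers capacitance and voltage, and you can spend the savings either on power or on clocks. A rough sketch with made-up numbers (nothing here is a real 14nm or 7nm spec):

```python
# First-order CMOS dynamic power: P = activity * C * V^2 * f.
# Every number below is an illustrative assumption, not a real node spec.

def dynamic_power(c_farads, v_volts, f_hz, activity=0.2):
    """Switching power of a CMOS chip, ignoring leakage."""
    return activity * c_farads * v_volts**2 * f_hz

# Hypothetical "14nm" part: 1 nF effective switched capacitance, 1.2 V, 4 GHz.
p_old = dynamic_power(1e-9, 1.2, 4.0e9)

# A shrink cuts C and lets V drop. Option A: same clock, less power.
p_shrink_low_power = dynamic_power(0.6e-9, 1.0, 4.0e9)

# Option B: spend the savings on clocks until power is back at the old budget.
f_same_power = p_old / (0.2 * 0.6e-9 * 1.0**2)
```

In practice leakage and hotspot density keep the "same power budget" clock far below what this toy arithmetic suggests.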

I came hard
Literally only inbred cucks or Apple users would still buy Intel products in 2018 and beyond

Thanks #2

but apple is jumping ship from intel

Apple hates Intel and Nvidia. They work with the former out of choice, but once Thunderbolt at least gets implemented on AMD they can start considering them, if they're not already deep into working on their own CPUs. There are many small technologies baked into macOS that may make a difference, however, such as QuickSync, which is an Intel technology, along with quite a few other things. It's all just software though, the changes to be made.

>thunderbolt
>amd
Not happening
Apple will make their own new proprietary technology to replace it once they make their own processors

god Jow Forums users are the most braindead people ever

bad goy

>what's wrong with it?
we don't live in 2009

you can't just cram a lot of cache into a cpu and expect it to work better

it's L3, of course you can

You would think an absolute measure, like nanometers, would be fairly easy to map to hard criteria, but marketing and such rarely make it so consistent. I'm not too fond of Intel, but it must be borne in mind that there are major differences between what other foundries are calling "x"nm and how Intel classifies it (i.e. via actual fin pitch, etc).

I haven't looked into it for a while, but last I knew, uncharacteristically, Intel's reported scales were actually the most honest and accurate. Which is why they seem to struggle with scales other foundries appear to be blowing right by.

I'll have to check on this again, but it makes sense, especially given that we're nearing the point where classical mechanics no longer accurately describes the state of a given system. Yet chip sizes seemingly continue to decrease, with little consequence, and the increase in parallel error-control circuits you'd expect is nowhere to be found.

AMD's 7nm won't be as good as Intel's 10nm when it launches

they will have a brief lead, if any at all, and then get washed right back into the garbage bin where they belong.

Hi Brian.

>7NM
That means nothing at this point.

>AMD's 7nm won't be as good as Intel's 10nm when it launches
yeah, it'll be considerably better, since 10nm is slower than 14nm++

>Apple will make their own new proprietary technology
>more dongles

Not sure even applefags will tolerate another standard change this quickly. Maybe in 2020, but not 2019 and definitely not 2018. The average macbook fag literally just fuckin bought a tb mbp.

AMD's 7nm will without a doubt be better than Intel's current 10nm. Just look at the 10nm i3 that doesn't have an iGPU.

>source
>my bull works at intel and thats how i have performance metrics on intels 10nm
>its also why i hate intel so much :(

You're right, those "nm" figures are pretty much useless by now, more a marketing tool than anything else.

>first 10nm cpu that shows up is a 2.2GHz dual core with no igpu
>this means intel's 10nm is really good

based thanksposter

he doesn't know

Attached: 1489160516428.png (800x612, 245K)

Intel's 10nm is denser even compared to IBM's 7nm. The "size" of a node (e.g. 14nm, 10nm, 7nm) is a marketing name; it has no bearing on any of the transistor features. Historically Intel had about 50% shorter gate lengths and a somewhat smaller advantage (~10%) in fin, metal, and gate pitches. GloFo's 7nm will still be slightly larger than Intel's 10nm process. However, GloFo will have far smaller CMOS SRAM feature sizes, which will affect cache patterning. And considering that AMD has been on the back foot in process node and is still competing, the architecture they designed likely has major performance advantages over Intel's.
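A common way to compare nodes while ignoring the marketing names is pitch-based logic density, roughly proportional to 1/(CPP × MMP). A sketch with illustrative pitch numbers (assumed for the example, not the official Intel/GloFo figures):

```python
# Rough cross-foundry comparison: logic density scales inversely with
# contacted-poly-pitch (CPP) times minimum-metal-pitch (MMP).
# Pitch values below are illustrative placeholders, not official specs.

def relative_density(cpp_nm, mmp_nm):
    return 1.0 / (cpp_nm * mmp_nm)

node_a = relative_density(cpp_nm=54, mmp_nm=44)   # hypothetical "10nm-class"
node_b = relative_density(cpp_nm=70, mmp_nm=52)   # hypothetical "14nm-class"
ratio = node_a / node_b                           # node_a ~1.5x denser
```

This is only a first-order heuristic: SRAM cell size, fin pitch, and track height all move the real density numbers around.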

? It's not AMD's process, AMD is fabless, retard. Feature size will be very, very slightly larger. AMD has always been at a node disadvantage and is still competing with Intel right now. Imagine once they're at parity.

2020 gonna be a big year

Also my balls itch

A 7nm chip will use smaller transistors. Smaller means you can fit more on a chip, and more transistors means a more powerful CPU, even at the same clock speed.
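As a toy model of the claim above (throughput ≈ cores × IPC × clock; all numbers invented, not benchmarks):

```python
# Toy throughput model: a bigger transistor budget buys more cores (or wider
# cores with higher IPC), so aggregate performance rises at a fixed clock.
# All figures are invented for illustration.

def giga_instructions_per_sec(cores, ipc, clock_ghz):
    return cores * ipc * clock_ghz

old_chip = giga_instructions_per_sec(cores=4, ipc=1.5, clock_ghz=4.0)  # 24.0
new_chip = giga_instructions_per_sec(cores=8, ipc=1.5, clock_ghz=4.0)  # 48.0
```

Real scaling is worse than linear, of course, since software has to actually use the extra cores.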

Thanks

Attached: indulge the bulge.jpg (576x768, 80K)

Someone post that GloFo 7nm vs 14nm fin height comparison.

t. seething intlel niggerfaggot rajakesh kike incel feminist cucksoy

Attached: 1525362785884.png (552x661, 288K)

All of 'em are "liars"
The naming convention broke away from actual feature size two decades ago, when contacts and wires couldn't be scaled down with the transistors, before companies moved to cobalt and high-k. A 250nm process could still have had a 350nm backend.
The association that categorizes node sizes has been using "equivalent feature size via equivalent electrostatic characteristics" ever since.
What do they base this equivalence on? Nobody knows. A likely guess is they started with whatever the industry-standard litho process (node size) was at the time and then made projections using ASML's and other fab equipment manufacturers' progress with better machines.

Me, I'm looking forward to Samsung's claimed Gate-All-Around 5 or 3nm lithography. GAAs are the absolute pinnacle of transistor design, but the most difficult to build.

>tl;dr
you're a faggot

>bulge
OwO

I'll take the letters BS for $500

No room on the die

Chips are well under 100mm^2

>Improved clockspeeds = faster performance
improved clockspeeds = more heat
FTFY

>
>>Improved clockspeeds = faster performance
>improved clockspeeds = more heat
smaller transistors = smaller die = lower propagation delay and lower drive voltage = less current = higher clock speed = same heat
FTFY
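That chain is essentially textbook Dennard scaling. The idealized arithmetic (which famously stopped holding around the 90-65nm era once leakage took over) looks like this, with every dimension and the voltage scaled by a factor k < 1:

```python
# Idealized Dennard scaling: shrink all dimensions and the supply voltage
# by the same factor k < 1. Under these pre-leakage-era assumptions,
# power density stays constant even as clocks rise.

k = 0.7  # a traditional "full node" shrink

area = k ** 2              # transistor area shrinks quadratically
delay = k                  # gate delay shrinks -> clock can scale by 1/k
voltage = k
capacitance = k

power_per_transistor = capacitance * voltage**2 / delay   # k * k^2 / k = k^2
power_density = power_per_transistor / area               # k^2 / k^2 = 1.0
```

The "same heat" step in the post above is exactly this constant power density, which is the part that no longer holds on modern nodes.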

why am I talking to the niggest of nogs on Jow Forums today?

Thunderbolt is already co-owned by Apple; it was developed by Intel together with Apple. AMD just needs to bring back their retarded XGP project but actually, you know, make it functional and good instead of creating it and then dropping ALL support for it before partners even had a chance to put out products for it.

Attached: _CrZVbIHbuTpZ6FPtghKuHUokeitH7OEJKjK4uffLZU.jpg (727x2045, 171K)

Wtf am I reading

are you retarded?
voltage is not what dictates heat output, you tard.
evidence: Intel's hit 90°C at 1.4V while FX-9590s don't even go past 70°C at 1.55V

Call me when R7 beats i7.
Call me when Threadripper beats i9.
*presses snooze*

the 9590 doesn't go above 70C because they're actually soldered

>voltage is not what bad math I have I'm a retard
entirely different architectures on entirely different fab nodes yadda yadda etc.etc.
short story you're dumb and it hurts

>he only runs one thread at a time
KEK

Do people often use more than 12? I've pushed most of my always-on stuff to my servers.

>not running multiple VMs for your hacking lab
heh, pleb

Didn't we say this about AMD when AMD64 became a thing? Then they managed to make themselves completely irrelevant for a few years thanks to Bulldozer.

Is it really 7nm, or is it only 14nm and they just call it 7nm?

I don't give a damn as neither will affect me at all.

Attached: 1469843979355.png (304x366, 231K)

Source is Intel's own slideshows, you retard. They've admitted that 10nm performance is lower than 14nm++ in its first generation, and is only on track to supersede it by the 2nd generation.
Intel can't lie about that kind of shit to their shareholders, they'd get sued to infinity.

It's 10nm but they call it 7nm.

More like, Athlon was pushing Intel's shit backwards in every single use case, and Intel fucked over AMD by buying out a bunch of OEMs, which resulted in them getting sued for anticompetitive practices. That shit happened over decades. It's not even a conspiracy theory, they lost the fucking cases and had to pay.
Learn a bit of history before you comment, retard.

Apple probably only uses Intel due to contract. They will likely switch to AMD as soon as they can and keep charging premium prices while using half-price CPUs and APUs.

There's precedent, too. They dropped Nvidia a long-ass time ago and use AMD graphics in anything that doesn't use Intel's IGP.

You are correct, however in terms of density other foundries' 7nm process is similar to or slightly better than Intel's 10nm.

You snoozed past the R7 and Threadripper launch, you are already too late.

Except Bulldozer was an objectively shit architecture, and I say this as someone who owned an FX-6100. At least in that particular era, AMD's slide towards irrelevance was completely deserved.

nice trips, but they won't release Zen2 until yields are better in 2019

>P=IV
so more volts, more heat
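P = IV is right as far as it goes, but in a switching CMOS chip the current itself rises with voltage, so heat goes roughly with V². A back-of-envelope for the two voltages argued about upthread, holding everything else fixed (an assumption that does NOT hold across different chips and architectures):

```python
# Quadratic voltage dependence of CMOS switching power: P ~ C * V^2 * f.
# Comparing the two voltages from the FX-vs-Intel argument, with C and f
# held fixed purely for illustration.

v_low, v_high = 1.40, 1.55
heat_ratio = (v_high / v_low) ** 2  # ~23% more heat from voltage alone
```

Which is also why the 1.4V-vs-1.55V temperature comparison doesn't prove much on its own: capacitance, frequency, die area, and the thermal interface all differ between those chips.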

7nm server chips are supposed to come out this year. The Ryzen lineup will be early next year.

GloFo's 7nm is equivalent to Intel's 10nm. AMD will actually end up beating Intel's process with the next release since Intel is still stuck on 14nm.

I said servers. I run VMs on those. Do you still run them on your PC like a wee bab?

>doing multiple things slowly
R7, still behind the 8700k, Threadripper, still behind the i9XE. What are you on about.

>doing multiple things slowly
You do realize that Ryzen crushes the 8700K when properly overclocked, right? The 1800X still loses slightly in games but beats the ever loving fuck out of the i7 in workstation workloads.

If you want to talk about servers, AMD is killing Intel until you get to the Platinum lineup or need AVX-512.

They're sampling 7nm this year on the enterprise side, whereas Intel is still on 14nm+++++++++++

>7820X
>8C 16T
>11MB L3
>7900X
>10C 20T
>13.75MB L3
JUST

Attached: 1520981492246.png (439x290, 141K)

>R7, still behind the 8700k, Threadripper, still behind the i9XE. What are you on about.
>GayMen

It will be fun when they start to measure the chips in logic component height, given the tendency now is to make more complex discrete components.
"This chip is 200nm tall"

It will be a few more years until they jump ship from Intel anyways

Penryn went to 3 MB/core from Conroe's 2 MB/core, with some performance improvements. They can increase the L3 again to 2.0-2.5 MB/core on an 8820X / 8900X

ark.intel.com/products/93791/Intel-Xeon-Processor-E7-8893-v4-60M-Cache-3_20-GHz
60MB L3, quad core

Attached: 1514337071020.png (644x500, 66K)

Increased voltage increases heat, and the resulting temperature rise makes leakage worse. Increased frequency means increased heat as well.

Infinity Fabric over USB, imagine.

it's an IBM process so it's black magic

Attached: oajvimprrw3z.png (1562x332, 129K)

It can happen, Thunderbolt is royalty free now

Zen 2 predictions:

5-6 core CCXs
up to 5GHz clockspeeds
major architectural changes to the core, including increased cache sizes
Infinity Fabric improvements
uncore and cores manufactured on separate dies (which will add latency)
manufactured by TSMC

screencap this because it's prophecy

Meanwhile in reality

> up to 5ghz clockspeeds
ids nod bossible en.wikipedia.org/wiki/Dennard_scaling

PEPSY BECOME CHEEPER AND HEALTHY
COCACOLA JUST ADD WATER AND FILM NEW COMMERCIAL

see

i want to believe the 5GHz base but am really skeptical. ES hopefully soon

Attached: krieger.gif (500x281, 937K)

>4C8T
>3.5 boost
>140w tdp
what?

Apple is already going to phase out Intel CPUs for their own CPUs, probably in the MacBooks first, then the pro-level stuff once they can make them run well enough.

Can't wait for der8auer to expose them again

Doubtful, that was made for the POWER uarch. I doubt AMD will crank the clocks that high without it being a housefire.

It's a little under 8nm.
And Intel's 10nm is ~9.5.

I've always wondered what the pluses even mean, or what they're supposed to mean

Dear God this is glorious can't wait to get a 5800 Ryzen in 2020 that crushes Intel's 10nm garbage
>mfw mcm gpus will be a thing by then

Attached: 1527169776573.jpg (334x334, 88K)

I used to like Intel CPUs a lot, but I won't buy 10nm when 7nm is available.

it's a more refined/tweaked version of the process

>intel 10nm is more bettar than anyone else's
>obligatory reply
youtube.com/watch?v=XAx9G5PqBzM