AMD's CEO Dr. Lisa Su to Host a CES 2019 Keynote: 7nm CPUs and GPUs

anandtech.com/show/13425/amds-ceo-lisa-su-to-host-ces-2019-keynote-7nm-cpus-and-gpus

Prepare your anuses. 7nm is coming.

Attached: amd_lisa_su_official_678x452.jpg (678x486, 27K)

Other urls found in this thread:

ark.intel.com/products/136863/Intel-Core-i3-8121U-Processor-4M-Cache-up-to-3_20-GHz
forums.anandtech.com/threads/cascade-lake-beats-rome-in-the-race-for-2019-tacc-supercomputer.2553293/
techpowerup.com/248008/intel-at-least-5-years-behind-tsmc-and-may-never-catch-up-analyst
ark.intel.com/products/136863/Intel-Core-i3-8121U-Processor-4M-Cache-up-to-3-20-GHz-
ark.intel.com/products/137977/Intel-Core-i3-8130U-Processor-4M-Cache-up-to-3-40-GHz-

Oh yeah! Well, Intel's 10nm, which is in fact equivalent to everyone else's 7nm, is already here. See: Intel Core i3-8121U

ark.intel.com/products/136863/Intel-Core-i3-8121U-Processor-4M-Cache-up-to-3_20-GHz

> Products formerly Cannon Lake

Based and redpilled

AMD always has a pretty good CES show, that and HotChips.

Keynote:
Lisa Is admits to being a Chinese spy, and fly's back to China where she continues to manufacture hygon dhyana CPUs for the Chinese government.

Intel's 10nm vs intel's 14nm++:
>same power
>higher price
>no iGPU
>lower frequency
>can't produce more than 2 cores
I wonder how AMD will recover

DELID DIS

Yeah, it's coming, any year now. With AMD there's always just that next product around the corner to wait for.

OH NO NO NO NO HAHAHAHAHAHAHAHA!!

Attached: 1505147990486.jpg (329x329, 57K)

Attached: 1497689967244.jpg (638x599, 123K)

>She

>announcing 7nm
>can't even compete with Intel's 22nm

What did AMD mean by this?

Attached: 1536933295046.png (1109x3646, 376K)

They mean they need to compete and win against those parts by taking advantage of the fact that Intel has its head stuck up its ass.

THIS CAN'T BE HAPPENING INTELBROS MORE NIGGAHURTZ MOAR COARZ MOAR HOUSEFIRES

Attached: untitled-3.png (682x798, 38K)

None of those Intel CPUs are overclocked.

Attached: Captur.png (458x898, 55K)

Delete this sir. No one must know.

So will Rebrandeon finally stop?

15W for JUST 2 fucking cores at 2 GHz. I smell AVX-512.

Attached: overclocked-7980xe-power.jpg (700x486, 38K)

Intel made ridiculous progress with their 14nm Tri-Gate process, but clock scaling with voltage seems to have really peaked around Kaby. Everything they've released since then has shown almost no discernible improvement at the high end of the power curve; they've only slightly improved power for low-clocked parts. A decently binned Kaby i7 was pulling 12.5W per core at full load at 3.6GHz. Going up to 4.2GHz saw power consumption absolutely leap to 21.5W per core, not including uncore power draw.
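Quick back-of-the-envelope on those numbers, assuming the textbook dynamic power model P ≈ C·V²·f with constant capacitance and ignoring leakage (so this is a rough sketch, not Intel's actual characterization):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Per-core figures quoted above: 12.5 W at 3.6 GHz vs 21.5 W at 4.2 GHz. */
    double p1 = 12.5, f1 = 3.6;
    double p2 = 21.5, f2 = 4.2;

    double power_ratio = p2 / p1;                        /* ~1.72x the power      */
    double freq_ratio  = f2 / f1;                        /* for only ~1.17x clock */
    /* With P ~ C * V^2 * f and constant C, the implied voltage bump is: */
    double volt_ratio  = sqrt(power_ratio / freq_ratio); /* ~1.21x                */

    printf("power x%.2f, clock x%.2f, implied voltage x%.2f\n",
           power_ratio, freq_ratio, volt_ratio);
    return 0;
}

In other words, roughly a 1.2x voltage bump for that last 600MHz, which is exactly the wall being described here.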

They're not going to be able to have their 10nm parts reach parity with their own 14nm parts for a long, long time it seems, if ever.

I was right about the AVX meme bullshit too. Dis gon be good.

Attached: Screenshot_2018-10-05-17-19-23.png (720x1280, 128K)

AMD will likely have their 7nm desktop Ryzen parts in full availability before intel can even start building stock of a 10nm i7 with acceptable clocks. I'm not holding my breath for real availability of 10nm desktop parts until early 2020.

I think we're going to see a repeat of their early 14nm process. It originally launched with the small-die, two-core Broadwell Core M. They had 20 different SKUs of the same die because yields were all over the place; they couldn't consistently get chips to hit target clocks at a consistent voltage, so they binned all these wildly varied chips into separate SKUs.
Then desktop Broadwell was delayed the better part of a year. When it finally arrived, in limited availability, yields were so low they were losing money on every chip sold, so they made it EOL after just a couple of months on sale. It took six months after its paper launch for Skylake to actually be available in stores.
It seems people are really quick to forget how fucking terrible intel's 14nm fiasco was.

Their 10nm process will probably be exactly the same.

>only 494 at 4.9 niggahurtz
LMAO

7 year old processor, here's your (You).

Can't wait for all the avx-512 shill commercials.

>Intel's life is now literally dictated by whether people will take avx-512 seriously when GPUs exist

Attached: 1497835428568.jpg (1906x1536, 864K)

>amd just barely beats a 7 year old CPU in single core performance with its top end CPU
Is this a joke? IS THIS A FUCKING JOKE?!

>It seems people are really quick to forget how fucking terrible intel's 14nm fiasco was.
>Their 10nm process will probably be exactly the same.
Lol no, their 10nm is WAAAAAAAY worse. Intel was able to fix most of 14nm's flaws in 1-2 years; 10nm was supposed to come out in fucking 2015 and it's Q4 2018 now.
And things will only get uglier, expect Intel 7nm in 10 years, not memeing

intel barely beats an intel from 7 years ago

Wow the AMD flagship overclocked to the limit of what's physically possible is still slower than a 3 year old Intel. Intel btfo?

You're right, I'm giving them too much credit.
We all know that intel isn't going to leapfrog their 10nm process with a 7nm node after dropping this much cash on it. They'll be offering 10nm++++ in 2023 while their 7nm fabs flounder yet again.

>discontinued
LMAO INTLEL KEKS BTFOREVER

Taiwan, not china. And still more of a clapistani than you, shekelstein.

NOT FAIR

Attached: dead.gif (484x563, 104K)

>Intel just barely beats a 7 year old CPU in single core performance with its top end CPU
Is this a joke? IS THIS A FUCKING JOKE?!

Intel's fabs will never recover. If they're gonna milk 10nm for a decade while TSMC moves on to 5nm and 3nm, they might as well spin off their fabs. They're finished.

Did you finally lost your marbles you fucking kike? 7nm is coming on january. Prepare for even less shekels for your 24/7 shilling.

yawn

user, read that post again.

I'm aware, I was making fun of Intel.

INTLEL GPU SUPERPOWER BY 2020

Attached: 1514937277489.jpg (626x657, 81K)

Reminder that this man, Venkata Renduchintala, is King Poop of intel.
Press F to flush him down the loo

Attached: please to do the circuits sir.jpg (1500x1125, 174K)

>intel is THREE TIMES slower than ryzen
QUICK, BRIBE THE LAWS OF PHYSICS

Attached: untitled-12.png (678x924, 57K)

is her face sliding because she sees too far into the future

>Venkata S. M. Renduchintala, also known as Murthy, serves as Director of Accenture plc since April 12, 2018. He is Chief Engineering Officer and Group President of the Technology, Systems Architecture & Client Group at Intel Corporation.
nononono

???

Attached: ayymd.png (976x797, 43K)

MOMMY

Attached: drlisasu2.jpg (720x676, 125K)

;^)

whoops forgot pic related

Attached: intel-xeon-sp-versus-amd-epyc-perf-per-core.jpg (996x406, 49K)

intel xeon: 1,000+W
AMD epyc: 15W

forums.anandtech.com/threads/cascade-lake-beats-rome-in-the-race-for-2019-tacc-supercomputer.2553293/

Attached: 41945959_571174173318352_8156229855604719835_n.jpg (1080x1080, 97K)

>intel needs 2x $12000 CPUs to match Epyc

Attached: 1533256280962.jpg (601x601, 27K)

Probably not that extreme, but Intel architecture is very inefficient.

Attached: 1537199169279.png (1920x1080, 1.04M)

Nobody said Intel was slower, you mong, but AVX-512 comes with a huge power consumption penalty that Epyc and Rome don't have to deal with, which is why people will still refuse to buy Intel.

A server blade with 128 AMD cores consuming like 200W is massively better than 128 Intel cores consuming like 500W.

>AVX-512
Oh look, it's that dead instruction set people abandoned in favor of doing the calculations on GPUs.

More like 128 Intel cores using 800W user, see

jesus

Attached: HardAbandonedChinesecrocodilelizard-size_restricted.gif (435x250, 3.25M)

>I'm not holding my breath for real availability of 10nm desktop parts until early 2020.
They have announced that there will be 10 nm "products" available for Christmas 2019, but as you say, that most likely means laptops with low-spec mobile processors. I'm not expecting desktop/server parts on 10 nm until well into 2020.

>NB4 intel has her assassinated. And makes it look like an accident.

Attached: 1491178899958.png (720x751, 254K)

AMD's 7nm is behind Intel's 10nm though. Hell, even their 5nm is going to be behind Intel's 10nm.

>2x698mm^2
>14nm
Fucking lmao, literal datacenterfires.


With 7nm Intel is 20 years behind TSMC, and with 5nm they will pretty much be 50 years behind.

>AMD's 7nm is behind Intel's 10nm though. Hell, even their 5nm is going to be behind Intel's 10nm.
Literal bullshit. Intel has had to reduce the density of their 10nm node just to get it out the door at all. Intel's 10nm is more like GloFo's current 12nm.

INTLEL IS FINISHED
techpowerup.com/248008/intel-at-least-5-years-behind-tsmc-and-may-never-catch-up-analyst

Damn, Brian bailed JUST as intel was starting to tank. I'm starting to think he cheated on purpose.

lol no, that's just the marketing nonsense. In reality GloFo's 12nm is a generation behind Intel's 14nm

bullshit

>(((analyst)))

Attached: 1538532893476.png (659x430, 61K)

Intel's specific implementation aside, don't discount CPU-side wide vectors as a gimmick. They have a number of use cases that GPUs don't fit well:
>Vector calculations intermixed with scalar code in such a way that each vector operation is too small to be worth packing up and instructing the GPU to pick it up
>Vector code that has complex control flow
>Code that requires more memory than the GPU has benefits tremendously from being able to use virtual memory, which GPUs still don't offer in a practical manner (they do have page tables, but rarely a good way to handle page faults); large-scale raytracing is a common example of this
>Just using the wide registers to do memcpys (see the C sketch below)

All of this is more interesting in the context of RISC-V than that of x86, though. The RV vector extension is actually super interesting, with much more implementation-side flexibility, not least the ability to implement arbitrarily wide vectors transparently to software, but for many other reasons as well. I wouldn't be surprised if the vector extension turns out to become RISC-V's killer feature, appearing in supercomputers and render farms, and working its way down from there into ordinary computers, replacing GPUs with just really good vector-enabled CPUs, for everyone's benefit.
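To make the memcpy bullet concrete, here's a minimal C sketch of that idea using AVX-512 intrinsics (copy_avx512 is a made-up name for illustration; assumes a CPU with AVX-512F/BW and compiling with -mavx512f -mavx512bw). An RVV version would look much the same, except the hard-coded 64-byte width would be replaced by a vector length the hardware reports at runtime:

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* copy_avx512: copy n bytes with 512-bit registers, masked tail instead
   of a scalar cleanup loop. Illustration only, not a tuned memcpy. */
static void copy_avx512(uint8_t *dst, const uint8_t *src, size_t n)
{
    size_t i = 0;

    /* Full 64-byte (512-bit) chunks. */
    for (; i + 64 <= n; i += 64) {
        __m512i v = _mm512_loadu_si512((const void *)(src + i));
        _mm512_storeu_si512((void *)(dst + i), v);
    }

    /* Tail: one mask bit per remaining byte (1 to 63 bytes left here). */
    if (i < n) {
        __mmask64 k = (__mmask64)(~0ULL >> (64 - (n - i)));
        __m512i v = _mm512_maskz_loadu_epi8(k, (const void *)(src + i));
        _mm512_mask_storeu_epi8((void *)(dst + i), k, v);
    }
}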

>IT'S NOTHING GOY SHUT IT DOWN
OH NONONO JUST WAIT FOR 10NM GOY IN 2019 DEFINITELY

Really, in what way?

Attached: aHR0cDovL21lZGlhLmJlc3RvZm1pY3JvLmNvbS9PLzUvNzY1NTA5L29yaWdpbmFsL2ltYWdlMDA4LnBuZw==(1).jpg (712x1435, 193K)

goy ples delid dis

>JUST HODL
>ZOOM OUT BRO

>around 100MT/mm2 even relaxed
>12nm Glofo
man, you are seriously dumb

That'll never happen though, as things like ray tracing (albeit at 1-3 spp) HAVE been implemented on GPUs, starting with nvidia's shitty RTX. This isn't a fucking game anymore; GPUs are out for blood and can turn x86 into a scrapheap at any moment if engineers start fusing CPU tasks into GPU cores.

Pretty worthless metric on its own, without corresponding performance figures. Since this is AVX, it wouldn't be surprising if the ratio of Zen's to Skylake's performance is greater than the power consumption ratio.

>GPUs will replace CPUs - leather jacket man

lmao wrong, very very wrong

Attached: x265-2.jpg (600x600, 102K)

But nVidia's RTX has the exact same problem that I just wrote about: The entire scene has to be able to fit into the GPU's physical memory, and there isn't even nearly enough physical memory to do that on stuff like film productions.

That's not the same benchmark, idiot.

VLIW days are over, x86 days are numbered - quote me on this when AMD's HSA APUs fucking murder everything we hold dear

Point is there's nothing wrong with 12nm if it results in insane power efficiency and performance/watt. Isn't that what really matters in the end or are you that retard that thinks we should have 10kW 10 GHz processors?

>x86 days are numbered
>AMD's HSA APUs will kill it
You obviously have no idea what you're talking about. The very idea with HSA is that it contains x86 CPUs. Well, perhaps not x86 specifically, but CPUs, whatever the ISA.

Not surprised in the slightest desu, single thread performance is finished, just put billions of cores to work together just like the brain cells work in the brain.

>Point is there's nothing wrong with 12nm if it results in insane power efficiency and performance/watt.
I'm not denying that, I'm only saying that nothing that has been posted allows such a comparison.

Right, but professional implementations could borrow memory from something else, like system RAM or Optane/M.2 drives in RAID. The technology is there.

>i don't know amdahl's law, the post
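For reference, Amdahl's law: if a fraction p of the work parallelizes perfectly, n cores give at most a speedup of 1 / ((1 - p) + p/n). A minimal C sketch with an assumed p = 0.95, since the "billions of cores" post gives no figure:

#include <stdio.h>

/* Amdahl's law: with a fraction p of the work perfectly parallel,
   speedup on n cores = 1 / ((1 - p) + p / n). */
static double amdahl(double p, double n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    /* p = 0.95 is an assumed example value, not a measured figure. */
    printf("95%% parallel,   8 cores: %.1fx\n", amdahl(0.95, 8));
    printf("95%% parallel, 128 cores: %.1fx\n", amdahl(0.95, 128));
    printf("95%% parallel, 1e9 cores: %.1fx\n", amdahl(0.95, 1e9));
    return 0;
}

Even at 95% parallel the ceiling is 20x no matter how many cores you add, which is why single-thread performance isn't "finished".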

No they couldn't. RTX needs the memory in VRAM.

Or in case you've switched topics from RTX and are talking about completely speculative, future architectures, then RISC-V, as I said, seems to be the closest to realize that.

>just make a handful of super powerful brain cells bro i'm sure it'll work great xdddddddddd

For NOW. At any moment we could see engineers fuse x86 components onto a graphics chip itself and give it an assload of HBM that doubles as system RAM. What I'm saying is that a new microarchitecture could soon replace both the GPU and the CPU.

RISC-V can't touch modern GPU performance/watt. Anyway my bet is on a completely new alien architecture that replaces everything, maybe neural based and a possible candidate for scary shit like skynet.

>at any moment we could see engineers fuse x86 components onto a graphics chip itself
Perhaps, perhaps not, but that's not HSA.

true

>RISC-V can't touch modern GPU performance/watt.
What makes you say that? Are you just talking out of your ass, or do you have some data to conclude this? I can't see any intrinsic reason why that would be the case.

>Anyway my bet is on a completely new alien architecture that replaces everything, maybe neural based and a possible candidate for scary shit like skynet.
Whether or not that may be true in the future, there are no traces of it yet, even in research.

IIFAB

Furthermore, never mind an actual implementation, there isn't even a specification or proposal for that yet. The closest thing to it is, as I said, RVV.

Whether Jow Forums likes it or not machine learning has a lot of potential

>risc-v hasn't touched mainstream HPC and server market since WW2
>modern x86 ISA is basically risc cores with cisc interpreters and AMD has reached peak efficiency

Seriously, Intel 10nm is crap. Even their own marketing admits that Intel 10nm is a generation behind their 14nm++, and this is an old graph. No doubt they've made further concessions weakening their 10nm now that it's later than ever.

See ark.intel.com/products/136863/Intel-Core-i3-8121U-Processor-4M-Cache-up-to-3-20-GHz-

And then ark.intel.com/products/137977/Intel-Core-i3-8130U-Processor-4M-Cache-up-to-3-40-GHz-

The 10nm chip has the same power consumption as the 14nm one, but a 200MHz lower max boost clock, and its GPU is disabled because it's non-functional.

So to spell it out: their 10nm chip has lower clocks AND no GPU compared to their 14nm chip, while consuming the same power. Both of you are fucking delusional if you think Intel's 10nm is good.

Attached: 9110881-15271462873472602_origin.png (1467x722, 201K)