Intel's 10nm Finally Yields a Chip: the Infamous Cannon Lake i3-8121U

>2.2GHz Dual Core with a boost up to 3.1GHz
>no iGPU
>avx-512 (why the fuck do i need avx-512 on a mobile cpu?)
>2 years late

tech-toniks.blogspot.com.br/2018/05/intels-10nm-finally-yields-chip.html

HAHAHA, INTEL IS FUCKING PATHETIC!

Attached: isscc-2018-intel-10-sram-testchip.png (500x510, 437K)

THIS CAN'T BE HAPPENING

Attached: 1506383390040.png (992x1043, 614K)

That's not bad at all at 4 watts.

Looks to me like they're attempting to make a low power server CPU. But that's just a guess.

took them 2 years to yield a single workable chip out of all the wafers?

It's 15W lmao

Heck, even at 4W it would be bad since no iGPU

At 4W it would be amazing. Supporting AVX-512 is extremely expensive power-wise; granted, 15W is more in line with what I'd expect from an engineering sample like this. I'm not bothered by the lack of iGPU, but it definitely means that they're gonna have to fight to show me what the use case is for this chip.

Honestly, this thing seems more like some sort of co-processor than a CPU. Shit's weird.

what subset of avx-512 is it supporting? if it's full avx-512 (doubtful) there is no way in hell this is a 4 watt chip
more like 40

I've never had so much Schadenfreude for a company before. It's truly pathetic how fucked Intel is for the foreseeable future

Agreed. I haven't seen any confirmed sources for power consumption. Just a bunch of brainlet anons shitposting.

I'm chalking this up to a waiting thread as per usual.

not news. we knew a year ago that intel cannot produce anything useful on their 10nm node, and this is confirmed by all those delays.
for the record, all intel igpus are on an earlier node than what the cpu is.
what does this mean? is intel incapable of mixing 10 and 14 nodes? sounds like it.

AVX512 uses nothing while it's unused, that's why it has a 1000ms delay before it fully fires up on CPUs, making it worthless for bursty loads.
Also it's a U series CPU, meaning it's 15W
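
for anyone wondering what AVX-512 code actually looks like: a minimal sketch in C with the AVX-512F intrinsics (the function name and workload are made up for illustration). sustained 512-bit FMA loops like this are exactly what pulls the core down to the reduced AVX clocks being quoted in this thread.

#include <immintrin.h>
#include <stddef.h>

/* hypothetical kernel: c[i] += a[i] * b[i], 16 floats per iteration
   build with: gcc -O2 -mavx512f -c avx512_fma.c */
void fma512(const float *a, const float *b, float *c, size_t n) {
    size_t i;
    for (i = 0; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);  /* unaligned 512-bit loads */
        __m512 vb = _mm512_loadu_ps(b + i);
        __m512 vc = _mm512_loadu_ps(c + i);
        vc = _mm512_fmadd_ps(va, vb, vc);    /* va*vb + vc, 16 lanes at once */
        _mm512_storeu_ps(c + i, vc);
    }
    for (; i < n; i++)                       /* scalar tail for leftovers */
        c[i] += a[i] * b[i];
}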

In the meanwhile AMD releases 7nm in 2019, if we're lucky even with a 6-core CCX

what? all their integrated GPUs are fabbed on the same node as the CPU. You can't mix and match processes on the same die like that and they've never made a two die CPU+GPU processor.

15w TDP, no one has benched it yet
btw Kaby has a 2.2GHz base clock with a 3.4GHz boost and an integrated GPU at 15W TDP; this Cannonlake is useless

>You can't mix and match processes
Clarkdale was CPU at 32nm and GPU at 45nm, though I'm unsure if that was a monolithic die

ARM is ded then

22x base up to 31x turbo for non-AVX
max 8x for AVX loads.
(multipliers against the 100MHz BCLK: 2.2GHz base, 3.1GHz turbo, 800MHz under AVX)

oh that's the northbridge transition processor, where they put the northbridge on the same PCB as the CPU rather than on the motherboard.

Wouldn't really count that as it also had the RAM controllers and a shitload of the IO stack

>800mhz AVX2

Attached: Laughing_cars.jpg (502x600, 152K)

>dorito cpu chip

Then why the fuck did they invent that shit called avx-512 if it's not optimizable? I really don't understand.

It's literally ivy bridge tier performance.

It's already confirmed to be either 6-core CCXs or more 4-core CCXs per Zeppelin die, since there is a confirmed Epyc at 48 cores (48 cores / 4 Zeppelin dies = 12 per die, so either 2x6 or 3x4).

*fapfapfap*
7nm 12 core ~4.8 GHz

my dick almost explodes

what kind of temperatures can we expect from prime 95 small ffts with avx 512?

just a meme to sell h/w to idiots.
have you seen all those inteltards going nuts about avx512?
they think that because 2 < 512, avx512 is better.

So this is why people call it "Intel's Bulldozer", huh? I didn't know it was this bad, but man, they fucked up big time.

>*fapfapfap*
back to plebbit you retarded, underaged fuck.

Attached: 1e0.gif (350x233, 348K)

wew
no, this is far beyond pathetic, this is just sad. or extremely funny if you got no jewtel hardware or stocks and don't plan on getting any anytime soon

>Schadenfreude
>for a multi billion dollar multinational
>that is still seeing revenue gains every new quarter
I mean you can pretend you're better off than its CEO and all the major shareholders that are rolling in dosh, but I doubt they really care about some neckbeard shitposting on /g/.

No amount of money will save them from the heat death of the universe.

>no iGPU
>dual core
Nobody sane will buy this.

>not bothered by the lack of iGPU
So its "power efficiency" is irrelevant then, since you need a separate GPU.

>"back to plebbit"
>post Trump image
Pot, meet kettle.

Why do leftists try to pretend plebbit is right wing?

Reported for antisemitism

>>>/the_donald

You do realize it'd probably throttle to under 1GHz to run AVX-512 at that power, right?

Intel TDPs are fake. Especially fake when it comes to AVX.

800mhz got confirmed

You tell me, plebbitor.

O ok. I was just making an educated guess.

>l-look this w-we finally did t-the 10nm thing
It's just to keep shareholders happy. Worthless chip, more of a proof that they can actually do it if anything.

>2.2GHz
>no iGPU
>dual core
>15W

Attached: 1504271097302.jpg (200x200, 23K)

THE most reddit image in the history of reddit

does this chip have ANY advantages at all over their current kabylake/coffee lake mobile chips? other than avx 512 because that's just fucking retarded

AMD is going to destroy Intel over the next couple of years. I really don't see them recovering. 10nm delays, high heat and non-soldered IHS, expensive cooling and motherboards, and no real progress. They don't even have a mainstream 8-core/16-thread CPU yet.

I think this is doom for Intel.

AMD will have 7nm Threadripper in the future, too. Think: a 16-core/32-thread CPU at 4.8GHz.

Insanity. Intel is screwed.

Man, Intel is a bad meme at this point.
This is just fucking comical.

No. It only exists to say they got a 10nm chip out this year.

>Existing

The yields must be very bad if they're starting with mobile i3s

Well, hopefully they will make a better chip later.

Attached: Intel1.png (882x758, 246K)

Mobile i3s with no iGPUs

>Finally 10nm
>Mobile cpu instead
>Don't even have iGPU
>Muh power efficiency

Attached: 1483675906912.png (274x385, 130K)

HAHAHAHA

INTEL IS FINISHED

Intel will recover in 3-4 years once Keller is done with their new architecture

>tfw keller puts the CPU industry on his back

Yawn 7nm when

Intel will go the way of cyrix if they don't get Jim Keller to unwreck their shit

>>>r/the_donald

Attached: ScreenShot20180507at12.40.21PM.png (1129x691, 990K)

It would be real damn nice if he managed to double the IPC per core.

Zen 2 APUs are going to be amazing but AMD has to strong arm at least one major OEM into making a premium thinbook or w/e the AMD offbrand ultrabooks are called. AMD needs a Dell XPS in other words.

cyrix had a potentially shit-wrecking architecture very close to release before they were shut down, so even the shitwrecker himself might not save intel

Too bad Keller can't unfuck Intel's fabs

>if we're lucky even with a 6-core CCX
>It's already confirmed to be either 6-core CCXs or more 4-core CCXs per Zeppelin die, since there is a confirmed Epyc at 48 cores

the bad timeline alternative you're overlooking is 3*4c CCXs.
perhaps even more likely given rumors of 2nd gen 7nm 16c parts...

14 14+ 14++ 14FFL 14FFL+ 14FFL++?

14++ > 22FFL
22FFL+ ~ 14++

Intel's TDP on ark.intel is specified at base clock and does not include turbo boost, so actual draw under boost will be higher

Keller's working on SoC shit, I don't think he's doing anything involving a new architecture

This came out of one of the presentations they did last year. It does show 10nm for both the main cores and the GPU, but it also shows they're looking at mixing fab processes on a single die as well.
So if they're having problems with the GPU they could potentially just stick with 14nm for it in the end.

Attached: 448831-mix-and-match.jpg (740x415, 37K)

That's not single die. It's an MCM using Intel's EMIB design, which uses silicon bridge interconnects designed to be cheaper than a full interposer.

keller is good but even he can't do godly things to fix intel fabs

Other than learning a new machine language - with plenty of time before this tech is adopted - is that so bad?

Said the jews who also certainly don't have a nuclear program
Well, he now has the full might of Israel to back him though...

Cheaper, lower latency, faster

Kek, if intel wants to go that way with their marketing shilling, AMD should add to their dies a few 5nm transistors that do absolutely nothing and call their chips 5nm

Keller can't do shit to fix their 10nm process

So how would that work? Is thread brapper going from 16 cores to 24 by going to 3 of 6 packages instead of 2 out of 4?

keller is a cpu uarch engineer, not an euv god

no, still just:
- 2 die * 2 CCX/die * 6c/CCX, or
- 2 die * 3 CCX/die * 4c/CCX
(24 cores either way)

>6c/CCX
can you even make 6 cores per CCX? i think it's more reasonable to make threadripper with more CCX per chip

it might be possible, but at the same time it would be offset by the fact that adding 2 cores would vastly increase the complexity of the connections in the L3 cache, as every L3 slice must have a direct connection to every other L3 slice within the CCX, plus the connection to the nearest core.
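
quick back-of-the-envelope on that: assuming one L3 slice per core and the full point-to-point mesh described above, slice-to-slice links grow as n(n-1)/2, so 2 extra cores more than doubles the wiring. rough sketch in C:

#include <stdio.h>

/* full mesh of n L3 slices needs n*(n-1)/2 direct links between slices */
int main(void) {
    for (int n = 4; n <= 8; n += 2)
        printf("%d slices -> %2d slice-to-slice links\n", n, n * (n - 1) / 2);
    return 0;  /* prints: 4 -> 6, 6 -> 15, 8 -> 28 */
}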

My bigger question is what AMD is going to do to help mitigate memory bandwidth concerns, as 12C/24T blasting off 1 dual-channel DDR4 controller is going to cause issues. Mating some HBM to each die could help, but it would vastly increase the complexity of the packages and would make the resultant processors much more expensive, because fucking no one can properly mass-produce 3D-stacked silicon yet without jacking the price up through the roof due to the less-than-stellar yields.
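
putting rough numbers on the bandwidth worry (assuming dual-channel DDR4-3200; a DDR4 channel is 8 bytes wide, so 3200 MT/s * 8 B = 25.6 GB/s per channel):

#include <stdio.h>

/* hypothetical config: 12 cores sharing a dual-channel DDR4-3200 controller */
int main(void) {
    const double per_channel = 3200e6 * 8 / 1e9;  /* 25.6 GB/s per channel */
    const int channels = 2, cores = 12;
    double total = per_channel * channels;        /* 51.2 GB/s total */
    printf("%.1f GB/s total, %.2f GB/s per core\n", total, total / cores);
    return 0;
}

that's ~4.3 GB/s per core before any contention, which is why the thread keeps coming back to bigger caches or HBM.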

well, you either add more ports on the CCX/L3 internal crossbar, or you add more ports on the external Infinity Fabric one.

3*4c is clearly technologically easier, but 2*6c is doable and opens the door to quite a few other nice possibilities:
- 2c-6c laptops and APUs
- more model core count differentiation (4/6/8/10/12c workstation SKUs vs. just 6/9/12)

Attached: 1492801783942.jpg (954x605, 134K)

>My bigger question is what AMD is going to do to help mitigate memory bandwidth concerns, as 12C/24T blasting off 1 dual-channel DDR4 controller is going to cause issues.

Some rumor mills are spreading tales of 4 MB L3 per core. That, combined with 6c and 8c CCXs, would go a long way to help everything but purely streaming workloads.

Attached: edb71509528783.png (594x364, 17K)

>dual core
>2018

Attached: 1525178689124.png (1056x999, 2.04M)

That's a lot of fucking L3 cache.

I suppose another option would be to fatten the IF bandwidth (either by doubling the IF clock or widening the pipes), then instead of pumping transistors into L3 caches, use them for an on-die L4 cache linked to the IF speed.

I don't think DDR4-fed graphics has much more in it currently. So my guess is the APU could easily be 8-core with 12-16 graphics cores. It would be far more marketable against Intel's 8-core with iGPU. The graphics will still be bandwidth-starved if there's no HBM, and the 8-core CCX can have reduced cache because it doesn't need to connect to other CCXs, or it's two 4-core CCXs while servers/desktop/HEDT go 6-core CCX.

This noname French magazine is an Intel shill. They post fake news from this account every time AMD launches something. They posted fakes about a 5GHz stock Ryzen and then made fun of AMD at launch as if they weren't the ones spreading the fake news in the first place.
The funny thing is, their twitter had like literally 0 followers before jews started forcing it at the Ryzen launch.

It indeed was not. Even some Sandy Bridge processors that don't use HD 2000 had a similar setup

Attached: clarkdale_block.jpg (494x344, 35K)

>I suppose another option would be to fatten the IF bandwidth (either by doubling the IF clock or widening the pipes)
this would crank power up dramatically and is only really likely to happen if AMD want to encourage DDR channel interleaving for higher single-consumer burst bandwidth.

> instead of pumping transistors into L3 caches, use them for an on-die L4 cache linked to the IF speed.
this will never, ever, happen.
one of the absolute best features of the Zen platform is the low local L3 latency.
IF is designed for max bandwidth per Watt at the cost of latency, so you absolutely don't want to be servicing half your L3 cache accesses over the data fabric.

>this will never, ever, happen.
>one of the absolute best features of the Zen platform is the low local L3 latency.
>IF is designed for max bandwidth per Watt at the cost of latency, so you absolutely don't want to be servicing half your L3 cache accesses over the data fabric.
Perhaps I should clarify a bit.

Instead of going with big fat L3s and the associated penalties those incur (if memory serves me correctly, the bigger a cache is, the slower it is), the L4 cache I have proposed is to help with keeping the cores fed, as a hit to the on-board L4 cache would be significantly faster than running all the way out to main memory. It would essentially serve the same purpose as the eDRAM slab on the Broadwell chips equipped with it. It could also in theory free up some congestion on the IF network, as instead of having to bounce all the way to the other CCX (or to a specific CCX on another die) for a chunk of data, it can send the request to the L4 slice.

>why the fuck do i need avx-512 on a mobile cpu
I was gonna say "Why fucking bother with an i3 without a GPU", but clearly this is for server or embedded work.

for a cache to be meaningful it has to both have a substantial hit rate and a markedly lower latency than the next level of storage behind it.

I am saying that any sort of SRAM cache on the far side of the IF crossbar relative to a CCX is going to have not-great latency even with a faster/wider fabric.
AMD has apparently deemed the typical % of data duplicated across CCX L3s low enough that the same amount of die space spent on local L3s helps more than that spent on a hypothetical UMC buffer etc.
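
to make that tradeoff concrete, here's a rough average-latency model in C. every figure in it (hit rates, nanosecond latencies) is an invented placeholder, not a measured value; the point is only that a far-side L4 pays off solely when its latency stays well under DRAM's and its hit rate is substantial:

#include <stdio.h>

/* average latency seen after an L3 miss, assuming the L4 lookup happens
   before going to DRAM:
     no L4:   dram
     with L4: hit*l4 + (1-hit)*(l4 + dram)
   all numbers below are hypothetical */
int main(void) {
    const double dram = 90.0;               /* ns, placeholder */
    const double l4_lats[] = {30.0, 60.0};  /* ns, placeholder */
    const double hits[] = {0.2, 0.5, 0.8};
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 3; j++) {
            double avg = hits[j] * l4_lats[i]
                       + (1.0 - hits[j]) * (l4_lats[i] + dram);
            printf("L4=%2.0fns hit=%2.0f%% -> %3.0fns after L3 miss (DRAM alone: %.0fns)\n",
                   l4_lats[i], hits[j] * 100, avg, dram);
        }
    return 0;
}

note that a slow L4 with a 20% hit rate comes out at 132ns vs 90ns, actively worse than no L4 at all, which is basically the objection above.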

Isn't the whole point that doing stuff on 10nm would be far more energy efficient than on 22nm? Cheaper, sure, but porting and fabbing shit down to 14nm shouldn't be too hard.

they keep updating simd instructions because the patents on older revisions eventually expire, which would allow other manufacturers to build x86 cpus and compete with them

If your stuff isn't working as it should at 10nm then it doesn't matter if it's more energy efficient or not. They're hitting the limit of the monolithic die and splitting things up will give far better yields.

You dunce, GPUs are way easier to fab than CPU cores. The only thing easier than a GPU is a memory controller.

>Keller's working on SoC shit, I don't think he's doing anything involving a new architecture
OH NO NONONONO AHAHAHHHAH

oy

Why the fuck would anyone want a dual core with a mobile dGPU?

>PLL
>BIST
>FUSE

>PLL
>BIT
>U

>PITBULL
Wtf this is DANGEROUS.