It wasn't a fake after all

techpowerup.com/258654/cerebras-systems-wafer-scale-engine-is-a-trillion-transistor-processor-in-a-12-wafer

>it wasn't a fake after all

O O F
O
F

Attached: 6OLnf5VZYwqNTuus.jpg (335x314, 76K)

Attached: index.jpg (309x163, 8K)

That's a big chip

kill yourself, you dumb fucking retard

SEETHING

Attached: Intbecile.png (400x400, 1.22M)

But can it run Crysis?

...

for you

Honestly it's mind blowing tech. If this thing works well it's seriously disruptive.

>4(you)

is that a camera sensor

Attached: t0nR4bZqNjnqy4t3.jpg (680x468, 91K)

That seems kinda silly to me. How much performance penalty will the two furthest chips suffer when they have to communicate?

Much less than going to off-chip memory.

>redundancy
never understood why mainstream manufacturers didn't do this to combat defects

None. There are massive interconnect passthroughs. No latency. The only downside is the means of cooling (water applied directly from above, due to the immense heat output)

1024 different instances of it at once in real time

So it behaves like a form of mesh interconnect, except inter-chip rather than inter-core?

>it behaves like a form of mesh interconnect
It IS a mesh interconnect. A thing the industry believed was fucking IMPOSSIBLE...up until now, that is.

>inter-chip rather than inter-core
Both.

The latency is probably just below 10 ns between the two most distant points.
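For scale, here's a back-of-envelope check of that figure. All the numbers are assumptions, not from the article: the WSE die is ~46,225 mm², so roughly 215 mm per side, and signals on repeatered on-chip wires are taken to propagate at very roughly half the speed of light.

```python
# Sketch: worst-case corner-to-corner signal flight time across the wafer.
# Assumed: ~215 mm die side, effective on-chip propagation speed of ~0.5c.
C = 3.0e8                  # speed of light in vacuum, m/s
side_m = 0.215             # approximate die side length, m
signal_speed = 0.5 * C     # assumed effective wire propagation speed, m/s

manhattan_m = 2 * side_m   # corner-to-corner path along a 2D mesh
flight_time_ns = manhattan_m / signal_speed * 1e9
print(f"pure wire flight time: {flight_time_ns:.1f} ns")
# Router hops and repeater delay come on top of this, so an end-to-end
# figure on the order of 10 ns is at least physically plausible.
```

So the raw wire delay alone is a few nanoseconds; sub-10 ns end to end is in the right ballpark if routing overhead stays small.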

Is this mini-ITX compatible?

No, it isn't. It is useless.
Prove it.

>400000 cores
>Shitblows can't handle more than 32

How much does that piece of sand cost?

>no price
>no tests
gay

over a million dollars for sure

But does it give off more heat than a space shuttle on reentry?

Attached: Romero.jpg (600x400, 152K)

wtf you can just build a gpu farm

HAHA CAN THIS RUN [VIDEO GAME] THOUGH XDDDDDDDDDDDDD

>millions of dollars for some block of sand

Attached: 1564154385776.png (258x249, 68K)

It's supposedly multiple times faster and more efficient than an equivalent investment in Nvidia cards.

it's not a block of sand though you drooling fucking moron

>reducing training times from months to minutes

If this works it will be a game changer.

>sheet
there, fixed it

Guess you're just a pile of meat then.

It sounds fishy because they don't show any tests.

As long as the thing actually works it should perform very well. These kinds of processors are mostly a sea of SRAM and tensor ALUs on a mesh. The main improvement to make is cutting off-chip communication to a minimum, which is exactly what they're doing. They claim to have some kind of innovative tensor core but we'll see if it's an improvement.

>400000 cores
>99.99% of applications/games use 2 at most

0.00000001% of yield

Most likely they just don't have anything to show (yet, hopefully). The announcement was mostly to attract investors.

they work around that by:
- using a 16nm process, which is more reliable since it's older
- building redundancy into the design, so small defects only slow it down a touch
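The redundancy point can be made concrete with a toy Poisson yield model. The defect density here (0.1 defects per cm²) is an illustrative assumption, not a figure from the article:

```python
# Toy Poisson yield model: why a monolithic wafer-sized die is hopeless
# without redundancy, but fine with it.
import math

defect_density = 0.1      # defects per cm^2 (illustrative assumption)
wafer_area_cm2 = 462.25   # WSE die area: 46,225 mm^2

# Probability a die this size has ZERO defects (classic Poisson yield):
p_perfect = math.exp(-defect_density * wafer_area_cm2)
print(f"defect-free probability: {p_perfect:.1e}")  # effectively zero

# Expected defect count per wafer -- easily absorbed if each defect
# only knocks out one of ~400,000 small, redundant cores:
expected_defects = defect_density * wafer_area_cm2
print(f"expected defects per wafer: {expected_defects:.0f}")
```

A perfect wafer essentially never happens, but a few dozen dead cores out of 400,000 is noise, which is why the design routes around defects instead of demanding none.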

>"The Wafer Scale Engine consumes 15 kilowatts of power to operate - a prodigious amount of power for an individual chip"
i do wonder how they're cooling this nuclear hotplate, though

Kek

How is it there, in 2012?

>One yield is an entire wafer
>One wafer = one CPU
>Cerebras is 100% yield

Attached: 56ryugfty7u8.jpg (400x400, 18K)

>no latency
Fucking brainlet go google the speed of an electron in copper/gold/silver/unobtanium.

just run 200,000 games, then, duh

the speed of electricity is not the same as the speed of electrons.

Fucking idiot. Electrons are slow as shit. It's the speed of electricity which is what matters. And do you know what the speed of electricity is? It's the same as the speed of LIGHT, you dense motherfucker

Electrons move at about 23 micrometers per second in copper.
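That claim is easy to sanity-check with the drift-velocity formula v = I/(nAq). The exact figure depends on the assumed current and wire cross-section (1 A and 2 mm² below, both assumptions), but it always lands in the tens of micrometers per second:

```python
# Electron drift velocity in a copper wire: v = I / (n * A * q).
I = 1.0        # current, amperes (assumption)
A = 2.0e-6     # wire cross-section, m^2 (assumption: 2 mm^2)
n = 8.5e28     # free-electron density of copper, per m^3
q = 1.602e-19  # elementary charge, coulombs

v_drift = I / (n * A * q)  # meters per second
print(f"drift velocity: {v_drift * 1e6:.0f} um/s")
# The electromagnetic signal, by contrast, travels at a large fraction
# of c (very roughly 0.5c-0.9c depending on the medium) -- which is
# the speed that actually matters for latency.
```

So both posts are right in substance: the electrons themselves crawl, while the signal races ahead.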

>None. There's massive interconnect passthroughs. No latency.
no communications bus has zero latency
this isn't an insignificant amount of added latency over a smaller chip, either
however, keep in mind this is intended to compete with a cluster of machines with normal sized cpus, not with a single normal sized chip
while this may seem high latency compared to a smaller chip, it's considerably lower latency than a cluster of separate machines
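That comparison can be sketched with ballpark latency numbers. These are order-of-magnitude assumptions for illustration, not measurements of any real system:

```python
# Rough orders of magnitude for one "remote access", illustrating why
# an on-wafer mesh still wins against a multi-node cluster.
latency_ns = {
    "on-die SRAM": 1,
    "cross-wafer mesh (worst case)": 10,
    "off-chip DRAM": 100,
    "PCIe round trip": 1_000,
    "InfiniBand hop to another node": 2_000,
}

worst_on_wafer = latency_ns["cross-wafer mesh (worst case)"]
cluster_hop = latency_ns["InfiniBand hop to another node"]
print(f"cluster hop is ~{cluster_hop // worst_on_wafer}x slower "
      f"than the worst on-wafer path")
```

Even the worst-case on-wafer path is a couple of orders of magnitude cheaper than leaving the node, which is the whole pitch.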

>certain combinations of sand cost millions of dollars

you have to go back

>you can go to jail for certain combinations of ones and zeroes

Attached: 1564506415290.jpg (974x1442, 422K)

>bus
There's no bus. It's all direct pass-through.

That's still only 0.32 W/mm², compared to the 1 W/mm² and up we see on modern CPUs, so the only "problem" is a cooling system large enough. And that has already been built.
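The arithmetic behind that density figure, using the 15 kW quoted above and the WSE's published 46,225 mm² die area:

```python
# Power density check: 15 kW spread over the full wafer-scale die.
power_w = 15_000      # total power draw, watts (from the article)
area_mm2 = 46_225     # WSE die area, mm^2

density = power_w / area_mm2
print(f"power density: {density:.2f} W/mm^2")
```

Huge total power, but spread over a huge area, the per-area load is modest.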

I'd like to see these things in a cascade nitrogen type cooling tower like the old Crays.

Cute and funny trips

Attached: DlxQ87oU4AEXxi6.jpg (800x914, 93K)

>speed of light
Confirmed retard

>playing on semantics doesn't change the fundamental physical concept that is being discussed

Enlighten us since you seem to know more about how electromagnetic forces work.

>no latency

Attached: brainlet.png (452x381, 52K)

Latency doesn't matter on a massively data parallel architecture. This thing is competing with whole clusters of GPUs transferring data all the way through PCIE to the CPU and Infiniband to another node. The reason for the SRAM is to save power.

the speed of light varies depending on the medium

the blue glow of Cherenkov radiation is protons(?) moving faster than the speed of light in the local medium and releasing energy as they slow down

Cumbrain

you need to go back

LMAO this thing will look like a windmill when you attach a tower heatsink on it.

>the industry believed to be impossible

Haha no they didn't, it's only not possible to do it without creating tons of heat. High-core-count Xeons have to have really low clocks or exotic cooling in order to not go nuclear, and they already have mesh interconnects anyway.

There's nothing groundbreaking about this chip other than its sheer size, which even then any manufacturer can do already, but it's an insanely high cost to manufacture with lots of overhead time spent qualifying and binning each chip. Not to mention the intense cooling requirements it would have even if it didn't use a mesh.

>which even then any manufacturer can do already

I'm quite sure you need entirely different tooling.

they do, it's just that they chose to sell those defects under different names.

How many tubes of paste are we supposed to use on this anyway

Electrons bud, electrons.

>15kW
Finally a worthy competitor to Intel.

Wafer making as a process itself doesn't allow for mesh interconnection, you fucking dumbass.

>Theres nothing groundbreaking about this chip
There is, you fucking dumbass. They've literally made the impossible possible with this one-wafer, one-CPU approach.

See .

Attached: BIGGERTHANKEYBOARD.jpg (680x383, 48K)

>it's an insanely high cost to manufacture with lots of overhead time spent qualifying and binning each chip
This is the important bit. Keeping the defect rate low enough to make something like this reliably is really fucking hard.

Writing programs that can use all that CPU power (instead of blowing it all on waiting for locks or comms!) is only slightly easier.

>Wafer making as a process itself doesn't allow for mesh interconnection, you fucking dumbass.
There's no point when you're going to split the wafer up into lots of chips. If you're not splitting things up, you can put funky interconnect between the cores.

>there's no point
They've PROVEN it can be done, you DUMB fuck.

is it Intel? if it's not Intel I'm not buying because I'm a gamer

Attached: 1562549113755.png (1414x782, 64K)

>quad HD
I said I'm a GAMER not a fucking noob like you, 720p or bust

Attached: 1557320721274.png (1037x311, 340K)

Remember how graphene was constantly touted as part of the next big CPU process? Until we actually see this shit in a CPU, it's just a cool idea that's currently begging for R&D funds via articles

Protons are not light, nor do they have any light-like property. Didn't school start this week? What's with the kids still