>50,000 USD
>400 WATTS

Attached: nuclear reactor.jpg (442x293, 14K)

The power of the sun... In the palm of my hand

It's actually very impressive. I imagine it would reach temperatures higher than most stellarator fusion reactors. In fact, Intel should lead the efforts for fusion reactors since they can achieve such high temperatures. Intel is king.

the heat density of the chip inside is probably higher than a fusion reactor's

>can't make them faster
>just make them larger

In another 30 years computers will be the size of entire rooms again.

Attached: 1555264836458.png (900x900, 708K)

>50,000 USD
>400 WATTS per glued die
ftfy

>400 watts

Attached: 1528292848450.jpg (811x1024, 223K)

>400 watts
so 1150 intel watts?

Source for that price?

Can it run Crysis?

>400w
The new molten salt reactors from intel look great

Attached: 1540529728523m.jpg (768x1024, 88K)

Remember the 28-core demo that supposedly hit 5GHz on a chiller?
There was a thread about it, and some /sci/fag did the math and said its heat per square mm was roughly the same as an orbital re-entry vehicle's.
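Back-of-envelope version in case anyone wants to redo it (the ~1000 W draw and ~700 mm2 die area are assumptions, not measured numbers, and the re-entry side depends entirely on which vehicle and which part of the heat shield you pick):
# rough heat flux for the 28-core 5 GHz chiller demo
power_w = 1000.0                        # assumed package power during the demo
die_area_mm2 = 700.0                    # assumed XCC-class die area
flux_w_per_mm2 = power_w / die_area_mm2
print(round(flux_w_per_mm2, 2))         # ~1.43 W per mm^2
print(round(flux_w_per_mm2 * 100, 1))   # ~142.9 W per cm^2 (1 cm^2 = 100 mm^2)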

It's not yet the end of the world,
but you can see it from here.

How do you not know this?

Attached: 1528235617198.png (1452x892, 1.28M)

MOAR TDP

Attached: 1565335339961.jpg (608x369, 106K)

sweet haruhi

Attached: initial d 3.jpg (1280x720, 83K)

my god what a hip. but why this punchable face

> Cope Lake

I Died

Get REKT AMD fags

Attached: kek.png (2420x1308, 3.6M)

source my dude

For the price of the one on the left you could probably buy 3 of those on the right. Pretty sure any concerns over the

Intel knows they aren't selling these things to GAYMUR retards. They know they've already lost. They're just trying to save face.

>you could probably buy 3 of those on the right
what are you, poor?

jokes aside, those Xeons aren't even socketed, and I'd say you could buy at least 4 64-core Romes at the price of a single dual 9282 system.

anandtech.com/show/14182/hands-on-with-the-56core-xeon-platinum-9200-cpu-intels-biggest-cpu-package-ever

Anandtech delivering the bants.

Attached: cost your limbs.png (882x317, 40K)

Who here from r/AMD?

Does that even count as a CPU? I mean, it's two chips with completely separate sets of I/O lines, so it's not even "glued together dies" like Core 2 Quad.

It's soldered to the motherboard, though.

based and octopilled

That CPU is just a paper launch to save face, Intel won't even sell those, they will probably just brib- I mean give them as gifts to select companies in a pathetic attempt to damage control.

They'll sell them to supercomputer vendors, so basically, no one will ever be able to buy them.

Delete this post now. Jow Forums doesn't know of a world outside of personal gaming computers where r/AMD has claimed VICTORY.

Anandtech is a paid intel shill.

Attached: oh_shit.jpg (281x281, 12K)

It's debatable whether it's going to be successful in that market given that only Intel is offering motherboards for it.

Hot garbage

this is bad... right?

ITT: coollets mad af

go back homo

If they don't have any customers, yes, but they probably aren't that retarded. Probably.

amd's 64c 250W epyc is like $7000

I don't think supercomputer vendors would be interested, since supercomputers consist of a massive number of nodes and the performance per a single CPU package isn't as important as best performance per volume, watt and dollar.
More likely is that they'll sell them to large server vendors to make a few halo models that will be used more for promotion than for real sales.

7 grand for the dual socket compatible ones, the 1P 64-core Rome is 4 grand

>waaaaaaaa intel released a processor that I would have no practical use for and only has extremely high end server applications that pretty much only corporations would have any use for and it costs more than I will make in the next 10 years and draws power that its intended deployment environment handles perfectly well


Grow up child, that processor is not something you will likely ever touch or even get close to in its intended deployment location. And 400 Watts is NOTHING compared to what runs in the rest of the rack that server will be deployed in.

Those are the kind of places that not only will have a dedicated power station next door, but probably 2, and a set of huge generators as backup. They will also have primary and secondary industrial HVAC deployments and all kinds of insane climate control features in addition.

You stepped out of your league son, those fuckers will wreck your shit harder than life, then call a few friends and make sure you will never work in any town anyone has ever heard of again - in the case of fucking up in a colocation facility, I mean this figuratively and literally

Get a load of this retard trying to tardwrangle us into thinking this spectral-silicion abomination is even worth a second sniff when he can't even convince his own boss to invest shekels into buying it.

I don't hear a counter argument here, faggot.

>400 Watts is NOTHING compared to what runs in the rest of the rack that server will be deployed in
HAHAHAHAHAHAHA, you clearly don't know what you're talking about.

>>waaaaaaaa intel released a processor that I would have no practical use for
Everyone who connects to the internet has practical use for decent server chips.

Has the mighty Jayhawk finally come home to roost?

I smell some fresh pasta.

you obviously have never built a server or even seen one in your life.

Does 150-200 watts for a server-grade processor seem more reasonable? There are systems that take 4 of them and run 2-3 1500-2000 watt PSUs (sometimes more). Considering that some of these will mount 24 10k+ rpm disks, multiple fan trays, and in some cases quite a few GPUs - 400 watts is pretty much navel lint.
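Rough budget for a box like that, every figure below is a made-up illustration rather than anything off a spec sheet:
cpus   = 4 * 175   # four CPUs at an assumed ~175 W each
disks  = 24 * 10   # assumed ~10 W per 10k rpm disk
fans   = 6 * 30    # fan trays, assumed
ram_io = 150       # DIMMs, NICs, HBAs, assumed
gpus   = 2 * 250   # a couple of accelerators, if fitted
total = cpus + disks + fans + ram_io + gpus
print(total)                    # ~1770 W for the whole box
print(total - cpus + 4 * 400)   # ~2670 W if you swap in four 400 W parts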

Fun fact: the 64 core Rome processor goes for about $7,200. That means you can get six 64 core Rome processors for the same price as one Xeon.
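Quick math with the prices thrown around in this thread and the 56 cores from the Anandtech link, all taken at face value:
xeon_price, xeon_cores = 50_000, 56
rome_price, rome_cores = 7_200, 64
print(xeon_price // rome_price)      # 6 whole Romes per Xeon (6.94 exactly)
print(xeon_price / xeon_cores)       # ~$893 per Xeon core
print(rome_price / rome_cores)       # ~$112.5 per Rome core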

It's not even 64 cores. Intel wtf are you doing. How is this possible with a 10 billion r&d budget

Attached: Screenshot_20180628-234706~3.png (571x571, 332K)

MOAR CORES YASH r/AMD WE DID IT

Their yields are still shit for the 28 core monstrosities.

You know, you could stop being so negative about that and put that heat to work. Say, use it to heat water for a steam generator to power the machine the processor runs on

Breeding sow

>400 Watts is NOTHING compared to what runs in the rest of the rack that server will be deployed in

90% of the time the rest of the rack will be filled with the same kind of servers.

>BGA
INTEL THAT'S THE WRONG SIDE FOR SOLDER GOD DAMN IT

Rome has more cores, is 1/6th the price, has fewer security flaws, and a lower TDP. Come on. This is DOA and an absolute embarrassment for Intel.

What are the CPU speeds?

It's the thing that Intel lacks.

I thought slow cookers belonged in /ck/.

Probably could use it as a slow cooker.

In dual configuration it would be a pretty fast cooker

Is Cascade-Lake the new Bulldozer?

Their main design team is in Haifa-Israel i thought Jews were supposed to be smart! What is going on?

>HEAT DOES NOT MATTER

All of those apart from the GPUs are a small fraction of a 200W CPU, let alone a 400W CPU.
Then there's the fact that the target deployment for the 9200 series is massive CPU compute density - think 8 "sockets" in a 2U space, which Epyc 7002 can already do for 512 cores at half the power draw, with PCIe 4.0 to improve the limited I/O.
Those 2U4N servers typically can't even fit a single GPU.
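Rough density math for that 2U config, taking the thread's own numbers at face value (the 250 W per 64-core Rome is the figure quoted earlier, the rest is per the Anandtech link):
epyc_cores, epyc_watts = 8 * 64, 8 * 250   # 512 cores, 2000 W
xeon_cores, xeon_watts = 8 * 56, 8 * 400   # 448 cores, 3200 W
print(epyc_cores / epyc_watts)             # ~0.26 cores per watt
print(xeon_cores / xeon_watts)             # ~0.14 cores per watt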

400W per CPU is substantial, it adds up. And data centers have HVAC, but their operators would be happy to save a bit on their A/C bill. This actually is a stupidly wasteful and cost-ineffective solution even in its intended use cases. Some customers could afford it but even they would prefer not to burn their money for no reason, when there are objectively better solutions out there.
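Putting a rough number on "it adds up" - the electricity price, PUE and the 250 W comparison point are all assumptions:
extra_w   = 400 - 250      # assumed TDP gap vs a 64c Rome, watts
price_kwh = 0.10           # assumed electricity price, $/kWh
pue       = 1.5            # assumed facility PUE (cooling/overhead multiplier)
yearly = extra_w / 1000 * 24 * 365 * price_kwh * pue
print(round(yearly))       # ~$197 per socket per year, times thousands of sockets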

The 9900KS was Intel's long-awaited response to the FX-9590, so sure, this can be Intel's Opteron 6287 SE.

It's the anglo management fucking it up

>$50,000 for 400W
Smart move, Intel. Now, no one will be able to make fun of them saying "more heat per dollar"

>You stepped out of your league son, those fuckers will wreck your shit harder than life, then call a few friends and make sure you will never work in any town anyone has ever heard of again - in the case of fucking up in a colocation facility, I mean this figuratively and literally

fucking cringed

Moreover, 400W TDP means that they'll have to use exotic custom coolers, since no existing server cooling solutions can handle this much heat, and it's extremely difficult to tackle >250 watts at a single point with air cooling in general. This will make those servers even more absurdly expensive.
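The air-cooling squeeze in a few lines - the temperatures below are assumed, not from any datasheet:
t_case_max = 85.0   # assumed max allowable case temperature, C
t_inlet    = 35.0   # assumed rack inlet air temperature, C
for power_w in (150, 250, 400):
    r_required = (t_case_max - t_inlet) / power_w   # required case-to-air resistance, C/W
    print(power_w, round(r_required, 3))
# 150 W -> 0.333 C/W, 250 W -> 0.2 C/W, 400 W -> 0.125 C/W:
# the thermal budget shrinks fast, which is why plain air heatsinks run out of headroom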

Just buy a chiller bro

All these poor people coping

Can't wait to run my home server on this

Attached: 1559198086383.jpg (1201x855, 152K)

Just hook it up to the break room fridge

>Watts (Higher is better)

^^^^^
This

These are nothing more than PR/stunt pieces to appease the concerns of investors.

It is no different than some major auto manufacturer making a limited number of "street legal" race cars for sale that aren't practical outside of motorsport.

Sorry, HPC/datacenter buyers avoid these chips like the plague. They are terrible at power efficiency and lose on per-node performance density due to the extra cooling needed to keep them tame.

There's a reason why supercomputers aren't operating 300W+ chips. They tend to stick to ~100-150W land.

Attached: 1554100280364.jpg (614x586, 92K)

and if you count the power usage it makes even more sense

They are probably trying to show that they can achieve the same performance as AMD's equivalent with fewer cores, implying that their architecture is "superior". Too bad the price doesn't scale well.

imagine if amd released a higher clocked 300-350W TDP 64 core that destroys all the new intel cpus just to say fuck you to them

>h-hey guys we'll finally manage to beat a processor from 2018 in 2020

Attached: i hate fags.jpg (640x716, 85K)

god intel is so fucking pathetic

We're reaching levels of heat density that shouldn't even be possible. Surely physics will break down at some point.

Attached: 355FF496-8055-4EA7-A6AA-924A9E9EDEBC.jpg (1757x1129, 177K)

Are intel actually trying to create a black hole?

> Too bad the price doesn't scale well.
nobody - NOBODY - worth anything pays list price. Nobody doing business on this scale (read: targets for mass deployments of Xeons and Epyc) pays anywhere close to list price. List price is designed entirely to filter out scrubs. If you call up your Intel or AMD rep and say you want 100 (or whatever) chips of xyz class they will slash list price by huge margins to get the business.

List price is solely for uneducated chumps who could never afford it to shitpost over.

Yeah, but even at an 80% discount it's still barely competitive. Intel's manufacturing costs actually are higher in real-world terms, so whatever the street price is, it's either uncompetitive or a money-loser for Intel. AMD will probably be willing to negotiate too.
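For reference, the break-even discount using the list prices quoted in this thread:
xeon_list, rome_list = 50_000, 7_200
print(1 - rome_list / xeon_list)   # ~0.856: Intel needs ~86% off list just to reach price parity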

A straight discount does not tell the whole picture - technical support is a massive element of TCO. Make no mistake, outside of the biggest of bois intel is non-competitive, but list price is an irrelevant metric.

In my industry (non Jow Forums related) what goes for 105 bongs is sold to me at 32 bongs because we buy enough for that to still be profitable for the supplier. 90%+ discount is not rare if you shift enough volume.

>400 Watts is NOTHING compared to what runs in the rest of the rack that server will be deployed in.
You're fucking delusional. A rack will contain tons of processors and you'd need a damn good proposal for the use case before the bean counters would even consider allowing you to waste that kind of power. Data centers go so far as to carefully choose one model of hard drive over another with similar specs because it consumes one less watt of power. They do this with everything they buy. They want price/performance and they do the long calculation. Purchase price, support, power consumption, life cycle, consumables, everything. Businesses can't do stupid shit like partake in brand loyalty because that can render them uncompetitive and destroy them.
The only way Intel is getting these chips into new racks over AMD's offering is through exploiting existing contracts or by sucking a lot of company dick.
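A minimal sketch of that "long calculation"; the function is hypothetical and every figure is a placeholder, the point is just the shape of it:
def tco(purchase, watts, years=5, price_kwh=0.10, pue=1.5, support_per_year=500):
    """Purchase + support + electricity, with PUE standing in for cooling overhead."""
    energy_cost = watts / 1000 * 24 * 365 * years * price_kwh * pue
    return purchase + support_per_year * years + energy_cost

print(round(tco(50_000, 400)))   # hypothetical 400 W part at list price
print(round(tco(7_200, 250)))    # hypothetical 250 W part at list price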

my jewdar is going off even though she doesn't really look it so i'm gonna go with 'she's jewish'

>400 WATTS
Unironically what the fuck are they thinking?
it's gonna hit 500+ under load

this is the thing though, the bean counters are just as easily beholden to the jews at intel who have signed them all on supply contracts. the same logic goes the other way, what makes you think that the entire enterprise hardware market isn't run by contractual obligations and even capable of that level of hardware flexibility

The list price is still an effective basis for comparison, since both companies will do the same thing.
Lock-in is something suppliers work very hard to maintain, but if the benefit is great enough companies won't hesitate to break contracts and spend some of the difference on lawyers to fight it out.

Attached: 1547261830774.png (882x317, 59K)

this is pretty fucking funny but 5w/mm2 of thermal flux contained within a few square inches is a completely different ball game from 5w/mm2 of thermal flux over the 100 square meters of the underside of a re-entry vehicle in space.
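Same flux, very different totals once you multiply by area (taking "a few square inches" as roughly 3000 mm2 for illustration):
flux_w_mm2      = 5.0               # the figure quoted above, W per mm^2
chip_area_mm2   = 3000.0            # "a few square inches", assumed
shield_area_mm2 = 100 * 1_000_000   # 100 m^2 expressed in mm^2
print(flux_w_mm2 * chip_area_mm2)   # 15,000 W concentrated in one spot
print(flux_w_mm2 * shield_area_mm2) # 500,000,000 W spread over the whole heat shield
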

i still dont know how they're going to cool a 400w cpu