How will Intel respond to Epyc?

Attached: Intel 9200.png (706x1016, 889K)

Other urls found in this thread:

youtube.com/watch?v=jBxW22JLUmg
servethehome.com/intel-xeon-platinum-9200-series-lacks-mainstream-support/

With a housefire backdoor CPU

Already done as of earlier this week.
>four hundred fucking watt TDP

Attached: 1565321519608.png (728x1060, 533K)

Maybe actually make a competitive product again?
If AMD could come back from literal bankruptcy, I'm sure Intel can.

>more powerful processor uses more power

It's worse when you remember that Intel's idea of TDP doesn't really bear any relation to the power the processor can actually draw. They just pull a number out of their ass.

>waaaaaaaa intel released a processor that I would have no practical use for and only has extremely high end server applications that pretty much only corporations would have any use for and it costs more than I will make in the next 10 years and draws power perfectly consistent with its intended deployment


Grow up, child. That processor is not something you will likely ever touch or even get close to in its intended deployment location. And 400 watts is NOTHING compared to what runs in the rest of the rack that server will be deployed in.

Those are the kind of places that will not only have a dedicated power station next door, but probably two, plus a set of huge generators as backup. They will also have primary and secondary industrial HVAC deployments and all kinds of insane climate control on top.

You stepped out of your league, son. Those fuckers will wreck your shit harder than life, then call a few friends and make sure you never work in any town anyone has ever heard of again. In the case of fucking up in a colocation facility, I mean this figuratively and literally.

>TDP: yes

It's not quite pulled out of their arse; it's supposed to be what it pulls at base clock, IIRC.
So as soon as anything boosts, i.e. you are under load, that TDP goes out the window.
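
Roughly how the PL1/PL2/tau power-limit scheme works: PL1 is the sustained limit (usually set equal to TDP) and PL2 is a higher short-term limit the chip can hold for a tau-second window. A toy sketch in Python, with made-up numbers; the real limits are firmware-configurable and vary by board:

# Toy model of Intel-style power limits. PL2 and TAU here are
# illustrative numbers, not the 9200's actual configuration.
TDP = 400.0        # advertised TDP, watts
PL1 = TDP          # sustained limit, conventionally equal to TDP
PL2 = 1.25 * TDP   # short-term boost limit, often well above TDP
TAU = 8.0          # seconds the package may stay at PL2

def allowed_power(t_under_load):
    """Power the package may draw t seconds into a sustained load."""
    return PL2 if t_under_load < TAU else PL1

for t in (0, 4, 8, 60):
    print(f"t={t:>2}s -> allowed draw {allowed_power(t):.0f} W")

So a "400 W TDP" part can legitimately sit well above 400 W for seconds at a stretch, before boards even start raising the limits in firmware.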

Is this the Jow Forums version of navy seals?

It's fuckin' gold, isn't it?
That's full Intel shill retard posting.

Who's got the link to the original post?

The fuck did you just fucking say about me, you little bitch? I'm trained in AVX512 and have over 300W confirmed TDP.

>TDP
TDP isn't a fucking measure of power draw, you absolute retard. In fact it has nothing to do with power.

uwu that's a nice chippy

>Who's got the link to the original post?

Can you retards, like, stop being retarded for once?

>"grandTotal power this proDuct Pulls" has nothing to do with power

ok buddy

Does this remind anyone else of Bulldozer?
I wonder what that T stands for?

grandToTal

Sweetie, it's not more powerful than AMD's.

400W is quite a lot compared to AMD's offering, which delivers the same performance at half the watts for half the price.

What's funny is that Intel keeps having security bugs whose patches carry a performance penalty of anywhere between 5-10%. So in due time your investment turns to shit, OR you have to expand your server capacity to compensate for the performance hits. Terrible. Their only argument, even though they offer less processing power per core, more power draw per core, and less dollar value per core, is that two such systems cost less than three EPYC systems -- or something to that effect. Was too disgusted by the logic to follow. Basically Intel's Bulldozer.
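
To put rough numbers on the capacity point: losing a fraction x of throughput takes 1/(1-x) - 1 extra capacity to claw back. A quick sketch using the 5-10% range from above:

# Extra server capacity needed to offset a mitigation-induced
# performance loss; the 5-10% range comes from the post above.
for loss in (0.05, 0.10):
    extra = 1 / (1 - loss) - 1
    print(f"{loss:.0%} perf loss -> {extra:.1%} more capacity")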

>that processor is not something you will likely ever touch or even get close to
Of course I won't. I don't want to get third-degree burns.

>Platinum
Should have been named Plutonium.

Intel should sell a home sauna/server hybrid.

stop trying to force that lame pasta

Or a restaurant server that doubles as a kitchen stove.

>TDP isn't a fucking measure of power draw
Yes it is: it's power wasted as heat, which means the processor will at times sustain (and dump as heat) at least its TDP rating, while actual power usage will be higher.
Your secret-sauce damage control is pathetic.

Yes, TDP is always less than peak power draw. But the issue is that Intel is constantly violating even its own stated TDP.

OEM/contract bribes and FUD/shilling.

Actual power draw can go way over TDP. TDP is what you design for when building cooling. It's supposed to be some kind of average.

Intel's maximum advertised density is four processors per rack unit (with some kind of liquid cooling). That's 4 × 400 W × 42U = 67.2 kW per rack in processor TDP alone, and probably closer to 90 kW once memory, drives, PCHs and power-supply efficiency are taken into account.
I have no idea how you remove that much heat from a rack without hooking a jet engine up to it, but there it is.
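
Back-of-the-envelope version of that arithmetic; the four-socket density and 400 W TDP are Intel's figures, while the ~34% overhead for everything else is a guess:

# Rack heat estimate: 4 sockets per 1U, 42 units per rack, 400 W each.
sockets_per_u = 4
tdp_w = 400
units = 42
cpu_kw = sockets_per_u * tdp_w * units / 1000  # 67.2 kW in CPUs alone
total_kw = cpu_kw * 1.34  # guessed overhead: memory, drives, PCHs, PSU losses
print(f"CPUs: {cpu_kw:.1f} kW, whole rack: ~{total_kw:.0f} kW")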

Data centres are fucking loud. It's the AC in the building that does most of the cooling. The fans are just to move the hot air out of the computer.

>Total
>drawn
>power
>HaS nOthIng tO do wiTh PoWeR

TDP means Thermal Design Power, you raging autist.

If the incoming air is 18°C and the outgoing air is, say, 58°C, the main AC will have to pump approximately 2 cubic metres of air per second through the rack, which isn't quite hurricane wind but not far off.
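
Sanity check on that figure, using standard sea-level air properties and the ~90 kW rack estimate from earlier in the thread:

# Airflow needed to carry ~90 kW away at a 40 K air temperature rise.
P = 90_000     # heat to remove, W (thread estimate, not a measurement)
rho = 1.2      # air density at sea level, kg/m^3
c_p = 1005     # specific heat of air, J/(kg*K)
dT = 58 - 18   # outlet minus inlet temperature, K
q = P / (rho * c_p * dT)
print(f"required airflow: {q:.1f} m^3/s")  # ~1.9 m^3/s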

Although I think their idea is that liquid cooling radiators are to be placed outside the rack itself (but then what's the point of having density this high?)

>How will Intel respond to Epyc?
They don't have to. Vendors control the server market. Vendors don't flip on a dime. Vendors like to keep their validation simple by supporting as little as they can, and AMD has a HORRIBLE track record.

>the heat generated by this electrically powered resistive heater has nothing to do with its power consumption

Hopefully they collaborate with another K-pop girl group.

youtube.com/watch?v=jBxW22JLUmg

Please explain to me how an Intel processor burns. Not throttling, an actual fire.

Attached: fomin.jpg (2514x1160, 135K)

servethehome.com/intel-xeon-platinum-9200-series-lacks-mainstream-support/ lol.

Once you exhaust the thermal capacitance, heat output is completely equivalent to power input.
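
That's just conservation of energy: a CPU does no mechanical work, so once the package and heatsink stop warming up, watts in equal watts out. A crude lumped thermal model with illustrative (not measured) constants shows the heat flow converging on the input power:

# Lumped RC thermal model: once the thermal mass saturates,
# heat output equals electrical input. Constants are illustrative.
P_in = 400.0   # electrical input, W
C = 500.0      # thermal capacitance of package + sink, J/K
R = 0.1        # thermal resistance to ambient, K/W
T = 0.0        # temperature rise above ambient, K
for _ in range(600):           # 600 one-second steps
    P_out = T / R              # heat currently leaving to ambient
    T += (P_in - P_out) / C    # the remainder charges the thermal mass
print(f"after 10 minutes: {T / R:.0f} W out for {P_in:.0f} W in")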

How is that even surprising? That's been typical high-end GPU draw for a decade now. Why can't CPUs with that many billions of transistors have a 400W TDP?

>That's your typical high end GPU draw for a decade now.
I actually spent five minutes checking the TDP lists for Nvidia and AMD cards from the last 10 years. None of them has a TDP of 400 watts, except for one obscure card that's two cards glued together for use in one slot. Aptly codenamed Vesuvius.
So no, it's not a typical high-end GPU TDP. Go fuck yourself.

Attached: 2523-pcb-front.jpg (1200x429, 186K)

>2 glued-together high-end CPUs have the same power draw as 2 glued-together high-end GPUs

sounds pretty logical

Retarded, when the competitor doesn't even have such a configuration, is much, MUCH cheaper, draws less power, and provides more performance per core, more value per core, and less power draw per core. It's fucking stupid, retard.

They can't without design changes, which take years to make.
Unless they were secretly working on one, they simply can't.