New era of Netburst

Intel announced a 28 core bingbus (they went back from mesh) 5GHz nightmare that they intend to sell to consumers. No mention of TDP. I wonder why.

>$1500 HPTX motherboard
>compressor-refrigerated liquid cooling loop to keep temps manageable
>3 6-pin and 3 8-pin connectors
>16 phase VRMs
>actively cooled VRMs and controllers plugged into the CPU fan connector
>actually running with a hidden RX570 - displayed SLI 1080 Tis do not show up in their Cinebench report

>probably soldered

Attached: intel bingbus cooling technology.jpg (1616x1080, 491K)

Other urls found in this thread:

anandtech.com/show/12907/we-got-a-sneak-peak-on-intels-28core-all-you-need-to-know
tomshardware.com/news/amd-cpu-gpu-market-share,36592.html
communities.cisco.com/community/technology/security/ngfw-firewalls/blog/2016/03/15/cisco-firepower-4100-and-9300-series-specifications

Attached: MSRP does not include new wiring for your home.jpg (1739x746, 349K)

I wouldn't mind having one of the new i7-8086ks

truly powerful

Attached: this is the power of cascade lake.png (1452x892, 1.28M)

It's awesome but pointless.

The fuck is a bingbus?

dunno, but it sounds lewd.
I wanna stick muh peepee in one, now.

Okay so I'm just trying to understand the power adapter config.
Is it 2x 8 pin CPU...
Then 2x 8 pin PCIE...
and another 2x 6 pin PCIE?

So that it works with a standard high end power supply, instead of requiring a dual configuration?

So it pulls:
65W from the 24-pin motherboard 12V
2x 144W from the 8-pin CPU connectors ??? (unsure on the standard)
2x 150W from 8-pin PCIe
2x 75W from 6-pin PCIe

Is this thing actually supposed to go up to 800W just to meet the power standards?
Any other anons are free to chime in if I'm misunderstanding what we're looking at here.
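Rough math, for anyone who wants to check it. The 8-pin CPU figure below is the guess from above (the actual EPS spec number is higher, commonly quoted around 235-336W per connector), so treat this as a lower bound:

[code]
# Back-of-envelope power budget using the per-connector watt figures quoted above.
# The 8-pin EPS (CPU) value is a guess; the spec figure is higher, so this is a floor.
connectors = {
    "24-pin ATX (12V portion)": (1, 65),   # (count, watts each)
    "8-pin EPS (CPU)":          (2, 144),  # "unsure on standard"
    "8-pin PCIe":               (2, 150),  # PCIe CEM spec figure
    "6-pin PCIe":               (2, 75),   # PCIe CEM spec figure
}

total = 0
for name, (count, watts) in connectors.items():
    subtotal = count * watts
    total += subtotal
    print(f"{count} x {name:26s} = {subtotal:4d} W")
print(f"{'Total deliverable to the board':30s} = {total:4d} W")  # ~803 W
[/code]

Swap in the spec EPS figure and the ceiling is comfortably north of a kilowatt, which would square with the 1600W PSU mentioned further down.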

Attached: 1528201359221 (1).jpg (1300x631, 153K)

Nibba I'm still trying to figure out what a bingbus is let alone what the power supply requirement is gonna be.

Well, remember the power consumption for a 5ghz 18 core 7980XE?
You're probably not wrong.

Attached: 1516146837418.jpg (1280x720, 140K)

>bingbus
Proof? There's no way this isn't just a cherry-picked Xeon Platinum with its mesh; it has the same mounts, 6 memory channels, and it's actually a known chip design. Although I'm not going to say this marketing stunt isn't retarded if they didn't manage to sneak out a 700mm^2 monolith.

>mfw kids will buy this to play Fortnite on it
>mfw I have no f

An i9 7960X (16 core) draws 500W at 4.6 GHz, and I remember seeing a review at 4.7 GHz pulling 600W (with only a 6-pin cable). Considering they have 3 8-pin and 2 6-pin connectors, let's say they can do 500W per 6-pin cable and 650W per 8-pin.
That would still give us roughly 2950W of power draw.

nope it's still mesh, it's literally just a bugfixed Skylake-X XCC die with some minor PDK updates

That said, an anon figured out the heat density of this fucker at 5GHz allcore is comparable to the bottom of a deorbiting Space Shuttle, and it will almost trip a 15A 120V circuit breaker on its own

Attached: eJwNyMsNwyAMANBdGACDMR9nG0QQiZQEBO6pyu7tO76v-sxLbeoQGWsD2M9V-tz1kj5zq7r13q6ax7l06TdkkVyOuz6yAA2nFKMP (552x543, 378K)
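For anyone who wants to check the meme math, here's the back-of-envelope, using the rough numbers floating around this thread (a ~700mm^2 XCC-class die and the ~800W package figure), not anything measured:

[code]
# Heat-flux and breaker check. All inputs are guesses pulled from this thread.
die_area_mm2  = 700        # ~XCC-class monolith, per anons upthread
package_watts = 800        # worst-case package power being thrown around

heat_flux = package_watts / (die_area_mm2 / 100.0)    # W per cm^2
print(f"Heat flux: ~{heat_flux:.0f} W/cm^2")          # ~114 W/cm^2

# 15A at 120V is 1800W, and continuous loads usually get derated to 80% (1440W),
# so CPU + GPU-ish load + chiller (both pure guesses) gets uncomfortably close.
breaker_watts    = 15 * 120
continuous_limit = 0.8 * breaker_watts
system_watts     = package_watts + 300 + 400
print(f"System guess: {system_watts} W vs {continuous_limit:.0f} W continuous ({breaker_watts} W breaker)")
[/code]

No comment on the Space Shuttle part, that one stays a meme.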

The CPU by itself won't, but when you throw in the rest of the system, including the cooling hardware (which by itself is going to draw a pretty big chunk of power), it will.

>>compressor-refrigerated
that's not how it works.

Ring bus, look it up.

Actively cooled MOSFETs are comfy


Just got a new high paying job so I'm gonna buy that shit and slap a monoblock on it. Fuck amd poverty shit.

Attached: 18c.jpg (1186x761, 221K)

Attached: AsusDominusSpecs.jpg (1616x1080, 286K)

Legitimate question, what workload do you have where going EPYC isn't a better option? What about PCIE lanes?

shilling for intel

The memes literally write themselves at this point

Attached: 1523122254504.png (357x330, 126K)

There are already two 8-pins plugged in;
there are also an additional two 8-pins as well as two 6-pins

Jesus fucking christ

That is actually how it works: liquid exits the case and goes into a compressor-refrigerated chiller.

world peace has been reached

Attached: breaking.jpg (1280x720, 297K)

How long before I can get one in a Mac book pro :^)

Literally available now user

Attached: stove_top1.jpg (584x389, 29K)

Never. Apple won't even have the guts to throw an 800W CPU into an iMac, and they never will.

You have to admit it's pretty impressive that Intlel engineers have managed to pack so much heat into such a small space.

Attached: can't call them intcels when they get fucked like this.png (1172x1416, 3.16M)

link the old thread please im a sad wagecuck

>32 core TR
>the presenter keeps mentioning no special cooling

Attached: all smiles.jpg (1465x1094, 172K)

It does matter for some because I can't reasonably stuff a liquid cooler in a 4U with disks and lots of GPU's. Using the Intelferno chip won't even be a choice for my current configuration because of this.

...

I'm looking at those 4 fans that I'm guessing are for VRM cooling. I haven't seen active cooling for motherboard components in like 15 years. My guess is that the slide I saw earlier, the one about a 32-core Threadripper from AMD, is real. I can't imagine Intel would go to this level of desperation unless AMD actually has 32-core consumer parts that clock reasonably high.

I wonder what the cost for a chip and motherboard is going to be... the power delivery on those motherboards is going to have to be so beefy that they could cost more by themselves than an entire computer.

>4
You mean 6. There's a second set of VRMs where the other socket would be, I think.

anandtech.com/show/12907/we-got-a-sneak-peak-on-intels-28core-all-you-need-to-know

Is this basically the CPU-version of pic related?

Whoops

Attached: bitchin fast 3d.jpg (800x1051, 519K)

Not a second set. It's simply a heatsink linked via heatpipe to the main VRM set, because the main VRM heatsink is not enough, even with 4 fucking fans on it, to handle the VRM heat.

You know I'm not really sure if that's any better or worse.

>VRM cooler bigger than an NH-D15
>phase change cooler for CPU

It's so ridiculous that I don't even know what to say.

>1600 Watt PSU and a refrigerator rated for 1770 Watts

Attached: 1513896011334.png (749x577, 519K)

And it was running alongside an RX 570 and some LEDs; the rest is all for the CPU.

>juicy LARPing the post

>It does matter for some because I can't reasonably stuff a liquid cooler in a 4U with disks and lots of GPU's. Using the Intelferno chip won't even be a choice for my current configuration because of this.
You don't need liquid cooling. SuperMicro makes a quad-blade Xeon Phi 2U box. Those chips put out up to 320 watts of TDP each.

It's supposed to be an 800w chip. I'd have to build a wind tunnel into the case.

4x Xeon Phis use more. Also, this won't be an 800 watt chip.

>inb4 OPs retarded picture
It's a retarded overclocking board. Any board which supports Phis, and I wouldn't be surprised if that Asus board does, has similar power connectors (which are actually connected); pic related

Attached: K1SPE_spec_230x184.jpg (230x184, 26K)

Intel could've spent all these years researching new stuff to stay on top and bring real technological advancements to the world, but instead they chose to sit on their thumbs and let AMD catch up. Now all they can do is shitty PR stunts to try to look better.

>new stuff to always stay at the top
You mean like having 8-socket systems, 72-core chips, and on-package 100Gbps low-latency network cards?

>let AMD catch up
AMD doesn't do anything high-end at all

>retarded overclocking board
To be fair, this is a retarded overclocking CPU we're talking about. I'm not convinced it's going into a regular server board.

>being this blind
well, you'll wake up when we start burying your coffin

>28 core chip
>5ghz
>being able to be overclocked at all

>we
Anon, how much of a delusional fanboi are you that you believe you're even remotely connected to anything AMD does?

>mfw not showing the real pics

Attached: house_fire.jpg (1280x960, 514K)

>paper launches [circumcised peepee noises]

Did you not see the "@ 2.7GHz" base clock listed in the Cinebench image up there?

>paper launches
lol ok

Attached: Screen Shot 2018-06-06 at 12.20.25 AM.png (1406x1326, 1.44M)

Hyperpipelines are back?

nope

What the fuck? Actual EPYC has far more performance at 32 cores and a fuckton more PCIe lanes, plus 16 DIMMs per CPU as opposed to the 12 shown here.

INTEL IS DEAD IN THE SERVER MARKET AND IN THE HOME

>heat density of this fucker at 5ghz allcore is comparable to the bottom of a deorbiting Space Shuttle
fucking kek

Attached: 1489525502712.jpg (300x360, 25K)

>muh niche 15 global sales products BTFO'd by Epyc [exacerbated circumcised peepee noises]

>72 core chips
With shit scalability that won't be able to compete with cheaper but similar ebin cpus

So what you're saying is that amd cant even compete

AMD has at most 1% of data center market share

>so what you're saying is that amd cant even compete
wha-what? Wonderful reading comprehension, bravo.

>AMD has at most 1% of data center market share
[citation needed], and even if this is true it doesn't have anything to do with EPYC blowing Xeon shitters out of the water.

>[citation needed]
tomshardware.com/news/amd-cpu-gpu-market-share,36592.html
>AMD gained 1/2 a point of server market share during 2017 (to 1%),
AMD's market share is basically a rounding error of Intel's. ARM chips have more of the data center market than AMD. Anyway, anon, stay ass-blasted that you're this personally invested in something you'll never own or work with.

good work Intelavivjesh, didn't even finish reading the sentence

>blowing xeons out
>1% market share

If it wasn't clear to you, I'm referring to performance, not sales.

Pride before the fall. I happen to work in the industry and there is a pronounced and aggressive push towards EPYC. You have many new blade and unified computing series moving towards EPYC. I just got done talking with a couple of other idiots in the industry, and it's always hilarious to encounter sheer arrogance at the peak right before it all comes crashing down. This is why the history of computing is the way it is: people see things changing from underneath them yet continue to hold steady and toe the line. In tech, that can cost you your company and your future. So, as a competitor, one must laugh when you spell out to your competition that they're fucking up and they still keep on doing it over and over. Market share isn't a static figure. New HW sits in labs for about a year to three before it gets certified in most data centers. Once the certs are provided, the warranty, and service contract shit gets fork lifted the fuck out and replaced. Intel is a special kind of fucked. If they don't believe it, and neither do you, all the better for AMD and its channel partners.

>I'm referring to performance, not sales.
pic related

>aggressive push
>0.5% increase in market share

Attached: Intel-Skylake-SP-to-AMD-EPYC-GROMACS-STH-Small-Case.jpg (863x588, 72K)

>New HW sits in labs for about a year to three before it gets certified in most data centers
Also, this is how I know you've never set foot inside a data center in your life

>posting AVX-512
this is blatant shilling, I'm disappointed I didn't recognize it sooner

>you're not allowed to use 512-bit vector units
>you can only use 256-bit vector units because AMD doesn't have a 512-bit vector unit
>b-b-but EPYC btfos intel

Attached: Screen Shot 2018-06-06 at 1.16.33 AM.png (1398x640, 780K)

Ringbus. It's the interconnect on the CPU between the cores. AMD uses Infinity Fabric, which scales much better.

The power transferable through the plug is specified by the ATX standard in a way that avoids melting shoddy connectors. That's what they will consider when designing these boards.
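For anyone curious, the limit really is per pin. A quick sketch of the arithmetic; the pin counts are from the usual EPS/PCIe pinouts and the ~7A-per-pin value is a typical Mini-Fit-style terminal rating, an assumption rather than a number out of the spec:

[code]
# Deliverable power per connector is roughly (12V pins) x (amps per pin) x 12V.
# ~7A per pin is a typical terminal rating (assumption); PCIe caps are from the spec.
V12 = 12.0

def connector_limit_watts(pins_12v: int, amps_per_pin: float = 7.0) -> float:
    """Electrical ceiling for one connector's 12V pins."""
    return pins_12v * amps_per_pin * V12

print(f"8-pin EPS  (4x 12V pins): ~{connector_limit_watts(4):.0f} W electrically")        # ~336 W
print(f"8-pin PCIe (3x 12V pins): ~{connector_limit_watts(3):.0f} W, but spec-capped at 150 W")
print(f"6-pin PCIe (2x 12V pins): ~{connector_limit_watts(2):.0f} W, but spec-capped at 75 W")
[/code]

Which is why a board expecting this kind of draw sprouts connectors instead of just running one cable hotter.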

>which scales much better.
>which is why their chips don't scale beyond 2 sockets

Sorry, but those have lower heat density.

>aggressive push
>0.5% increase in market share
Re-read :
> New HW sits in labs for about a year to three before it gets certified in most data centers. Once the certs are provided, the warranty, and service contract shit gets fork lifted the fuck out and replaced
Aggressive pushes occur behind closed doors... numbers lag for some time, derplet

>Re-read :
New hardware doesn't sit in a test environment for years before being moved into production. You've never set foot inside a data center in your life.

Attached: The temps are too damn high.jpg (2448x2448, 1.25M)

I work in a data center and we've got idiots from Direct Line, Vadata, M.C. Dean, etc. that will all need to be retrained if AMD ever penetrates the HPC market.

It won't, because status quo is the name of the game and nobody dares to stray from it. It's an extremely competitive sector and you've got thousands of people waiting in line to take your 150k-a-year job. You don't fuck around and toss out existing and future build plans just to switch to AMD just because they're nearly catching up now.

> This is how I know you work in IT, aren't familiar with hardware validation for data center deployments, and are just a small-fry end user who buys it after the fact (or during) and fucks your company's shit up.

>I'm a rack monkey
>I make 150k a year
Anyone who makes 150k a year doesn't actually work inside a data center. And you've clearly never set foot in one either.

stay mad user

data center jobs are super easy to get, idk why you put them on a pedestal


must suck being a retard

AVX processing is a fucking meme except for a slim number of use cases.

And wtf is this sorry-ass Intel infographic? Who the fuck does Layer 3 routing on fucking server compute hardware? That shit is run on proper network hardware in a data center, and in a number of big-time data centers many L3 services run encapsulated on a new class of L2 protocols that can run on whitebox hardware. Furthermore, who made this diagram when a single die is more than capable of handling one of the fucking 2x25G NICs and another is capable of handling the other? ASK ME HOW I KNOW? Because you can run this shit on TR and beat these numbers.
What kind of ghetto-ass data center does L3 forwarding on server hardware?

>parallel processing has a slim number of use cases
>like encryption, video encoding, or basically anything cpu intensive which can be parallelized and benefits from high cpu core counts

>Who the fuck does Layer 3 routing on fucking server compute hardware
Cisco, Palo Alto, Juniper

communities.cisco.com/community/technology/security/ngfw-firewalls/blog/2016/03/15/cisco-firepower-4100-and-9300-series-specifications

>Furthermore, who made this diagram when a single die is more than capable of handling one of the 2x25G NICs and another is capable of handling the other.
You don't understand what 100GbE is, do you? It is four 25G channels bonded together, just as 40GbE is four 10G channels bonded together.

>ASK ME HOW I KNOW?
You clearly don't know how 40/100GbE works.

>What kind of ghetto ass data center does L3 forwarding on server hardware?
Any data center with a firewall.

Each of the two sockets accommodates 4 dies, so it's an 8-die platform. They meant that it scales better than ringbus at the same core counts, though of course only when multitasking. Intel has the mesh for this purpose, seen in Skylake-X models; it's worse for gaming than the older ringbus.

Hardware validation isn't done by end users or IT specialists, you low-grade pleb. It's done by the companies that integrate the CPU into enterprise solutions that people far down the food chain, like you, purchase and slide into the racks... if that's even at your pay grade.

The hardware sits in their labs for years at OEMs and channel partners, and only after you have enterprise solutions do you get it, at which point any IT group worth a damn spends some quarters validating and filing bug reports to get firmware fixes so the whole operation doesn't come to a halt due to a bug. Only then do big purchases occur.

This is confirmed by EPYC having been in data centers and validation environments with exclusive partners for years before plebs like you found out about it.

> You've never set foot inside a data center in your life.
Stop talking above your pay grade. I'm far higher up the food chain.

>two sockets
> gaming
Are you pretending to be retarded, or did video games suddenly become NUMA-aware?

They sort of have to; a single Skylake-X mesh chip runs better with multiple NUMA groups
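If you want to see what your own box exposes, a minimal sketch assuming a Linux system with the standard sysfs layout (a desktop chip will normally show one node; a 2P box, or an SP part with sub-NUMA clustering enabled in firmware, shows several):

[code]
# List NUMA nodes and their CPU ranges via sysfs.
import glob
import os

for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = os.path.basename(node_dir)
    with open(os.path.join(node_dir, "cpulist")) as f:
        print(f"{node}: CPUs {f.read().strip()}")
[/code]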

2p is a natural progression for enthusiasts

>The hardware sits in their labs for years at OEMs
So what you're saying is that no one is shipping appliances with Scalable Xeons. Right… pic related

Attached: Screen Shot 2018-06-06 at 1.48.46 AM.png (3026x1470, 800K)

You don't understand what NUMA is or why it matters, do you?

Attached: Screen Shot 2018-06-06 at 1.50.01 AM.png (1414x650, 949K)

> I work in a data center
I work at one of the largest data center hardware providers, with a market cap above $200 billion.
> 150k year job
Try again.
Also, my work is present in just about every data center in the world.
Why be mad? I know where you're at on the ladder.
I phone your types up at my company's internal dev labs when I need my chassis hard reset. We all have our jobs and roles, anon.

Attached: 1525270434026.jpg (502x493, 100K)

I thought I was clear. For gaming, ringbus is better. For heavy multitasking, meshbus scales better. I was clarifying the meaning of "scales better". You might have misunderstood; the post was pretty hasty.

>For gaming, ringbus is better
Not if games start using 8 cores; they're already starting to use 6. Anything above 6 and ringbus goes to shit. Quad cores are obsolete.

>I work at one of the largest data center hardware providers with a market cap above 200Billion.
>I phone your types up at my company's internal dev labs when I need my chassis hard reset.
I'm sure you do, which is why they don't have BMCs or managed PDUs allowing anyone to do this remotely.

FFS, I have a pair of managed PDUs at home, and you're LARPing that you don't have them at work

Only at around 10 cores does ringbus start to fall apart, AFAIK, so there is still some time left.

>parallel processing has a slim number of use cases
And you're welcome to provide the slim number of use cases for AVX CPU-based processing
>Who the fuck does Layer 3 routing on fucking server compute hardware
So, appliance firewalls with small install numbers...
Yeah, these really sell like hotcakes. Intel is a game changer; it's taking over the firewall business
> You dont understand what 100GbE is do you? It is four 25G channels bonded together, just as 40GbE is four 10G channels bonded together.
I understand exactly what it is at the switch, and I understand, per your own diagram:
What a fucking breakout cable is. It reduces it down to 2x25G per NIC, occupying two PCIe slots which can be intelligently slotted (not how Intel has it depicted) so that each NIC hits a set die. As such, you have 2x25G feeding one die, which is more than capable of handling it. No trickery involved. I know this because I can run this configuration on a Threadripper with no such issues.
> You clearly don't know how 40/100GbE works.
I clearly do and you need to cut the larp out
> Any data center with a firewall.
Aka small box appliances that aren't game changers.
Aka the kind of hardware that gets custom-made by people like me, based on a CPU churning through a lab for years and custom enterprise HW solutions evolving around it.

We done measuring dicks faggot?
> MINE's BIGGER THAN YOURS

Attached: ive_seen_things.jpg (1280x720, 35K)

>desperate Intel shill trying to defend CCL-SP when even Intel admitted EPYC will hit them H2 2018
But how?

What I'm saying, pleb, is that I work higher up the food chain, developing the hardware/software/firmware that you just pictured, and what I am speaking about is from that level, not from much further down the food chain where you slide shit into a rack at a data center. The validation I spoke of happens for years at a level far above your pay grade, so that all you have to do is slide it into a rack and check temperatures when sensor alarms go off, or call up tier-one support when something more serious goes wrong (if you have that kind of contract) and wait until the call reaches my desk (an actual engineer).

Put your fucking dick away.