Intel announced a 28 core bingbus (they went back from mesh) 5GHz nightmare that they intend to sell to consumers. No mention of TDP. I wonder why.
>$1500 HPTX motherboard
>compressor-refrigerated liquid cooling loop to keep temps manageable
>3 6-pin and 3 8-pin connectors
>16 phase VRMs
>actively cooled VRMs and controllers plugged into the CPU fan connector
>actually running with a hidden RX570 - displayed SLI 1080 Tis do not show up in their Cinebench report
dunno, but it sounds lewd. I wanna stick muh peepee in one, now.
Ian Turner
Okay so I'm just trying to understand the power adapter config. Is it 2x 8 pin CPU... Then 2x 8 pin PCIE... and another 2x 6 pin PCIE?
So that it works with a standard high end power supply, instead of requiring a dual configuration?
So it pulls:
65W from the 24-pin motherboard 12V
2x 144W from the 8-pin CPU connectors??? (unsure on the standard)
2x 150W from the 8-pin PCIe
2x 75W from the 6-pin PCIe
Is this thing actually supposed to go up to 800w just to meet the power standards? Any other anons are free to chime in if I'm misunderstanding what we're looking at here.
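For a sanity check on that ~800W figure, here's the connector math from the post above as a quick sketch. The 65W board draw and 144W-per-EPS numbers are the poster's guesses; the 150W (8-pin) and 75W (6-pin) limits are the PCIe spec values.

```python
# Back-of-envelope total from the guessed connector budget above.
# 65W board draw and 144W per EPS 8-pin are the poster's guesses;
# 150W (8-pin) and 75W (6-pin) are the PCIe spec limits per connector.
connectors = {
    "24-pin ATX 12V":  (1, 65),
    "8-pin EPS (CPU)": (2, 144),
    "8-pin PCIe":      (2, 150),
    "6-pin PCIe":      (2, 75),
}

total = sum(count * watts for count, watts in connectors.values())
print(f"worst-case budget: {total}W")  # 65 + 288 + 300 + 150 = 803W
```

So yes, if those per-connector numbers hold, the config lands right around the 800W mark.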
>bingbus
Proof? There's no way this isn't just a cherry-picked Xeon Platinum with its mesh: it has the same mounting, 6 memory channels, and it's a known chip design. Though I'm not going to say this marketing stunt isn't retarded if they didn't manage to sneak out a 700mm^2 monolith.
John Davis
>mfw kids will buy this to play Fortnite on it >mfw I have no f
Adrian Anderson
An i9-7960X (16 core) draws 500W at 4.6 GHz, and I remember seeing a review hit 600W at 4.7 GHz (with only a 6-pin cable). Considering they have 3 8-pin and 2 6-pin connectors, let's say they can do 500W per 6-pin cable and 650W per 8-pin. That would still give us 2950W of power delivery.
Luke Hughes
Nope, it's still mesh; it's literally just a bugfixed Skylake-X XCC die with some minor PDK updates.
That said, an anon figured out the heat density of this fucker at 5GHz all-core is comparable to the bottom of a deorbiting Space Shuttle, and it will almost trip a 15A 120V circuit breaker on its own.
The CPU by itself won't, but once you throw in the rest of the system, including cooling hardware (which by itself is going to take a pretty big chunk of power), it will.
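Rough numbers on the breaker claim. The 800W CPU figure is the one floated in this thread; the ~400W for the rest of the rig (GPU, fans, compressor cooling) is just a placeholder guess.

```python
# 15A 120V breaker limits vs. a guessed whole-system draw.
breaker_watts = 15 * 120                 # 1800W absolute limit
continuous_watts = 0.8 * breaker_watts   # 1440W continuous per the NEC 80% rule

cpu_watts = 800       # figure floated in this thread
rest_of_rig = 400     # placeholder guess: GPU, fans, compressor cooling
system_watts = cpu_watts + rest_of_rig

print(system_watts, "/", continuous_watts)  # 1200 / 1440.0, uncomfortably close
```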
Lincoln Richardson
>>compressor-refrigerated
that's not how it works.
Colton Martin
Ring bus, look it up.
Easton Howard
Actively cooled MOSFETs are comfy
Just got a new high paying job so I'm gonna buy that shit and slap a monoblock on it. Fuck amd poverty shit.
It does matter for some, because I can't reasonably stuff a liquid cooler in a 4U with disks and lots of GPUs. Using the Intelferno chip won't even be a choice for my current configuration because of this.
Asher Sanchez
...
Xavier Long
I'm looking at those 4 fans that I'm guessing are for VRM cooling. I haven't seen active cooling for motherboard components in like 15 years. It would be my guess that the slide I saw earlier that looked to be talking about a 32 core Threadripper from AMD is real. I can't imagine Intel would go to this level of desperation unless AMD actually has 32 core consumer parts that clock reasonably high.
I wonder what the cost for a chip and motherboard is going to be... the power delivery on those motherboards is going to have to be so beefy that they could cost more by themselves than an entire computer.
Kayden Diaz
>4
You mean 6. There's a second set of VRMs where the other socket was, I think.
Not a second set. It's simply a heatsink linked via heatpipe to the main VRM set, because the main VRM heatsink isn't enough to handle the VRM heat even with 4 fucking fans on it.
Caleb Reyes
You know I'm not really sure if that's any better or worse.
Ethan Lopez
>VRM cooler bigger than an NH-D15
>phase change cooler for CPU
It's so ridiculous that I don't even know what to say.
Charles Bell
>1600 Watt PSU and a refrigerator rated for 1770 Watts
and it was running alongside an RX 570 and some LEDs; the rest is all for the CPU
Ayden Diaz
>juicy LARPing the post
Jaxon Torres
>It does matter for some because I can't reasonably stuff a liquid cooler in a 4U with disks and lots of GPU's. Using the Intelferno chip won't even be a choice for my current configuration because of this.
You don't need liquid cooling. SuperMicro makes a quad-blade Xeon Phi 2U box. Those chips put out up to 320 watts TDP each.
Juan Jackson
It's supposed to be an 800w chip. I'd have to build a wind tunnel into the case.
Mason Gray
4x Xeon Phis use more. Also, this won't be an 800 watt chip.
>inb4 OP's retarded picture
It's a retarded overclocking board. Any board which supports Phis (and I wouldn't be surprised if that Asus board does) has similar power connectors (which are actually connected). pic related
Intel could've spent all these years researching new stuff to stay on top and bring real technological advancements to the world, but instead they chose to sit on their thumbs and let AMD catch up. Now all they can do is shitty PR stunts to try to look better.
Parker Nelson
>new stuff to always stay at the top
You mean like having 8-socket systems, 72-core chips, and on-package 100Gbps low-latency network cards?
>let AMD catch up
AMD doesn't do anything high-end at all
Ryan Miller
>retarded overclocking board
To be fair, this is a retarded overclocking CPU we're talking about. I'm not convinced it's going into a regular server board.
Zachary Wood
>being this blind well, you'll wake up when we start burying your coffin
Jacob Sullivan
>28 core chip
>5GHz
>being able to be overclocked at all
>we
anon, how much of a delusional fanboi are you that you believe you're even remotely connected to anything AMD does?
>muh niche 15-global-sales products BTFO'd by Epyc
[exacerbated circumcised peepee noises]
Christian Bell
>72 core chips
With shit scalability that won't be able to compete with cheaper but similar ebin CPUs
Michael Roberts
So what you're saying is that AMD can't even compete.
AMD has at most 1% of data center market share.
Camden Ward
>so what you're saying is that amd cant even compete
Wha-what? Wonderful reading comprehension, bravo.
>AMD has at most 1% of data center market share
[citation needed], and even if this is true, it doesn't have anything to do with EPYC blowing Xeon shitters out of the water.
Dominic Gonzalez
>[citation needed]
tomshardware.com/news/amd-cpu-gpu-market-share,36592.html
>AMD gained 1/2 a point of server market share during 2017 (to 1%)
AMD's market share is basically a rounding error of Intel's. ARM chips have more of the data center market than AMD. Anyway, anon, stay assblasted that you're this personally invested in something you'll never own or work with.
Daniel Foster
good work Intelavivjesh, didn't even finish reading the sentence
Nathaniel Ortiz
>blowing xeons out
>1% market share
Brandon Gutierrez
If it wasn't clear to you, I'm referring to performance, not sales.
David Johnson
Pride before the fall. I happen to work in the industry, and there is a pronounced and aggressive push towards EPYC. Many new blade and unified computing series are moving towards EPYC. I just got done talking with a couple of other idiots in the industry, and it's always hilarious to encounter sheer arrogance at the peak, right before it all comes crashing down. This is why the history of computing is the way it is: people see things changing underneath them yet continue to hold steady and toe the line. In tech, that can cost you your company and your future. So, as a competitor, one must laugh when you spell out to your competition that they're fucking up and they still keep doing it over and over. Market share isn't a static figure. New HW sits in labs for one to three years before it gets certified in most data centers. Once the certs are provided, the warranty and service contract shit gets forklifted the fuck out and replaced. Intel is a special kind of fucked. If neither they nor you believe so, all the better for AMD and its channel partners.
Elijah Martinez
>I'm referring to performance, not sales.
pic related
>New HW sits in labs for about a year to three before it gets certified in most data centers
Also, this is how I know you've never set foot inside a data center in your life.
Brayden Gutierrez
>posting AVX-512
this is blatant shilling, I'm disappointed I didn't recognize it sooner
Isaiah Bennett
>you're not allowed to use 512 bit vector units
>you can only use 256 bit vector units because AMD doesn't have a 512 bit vector unit
>b-b-but epyc btfos intel
Ringbus. It's the interconnect on the CPU between the cores. AMD uses Infinity Fabric, which scales much better.
Jonathan Watson
The power transferable through each plug is specified by the ATX standard in a way that avoids melting shoddy connectors. That's what they will have considered when designing these boards.
Christopher Johnson
>which scales much better.
>which is why their chips don't scale beyond 2 sockets
Jackson Thomas
Sorry, but those have lower heat density.
Bentley Jackson
>aggressive push
>0.5% increase in market share
Re-read:
>New HW sits in labs for about a year to three before it gets certified in most data centers. Once the certs are provided, the warranty and service contract shit gets forklifted the fuck out and replaced
Aggressive pushes occur behind closed doors. Numbers lag for some time, derplet.
Mason Hall
>Re-read :
New hardware doesn't sit in a test environment for years before being moved into production. You've never stepped foot inside a data center in your life.
I work in a data center, and we've got idiots from Direct Line, Vadata, M.C. Dean, etc. that will all need to be retrained if AMD ever penetrates the HPC market.
It won't, because status quo is the name of the game and nobody dares stray from it. It's an extremely competitive sector, and you've got thousands of people waiting in line to take your $150k/year job. You don't fuck around and toss out existing and future build plans just to switch to AMD because they're nearly catching up now.
Benjamin Sullivan
>
This is how I know you work in IT, aren't familiar with hardware validation for data center deployments, and are just a small-fry end user who buys it after the fact (or during) and fucks your company's shit up.
Jeremiah King
>I'm a rack monkey
>I make 150k a year
Anyone who makes $150k a year doesn't actually work inside a data center. And you've clearly never stepped foot in one either.
stay mad, anon
Lucas Foster
data center jobs are super easy to get, idk why you put it on a pedestal
must suck being a retard
Grayson Gutierrez
AVX processing is a fucking meme except for a slim number of use cases.
And WTF is this sorry-ass Intel infographic? Who the fuck does Layer 3 routing on fucking server compute hardware? That shit runs on proper network hardware in a data center, and in a number of big-time data centers, many L3 services run encapsulated on a new class of L2 protocols that can run on cheap whitebox hardware. Furthermore, who made this diagram, when a single die is more than capable of handling one of the fucking 2x25 NICs and another is capable of handling the other? ASK ME HOW I KNOW? Because you can run this shit on TR and beat these numbers. What kind of ghetto-ass data center does L3 forwarding on server hardware?
Jacob Ortiz
>parallel processing has a slim number of use cases
>like encryption, video encoding, or basically anything CPU-intensive which can be parallelized and benefits from high CPU core counts
>Who the fuck does Layer 3 routing on fucking server compute hardware
Cisco, Palo Alto, Juniper.
>a single die is more than capable of handling one of the fucking 2x25 nics and another capable of handling the other
You don't understand what 100GbE is, do you? It is four 25G channels bonded together, just as 40GbE is four 10G channels bonded together.
>ASK ME HOW I KNOW?
You clearly don't know how 40/100GbE works.
>What kind of ghetto ass data center does L3 forwarding on server hardware?
Any data center with a firewall.
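For what it's worth, the lane math both sides are arguing about is just multiplication. The 25G/10G lane rates are the standard Ethernet ones; the 2x25G breakout split is the scenario from the diagram being discussed.

```python
# Bonded-lane arithmetic for 40/100GbE and the breakout split discussed above.
def aggregate_gbps(lanes: int, lane_gbps: int) -> int:
    """Total port rate from bonded lanes, e.g. 100GbE = 4 x 25G."""
    return lanes * lane_gbps

assert aggregate_gbps(4, 25) == 100  # 100GbE
assert aggregate_gbps(4, 10) == 40   # 40GbE
# A 100G port broken out as 2x(2x25G) leaves each NIC/die handling:
print(aggregate_gbps(2, 25), "Gbps per NIC")
```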
Mason White
Each of the two sockets accommodates 4 dies, so it's an 8-die platform. They meant that it scales better than ringbus at the same core counts, of course only when multitasking. Intel has the mesh for this purpose, seen in Skylake-X models, and it's worse for gaming than the older ringbus.
Gabriel Morris
Hardware validation isn't done by end users or IT specialists, you low-grade pleb. It's done by the companies that integrate the CPU into enterprise solutions, which people far down the food chain like you purchase and slide into the racks... if that's even at your pay grade.
The hardware sits for years in labs at OEMs and channel partners, and only after that do you get enterprise solutions, which any IT group worth a damn then spends some quarters validating and filing bug reports on to get firmware fixes, to ensure the whole operation doesn't come to a halt due to a bug. Only then do big purchases occur.
This is confirmed by EPYC having been in data centers and validation environments with exclusive partners for years before plebs like you found out about it.
>Youve never stepped foot inside a data center in your life.
Stop talking above your pay grade. I'm far higher up the food chain.
Ian Martin
>two sockets
>gaming
Are you pretending to be retarded? Or did video games suddenly become NUMA-aware?
Kevin Morris
They sort of have to; a single Skylake-X mesh chip runs better with multiple NUMA groups.
Nolan Edwards
2p is a natural progression for enthusiasts
Aiden Cox
>The hardware sits in their labs for years at OEMs
So what you're saying is that no one is shipping appliances with Scalable Xeons. Right… pic related
>I work in a data center
I work at one of the largest data center hardware providers, with a market cap above $200 billion.
>150k year job
Try again. Also, my work is present in just about every data center in the world. Why be mad? I know where you're at on the ladder. I phone your types up at my company's internal dev labs when I need my chassis hard-reset. We all have our jobs and roles, anon.
I thought I was clear. For gaming, ringbus is better. For heavy multitasking, mesh scales better. I was clarifying the meaning of "scales better"; you might have misunderstood. The post was pretty hasty.
Connor Baker
>For gaming, ringbus is better
Not if games start using 8 cores; they're already starting to use 6. Anything above 6 cores and ringbus goes to shit. Quad cores are obsolete.
Dominic Fisher
>I work at one of the largest data center hardware providers with a market cap above 200Billion.
>I phone your types up at my company's internal dev labs when I need my chassis hard reset.
I'm sure you do, which is why they don't have BMCs or managed PDUs allowing anyone to do this remotely.
FFS, I have a pair of managed PDUs at home, and you're LARPing that you don't have them at work.
Jose Martin
Only at around 10 cores does ringbus become deprecated afaik, so there is still some time left.
Jordan Morgan
>parallel processing has a slim number of use cases
And you're welcome to provide the slim number of use cases for AVX CPU-based processing.
>Who the fuck does Layer 3 routing on fucking server compute hardware
So, firewalls with small appliance install numbers... Yeah, these really sell like hotcakes. Intel is a game changer, it's taking over the firewall business.
>You dont understand what 100GbE is do you? It is four 25G channels bonded together, just as 40GbE is four 10G channels bonded together.
I understand exactly what it is at the switch, and I understand, per your own diagram, what a fucking breakout cable is: it reduces it down to 2x25G per NIC, occupying two PCIe slots which can be intelligently slotted (not like Intel has it depicted) so that each NIC hits a set die. As such, you have 2x25G feeding one die, which is more than capable of it. I know this because I can run this configuration on a Threadripper with no such issues.
>You clearly dont know how 40/100GbE works.
I clearly do, and you need to cut the LARP out.
>Any data center with a firewall.
Aka small-box appliances that aren't game changers. Aka the kind of hardware that gets custom-made by people like me, based on a CPU churning through a lab for years and custom enterprise HW solutions evolving around it.
We done measuring dicks?
>MINE'S BIGGER THAN YOURS
>desperate Intel shill trying to defend CCL-SP when even Intel admitted EPYC will hit them H2 2018
But how?
Lincoln Ward
What I'm saying, pleb, is that I work higher up the food chain, developing the hardware/software/firmware that you just pictured, and I'm speaking from that level, not from much further down the food chain where you slide shit into a rack at a data center. The validation I spoke of happens for years at a level far above your pay grade, so that all you have to do is slide it into a rack and check temperatures when sensor alarms go off. Or call up tier-one support when something more serious goes wrong (if you have that kind of contract) and wait until the call reaches my desk (an actual engineer's).