/scg/ SuperComputers General

America won again.
Linux kernel won again.

>200 PETAFLOPS

omgubuntu.co.uk/2018/06/summit-supercomputer-red-hat-linux

Attached: ibm-supercomputer.jpg (2894x1929, 764K)


How much crypto would this mine?

>2 power9s and 6 tesla V100s per node
>4608 nodes
Sounds like the GPUs are doing most of the heavy lifting. Why use power9s? I thought they were dogshit outside of a datacenter.

What a waste of money.

And Sierra is supposed to be operational soon too. 125 petaflops for nuclear weapons research.

NVLink gives a fast, memory-coherent connection from the Power9s to the Volta GPUs, plus massive I/O.

Your life is a waste of money you braindead boomer. I bet you'd rather feed the poor.

Recommended requirements for GNOME Shell

When is brain emulation happening? This has to be enough to run something close to a brain simulation, or at least to model something close to one.

9,216 IBM POWER9 CPUs

202,752 POWER9 cores

27,648 NVIDIA Volta GPUs

Summit also has 10 petabytes of memory and 250 petabytes of storage. Oh, and those internal NVIDIA GPUs? They communicate with the IBM CPUs over an NVIDIA NVLink interconnect, which has throughput speeds of 300GB/s, a colossal 10x faster than PCIe.
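Those totals line up with the per-node breakdown quoted earlier in the thread (2 power9s and 6 V100s per node, 4608 nodes); quick sanity check, assuming the 22-core POWER9 module mentioned later:

```python
# Sanity-check Summit's headline totals from the per-node breakdown.
# Figures from the thread: 4608 nodes, 2 POWER9 CPUs + 6 V100 GPUs per node,
# 22 cores per POWER9 module.
NODES = 4608
CPUS_PER_NODE = 2
GPUS_PER_NODE = 6
CORES_PER_CPU = 22

cpus = NODES * CPUS_PER_NODE     # 9,216 POWER9 CPUs
gpus = NODES * GPUS_PER_NODE     # 27,648 Volta GPUs
cores = cpus * CORES_PER_CPU     # 202,752 POWER9 cores

print(cpus, gpus, cores)  # 9216 27648 202752
```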

Gonna have to say that if there's ever a crazy apocalypse scenario, I'm going to find wherever they're keeping Summit. The backup power system for a machine like that could probably power a small settlement for a few years.

>Teslas make up 97% of DPFP raw performance
>less than 50% of 15MW power consumption
>each water cooled 4GHz power9 22-core chip can't even crank out a single TFLOP of DPFP raw performance
Bravo IBM, you couldn't disappoint us more even if you tried

Attached: Comb12062018104540.jpg (1440x2560, 465K)
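That 97% figure actually checks out as a rough estimate. Sketch below, assuming ~7 TFLOPS DP per V100 and 22 cores x 4 GHz x 8 DP FLOPs/cycle per POWER9; both per-chip numbers are my assumptions, not official specs:

```python
# Rough share of Summit's double-precision FLOPS coming from the GPUs.
# Per-chip throughput figures are assumptions, not official spec numbers.
GPUS, CPUS = 27648, 9216
GPU_TFLOPS = 7.0                    # assumed DP TFLOPS per V100
CPU_TFLOPS = 22 * 4.0 * 8 / 1000    # 22 cores x 4 GHz x 8 DP FLOPs/cycle

gpu_total = GPUS * GPU_TFLOPS       # ~193.5 PFLOPS from GPUs
cpu_total = CPUS * CPU_TFLOPS       # ~6.5 PFLOPS from CPUs
share = gpu_total / (gpu_total + cpu_total)
print(f"{share:.0%}")  # 97%
```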

>IBM still makes hardware
N-Nani?...Someone explain

So these shitty CPUs are pretty much bridges between the GPUs? lmao

Attached: 1520980633175.png (171x278, 93K)

> not understanding what a fucking CPU is used for in such a setup
Fucking derp

What's funny? LOL, it's pretty much what CPUs are meant for in such applications

Being this much of an idiot...

Attached: power9_interconnects.jpg (500x311, 20K)

I know they're no longer the horses pulling the carriage, but they could've used ebyns, wouldn't have had to use fucking water cooling, and could've gotten power consumption down to like 10MW. Isn't optimizing for lower power consumption supposed to be a priority?

Attached: 1523903571613.png (423x384, 149K)

Or a custom ARM implementation (Tegras?)

NVLink > PCIe, I guess.

Power9 has NVLink; that's what this was designed around. Epyc has less I/O bandwidth (despite having many more PCIe lanes)

What the fuck is this even used for?

Look up how IBM zSeries mainframes work. With AMD64, you can't remove nodes and upgrade parts of the machine without the entire system going down. With POWER, you can. POWER is far more resilient than mass-market AMD64 processors. Also, NVLink.

gaymes

Not sure they could handle the I/O. Anyway, power9 was supposed to be hot shit (hehe) and pose a threat to x86, yet they're probably consuming a quarter of the power of the entire fucking supercomputer, with those high clocks and all, just to barely match x86 server performance at the cost of like 500W per chip.

Does it really matter irl? 16GB/s is a ton of bandwidth, and some buttcoin miners even have their GPUs connected through 10Gbps USB 3.1 risers.
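For scale, a rough side-by-side of the host-link bandwidths being argued about; the NVLink number is the aggregate figure quoted upthread, and the riser number assumes a typical x1 PCIe-over-USB-cable mining riser:

```python
# Approximate per-GPU host-link bandwidth in GB/s. All values are rough:
# NVLink 2.0 aggregate as quoted in this thread, PCIe 3.0 x16 theoretical,
# and a typical x1 mining riser (PCIe x1 carried over a USB 3.x cable).
links = {
    "NVLink 2.0 (aggregate)": 300.0,
    "PCIe 3.0 x16": 16.0,
    "x1 mining riser": 1.0,
}
for name, gbs in links.items():
    print(f"{name:>24}: {gbs:6.1f} GB/s")
```

Mining barely touches the host link, which is why risers are fine there; see where NVLink sits relative to them.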

installing gentoo

forgot pic related

Attached: ethereum-mining-rig.jpg (820x614, 109K)

waifu2x rendering

Shit would be cash. I'd love to use it on something like Aria.

Just been to the Supermicro and Dell websites; both offer Xeon servers with NVLink'd V100s. No need for freetarded POWER or stinky Epyc

Cache coherent means easy programming model

fudzilla.com/news/graphics/41420-nvidia-nvlink-2-0-arrives-in-ibm-servers-next-year

slideshare.net/albertspijkers/power9-vug

fudzilla.com/news/ai/46244-ibm-ac922-power-9-server-has-6-nvidia-v100

Attached: IBMA922diag.jpg (1372x763, 146K)

power9s don't have hardware backdoors

2

They probably have hardware acc backdoors.

>omgubuntu.co.uk/2018/06/summit-supercomputer-red-hat-linux
nice. but gpus will be limited to running CUDA/OCL code so it would also mean a lack of versatility in general purpose computing.

>evaporative cooling towers

Intel inside

Well that fucking blows. Too bad intel couldn't unfuck their x86 AVX-512 frankensteins fast enough; they looked promising and more friendly to general purpose computing.

even worse, power9

Attached: Screenshot_2018-06-12-12-01-38.png (720x1280, 400K)

Crysis ultra 16K@3600fps

You're joking right?
Look at the pic I posted here: see the NVLink PHY on die? Ebyns don't have that.
As for cooling, it's mainly the Nvidia chips, which outnumber the power9s and consume more power.

Get back to school please

Weather simulations, atomic research, and all sorts of other non-listed large scale compute tasks. They also contract portions of it out to private companies

Damn, it still is summer. Power9 does not consume a lot of power and wasn't supposed to replace x86 at the consumer level. At the enterprise level, it holds its own weight. The aggregate power consumption covers CPUs, networking gear, and everything else, goofball, not just some dumbass MS calculator subtraction math.

Please link me to a source proving nvlink will actually matter, because some HPCs used for mining have been using 10Gbps USB 3.1 riser setups for a while now.

Also, V100s have a max power consumption of 250 watts, and the specs state even the NVLink variants use passive cooling for their 7 TFLOP DPFP performance; see for rough math. Power9s are absolute housefires here.

How old are you?
Do you have a STEM Degree?
Do you have any fucking clue what HPC is and subsequent architectures?
Do you have any clue what you're talking about?
Are you being unironic right now?

>However, part of the partnership means there’s room to tweak the architecture along the way. For example, NVIDIA recognized during the Summit project that AI would become an important tool for science, and continued to innovate, delivering beyond the original 2014 specifications for Summit.
>Engineers included 640 Tensor Cores in each Volta GPU in Summit, allowing scientists the option to use FP16 precision not only in AI applications but also for simulations that depend heavily on matrix multiplications.
>This flexibility of simulation and AI in one single system makes Summit unique in its design.

Each node consumes an average of ~3.3kW. 6 V100s consume half of that. That leaves 2 power9s to share more than 1.5kW with networking gear and shit; at minimum each of those housefires is chewing through 500 watts.
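The back-of-envelope math behind that, as a sketch; the 15MW system figure and 250W-per-V100 number are from this thread, and the per-CPU result is only an upper bound, since the leftover also feeds DIMMs, NICs, fans, and PSU losses:

```python
# Back-of-envelope per-node power split using numbers from this thread.
TOTAL_SYSTEM_W = 15e6    # claimed ~15MW system draw
NODES = 4608
V100_W = 250             # max board power per V100
GPUS_PER_NODE = 6
CPUS_PER_NODE = 2

node_w = TOTAL_SYSTEM_W / NODES              # ~3255 W per node
gpu_w = GPUS_PER_NODE * V100_W               # 1500 W for the six V100s
leftover_w = node_w - gpu_w                  # ~1755 W for CPUs + everything else
per_cpu_upper = leftover_w / CPUS_PER_NODE   # upper bound per POWER9

print(round(node_w), round(leftover_w), round(per_cpu_upper))  # 3255 1755 878
```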

I'm old enough to know power9 is shit, and this nvlink thing might all just be a stupid useless gimmick if it even is a thing. But I do admit I tend to overstep my scope of tech knowledge sometimes, so I apologize for that.

You unironically believe a power9 consumes 500 watts in such a package?

LMFAO

My crude math points to it but if you can prove otherwise then please do.

Attached: 1522205050095.jpg (700x792, 84K)

> But I do admit I tend to overstep my scope of tech knowledge sometimes so I apologize for that.
Yes, you're way beyond your scope of understanding in that you're unironically commenting about HPC computing and comparing it to shitcoin mining. Nvlink is not a gimmick when you're doing real and highly demanding compute on GPUs.

Power9 isn't shit; you only think so because you've likely never done anything with it in your life, since they mainly serve the HPC market.

Attached: nvlink_3DFFT_perf.png (561x342, 22K)

> • 2 POWER9 Processors => 250W modules
ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/61ad9cf2-c6a3-4d2c-b779-61ff0266d32a/page/1cb956e8-4160-4bea-a956-e51490c2b920/attachment/9a8eabe8-10b8-4a8e-aebe-3a3845b71c0f/media/Power Systems AC922 Intro 11December2017 .pdf

Please learn how to google...
POWER AC922
• 2 POWER9 Processors
- 190W, 250W modules
• 4-6 NVidia “Volta” GPUs
- 300W, SXM2 form factor, NVLink 2.0
• 6 GPU configuration, water cooled
• 4 GPU configuration, air or water cooled
• 2 Gen4 x16 HHHL PCIe, CAPI enabled
• 1 Gen4 x4 HHHL PCIe
• 1 Gen4 shared x8 PCIe adapter
• 16 IS DIMMs
- 8, 16, 32, 64, 128GB DIMMs
• 2 SATA SFF HDD / SSD
• 2 2200W power supplies
- 200VAC, 277VAC, 400VDC input
- N+1 redundant
• Second generation BMC support structure
• Pluggable NVMe storage adapter option

What's the max power consumption on 22c water cooled power9s under full MT load? I know it says 250W modules, but power9 has poor performance/watt compared to an ebyn setup afaik.

>limited to snowflake GPU compatible workloads
kek

Here's to hoping intel will revive xeon phi from the grave and make Aurora 21 the first x86-CPU-heavy 1 exaFLOP supercomputer in 3 years.

Attached: aurora-exascale-schedule-748x552.png (748x552, 333K)

Yeah, of a fruit fly

Mining makes zero use of bandwidth outside the GPU, so you can run them on whatever; most HPC workloads need staggering amounts of bandwidth outside the GPU