400 Watts AMD 7nm Vega 20

wccftech.com/amd-7nm-vega-20-20-tflop-compute-estimation/

JUST

Attached: 1528486242015.jpg (653x726, 55K)

Attached: shintel.png (868x756, 265K)

AHAHAHAHA
ok this has to be a joke, right? like they won't make the same shitty product for 10 years in a row, RIGHT?!

Just bring infinity fabric to GPUs already

32gb 16000 stream processors behemoth when?

> 400w
> 20TFlops
> 20w/TFlop
sounds alright

Attached: findlet.jpg (645x773, 86K)

>Just bring infinity fabric to GPUs already
Vega uses infinity fabric already.
APUs use infinity fabric already.

>retarded amdrone brings cpus to a gpu discussion
never change amd panjeets

NOOOOOOO

Attached: 1436734391281.jpg (202x249, 30K)

GOY YOU CAN'T SAY THAT MY STOCK HOLDERS FORBID IT

>show pajeetfftech.com as a credible source
>amd pajeets never change
wat?
>sounds alright
Vega 56 gets more GFLOPS than a P6000 with 40 W less TDP.

>Not even a rumor
>Literally speculation by the author
kys OP

>pojeeccftech
dropped

>>Not even a rumor
by Hassan the goatfucker

Can I get a quick rundown on why Quadros, Radeon Pros, and professional GPUs in general perform worse in gaymen and are generally discouraged for that workload? At least on the price/performance side of things.
What makes rendering stuff in CAD different from rendering stuff in a game, or encoding video? I never found a clear answer, just people suggesting one or the other with no insight into the logic behind it, and the AMD and Nvidia webpages are full of marketing buzzwords that muddy the waters even more.

Infinity Fabric != MCM
You should rather wait for AMD to remove Infinity Fabric from Zen and replace it with an active interposer.

My understanding is consumer/gaymen GPUs focus on the lowest possible latency at the expense of maximum possible throughput and of error-free rendering. Which is what you want for games.
"Pro" GPUs forgo low latency for better throughput (so better performance when rendering video or whatever). Although most or all of the difference in how modern GPUs behave is just firmware, and the hardware is basically identical.

Thanks

AYYMD HOUSEFIRES

devblogs.nvidia.com/inside-volta/

>The new Volta SM is 50% more energy efficient than the previous generation Pascal design, enabling major boosts in FP32 and FP64 performance in the same power envelope.

nvidia.com/en-us/gtc/

>See You at GTC 2019, March 18-22.

Nvidia's TSMC 7nm Volta successor will launch in March, almost two years after Volta's 2017 launch.

The GTX 1080 Ti is 11.3 TFLOPS at ~265 W of actual sustained average power draw under a heavy load.
400 W for 20 TFLOPS isn't even remotely bad. Given how power consumption scales with clocks, this would still likely be north of 15 TFLOPS at ~300 W. Nothing to scoff at.
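A rough back-of-envelope of that clock/power argument, assuming dynamic power scales roughly with V²·f and voltage tracks frequency near the top of the V/f curve (so power goes roughly with the cube of the clock). The 20 TFLOPS / 400 W starting point is the article's estimate; everything else here is purely illustrative:

```python
# Illustrative sketch only: what a rumored 20 TFLOPS / 400 W part might deliver
# if power-limited to ~300 W, assuming dynamic power ~ V^2 * f and voltage
# scaling roughly with frequency near the top of the V/f curve, i.e. P ~ f^3.
base_tflops = 20.0
base_power_w = 400.0
target_power_w = 300.0

clock_scale = (target_power_w / base_power_w) ** (1.0 / 3.0)  # ~0.91
est_tflops = base_tflops * clock_scale                        # ~18.2 TFLOPS

print(f"clock scale ~{clock_scale:.2f}, "
      f"estimated throughput ~{est_tflops:.1f} TFLOPS at {target_power_w:.0f} W")
```

Under that (crude) assumption, a 25% power cut only costs around 9% of the clock, which is why "north of 15 TFLOPS at ~300 W" is plausible.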

Impressive for TSMC's pipecleaner.

Yep. Professional GPUs are built for accuracy, and the silicon itself comes from top-quality yields (lower power consumption, no defects). The firmware has vendor locks for the professional-tier software it is intended to run with.

It says 300-350 W for 20 TFLOPS.

That would be the first card with more compute power than the dual-Fiji.

1080 Ti:
42 GFLOPS/W

Vega 20 at 400 W:
50 GFLOPS/W
At 350 W:
57 GFLOPS/W
At 300 W:
66 GFLOPS/W
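Those figures are just the quoted TFLOPS divided by board power. A quick sketch reproducing them (the 1080 Ti numbers are the measured figures from the post above; the Vega 20 ones use the article's rumored 20 TFLOPS):

```python
# GFLOPS per watt = (TFLOPS * 1000) / watts, using the figures quoted in the thread.
cards = {
    "GTX 1080 Ti (measured)":  (11.3, 265),
    "Vega 20 @ 400 W (rumor)": (20.0, 400),
    "Vega 20 @ 350 W (rumor)": (20.0, 350),
    "Vega 20 @ 300 W (rumor)": (20.0, 300),
}

for name, (tflops, watts) in cards.items():
    print(f"{name}: {tflops * 1000 / watts:.1f} GFLOPS/W")
```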

>A HPC powerhouse
>A HPC
>A
God damn this guy is retarded

>comparing a gaming GPU to a workstation GPU
>comparing a gaming GPU to RUMORS about a workstation GPU

>wccftech.com/amd-7nm-vega-20-20-tflop-compute-estimation/
These are estimations based on GloFo's 7nm, which is better than TSMC's, aren't they?

The real numbers are likely more like:
17 TFLOPS FP32
8 TFLOPS FP64
34 TFLOPS half precision (INT16, FP16)
68 TFLOPS simulated tensors
300 W TDP

But I guess they could up power consumption for 3 TFLOPS more. I just don't find it that likely. The HPC market seems comfortable with the 300 W TDP range for high-end GPUs.

A lot of BS in this article, it seems to me
>If the Vega 20 GPU was to be made on a 14nm LPP process, it would have measured at 720mm2. If the same calculation is driven to form a relation between the GPU die and Compute units, we would get up to 88 CUs (up to 96 CUs) on such a chip. That's a 37.5% jump from the current 64 CU design on the Vega 10.
There's been a 100 CU Vega 20 spotted, which seems to be two 50-CU dies on a single card.
Which strongly implies that it's still 64 CUs, but TSMC yields mean they needed to cut out 14 instead of 8 CUs on their lower-end part to salvage more usable dies.

They come to this wrong conclusion again because...
> But here's the thing, according to GloFo, 7nm LPP offers a 2.8 times increase in logic density
TSMC's seems to be more like a bit over a 2x increase in density. It's not as dense as GloFo's. So no, it's not fucking more CUs, you dumb fucking curry eaters.

HOW DID THEY WRITE ALL THIS FUCKING GARBAGE WITHOUT EVEN KNOWING IT'S TSMC'S 7nm AND NOT GLOFO'S?
That site seriously needs to be shut down. It's just a garbage mill.
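To make the objection concrete, here is a rough sketch of how the assumed density factor drives the CU estimate. The 720 mm² "14nm-equivalent" figure and the 2.8x (GloFo) vs roughly 2x (TSMC) density claims come from the quoted article and the post above; the ~486 mm² / 64 CU Vega 10 reference point is my own assumption, and all of this is purely illustrative:

```python
# Illustrative only: how the assumed 7nm density factor changes the inferred CU count.
vega10_area_mm2 = 486.0          # assumed Vega 10 die size at 14nm (64 CUs)
vega10_cus = 64
article_14nm_equiv_mm2 = 720.0   # the article's "if Vega 20 were built on 14nm LPP" figure

# The article scales the 7nm die back up using GloFo's claimed 2.8x density gain.
implied_7nm_die_mm2 = article_14nm_equiv_mm2 / 2.8   # ~257 mm^2

for density_factor in (2.8, 2.0):                     # GloFo claim vs roughly TSMC
    equiv_14nm_mm2 = implied_7nm_die_mm2 * density_factor
    est_cus = equiv_14nm_mm2 / vega10_area_mm2 * vega10_cus
    print(f"{density_factor}x density -> ~{est_cus:.0f} CUs")
```

With the 2.8x figure the same die works out to ~95 CUs (the article's "up to 96"), while with a ~2x figure it lands back around the existing 64-CU design, which is the point being made above.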

you are N7+ ?

disqus.com/by/disqus_N7/

>on Jow Forums
>expecting quality content

ayy lma-
hmm

can someone explain the current relation between Radeon and AMD? Isn't Radeon a separate company now?

checked

No, it's just a formal division within AMD and has separate internal management from the CPU division of the company.

ah, k. thanks for explaining

amd doesn't seem to have pulled an nvidia, like when nvidia acquired 3dfx (voodoo). that acquisition improved nvidia's shit. then again, I was a kiddie when that was happening. in my day and age all there was was amd, nvidia and intel

ECC and lower clock speeds, coupled with the fact that ISV certification makes up most of the cost of professional parts; on-demand support and qualification of parts are necessary in the professional world.

CAD uses a large number of old FFP (fixed-function pipeline) pathways to render edges; this is where the professional cards really excel, rendering with edges. Most D3D/OpenGL viewports use shaders as opposed to FFP.

WCCF is estimating based on extra die space, and a 400W estimate would be very egregious considering the die shrink. I'd give it 200W, the same number of CUs, perhaps a dual-chip configuration though.

Raja developed the bulldozer of GPUs.

Have fun with him Intel.

>HOW DID THEY WRITE ALL THIS FUCKING GARBAGE WITHOUT EVEN KNOWING IT'S TSMC'S 7nm AND NOT GLOFOS?
This

Because they wanted the clicks and views to gain some shekels. Also, don't give them credit/miscredit for this; they copie... I mean, quoted a Japanese article from ascii.jp, which belongs to Kadokawa. Since I don't know moon I can't say what they were speculating here:

ascii.jp/elem/000/001/706/1706234/index-2.html

hahahaha such a funny discussion
hahaha intel vs amd vs nvidia, get it? such an epic topic to waste time! epiiic!!!! xD

>3 layers of irony

>his biased benchmarks didn't work
>his crying intel wojaks didn't work
>his greentext implications didn't work
>resorts to this

Absolutely pathetic, amdrones.

Nah, you want MCM more than anything. Imagine cute, efficient dies glued together to scale up however they want. Everyone would win with those yields.

Attached: 1514130395242.png (1600x1552, 2.55M)

Attached: 1454039405929.png (808x805, 49K)

Attached: 1452799837787.png (512x800, 650K)

The article does estimate that it will be out Q3 or Q4 2018, and that's very likely spot on. Kernel support for Vega 20 is in place as of current git and will be included in 4.18 when released, and Mesa support for it is also in place and will be part of the Mesa 18.2 release. PCI IDs for these upcoming cards, and the features that differentiate them from earlier Vegas, can easily be read from Mesa git commit 2e0b00ab7d135f393c0cf7531317100f91725ffc, so there's no reason to call something speculation when the facts are out. Prototypes are ready, and cards are usually already in production when driver support is put in place. AMD is very aggressive about their open source strategy these days (unlike Nvidia). This means it's possible to glean a thing or two just by looking at their driver code.

Attached: 8012616222_a59ec94002_b.jpg (576x1024, 220K)

Mostly because of the drivers. The professional drivers are focused on implementation correctness, which doesn't sit well with games that would benefit from a few performance hacks. Also, the focus is on tuning them for professional applications, not games. I don't think they are even validated for games, save for the implicit "it's forked from the same codebase".
The difference in hardware is small, mostly stemming from the fact that the cards are designed by Nvidia and AMD themselves instead of AIBs and are usually of higher quality (and with about 10 years of warranty).
Although (correctly) not recommended for games, the pro cards are sometimes interesting in certain niches, for example the Radeon Pro WX 7100, which should perform about the same as or better than an RX 570 while being only single-slot.

the simple truth is that it's mostly just about turning features on and off in the drivers to segment the market. this is especially true on the nvidia side. nvidia's actually got terms of service for their gaming drivers that say it's somehow illegal to use your own "gaming" segment card in specific applications - just to sell more expensive cards.

"Pro" cards do tend to have "certified" drivers, but these days it doesn't really matter. That's why nvidia is actively trying to prevent people from using "consumer" cards in areas where they'd like to sell "pro" cards.

I can't wait for active interposers; it's going to BTFO Intel and Nvidia!

The first one to achieve it will win the market for a few years until the competition can make their own product with active interposers.

Reminds me of my r9 390 desu

where is vega 40 @ 12nm or 7nm?

NOOOOO YOU CAN'T POST ACCURATE ANALYSIS HERE
YOU'RE MESSING WITH THE NATURAL ORDER
GUYS WHAT DO WE DO NOW?

So? Everyone has a 1000w+ PSU anyway.

Intel and AMD processors have built-in hardware backdoors. You shouldn't be arguing, you should be finding alternatives to this garbage.

Well, it doesn't add up.
Vega 64 is 12.5 TFLOPS for 250 W.
How do you get a process shrink and end up with worse perf/watt?

Vega 64 on stock voltage will peak at 265 W while gaming on the power saving BIOS.
On the standard BIOS it's 290-300 W.

That's 43 GFLOPS per watt, 47 with the power saving BIOS.
Even if this Vega 20 were pulling 400 W it'd be an improvement in perf/watt, at least in theoretical compute throughput.

Well, my 64 has a 225 W and a 250 W BIOS.
Then, if you increase the power limit in the drivers, you may get to 300+.
From my experience, the card thermal throttles within less than 5 minutes anyway.

I'm confused. Is this an enterprise card? 32GB of RAM on a GPU is fucking pointless.

Yes, Vega20 is an enterprise/AI chip

Ah. Keeping that thing cool in a rack is going to be almost impossible.
But honestly, most enterprises don't seem to have caught on that their rackmount brain boxes are thermal throttling themselves to a standstill, so I'm sure it will do fine.

Agreed.
It's borderline impossible to aircool a 300W GPU in a good tower.
Just ask cryptominers how it sounds when you put a lot of them together.

>OpenCL

Compute power is even lower with the power limit reduced that far. You're not going to hit the clocks necessary for 12.5 TFLOPS at only 225 W.
I don't want to give PakiTech too much credit, but assuming a 20 TFLOPS single GPU is real, even at 400 W it's showing impressive uplift. Massive performance uplift, we already know there's a huge reduction in die size, and perf/watt still increased.

AMD did tout a 35% performance uplift, and if they were going by straight compute instead of any benchmark, then just under 17 TFLOPS (12.5 × 1.35 ≈ 16.9) would be the target, making this post pretty plausible:

I'm sorry, but I fail to see much perf/watt gain.
Going from 14nm to 7nm should certainly yield you more.
Or is 7nm a meme?
Because that would imply Zen 2 is gonna be trash.

>massive increase in density
>total perf up, likely straight from clocks
>perf/watt still improved
This is the first big die from TSMC on their 7nm line; it's a pipe cleaner for the process. AMD's CPUs are made at GlobalFoundries on an entirely different process.

why are you guys always trying to find a way to talk shit about zen 2?
why don't you do the math and see the performance uplift of "just" 35% higher clocks, while not mentioning IPC at all??

Well, then, I imagine clocks are the real question mark here.

400 watts for 20 teraflops is a good deal.
I'm gonna buy it for my mining rig.

Look, we don't know the clocks for Vega 20, but it looks like trash, even if we imagine it's running the same clocks as Vega 64.
Zen 2 is supposed to boost to 5GHz. I don't see it happening with that type of 7nm.

From what I understand of AMD's GCN architecture, they're pretty much stuck at 4096 SPs.
What's really going on with that GPU's power consumption is that the memory controller is the cause of most of the heat.
Because you can't just feed that architecture fast enough, it would seem.
Even the HBM versions have to be watercooled to work nominally.
I now use my V64 for crypto, and it's very clear memory frequency is the variable here.

Had an absolute fucking stroke and did math nowhere close to right.
4096 shaders × 2 ops per clock × 2500 MHz would yield 20.48 TFLOPS.
Right around 2080 MHz would yield 17 TFLOPS.

So that is a pretty significant clock increase for GCN. Vega 64 boosted to 1536 MHz and could OC marginally higher. With no pipeline changes, a clock speed increase to 2000 MHz and beyond would show a shitton of promise for the process from TSMC.
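The arithmetic above as a tiny sketch (GCN FP32 throughput = shaders × 2 FMA ops per clock × clock rate; the shader count and target figures are the thread's and article's rumored numbers, not confirmed specs):

```python
# FP32 throughput for a GCN part: shaders * 2 ops per clock (FMA) * clock rate.
def tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz * 1e6 / 1e12

def clock_mhz_for(shaders: int, target_tflops: float) -> float:
    return target_tflops * 1e12 / (shaders * 2) / 1e6

print(tflops(4096, 2500))           # ~20.48 TFLOPS at 2500 MHz
print(clock_mhz_for(4096, 17.0))    # ~2075 MHz needed for 17 TFLOPS
print(tflops(4096, 1536))           # ~12.6 TFLOPS at Vega 64's boost clock (per the post above)
```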