Vega 20 GPU Estimated To Feature 20 TFLOPs of Compute

Are you ready Jow Forums?

7nm ready to BTFO Nvidia

Attached: AMD-Vega-20-7nm.jpg (500x245, 64K)

Other urls found in this thread:

wccftech.com/amd-7nm-vega-20-20-tflop-compute-estimation/
twitter.com/KOMACHI_ENSAKA/status/1036024079084482561
seekingalpha.com/article/4205859-advanced-micro-devices-inc-amd-management-presents-2018-deutsche-bank-technology-conference?part=single
old.reddit.com/r/Amd/comments/6vdf14/why_fp16_units_are_still_disabled_on_gcn34/
bugzilla.mozilla.org/show_bug.cgi?id=1461268
developer.apple.com/macos/whats-new/
twitter.com/NSFWRedditImage

wccftech.com/amd-7nm-vega-20-20-tflop-compute-estimation/

If it's an instinct card no one gives a fuck.

Unless it's a whole new design, which is unlikely, it's Vega 64 ported to 7nm with some hardware fixes.
To reach 20TFLOPs single precision, though, the clocks would have to be around 2450MHz, over a 900MHz uplift compared to 14nm Vega. That's an absolutely tremendous uplift for TSMC's 7nm process.
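Quick sanity check on that clock figure (rough math, assuming the usual 64 CU x 64 ALU GCN layout and 2 FLOPs per ALU per clock from FMA):
shaders = 64 * 64              # 64 CUs x 64 ALUs = 4096
target_flops = 20e12           # the rumored 20 TFLOPs
clock_hz = target_flops / (shaders * 2)  # 2 FLOPs per FMA per clock
print(round(clock_hz / 1e6))   # ~2441MHz, so ~2450MHz checks out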

AMD did tout something like a 35% performance increase, but I don't remember what metric they based that on. It's a good sign for future 7nm parts either way, but we'll likely never see this Vega part reach the consumer market.

Why aren't wccftech sources banned from Jow Forums?

I'd be more interested in a
>My dad works at Nintendo and says Vega will be 20 TFLOPs
post from a random user. It has more credibility.

It actually is a bespoke design, and not a simple die shrink. If it were 14nm, the die would actually be almost 70% larger given how much more is there. The cores are proportionally much larger to handle quad rate int8, among other changes.
It's about as similar to 14nm Vega as the PS4 Pro's Polaris chip is to the RX 480. Similar enough to still call it Vega, but still very different.
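For scale on those rate multipliers, a rough sketch of the theoretical peaks (assuming packed math just splits the same 32-bit lanes; the clock is a made-up example and the quad rate int8 is the claim above, not a confirmed spec):
SHADERS = 4096                   # 64 CUs x 64 ALUs
CLOCK_GHZ = 1.5                  # hypothetical example clock
for name, rate in [("fp32", 1), ("fp16 packed", 2), ("int8 packed", 4)]:
    peak = SHADERS * 2 * CLOCK_GHZ * rate / 1000  # 2 ops per FMA, in TFLOPS/TOPS
    print(name, round(peak, 1))  # 12.3 / 24.6 / 49.2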

>just wait
>this time nvidia is finished

>larger CU size
In what way?
I doubt they'd change the ALU configuration without naming it a whole new generation of arch.

What if they just "glued" two shrunken cores together like they do with their CPUs?

twitter.com/KOMACHI_ENSAKA/status/1036024079084482561

>Why people say "Vega 20 has 20TFlops!"? That does Not make sense. Vega 20 Radeon Instinct card Has around 16.5TFlops. Not 20TFlops.
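Back-of-envelope, that 16.5 number is at least self-consistent (same 4096-shader, 2 FLOPs/clock assumption as above):
implied_mhz = 16.5e12 / (4096 * 2) / 1e6
print(round(implied_mhz))   # ~2014MHz, a believable 7nm bump over 14nm Vega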

>Believing the lies of AYYMDPOORFAGS

The 20TFLOP figure didn't come from WCCFtech; it came from a slide deck straight from AMD earlier this year or late last year.

No such thing

You'd need to have more than 4096 shaders and we all know GCN can't scale past 4096 shaders because of limitations of a bad architecture

>estimated

aka: it won't hit it.

One slide in that deck stated the 20TFLOP figure; that's where it originally came from.
And yes, they could hit that compute throughput figure with just 64 CUs if the clocks were high enough.

Attached: debbdef7_AMD-VEGA-10-VEGA20-VEGA-11-NAVI-roadmap.jpg (1562x855, 222K)

Sauce

Like this matters now that Raja is at Intel.

I don't know, maybe they'll be everything they say they're gonna be. After Vega's disappointing release, I feel we might have to wait another generation of cards before AMD eats Nvidia's lunch. The Ryzen 1 launch showed that Intel had a serious competitor, but it wasn't until Ryzen 2 that AMD started stealing Intel's market share. Their stock price has more than doubled from the Ryzen 1 launch till now, and I hope that with the increased cash on hand they bring the fight to Nvidia. Not because I'm an AMD fan or anything, but competition in the market will encourage Intel and Nvidia to actually start innovating again.

>this slide deck says 20TF
>it doesn't say 20TF anywhere
Hurr?

>No 20TF on slide anywhere

Told you no such thing, you lying piece of shit

AMD's never going to catch up to Nvidia. During an interview with that guy from Mad Money, Lisa said they're working very hard with the console makers on the next gen consoles. It's pretty clear that they're focusing on the needs of their semi-custom clients rather than trying to match Nvidia. That means cost effectiveness and client-specific features rather than competing head to head and creating emerging markets. Since Raja left I don't have high hopes for AMD anymore. He was the one clearly pushing new innovation (primitive shaders), machine learning competitiveness, and FOSS at RTG. The rumors that Lisa forced him to dedicate 2/3 of RTG to consoles were probably true after all.

digits confirm, at best amd will still compete somehow in the budget area

Sounds more like the truth than some 20TF pie in the sky nonsense.

Look it up, faggot.

AMD CFO said they are launching it this year.
seekingalpha.com/article/4205859-advanced-micro-devices-inc-amd-management-presents-2018-deutsche-bank-technology-conference?part=single

It's not a consumer card; they can launch it in whatever quantity they want to selected customers.

Mommy!

Attached: 1529567124804.jpg (757x627, 87K)

>AMD edition of Volta for enterprise only
Who fucking cares

You mean like how they changed ALUs for PS4 Pro but didn't call it Polaris?
Oh wait, they still call it Polaris, like I said.

The PS4's GPU is still the exact same basic CU arrangement: 64 ALUs per CU. The family is Sea Islands, a small generational upgrade over the original Southern Islands.
The PS4 Pro's GPU is no departure from anything either, still exactly the same as the rest of the GCN family in that regard.
Stay in /v/, child.

But the PS4 Pro's CUs are physically larger and can do double rate half precision, you tard.

Poolaris can't do that, AYYMD even disabled FP16 support in the drivers

old.reddit.com/r/Amd/comments/6vdf14/why_fp16_units_are_still_disabled_on_gcn34/

TOP KEK

BUYING AYYMD GARBAGE THAT REMOVED FEATURES AND CRIPPLES YOUR CARD

Not him, but you sound clueless. The PS4 and Pro uarchs are different, with feature sets that sit between the main GCN generations.

Lol, it doesn't matter. All machine learning work is done on Nvidia because TensorFlow, Keras, and PyTorch all run on CUDA and nothing else.

they literally just mirrored the ALUs on PS4 Pro.

nvidia is performing below expectations

Yeah, because OpenCL is not used at all anywhere and everybody loves proprietary shit.

Nvidia makes literally the only GPUs you can use for machine learning, because no real ML library supports non-Nvidia GPU acceleration.
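Which is why the standard device check in every framework ends up CUDA-or-bust. A minimal sketch with PyTorch (assuming nothing beyond a stock install):
import torch
# PyTorch's GPU path is its CUDA backend; without an Nvidia card
# this check just falls back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)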

>OpenCL is not used at all anywhere
It really isn't.

AMD is just replacing OpenCL with ROCm

>400W TDP

First gen Vega and RTX are both powerhouses, and as much as I want 7nm Vega to be a good consumer card without the RTX memes, it seems it's going to be an absolute housefire no matter what.

>Vega 56 is out
>Vega 64 is out

>Vega 20 is a big deal?
I don't get it, what's so special? It should be weak, right?

bugzilla.mozilla.org/show_bug.cgi?id=1461268

AYYMD removes features and cripples Poodeon cards, garbage architecture for garbage user base

It isn't. If neither TensorFlow nor PyTorch supports it, it's not used.

The PS4 Pro's uarch isn't even the same GCN revision as the PS4/Slim. It physically supports double rate FP16 (aka Rapid Packed Math), a Vega feature, before Vega was even released. The "mirror" thing was just an explanation of how they handle compatibility with base PS4 games not designed for the Pro.

4 HBM stacks, 7nm, 1/2 rate double precision, and it's for HPC.
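If the rumored 20TFLOPs FP32 were real, the half rate DP figure is easy to back out (rough math, assuming the 1:2 ratio holds at peak clocks):
fp32_peak_tflops = 20.0
fp64_peak_tflops = fp32_peak_tflops / 2  # 1:2 DP rate -> ~10 TFLOPs FP64
print(fp64_peak_tflops)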

Vega 56 and 64 are product names. Vega 10 (the die used by Vega 56 and Vega 64) and Vega 20 are GPU/die names, and so far it seems like the first number refers to generation.

Yeah, thanks.
Don't know how this other anon can't read.
The PS4 Pro's arch is significantly different, but it's still called Polaris.
Just like 7nm Vega is significantly different, and not a simple die shrink, but is still called Vega.

Nice one

>OpenCL is not used at all anywhere
It's really not. Companies like Adobe only used it because Apple basically forced them, and support is dodgy now, which makes sense given that Apple says they're moving to their own API. Hell, even AMD barely supports it. Intel has had OpenCL 2.x feature complete drivers for like 2 years, and AMD, I think, is just now catching up.

The reality is that Nvidia is the only company taking GPGPU seriously, which is why CUDA and Nvidia own every industry that uses GPGPU. Raja was the only one at AMD who cared, and now that the new guys are in charge they're slowing support for everything.

Just give me a card that will max out 1440p for $200.

Nvidia definitely won't.

developer.apple.com/macos/whats-new/

>Deprecation of OpenGL and OpenCL
>Similarly, apps that use OpenCL for computational tasks should now adopt Metal and Metal Performance Shaders.

Nobody uses OpenCL garbage unless they want to be fired; it's not even a universal API now that it doesn't work on Mac.

AMD could release the absolute best GPU ever and Nvidiot would still outsell them.

but how many gigarays?

AMD 0
Nvidia 1

Mac is not a relevant platform

This

No, it isn't significantly different, kid.
Adding a couple intergenerational instruction sets isn't even new for AMD. The high level arch literally didn't change at all. Same registers, same caches, same ALUs, same scalar unit, same ROPs, same TMUs.
Stop talking out of your ass.

Compute throughput doesn't automatically translate to real world performance.

Raja is the one who failed on nearly every front. When Su made a decision he didn't agree with, he tried to stage a coup to rip RTG out of AMD and have it bought by Intel; when that didn't work, he cut bait and ran to Intel like the failure he is. Su's choice was between throwing good money after bad at a snake oil salesman who could never compete with Nvidia, or getting fully behind Zen and actually making some fucking money by ripping 10% of the market share from Intel. Guess what? She made the right decision, because AMD is now floating in cash with 30% of the server market.

Vega is a pile of shit. Period. It's hot, it's power hungry, and it's only good at number crunching. Instinct nodes are laughed out of the building because Nvidia is that far ahead.

Mac is not a relevant platform for 3D?

Are y'all fucking retarded?

>talking about OpenCL
>Mac is not a relevant platform for 3D?

Are YOU fucking retarded?

Doesn't Vulkan kinda replace OpenCL?

Not kinda. Everything is switching over to Vulkan, except for whatever Nvidia can pay to keep on CUDA.

Demonstrably false, unless you are everyone.

Not that guy, but macOS has been shrinking its market share relative to its competition for almost 7 years now. And keep in mind that's while the entire personal computer market has been contracting, which makes it even more damning.
When the ARM iBook drops, what's going to become of professional macOS users? I don't see a future there even if they keep some vestigial Mac Pro on x86; the support is likely to be dogshit, like current macOS is right now.

Now? God no.
In the future? That's what Khronos says anyway: existing OpenCL functionality will be merged into Vulkan and there'll be a single API.

>when he tried to stage a coup to rip RTG from AMD to be bought out by Intel

I like how burgerflippers who know nothing about the corporate world think this is even remotely possible.

>infinity fabric GPU
another 5 milliseconds of latency for an AMD system

It's not a consumer card, just another option for ML type applications. It could do well, but they're behind CUDA.

My dad works at Nintendo and said Vega will be 350K TFLOPs!!

400W for 20 TFLOPs is pretty good considering Vega 10 is only like 12 for 350W.
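Quick perf-per-watt math on that (rough, using the figures thrown around in this thread rather than official TDPs):
vega10_tflops_per_watt = 12.0 / 350   # ~0.034
vega20_tflops_per_watt = 20.0 / 400   # ~0.050
print(round(vega20_tflops_per_watt / vega10_tflops_per_watt - 1, 2))  # ~0.46, i.e. ~46% better perf/W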

t.

Attached: 1487720349817.png (900x600, 492K)

The new guys are ruining everything...

>max out 1440p
If we're talking 60fps, AMD made that about 5 years ago. It was called the "290X".

DirectX 12 supports Ray Tracing
Vulkan supports Ray Tracing
OpenGL is talking about implementing Ray Tracing.

AMD does not support Ray Tracing.
"When you die and your whole life flashes before your eyes, how much of it do you want to NOT have ray tracing?"

Watch out, man. That guy is Anonymous. I've heard they are legion and to expect him.

You're deep into the commitment to this retardation, huh.

>AMD does not support Ray Tracing.
Yes they do. Stop lying.

but does it do it in real time, and how much of it will be in your life?

>meme tracing

Yeah I too love paying upwards of 1200 USD to play games at 30~40 fps at 1080p.

Maybe if you stopped being poor; even food stamp beggars can afford 1200 USD iPhones and Starbucks.

350KKK MFLOPS

It's a response to Volta, and it won't have raytracing bullshit like RTX either, thank fuck.
I love how they cite cost as the reason it's not coming to gamers, yet Nvidia charges $1200 USD for a cut-down card with shit memory.
W-what if AMD put GDDR6 on the gaming version?

Let's be clear once more:
Lisa already said that THIS Vega won't in any case be a consumer card...

Navi, yes fucking Navi, coming in H2 2019, will be the consumer card, and it won't be a GCN card.

Can we please stop with the bullshit already?

>and it won't be a GCN card
Impossible. It will certainly be better than Vega, but they can't drop GCN; consoles live and breathe GCN.

pls do the nedful adn delet this

Attached: 1471006318398.jpg (866x1300, 235K)

This
It's just optimised GCN with fewer bottlenecks (hopefully)

If you'd even bothered to watch what Lisa said at Computex and what the new head of RTG said, you would have known.

Navi won't be GCN, at least not the consumer cards, because of the 4096 SP limitation and the fact that the cores are so advanced it keeps them from clocking high enough.

Makes sense, since it's been bottlenecking them since the 290X and Fury days.

The real problem with the card is that the culling mechanism needs a lot of die space, something current cards can't give unless they sacrifice the hardware scheduler for it like Nvidia did.
And removing the hardware scheduler is a big no-no for AMD.

Look at this retard and laugh at him

GDDR6 is the best memory for consumer cards; it's actually manufacturable and yields well, unlike HBM2 garbage.

There's a reason HBM2 is only used in the highest end GPUs: not even AYYMD HOUSEFIRES can use it in mass market low end GPUs because the cost is prohibitive.

AMD is just controlled opposition

Intel and Nvidia will always win

Based amd
So what now, bigger dies on 7nm? Maybe the entire front of the card will be various interconnects and dies and shit; it's the only way forward at this point.

The 2080 has absolutely shit memory; it's bottlenecked even compared to the 1080 Ti, you fucking brainlet.
It should have been a 12-16GB card, just like the 2080 Ti.

>30%
bullshit

>intel
>2017+1+
>winning
Someone post the doesn't matter wall of green text please

Well, there are some interesting patents lying around. If they found a way to keep the cores as advanced as they are while clocking higher, then they will shit on Nvidia.
But knowing AMD this will be impossible, because they have a very small driver team.

No and you're a retard

SK Hynix and Micron don't manufacture 16Gb/2GB GDDR6 memory modules, so there's no way the RTX 2080 can have 16GB of GDDR6.

When they finally catch up to Samsung, which supplies 16Gb/2GB GDDR6 to Nvidia, then you can talk.
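The capacity math is simple enough to check (a sketch, assuming the usual 32-bit channel per GDDR6 chip and the 2080's 256-bit bus; the 48GB Quadro RTX 8000 gets there with 16Gb chips in clamshell on a 384-bit bus):
bus_width_bits = 256
chips = bus_width_bits // 32          # 8 chips on a 256-bit bus
for density_GB in (1, 2):             # 8Gb vs 16Gb modules
    print(chips * density_GB, "GB")   # 8GB with 8Gb chips; 16GB needs 16Gb chips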

The Chinese don't buy Intel anymore.

Not only because Intel is banned from selling to them, but because they invested quite a lot of money in AMD almost 8 years ago, so it was natural...

They can. The Quadro RTX 8000 runs 48GB.
The RTX 6000 uses 24GB.

Do you even READ?

SK Hynix and Micron don't produce 16Gb/2GB GDDR6 yet, and Samsung certainly can't produce the quantities needed for mass market consumer GeForce cards.

Quadro is lower volume, and Samsung can certainly meet that.

2350MHz core clock