Navi architecture thread 2

Steve read our last thread and made a video rehashing what we posted
youtube.com/watch?v=V7wwDnp8p6Y
it's pretty good. David Kanter is a cool guy.

previous thread

Attached: ApplicationFrameHost_2019-06-12_01-52-53.png (2047x1149, 1.51M)


>Steve read our last thread
yeah no

hi steve

Attached: sds.jpg (1920x1080, 364K)

Is there another rdna pdf out there? The last one didn't have this slide.

Attached: leltracing.png (1251x707, 490K)

Slides PDF
>gpureport.cz/info/Graphics_Architecture_06102019.pdf
is on the architecture.
What you posted has pretty much nothing to do with the RDNA arch. It's more of a roadmap type thing.

It uses the same background image so there is a second document that talks more about the future of radeon. I wonder if that had also been leaked.

Can't just release the slides or make the talk public, nope, you gotta play telephone with a bunch of clickbait journalists so they can make ad revenue

Oh they did jump to 2 scalar units per CU. I found that patent a couple years ago. Neat.

Yes, and 32 threads per cycle, up from 16 with GCN and Turing.
Intel actually had variable wavefront size on their iGPUs, but it was 8/16/32 not 32/64.
I think it's a way to vary between absolute performance vs efficiency, and makes it easier to saturate the ALUs.

Rather, 32 per cycle over 16.

Volta/Turing/Pascal are all 32 per wave, every other cycle.
GCN is 64 per wave, every 4th cycle.
That's 16 per cycle for either. But notably the 64-wide wave wastes power when unsaturated.

RDNA can do the wave size of either, at twice the cycle rate, depending on what the scheduler decides is better for performance or efficiency.
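The per-cycle numbers above boil down to simple arithmetic; here is a toy sketch of the division the posts are doing (not a simulator, just the issue-rate math):

```python
# Toy model: issue throughput in threads per cycle.
# threads_per_cycle = wave_size / cycles_between_wave_issues

def threads_per_cycle(wave_size: int, cycles_per_issue: int) -> float:
    return wave_size / cycles_per_issue

pascal_turing = threads_per_cycle(32, 2)  # 32-wide wave every other cycle -> 16.0
gcn           = threads_per_cycle(64, 4)  # 64-wide wave every 4th cycle   -> 16.0
rdna_wave32   = threads_per_cycle(32, 1)  # 32-wide wave every cycle       -> 32.0
rdna_wave64   = threads_per_cycle(64, 2)  # 64-wide wave every other cycle -> 32.0

print(pascal_turing, gcn, rdna_wave32, rdna_wave64)  # 16.0 16.0 32.0 32.0
```

Either RDNA mode lands at twice the per-CU issue rate of GCN or Turing, which is the whole point being made here.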

They say in the video they kept 64 for backwards compatibility with consoles.

most importantly this is half the size of vega with 15% less performance
they obviously made it with high yields in mind

>15% less performance?
wat?
The 5700 XT looks to be about 20% better performance than Vega 64 despite only having 40 CUs.

I think they went with the medium sized die for the 1440p range first because they'll have some months to optimize the drivers before releasing the smaller and bigger ones for the $250 and $700 price ranges.

in what universe? vega 64 competes with the 2080
if your 20% is real then it means vega 64 is below a 2070, which is stupid

The fact they didn't do this back in 2013-14 is sad. Nvidia revamped their arch a few times when AMD was still pissing hot with GCN

I'm confused, so it's not as powerful as gcn at compute but better at raster?
Does that mean it's useless for ray tracing? I'm confused why they'd pick now as a time to go compute-light when nvidia and Intel are leaning heavy into it

RX 5700 XT has 40 CUs against 64 but clocks to 1900. I just hope we can overclock it to reach 2000MHz, but they power locked Navi so that's a bummer. Hope Navi 20 doesn't come with this bullshit; it is the second reason I don't go for Nvidia, first is price.

Navi is 14% higher in performance per watt versus Vega 64. Not actual performance.

Are you guys fucking retarded? The 2080 is faster than a 1080TI. The Vega 64 is barely a match for the 1080 most of the time. The 5700XT is faster than a 2070 which is faster than a 1080

>pick 64 wide waves
>have trouble saturating them and can't issue enough work to CUs
>surprised pikachu meme
Nvidia figured this out ages ago, from the start even. Why the hell didn't they just fix it? RDNA is something that could have been done after GCN Gen 2

Somebody is confusing Vega 64 with Radeon VII. The 5700XT should be between the two, with better performance/power than both, we hope.

sounds like a shitty gayming gpu, I'm buying a used 1080ti for $450 instead

also only independent real-world tests will settle it

the 5700xt is (almost) half the size of vega and gives 25% more perf than a vega

one of the big reasons for it is that they managed to find a way to forcefully split the 64 wavefront into 32, resulting in more active cores in less than half the time it used to need
a tldr: amd basically gave the ACE and HWS a hardware scheduler of their own, so now they have 2 hardware schedulers: the main one that handles pretty much everything
and the secondary one that handles the async

Where the fuck are you getting these numbers?

Attached: veganavi.png (1919x1079, 941K)

that is a best case scenario based on Vulkan/DX12

which the majority of games are not
sure, in dx12 and such right NOW it's only 15%, but in a few months it will skyrocket, that's for sure

>split the 64 wavefront forcefully into 32 resulting
No. The scheduler can just issue 32-wides, but there's a 64 compatibility mode.

>ACE and HWS a hardware scheduler
wut, the ACE forms and dispatches the wavefronts. It IS the hardware scheduler. I'm not sure what exactly you're trying to say. The ACE is just more flexible now and can issue 32 wide wavefronts because the new "dual compute unit" has the resources to decode and issue work at a 32-wide granularity. The new dual compute unit is like 32x2. Hence why they've doubled up on decoders and issuers for the computing units within a CU, while sharing cache. It almost reminds me of bulldozer's FP arrangement within the modules.
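The saturation point is easy to show with a toy occupancy count; the 70-thread dispatch here is a made-up example, not an AMD number:

```python
import math

def idle_lanes(n_threads: int, wave_size: int) -> int:
    """Lanes left empty across the waves needed to cover n_threads."""
    waves = math.ceil(n_threads / wave_size)
    return waves * wave_size - n_threads

# A dispatch of 70 threads:
print(idle_lanes(70, 64))  # 2 waves of 64 -> 58 idle lanes
print(idle_lanes(70, 32))  # 3 waves of 32 -> 26 idle lanes
```

Narrower waves can't fix divergence inside a wave, but they shrink the padding at the tail of small dispatches, which is one reason a 32-wide issue granularity makes the ALUs easier to saturate.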

I'm sure it took them a lot of work, but on the face of it I don't see how any of it is revolutionary or anything. Their engineers got paid hundreds of millions to just make small tweaks to GCN for almost a decade now. It's unbelievable

the guy in the video literally said they can forcefully SPLIT it into 32 with navi
the ACE is the async hardware block, nothing more; it didn't have the capacity to flip/pause/flush like the regular cores on gcn did, because its l1 memory wasn't shared with the rest
but now it can, since 2 ACEs have a shared l1, just like 2 cores have a shared l1 too

Just say from your ass. You're speculating with fantastical thinking. Stay within the realm of reality, faggot.

>say from your ass
they literally state this on their notes you idiot

Vega64 is about 9% below 2070 for blower and 4% below for the Nitro+.

You're thinking of Radeon VII, which is 331mm^2 compared to the 255mm^2 of Navi, and Radeon VII tends to be slightly slower in games than the 2080 but faster in real applications.
Radeon VII die size is 30% larger but it's only like... 15% faster it seems?

I wouldn't be surprised if, with driver optimizations, Navi comes within single digit percentage of Radeon VII in games.

lmao?
It's literally optimized for games and worse for gpu acceleration.

Ahhh. That's interesting.
I also noticed a while back that in the leaks, Navi was running on Vega drivers instead of some special internal beta drivers.

Are you trolling?

>[Nvidia] engineers got paid [billions] to just make small tweaks to [Kepler] for almost a decade now [outside of tensor cores and RT accelerator]

>They
Who the fuck is that?

did you bother reading the amd notes user ?

We know you didn't.

Price is still terrible and nobody should buy a 1080 for the fourth time in a row.

>amd should lower the price for the same performance because fuck you logic

>you should pay same price for same performance 4 times in a row because fuck you goyim generations do not matter pay up

>no Ray Tracing support, not important though
>can just barely get past those Vega GPUs 1.5 years later when those are obviously cheaper nowadays
>AIBs gone dead silent after E3 but prices for custom cards will eventually be higher anyway
>still gets cucked by Turing Super
>still demands higher margin
>still doesn't care about the fact that AMD GPUs actually have near to zero market share
I mean, if Mommy Lisa likes sucking higher margin I will pleasantly answer by just buying an R7 or even a fucking Radeon Instinct anyway.
So what's the point of releasing the already crippled product for "mid-range segment"?

>It's literally optimized for games and worse for gpu acceleration.
So what is vega better at than Navi? Specific examples please.

memory bandwidth

>memory bandwidth
>483.8 GB/s on nitro+
>448 GB/s on new navi cards
that's good for what exactly?

>can just barely get past vega gpus
>has more perf than a 2070 and more than a vega, and consumes 85 watts LESS than vega while offering more perf
>aib gone dead silent
>aibs never showcased actual gpus when amd paper launched but we saw them at computex in case you forgot...........
>obviously targets super lineup since they are offering the same uplift as 5700xt does
>amd is bad cause they want profit with a good product

looking back on this announcement, it's been a long ass time since a mid-range next-gen AMD GPU came super close to a top-end GPU from the previous generation (VII)

>mid-range next-gen AMD GPU
>$449
>mid range
I hate consumers for eating this shit up

it's a mid range product with a high end price
>I hate consumers for eating this shit up
yea you and me both

>it's a mid range product with a high end price
that we can agree on

Fuck off at least you don't have to deal with $10k quadros.

all this goddamn shitposting right now..
remember that Nvidia is gonna release SUPER versions of the 2080/2070/2060 at the original MSRP of those cards. older versions are gonna receive a small (VERY small) price cut

Attached: mads.png (864x1230, 1.09M)

>nvidia lost 2B in revenue after the crypto crash
>nvidia admitting RTX underperformed greatly
>AMD: "let's follow nvidia's example and see where it gets us"
I want to see how AMD's GPU market share drops even further. Because I'm tired of their retardation concerning GPUs. They did it over and over again: catch up to nvidia but keep prices within 10% of nvidia's, then cry that nobody buys their shit.

give it to me straight Jow Forums, I just want something that will play modern games at 144fps on low settings, and 60fps on high settings. Should I just get an rx580/vega56 or should I wait for navi?

What resolution?
On 1080p both can meet your requirement, on 1440p or above only Vega56 can do it.
No place for Navi unless you are using a much older GPU, but because of the stupid price tag on Navi, you'd better grab a brand new Vega before the supply runs out.

>first thing nvidia did in their presentation was to shit on the 5700xt, changing its tdp to tgp while keeping theirs listed as a tdp

yeah nvidia is getting assblasted this time around

Mainly datacenter workloads at this point.
Also workstation Vegas have enormous FP64; even the Radeon VII in double precision tasks will slap a Titan RTX silly.

>Ray tracing leveraging cloud computing
Stupid.

I'm asking as a user, not a corporation: what advantage, if any, does vega have over navi?

>if your 20% is real then it means vega 64 is below 2070 which is stupid
holy shit, COPE

>NO HDMI 2.1
>NO VIRTUALLINK
>NO VARIABLE RATE SHADING
>NO RAY TRACING
>225W HOUSEFIRES BARELY MATCHING COMPETITION IN PERFORMANCE

OH NO NO NO NO NO NO NO NO NO NO NO

AHAHAHAHAHAHAHAHAHAHAHAHAHAHA

Are you retarded? They moved the TDP values for both brands' GPUs to TGP and moved package power (or ASIC power) to TDP.

Because sony

>GPU test System update may 2019

You talk about Radeon 7

Attached: relative-performance_2560-1440 (3).png (500x810, 47K)

>>NO HDMI 2.1
Useless give me display port 1.5
>>NO VIRTUALLINK
Nothing uses it
>>NO VARIABLE RATE SHADING
Useless, I tried it on wolf 2; it doesn't gain perf worth the image quality it costs
>>NO RAY TRACING
Useless nothing uses it outside of rushed demos
>>225W HOUSEFIRES BARELY MATCHING COMPETITION IN PERFORMANCE
My 2080ti uses 50 watts more and it's twice as fast; this is where amd really fucked up
>OH NO NO NO NO NO NO NO NO NO NO NO
>AHAHAHAHAHAHAHAHAHAHAHAHAHAHA

Don't forget that they even have the
>PROCESS NODE ADVANTAGE

>Useless give me display port 1.5
Doesn't exist yet. Meanwhile HDMI 2.1 is already in TVs that shipped a few months ago. And HDMI 2.1 has roughly 1.65x the effective bandwidth of DisplayPort 1.4, as well as built-in support for variable refresh rate and eARC
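For reference, the effective rates fall out of the published link rates and line encodings (spec values, nothing measured here):

```python
# Effective video bandwidth after line-encoding overhead.

def effective_gbps(raw_gbps: float, payload_bits: int, coded_bits: int) -> float:
    return raw_gbps * payload_bits / coded_bits

dp_14   = effective_gbps(32.4, 8, 10)   # DP 1.4 HBR3: 4 lanes x 8.1 Gbps, 8b/10b -> 25.92
hdmi_21 = effective_gbps(48.0, 16, 18)  # HDMI 2.1 FRL: 4 lanes x 12 Gbps, 16b/18b -> ~42.67

print(f"{dp_14:.2f} {hdmi_21:.2f} ratio {hdmi_21 / dp_14:.2f}")
```

So the gap is closer to 1.65x than 2x, but HDMI 2.1 still wins comfortably on raw throughput.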

He just told you. If you don't understand what that means then you're not the right user for this product. Of course, there are more than one kind of user so which one were you again?

>got a vega 64 that costs 71euros more than a good RTX 2060
AAAAAAAAAAAAAAAHHHHHHHHHHHHHHH

look, I'm just asking what vega is better at than navi, besides "corporation stuff", which I don't even know what that entails.

What's TGP?

>unironically using techpowerup as a legit source of performance when their suite is basically filled with nvidia gameworks games