Go from 14nm to 7nm

>go from 14nm to 7nm
>double the memory bandwidth
>only 18-20% performance increase over vega64

What the fuck is AMD doing?

Attached: 2019-01-29-image-2.png (1200x1000, 238K)

Other urls found in this thread:

techspot.com/review/1627-core-i5-8400-vs-ryzen-5-2600/page8.html
twitter.com/RyanSmithAT/status/1084174168835289088
game-debate.com/games/index.php?g_id=32470&compareGPU=Deep Rock Galactic

Essentially making a filler product so they have something out to stall for Navi.

Pretty much this.

AMD should just learn how to kode.

>7nm offers +25% performance at the same power compared with 14nm
>7nm Vega is 25% more powerful than 14nm Vega
Wow, how could that have happened

Stop using logic here, user - you're supposed to shitpost about how AMD didn't automagically do better than what the foundry says its node delivers when shrinking an existing design.

yeah wtf it's not twice as fast

Most of the GPU is probably still unused transistors from that broken hardware culling mechanism that the 64 had.

>What the fuck is AMD doing?
bumping clock speeds with the extra headroom they found

no idea why everyone is sperging out so hard over it

If this has SR-IOV, it'll be interesting

>no idea why everyone is sperging out so hard over it
'Cuz it's not Nvidia, and thus somehow its existence offends the mindless consumer whores of Jow Forums. 2080 performance for 2080 money is apparently not competition.

>What the fuck is AMD doing?
Absolutely nothing outside of gaming.
Would it hurt AMD to make their own version of CUDA and actually put in the effort to get it adopted?

You mean OpenCL? Nobody used it.

>Most of the GPU is probably still unused transistors
You're on to something. What really stood out to me during the CES keynote was the claim that it would have a whopping 20% performance increase over Vega64 in games and 80% better compute performance.

There's a big difference between a 20% and an 80% performance increase. It's a re-purposed datacenter card (Instinct MI50), so AMD was probably betting on datacenter compute or crypto mining (that bubble has burst...) when they designed it. That lopsided 20%/80% split tells me there will very likely be a whole lot of unused transistors if you use this card for gaming.

It's called ROCm. AMD's even made a tool which lets you convert CUDA code to portable C++ ROCm code.

OpenCL was originally made by some fruit company who abandoned it long ago; it's not an AMD product.

Attached: dreamcatcher-ohmygirl.webm (841x750, 2.86M)
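For anyone who's never actually looked at it: the conversion tool is HIP (hipify-perl / hipify-clang does the mechanical cudaMalloc -> hipMalloc style renaming). Rough sketch below of what a trivially ported kernel looks like; not claiming this is literal hipify output, just the general shape, and it assumes a working ROCm install with hipcc.

// Minimal HIP vector-add, roughly what a trivial CUDA kernel maps to.
// Hypothetical example; compile with: hipcc vec_add.cpp -o vec_add
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // same indexing model as CUDA
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    hipMalloc((void**)&da, bytes);                   // cudaMalloc -> hipMalloc
    hipMalloc((void**)&db, bytes);
    hipMalloc((void**)&dc, bytes);
    hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);

    // kernel<<<grid, block>>>(...) becomes hipLaunchKernelGGL(...)
    hipLaunchKernelGGL(vec_add, dim3((n + 255) / 256), dim3(256), 0, 0, da, db, dc, n);
    hipDeviceSynchronize();

    hipMemcpy(hc.data(), dc, bytes, hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);                    // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}

The mechanical renaming is the easy part; the real porting pain is anything that leans on cuDNN/cuBLAS or inline PTX.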

>It's called ROCm.
Just because it exists doesn't mean that it isn't shit
>AMD's even made a tool which lets you convert CUDA code to portable C++ ROCm code.
And it's shit. You make it seem like it's an easy 1:1 transition, and omit the fact that you virtually have to rewrite the whole thing.
AMD shits out some
>hurr just fix this for us
type software again, and you're here pretending that it's another actual real life this time acshually works alternative to CUDA that will be heavily adopted.
Why can't AMD just learn how to kode?

Why are you so triggered by the mere mention of AMD's open-source alternative to CUDA? Why are you so bothered by the fact that you can now do machine learning with TensorFlow on AMD hardware?

Attached: Kyulkyung-black-03.webm (778x1080, 2.96M)

It won't.

Not triggered. It's just shit, and won't be adopted.

>7nm vega behaves like 7nm vega

>image not related to post.
This triggers me.

they are still using GCN with the same bottlenecks as in 2011.

>>only 18-20% performance increase over vega64
that's actually a lot for a simple shrink; 7nm will be pretty good with actual new products

i am glad amd sucks at gpus. my 1080 can survive next gen consoles too.

sounds like they're holding back to dish out incremental performance updates across various releases ...

Attached: 91b92fa2df.jpg (398x446, 70K)

fucking koreans look like ayys

Yeah, but there was this myth that Vega sucked because it's memory-bandwidth starved. So much for that.

Isn't Pooga 2 just basically some badly binned non-consumer chips thrown together into a "gaymen card" so that they'd have something to wave around and try to get people to believe they are still a player in the consumer market?

This.

You mean gooder marceting?

There is no Pooga 2. It's Radeon VII.

vega is canned.
it died with raja.
that's why.
it's just a die shrink.

imagine what Nvidia can do with 7nm and 5nm EUV (5x-6x the density of 16/14/12nm)

Yupp.
Watch AdoredTV's videos.

they'll charge 500 dollars for a 7nm GTX ??60

>go from 14nm to 7nm

You really believe that, user? They're the same transistors, they just started measuring them from a different point.

It's all about density and Moore's law, not transistor size.
Density still tracks the law.

I doubt they'd still do that pricing after the stock plummet,
and with Intel, the PS5, and the XB2 all coming in 2020.

it's the same amount of cores, you retard
memory bw is for imbeciles to drool over

>it's NOT the same amount of cores as Vega64 or Vega56
>still provides 15% - 35% better gayman performance
ftfy

>Tensorflow on AMD hardware

Source? Last I checked you had to fuck around with compiling from source and it would still be slower than CUDA.

>3840 vs 4096
that is almost nothing
i don't see what the issue with the perf increase % is. it seems pretty predictable.

Navi will be shit too desu, just like Ryzen 8-core performs the same as a $20 6-core Xeon X58 CPU from 2010

AMD is literally 8 years behind. Threadripper is new progress, but their consumer shit is still almost a decade behind.

>Navi will be shit too desu, just like Ryzen 8-core performs the same as a $20 6-core Xeon X58 CPU from 2010
So you're saying that Navi will be on par with Nvidia's products from 2024?

this fucking goy shilling for free

Attached: 1537204806160.gif (108x147, 284K)

Vega 64 has 64 CU, Vega 7 has 60, so 93.75% of the CU of Vega 64.
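And the clock bump more than makes up for the missing CUs. Quick napkin math using the roughly-announced boost clocks (treat the exact percentages as approximate):

// Shader throughput scales with CUs x clock. Radeon VII drops ~6% of the CUs
// but gains more than that back from the higher boost clock.
// Clocks below are approximate boost values, not official measurements.
#include <cstdio>

int main() {
    const int    v64_cus = 64,   vii_cus = 60;
    const double v64_ghz = 1.55, vii_ghz = 1.80;   // approx boost clocks

    printf("CU ratio:         %.2f%%\n", 100.0 * vii_cus / v64_cus);       // 93.75%
    printf("CU x clock ratio: %.0f%%\n",
           100.0 * (vii_cus * vii_ghz) / (v64_cus * v64_ghz));             // ~109%
    return 0;
}

So shader throughput alone only explains part of the 15-35% figure quoted above; presumably the doubled memory bandwidth does the rest.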

>simple shrink
it's not; they upgraded the floating-point hardware.

Pissing their pants

You can get the Xeon on AliExpress right now with free shipping for $19, and it performs the same in games as an overclocked 8-core Ryzen that costs $250-300, bro

You thought you were saving money going AMD, but you literally wasted $200 for no gain over a $20 CPU, yet you say paying $200 more for a 9900K and actual gains is overpriced lol

>gaymes
back to r/Intel you little niggerfaggot

You larp that you are a creative to justify ya wasted purchase

You encode 2 YouTube videos a year, you wasted money on 8-year-old performance, and ya shitty Ryzen CPU will be put in consoles in 6 months and get complained about for being so weak for the next 7 years

You fucked up. You can run 4 virtual machines on a fucking Core 2 Duo. You got ya Ryzen for gaming, don't lie

literally none of this is true, good job user

Yes it is, watch the Tech Yes City video on X58 OC vs Ryzen

It's literally the same FPS and costs $20

Absolutely gold, user. Hearty kek/10

gosh you sound so damn triggered, got nightmares thinking about consumers buying Ryzen instead of the blue turd?

Attached: 1536264127683.gif (287x713, 320K)

>will be put in consoles
Why is this relevant, and why should I care? I don't own a console.
>4 VMs on C2D
You either do nothing on them or suffer them being painfully slow, running 5 OSes on 2 old cores.

This but unironically. Now watch him move his goalposts to the laptop market like the good little NPC he is

Attached: 1541343481938.jpg (736x1024, 73K)

14nm? Absolutely, it's dogshit even compared to ancient 16nm Pascal

Wrong, it runs fine. You bought the wrong computer, sell it and save up for Intel

Watch who? Maybe link the vid.
You're buying a second-hand chip with an expensive motherboard that you have to source on some garbage website and comparing it to a newly released Ryzen

>B-BUT IT PERFORMS THE SAME IN GAMES
then it performs the same as recent-gen Intel chips as well, since Ryzen is maybe 1 gen behind Intel on IPC (and is catching up with Zen 2)

techspot.com/review/1627-core-i5-8400-vs-ryzen-5-2600/page8.html

An overclocked 2600 beats the 8400 in a 36-game sample, from an actual credible reviewer. Your same shit argument could be applied to Intel

Guess what though, retard? No one wants to source some shitty old second-hand Xeon with an expensive motherboard from some retard on eBay that you have zero upgrade path for

>guy literally says channel name and video name
Then demands he link the video anyway. I'm dumb, muh freedom

Cope harder, you know the video exists and you know it's true

>expects other people to go look for his shitty nobody source on youtube
kill yourself

Attached: 1526323055572.gif (245x245, 613K)

>Hides from truths that hurt his feelings

try to hide your damn faggotry for once, double nigger

Attached: dean dance.gif (500x500, 868K)

just ignore it, it's a placeholder product for the love of God
aren't you supposed to be the smarter than average power users of Jow Forums?

Little timmy know bad word

So you admit that you're wrong, then

Unironically though, I am the $20 Xeon guy, and I do want a Radeon GPU because it has like 3-4ms less lag than Nvidia

I can deal with lower performance, worse AMD drivers, and the power bill. I make optimal decisions and play2win

Going to wait and see if Intel's new GPU frame times are better than even AMD's first though, but that's like a year out, so I might end up with a Radeon VII, who knows

Lol, still don't want to Google that YouTube channel "Tech Yes City" and X58, I see. Enjoy living in the dark

>worse AMD drivers

Attached: meme picture 0137.jpg (400x400, 50K)

Literally every game patch every says problems with AMD cards myb the drivers are fine but the support is crap software wise but bearable

>Literally every game patch every says
you have to be 18+ for thise website

Small problems happen because that's the GPU life. AMD cards are better off in that regard in the real world.
Also stop playing modern games.

This. Whether I get it or not depends entirely on this.

If I care about 3-4ms advantage I'm obviously not playing modern "LCD" games

I've used X58 Xeons.
They're fucking garbage; even if you can get your hands on the one functioning board that you can OC on, it's not worth it

Lol, you're pissed off because it's true

twitter.com/RyanSmithAT/status/1084174168835289088

Nigger, I used a dual Xeon system for years and upgraded to a Ryzen 1700 last year
you're a retarded shill who watched one video and made your mind up with """evidence"""

This
ROCm is still slower on better hardware.
I wish it got the same wide adoption as goyvidia's CUDA.

Lol, the pain. It's not a dual Xeon, it's a single 6-core Xeon from 2010 that costs $20 and with a slight OC performs the same as a $300 Ryzen

You fucked up and bought hardware because of buzz, and you're tech illiterate

>ya
>ya
>ya
your post was already retarded enough

While X58 Xeons don't perform the same as Ryzen, you should know that you can OC with ThrottleStop on any Dell board. The classic move was to get a Dell dual-socket mobo from a workstation, load it up with as much ECC RAM as it could take, use two X5675s, and OC the everliving fuck out of them

>it's a dumpster dive Xeon shill episode

pretty enjoyable to see this ipajeet retard keep getting repeatedly btfo ngl desu

Why would an incel shill promote old hardware?

>it's a spot the consumerist episode

High-CU-count GCN is bottlenecked at the front end of the rendering pipeline by only being able to process 4 triangles per clock. As a result, VII will be faster than V64 by roughly the proportion by which it sustainably clocks higher.

More would require fixing GCN's front end.
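Back-of-envelope on why the uplift just tracks clocks, if the 4 tris/clock figure is right (clocks are approximate boost values, so take the exact numbers loosely):

// If the front end can only emit 4 triangles per clock, peak geometry
// throughput scales with clock speed alone, regardless of CU count.
// Clock values are approximate boost clocks, used only for illustration.
#include <cstdio>

int main() {
    const double tris_per_clock = 4.0;    // GCN front-end limit discussed above
    const double v64_clock_ghz  = 1.55;   // Vega 64 boost, approx
    const double vii_clock_ghz  = 1.80;   // Radeon VII boost, approx

    const double v64_gtris = tris_per_clock * v64_clock_ghz;   // billions of tris/s
    const double vii_gtris = tris_per_clock * vii_clock_ghz;

    printf("Vega 64:    ~%.1f Gtris/s\n", v64_gtris);           // ~6.2
    printf("Radeon VII: ~%.1f Gtris/s\n", vii_gtris);           // ~7.2
    printf("Uplift:     ~%.0f%% (same as the clock bump)\n",
           (vii_gtris / v64_gtris - 1.0) * 100.0);              // ~16%
    return 0;
}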

Vega's problem is that it has the same very low fixed-function pipeline capacity that GCN has been limited to for the last 5+ years. Some try to argue that it's ROP count, but realistically it's more the 4 triangle fragments (up to 4x4 px each) per clock that limit it, especially with (((GameWorks))) intentionally emitting tons of super-tessellated geometry at the smallest excuse.

Vega introduced a geometry pre-filter functional unit (primitive shader/discard) that was intended to double or triple the effective throughput, but they could never get it to work with standard API drivers, only with custom professional rendering software that used custom APIs.
Vega-based next-gen consoles will probably perform unexpectedly well due to this, since PS/XB don't ever use vanilla graphics APIs anyway. The discard acceleration originally stems from console render-engine devs (e.g. the Frostbite team) using compute shaders for scene pre-filtering on XB1/PS4 (which each have only 2 tri/clock capacity), and Sony (and maybe MS?) asking for it to get dedicated hardware acceleration in the upcoming gen.

+ still a 300W housefire

Salvaging what they can from this guy's backstabbing.

Attached: Raja-Koduri.jpg (1024x576, 119K)

>Plastic Korean shit
I can only wish this were a bannable offense.

If geometry were a real bottleneck they would have fixed it a long time ago. The reality is Vega can push around 7 billion tris per second, and other cards push proportionately high geometry. A good-looking game might have maybe a dozen million triangles with tessellation, still little enough for hundreds of fps.

Most likely GCN is just flawed and can't fill wavefronts efficiently, which is why its poor performance extends to geometrically simple games with modern lighting

game-debate.com/games/index.php?g_id=32470&compareGPU=Deep Rock Galactic

The 480 has like 40% higher compute throughput than the 1060
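Easy to sanity-check from the paper specs. Napkin FP32 math below; shader counts are the official ones, but the boost clocks are from memory, so treat the exact percentage as ballpark (I get roughly 30-35% rather than 40, depending on which clocks you plug in):

// Peak FP32 throughput ~= 2 FLOPs (FMA) x shader count x clock.
// Clock values are approximate boost clocks; treat the output as ballpark.
#include <cstdio>

static double tflops(int shaders, double clock_ghz) {
    return 2.0 * shaders * clock_ghz / 1000.0;   // GFLOPS -> TFLOPS
}

int main() {
    const double rx480   = tflops(2304, 1.27);   // RX 480: 2304 SPs, ~1.27 GHz boost
    const double gtx1060 = tflops(1280, 1.71);   // GTX 1060 6GB: 1280 cores, ~1.71 GHz boost

    printf("RX 480:   ~%.1f TFLOPS\n", rx480);    // ~5.9
    printf("GTX 1060: ~%.1f TFLOPS\n", gtx1060);  // ~4.4
    printf("Delta:    ~%.0f%% more raw FP32 on the 480\n",
           (rx480 / gtx1060 - 1.0) * 100.0);      // yet they game about the same
    return 0;
}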

Holy fuck knuckles, I can finally switch back to a vanilla kernel without the ACS patch? HOW CAN ONE COMPANY BE SO BASED?

Attached: 1548456167490.png (258x287, 42K)

>triggered.

lmao, fucking incel.

>having actual conversation instead of shitposting
waddafug lol fuckin losers

Attached: file.png (700x700, 426K)

>full FP64
>SR-IOV
Damn, that better be true.

>implying conversation between human beings can be achieved
I don't mean to offend you, that's a nice delusion you live in

Attached: 1548117727980.jpg (328x281, 80K)

>If geometry were a real bottleneck they would have fixed it a long time ago.
That would have required redesigning GCN's front end, and why would they have spent the resources to do so when Raja promised he could sidestep the issue with magic drivers (which he then never had the software development team even attempt to implement prior to Vega's hard launch).

The whole reason "primitive shaders" were so central to technical discussions about Vega was precisely the question of whether they would let it overcome that 4 tris/clock bottleneck.

>The reality is vega can push around 7 billion tris per second and
4 tris/clock is 50% below Nvidia and isn't enough to saturate large numbers of CUs. The collapse of performance differences at 56/60/64 CUs at equal clocks shows this. The phenomenon has been analyzed fairly extensively since Fury.

If you're comparing anything less than 4K with at least some form of AA, please go away.

My xfire 6850 rig from fucking 2011 had 300 GB/s.