Can somebody explain

Why AMD can compete with Intel CPUs, but still struggles to compete with Nvidia GPUs

Attached: AyyMD.png (600x509, 81K)

Other urls found in this thread:

youtube.com/watch?v=uN7i1bViOkU
youtube.com/watch?v=0dEyLoH3eTA
youtube.com/watch?v=0CSby1zFkDw
anandtech.com/show/9815/amd-moves-pre-gcn-gpus-to-legacy
twitter.com/SFWRedditImages

What are you talking about?
AMD is literally the only viable option when it comes to discrete GPUs.
I literally have 5 AMD GPUs, and 1 Nvidia GPU (which I didn't even pay for).

Money, dude. They need a big budget to compete with Nvidia, because GPUs today are half software; nobody needs raw power without the programs to use it. Nvidia got that, AMD didn't.

they were able to compete with them for a while; it's only recently that they've fallen behind.

AMD never poured the money into manipulating the market that nV did. By the time their CPUs started to fall behind Intel, they were still in the lead in the GPU market, but without the CPU division to pick up the slack with money, they literally couldn't afford to continue making better GPUs than nV.

>a while

They consistently outperformed nVidia for a decade. NVidia just managed to manipulate the market better. AMD just got royally fucked by anticompetitive behavior from two companies.

>Why AMD can compete with Intel CPUs
Uhh what?

Attached: untitled-28.png (711x668, 25K)

Why is threadripper never on these cards

Shoo.

Comparisons I mean.

To be honest, AMD does compete with Nvidia cards very well, in my opinion. Just because they don't compete at the high end (xx80/xx80 Ti) doesn't mean they aren't present in the market. You will find that at the mid-range price point, the AMD option will almost always be the better buy. This mentality that the high end of the market is an indicator of quality is all fluff. I'd wager that over half of the market pays for hardware they don't even utilize, which is where this myth gets created.

>le space heater meme
>le le meme

Take a quick history lesson.
youtube.com/watch?v=uN7i1bViOkU
youtube.com/watch?v=0dEyLoH3eTA

I want what you're smoking

>NVDA market cap: 126.37B
>INTC market cap: 204.31B
>AMD market cap: 18.79B
it's pretty obvious lad

Has anyone done any deep diving into exactly why it is that Vega doesn't compare more favorably with nVidia hardware, despite it having more raw compute horsepower and very comparable memory bandwidth? Is it hardware or drivers or gayme optimization, or all three?

It is still GCN, which doesn't scale well.

>AMD can compete with Intel CPUs

m8

>red company bad pssstffftssstfff

Attached: red_is_bad.jpg (426x510, 46K)

if nvidia is so great.. why dont they get into the cpu business? seems like an easy win

>lalalalala if I dont open my eyes I wont know intel is losing ground!

They've tried and failed with ARM, and no one will give them an x86 license

/thread

"Scale" in what way? It's not like the fundamentals of SIMD is somehow different for GCN than for nVidia.

People automatically assume that the company that makes the very best card also has the best cards in every price range. Nvidia only wins in the $500+ price range. Which is only 2 or 3% of the market.

pretty much.

Because games are optimized for Nvidia hardware first. It's not even about gayworks, but about utilizing Nvidia's strong points like geometry and underutilizing strong GCN points like shaders and compute. Look at Dirt 4: it's heavy on shaders and Radeons are really fucking competitive with Nvidia there. Or Ashes of the Singularity, where the only way for Nvidia to compete was cheating by simply skipping the heaviest shaders specifically in the benchmark, until they were called out some time later. Playing on Radeons in 80% of PC games is like trying to fit a cube into a round hole.

amd is incompetent, that's simply it. They occasionally make great hardware, but they still haven't realized that GPUs are a software+hardware business, and not just bare hardware.

Ignore any answer in this thread that doesn't involve talking about GCN's architectural limitations, as those people are just throwing unfounded opinions at the wall.

The real answer is that AMD needed to have the financial resources and the leadership at the RTG to be able to do the heavy lifting of redesigning GCN to be able to handle more than 4 triangles per clock in the front end of its rendering pipeline, and that it is this triangle-per-clock limitation that holds back the performance of their high end GPUs in gaming.

Once AMD redesigns GCN to overcome that limitation, their high end cards will suddenly gain a lot of performance because they will finally be able to saturate the rendering pipeline.

Raja delayed this reckoning with the architectural limitations of GCN by lying his ass off that he could solve the problem with magic drivers, when in reality he never made any attempt to have the necessary software features actually implemented, in any way, shape or form, in the gaming drivers.

Due to the 4-triangle-per-clock front-end geometry bottleneck, GCN-based GPUs scale horribly beyond about 48 CUs. If you clocked a Vega 48, Vega 56, and Vega 64 at the same clocks, there would be virtually zero performance difference because of that bottleneck.
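
The claim above can be sanity-checked with a little arithmetic. The sketch below is a hedged back-of-the-envelope in plain Python: the 4-triangles-per-clock figure comes from the posts in this thread, the 1.5 GHz clock is an illustrative round number (not a measured spec), and peak FP32 assumes GCN's 64 lanes per CU with 2 ops per FMA.

```python
# Back-of-the-envelope sketch with illustrative numbers, not measured specs.

def geometry_rate(clock_hz, tris_per_clock=4):
    """Peak triangles/second through the fixed-function front end."""
    return tris_per_clock * clock_hz

def shader_flops(cus, clock_hz, lanes_per_cu=64):
    """Peak FP32 FLOPS across all CUs (64 lanes/CU, 2 ops per FMA)."""
    return cus * lanes_per_cu * 2 * clock_hz

clock = 1.5e9  # force both cards to the same 1.5 GHz for the comparison
for name, cus in [("Vega 56", 56), ("Vega 64", 64)]:
    print(f"{name}: {shader_flops(cus, clock) / 1e12:.1f} TFLOPS, "
          f"{geometry_rate(clock) / 1e9:.1f} Gtris/s")
# Vega 56: 10.8 TFLOPS, 6.0 Gtris/s
# Vega 64: 12.3 TFLOPS, 6.0 Gtris/s
```

At equal clocks both cards hit the same 6.0 Gtris/s front-end ceiling while peak shader FLOPS keeps scaling with CU count, which is exactly the mismatch the post describes: extra CUs buy nothing on a geometry-limited workload.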

RX 580 > GTX 1060
Vega 56 > GTX 1070 Ti

they have a few cards that compete. the issue is that all their other cards are shit.

nVidia is a lot more anti-competitive and corrupted than Intel

>the latest Intel CPU barely outperforms a months old Ryzen
I can already see you crying when AMD moves to 7nm and gets a more than 10% performance increase

It's probably because the i9-9900K is considered a "consumer" CPU and the Ryzen 7 2700X is the highest-level "consumer" CPU AMD has. If you compare current prices, though, you'll find that the i9-9900K + motherboard is priced at the same level as a Threadripper 1950X + motherboard (the 1950X itself is cheaper). The Threadripper is in a class above mainstream consumer CPUs, though. Still, I agree: if you're going to do a comparison, it's fair to include CPUs at the same price points.

Intel sat on their ass and made almost no progress for nearly a decade until Ryzen lit a fire under their seat. Their 10nm process is fucked too. NVIDIA has been improving constantly.

Generally speaking you can expect Threadripper to perform worse than a 2700X in games; the multi-die fuckery and the latency it introduces, along with memory allocation and scheduling on a NUMA CPU, will generally make it perform worse, since games do not need 12-32 cores.

>Why AMD can compete with Intel CPUs, but still struggles to compete with Nvidia GPUs
One important point nobody in this thread has mentioned so far is that Intel, unlike both AMD and NVidia, is a chip producer. They have their own fabs and make their own chips. Intel competes with TSMC and Samsung (and GF) in that area, not AMD (or NVidia). Intel can't into 10nm and TSMC is doing volume production on 7nm already, so AMD gets a free advantage on the CPU side. They don't have this on the GPU side, since both NVidia and AMD use the same TSMC fabs to make their GPUs.

I don't think I'll ever go to AMD cards/CPUs unless I want a cheap CentOS 7 server running VMs. For gaming I will always use Nvidia/Intel. The architecture is simply better than AMD's stuff.

>Intel
>House on fire

I still like their chips.

m8 I thought Vega and Polaris were both made by GloFo

Because Nvidia has been actively working to attain a market monopoly. The 8800GT was so good when it launched that it allowed them to pursue this aggressively and without fear, because their newfound customer brand loyalty combined with every dirty trick they've pulled in their quest to be the only name in the game meant that going forward, Nvidia would outsell AMD no matter what. Even amazing Radeon GPUs vs trash Nvidia GPUs, as was the case with the Radeon 5000 series vs Fermi, Nvidia would still win out.

Their next play was to take advantage of an underwhelming AMD launch (Radeon 7000, GCN 1.0) to shift their product naming stack, so they could start charging flagship prices for their midrange GPUs and start charging $1000+ for their flagships, vastly increasing their profit margins.

In the end, Nvidia made money hand over fist, and Radeon Technologies Group didn't. And of course, you need money to develop competitive products. Then Raja Pooduri left AMD high and dry. So now AMD is left to focus on making money, which is something that Zen is doing very well for them so far. And Vega is actually making them some money too, because AMD struggles against Nvidia in consumer gaming graphics, but is actually very capable at datacenter tasks, which is a much more profitable market. Vega was made for the datacenter first, which is why it sucks at gaming. Again, they have to focus on profits.

And they especially have to focus on profits now, since Nvidia will be making more money than ever now that they've taken their already artificially expensive GPUs and made the newest ones even more expensive, yet they're still selling like crazy. Nvidia wanted the market for themselves, and the general public gave it to them, because Nvidia did everything they could to make people give it to them.

That's why Radeon can't compete with Geforce.

AMD was ahead of Intel CPUs from the Athlon days up until Bulldozer.

It's not surprising they came back ahead of them. It just took a long time because a new architecture takes a long time, especially when your company doesn't have money due to illegal anti-competitive practices by your competitor.

The reason they can't compete with Nvidia on GPUs is that even when they had a massive advantage with the 3000, 4000, 5000, 6000, 7000, and 200 series, they still at best had 53% marketshare, so they had to refocus on saving money instead of spending a lot on R&D. There's a lot more fanboyism toward Nvidia with GPUs compared to CPUs.

>HD5000
>good

Don't kid yourself, faggot

5000/6000 series have aged poorly compared to Fermi

Fermi buyers were smart while HD5000/6000 cucks got dumped on by AYYMD

youtube.com/watch?v=0CSby1zFkDw

anandtech.com/show/9815/amd-moves-pre-gcn-gpus-to-legacy

polaris yes, vega not anymore. vega went to tsmc for 7nm process.

>5870
>not 5970

5870 was the flagship. Radeon 6000 series was when x970 cards became the single-GPU flagships. Before that, the 5970 was a dual-GPU card.

Ryzen came out for CPUs, but has yet to come out for GPUs.

Wait for Navi(tm).

Are you saying that the whole chip has a single primitive rasterizer, regardless of the number of CUs? That seems pretty stupid tbqh.

$379 flagship, sure thing. AMD just went with small chips that generation, with the 5970 having roughly the same power consumption as the GTX 480.

Why are you comparing to Fermi when they came out earlier?
Not only that, but those GPUs ARE better for older games than Fermi as well.

>have aged poorly
Who gives a shit? They were amazing cards when they came out compared to the overheating disaster that was Fermi. By the time they were moved to legacy status they were way outdated anyway.

>Why AMD can compete with Intel CPU
????
last time i checked intel still sells the fastest cpu on the market

Attached: 1520752304691.jpg (999x1065, 220K)

although it does seem Nvidia will be doing the same now that they got a big lead

10% won't help against 39 Frames tho

the most too

At double the price

They do compete, they just don't target the low-revenue xx80 Ti top high-end segment of the market. Anything 1080 and below they have competition for, and anything compute they have great cards for.

This was the first non-autistic post in the whole thread.

By that metric, no one can ever "compete", since there's only one strictly fastest CPU on the market at any one time.

>not wanting your own personal nuclear reactor
amdpleb

JUST WAIT FOR ICE LAKE
SEKRIT SAUCE CACHE IS HERE
WYPYPO BTFO

Attached: 1540561414496.png (712x794, 48K)

>8 core vs 32 core on a multi core test
gee wizz i wonder why amd tops the chart

rekt

>not POWER9
>"fastest in the market"

Attached: 1512894351281.jpg (429x410, 37K)

and are the answers you seek.

400€ 1920X beats the 650€ 9900K
gee wizz

It's a pretty complicated issue.
TL;DR is AMD's GPUs are front-end bottlenecked in hardware, meaning the (GPU) pipeline chokes when there's too much load on it.
Basically the card loses ~25% performance unless the game is specifically optimised for GCN cards (7970 through Vega 64).
What's puzzling is that most console ports are pretty badly optimised for PC builds vs the Xbox or PS4 version.
Idk if it's DX11/12 or a deeper API issue on Windows, but all AMD GPUs run like absolute poo on most titles with those APIs.
Some OGL games run okay, but it's mainly shit.
Vulkan, however, is closer to what we see on consoles optimisation-wise, so going forward, games built around heavy compute, async, and highly parallelized multithreaded work will run amazing on Vega. But that's only one or two games right now: Sniper Elite 4 and Doom 2016.
Hopefully now that new consoles and a lot of new game engines will be bending over backwards to support the gen 9 console stuff, with Navi (basically Vega on roids) and Ryzen coming 2020+, things should look up.
Can't say the same about RTX, which is a complete joke, and I say that as I shitpost from my GTX 1080 Ryzen PC.
I had a 56 OC but it was an absolute dog in most games, so I swapped back and sold it on.
Many games just simply don't port well to Windows on an AMD GPU with the current lazy-fuck devs.
Who knows, maybe in a few years AMD RTG will unfuck itself, but tbqh Vegas ain't bad; they just needed to be cheaper and almost as fast. The mining boom ruined it, not Raja.
As for the whole prim shaders debacle, they probably could not get it working in software/hardware, so it was scrapped.
Nvidia did something similar with their Fermi cards being very compute heavy: they ripped out the hardware scheduler and did it in software with little power or performance disadvantage.
Sadly AMD GPUs won't be good until MCM dual/quad chiplet designs come out with Infinity Fabric on 7nm or less.
Nvidia basically also threw in the towel a month after their flop, with no games to be seen. Cont.

Attached: 761AFD03153548F1BBCBC0C5608320DD.jpg (550x366, 28K)

either amd invests in an ecosystem like GameWorks and shoves money at devs to build around it, slapping on like 30% async like they do on the consoles,
or builds a new uarch that can do pretty much everything without needing the software, which again will be a housefire because amd gave up on the hardware scheduler entirely

buys 9900k
stays at 1080p

i mean the only reason those benchmarks exist is that they make intel look good and nothing more

/thread

Raytracing in general is fucked; it's 10 years out of date. Pixar has been using path tracing (the successor to ray tracing) for over a decade now, not the single-pass denoised (((real time))) crap that Nvidia is currently pushing.
Basically both architectures are fucked. Turing is a mess: slower per die size and per watt at games than Pascal while also being hideously expensive, and it underperforms even at the 2080 Ti level. 4K still isn't realistically possible at the mainstream level despite being around for 5+ years; it's an absolute clusterfuck on all fronts.
Vega at least is cheap. Issues aside it's decent, much better than Fury and the dogshit Hawaii XT 390 refresh, as well as Poolaris 2.
GPUs are in a bad space right now: either you go AMD and put up with patchy performance, or Nvidia and get absolutely ripped off with a bunch of slow, stupid cards with features no games will ever fucking use.
RTX is just GameWorks: the next generation. Nobody in their right mind will bother with this shit till real-time path tracing at decent sample counts is a thing, and only in the midrange down.
So with fabrication leaps slowing down and multi-chip-module GPUs becoming a thing like it did with Ryzen, I can realistically see AMD unfucking itself around 2021 on the GPU/RTG PC side of things, and Nvidia launching 7nm MCM stuff of their own around then as well, as they can't keep going bigger and hotter dies like Intel did with its 14nm++++ monolithic crap. It's dead.

Oh but I'm not done yet

So basically, TL;DR:
AMD 2020: 7nm MCM asskick
Nvidia 2020: 7nm overpriced Intel-tier junk, since they won't drop RTX
Intel: dead 2018+
At this stage GPUs are like x86: nobody wants to ditch rasterization in favour of tech that doesn't work (AYYYMD has had real-time gaytraced stuff since 2016; it makes no sense for gaming)

OK, TL;DR over. If you made it this far, I'll sum it up:
Shit won't get better. Pascal was the i7 9xx series of the GPU world; we won't see anything that good again till Ryzen-on-GPUs, 3+ years out.

Check again. 2990WX is the fastest CPU on the market. Intel isn't even attempting to compete with it, because they can't.

They did. Remember that 28-core 5GHz 3kW monster? They got so utterly fucked by a CPU that uses a tenth of the price, power and cooling that they roped it off the showroom floor at CES and we never saw it again.

POWER9 is hardly the fastest. It keeps up with Xeons and Epycs in some tests, but on the whole it isn't nearly as fast. A lot of this is down to compiler optimizations, though.

Threadripper draws less power.

So Raja betrayed AMD?
And can it still be done at the software level?
Could we possibly see a significant Vega performance increase in the future?

Hey guys check out my new company badge. You enjoying your cut down RTX GPUs? You can thank me later for that increase in price.

You didn't buy in to that fine wine I was selling on vega did you? I made the GPU equivalent of bulldozer with that pile of shit. I hope you enjoyed your spicy ice cream shits after my hype show a couple years back.

Can't wait to show you what Intel has me working on next! If you thought Larrabee was the pinnacle of Intel GPUs, you ain't seen nothing yet! Five times the power consumption and five times the compute power of the latest Quadro card. All at five times the price. Just don't ask us about driver support. We are still part of the Intel graphics division, after all.

You're welcome for the shit show of the current GPU market. I regret nothing you little shits.

Attached: Raja-Koduri-Intel.jpg (602x430, 104K)

Nvidia cuts corners, which is OK. Their memory compression and hardware culling are generations ahead of what AMD has to offer. Also, AMD has more parallelism than Nvidia, and only Vulkan, which isn't much used, can harness the power of Vega efficiently. DX12 still isn't fully usable because Shader Model 6 hasn't been implemented yet.
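
The memory-compression point can be shown with a toy example. This is only a sketch of the general delta-coding idea; the actual schemes Nvidia and AMD ship are proprietary and far more sophisticated.

```python
# Toy lossless delta colour compression: store the first pixel of a tile,
# then per-pixel deltas. Smooth tiles give tiny deltas that pack into far
# fewer bits, which is where the bandwidth saving comes from.
# Illustration only; real hardware schemes are proprietary.

def delta_encode(tile):
    base = tile[0]
    deltas = [b - a for a, b in zip(tile, tile[1:])]
    return base, deltas

def delta_decode(base, deltas):
    out = [base]
    for d in deltas:
        out.append(out[-1] + d)
    return out

tile = [200, 201, 201, 202, 203, 203, 204, 205]  # a smooth 8-pixel gradient
base, deltas = delta_encode(tile)
assert delta_decode(base, deltas) == tile  # lossless round trip
# every delta here fits in 1 bit instead of 8 bits per pixel
print(base, deltas)  # 200 [1, 0, 1, 1, 0, 1, 1]
```

Real implementations work on 2D tiles and fall back to uncompressed storage when the deltas get large, but the bandwidth logic is the same.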

The real problem is that the majority of GPU consumers will buy Nvidia no matter what. AMD could fix their geometry bottleneck and produce an absolute beast of a card and it wouldn't make much of an impact.

Don't care pc and laptop sales and slipping compared to mobile and console let it die and nvidia and Intel can hang onto their echo chamber of dung hill

>m8
>>>/jidf/

Because they should have split GPU compute from gayming since 290x/Fury.

>unironically considering console gaming
kys yourself

Attached: image.jpg (400x400, 151K)

Pixar started using denoising with Finding Dory. The OptiX denoiser is available in a few production renderers. Turing also shits all over Pascal in rendering; go look at how hard the 2080 Ti rapes the P6000 in Redshift. If I could find a 2080 Ti I would gladly buy 2. If Vega worked in Redshift I would gladly buy 4.

>KILL YOURSELF YOURSELF
Nah, I suffered through gens 4, 5, 6, 7 and 8.
Gen 7 started to get bad with 480p 30fps garbage, and the current gen is just as bad: 684p on the Xboner lol.
I think I'll just wait for more games; my PC will easily last until 202X+.
They can't afford to pull a new architecture out of their ass if nobody besides console makers is buying it.
Radeon on PC is dead, RTG is literally in the shitter, and I say this as someone who owned every single AMD/ATI card except the Fury, which was garbage BTW.
If Navi on 7nm or MCM goes ahead, that could be interesting: you'd get 1080 Ti level perf between two chiplets at the same price or less, or 2080 Ti / Titan V tier with a quad-chiplet MCM. But that's at least 2 years away, and no games really require good GPUs anymore; anything more than a midrange RX 480 / GTX 1080 is an absolute waste since everyone is targeting high fps for gaysync gayming.

>Why AMD can compete with Intel CPUs, but still struggles to compete with Nvidia GPUs

CPUs and GPUs are very different designs
if you really didn't know that, GTFO

I am exclusively talking about RTX and GCN in games, not faggot render farm shit that can always be brute forced.
Desktop GPUs don't have that option, as SLI/Crossfire never scaled well; it's either a single fast card or nothing.
Denoising is fine if you have a decent sample count to work off, not 2 or 3 spp shit like RTX.

Actually, depending on how Navi and the MCM stuff goes, we could see very similar chiplet/XPoint-tier stuff in GPUs as well, just like Ryzen's Infinity Fabric.
I wonder how HBM3+ and a multi-chip GPU would work on the same package; it would be a 4K 240Hz / 8K 60 monster if they get 1080 Ti x2 speeds out of it on 7nm.

They don't really have the resources to do both at the same time.

What makes me sad is, I love the hardware evolution part of it. The games, however...
I just want Crysis 4 or something, so I can just cry about not being able to run it on high.
And to be honest, I played Crysis recently, and most of today's productions just look like shit against it.

intel's 8 cores draw more power than 16 threadripper cores WHILE PERFORMING WORSE LOL

>RX 580 > GTX 1060
>Vega 56 > GTX 1070 Ti
ever so slightly
just like other segments

Also, it's available for sale.

Besides, I don't want them to. Vega was the swan song for RTG on PC tbqh; you can't bounce back from that, and nobody bothered to buy it, or Fury, or Polaris.
We really haven't seen any graphical improvements since Crysis 3 five years ago, or the Crysis 1 tech demo from 2006.
Until GPU power goes to 20 TFLOPS+ it's not happening. Then you still have the limitations of a traditional raster engine, plus the limits of RTX, which can't path trace (yet); raytracing is still extremely slow, 1080p 60fps at best, and needs to be partially rendered with the rasterized sizzle anyway, because nobody is stupid enough to make a video card in the next decade that can only do path/raytrace/compute just for gaming. It's definitely possible, though; I'd like to see what an RTX would look like with just RT cores on 7nm taking up the whole die, with a separate corelet doing raster and another for (AI). Could be interesting, but the days of cool shit like that are long gone.
SGI was doing this back in the 90s and 00s in real time as well; this is nothing new, but marketing it to gamers is. It's been attempted before, but it literally needed a server to run; now it's shrunk down to a single (underpowered) GPU in under a decade.
Realistically we won't see a big jump in real-time CGI till we get RTX and AMD's stuff off the ground and into actual consoles, gaming PCs and mobile phones (some of which I think already had the capability to raytrace stuff, PowerVR etc).
As a 1080 OC owner it's overkill; 4K is a pipe dream this gen. It's been a long-running money pit since 2012.

Reminder that 7nm Threadripper is going to be fucking ridiculous on performance and efficiency

AMD operates on a roadmap cycle. They put all their attention to each of their sectors during a specific time. Currently they're focusing their manpower and money on improving their CPU architecture. Soon the GPUs will follow while the CPU sector stays dormant.

Sadly that does seem to be the case with Turing, yes. The price increase NVIDIA pulled out of their ass is even worse than what Intel did in the pre-Ryzen era, actually.

I don't understand why, but I had both a GTX 1060 and a Vega 56 connected to the same screen, and the Vega looked significantly better, with deeper colours.

Crysis 3 still drops below 60fps on a 2080 Ti at 4K.
Inb4 4K meme, Crysis might be the game that makes it the most obvious how 4K helps. Mostly because of all the vegetation, transparent textures and shit; no amount of antialiasing will help, I guess.
I was fucking amazed the first time I played it at 4K. My brother, 10 years older, who was beside me, couldn't believe it was the same game he saw me playing in 800x600 back in the day.
Yes, they did set the bar that fucking high back then (10 fucking years ago). Why aren't we getting games that will look better on future hardware anymore?

different default color profile settings.

amd's drivers use deeper contrast while nvidia's bloom it up.