AMD Radeon RX 590 predictions

I'm still learning about hardware but am interested in what people think about the RX 590. The 12nm process should reduce power usage and boost clock speeds, right? Which AMD products can switch to GDDR6 anytime soon?

Like, if AMD were going to make a GDDR6 video card on 12/7nm, what would make sense hardware-wise? Is there any early-2019 product they could make, besides 7nm compute-focused cards, to compete with the 1070-1080?

Attached: ASUS-Radeon-RX-590-ROG-STRIX-Gaming-Graphics-Card-740x565.jpg (740x565, 54K)

Also wondering about AMD APUs and one-chip products in general. Will we ever see a high-end one with performance around a 1080 + R5 2600? The next update to the APU line should be 1H 2019.

The Vega 56/64 already compete with the 1070/1080. I don't think the RX 590 will benefit a whole lot from the new process; GCN is already at its limit, and so are the possible tweaking options. Its purpose is to make some money until Navi.

Yeah was just wondering if they could do a stopgap product with 12nm/7nm and GDDR6

guess not, though the 7nm GPUs will be GDDR6, right?

It's a stopgap measure.
The fact that they're not putting out a new series means new cards are less than a year away.

Presumably, unless they decide to go with HBM again

It's just Polaris. As cool as the Polaris die is, having one of the most gorgeous layouts of any modern IC, it's just a mid-range design. The 12nm refresh is maybe going to push clocks 10% higher, maybe lower power consumption by 10%. It's still just an RX 480/580 with slightly different clocks.
No one knows what their lineup for 2019 will be. They haven't said a word about the upcoming Navi arch; no one knows a thing about it aside from the code name. There won't be any details until around CES next year.

APUs are made first and foremost to scale between mobile and entry-level desktops/small-form-factor systems. They aren't made for high performance because there is no market for them. Nobody wants to buy an expensive chip with half the CPU of AMD's other products and a GPU that's going to be handicapped compared to a discrete card.

There's no advantage to one-chip designs at all? Seems like there would be, though.

So we know a 7nm Vega is coming, but are there any plans for 7nm Polaris? I'm just a poster on a Mongolian basket weaving website, but it seems to me a Polaris shrink could be quite viable.

>worse than 1070 both in performance and performance/W

RX690 when?
>the only 690 that matters

>56/64 already compete with the 1070/1080
Having twice the TDP isn't competitive. Compared to the 1070/1080, Vega cards are a generation behind, which is embarrassing since they were released a year after the nVidia cards.
There's no competition in GPUs, nVidia wins everywhere. The only thing AMD is good at is not being nearly as corrupt as nVidia and working properly on Linux out of the box.

If true, this will be a stocking filler for midrange upgraders who don't want to swallow Nvidia cock.

Price?

Isn't that only instinct/ml/compute and not gaming?

...

So my hopes for a GDDR6 AMD card or a high-performance APU are basically dead before the 2H 2019 7nm Navi launch?

Would be awesome if they could price-compete with the 1070-1080 using GDDR6 or an APU.

muh undervolt

>Isn't that only instinct/ml/compute and not gaming?
No idea. There have been shill threads popping up already, boasting about a performance uplift from 12nm Vega, but one can't trust that. It will probably game better, still be advertised for compute, and get axed by its price.

Whatever bench leaks have been shown, they're almost certainly fake. The scores are lower than my overclocked RX 480's, despite the 590 having higher clocks.

>have to dick around and have an EE degree to unfuck a GPU
Kys

>have an EE degree to drag a slider to the left

I think these shill threads are made by Nvidia fanboys.
Neither AMD nor their fanboys make such retarded statements all the time.

What BIOS has sliders?

>bios
the state of nvidiacucks.
You can adjust the voltage in AMD's global Wattman.

>BIOS editing needs an EE degree
>you need to edit BIOS to undervolt
Breh, just stop. You can use any of a number of xXxGAMERxXx "apps" to both overclock and undervolt, including the interface AMD bundles with their drivers, or use any of a number of BIOS editors with a GUI. Although you can only really mess with the Polaris BIOS, as Vega's is signed.

The fact that even AMDrones don't make shill threads for AMD GPUs tells you everything you need to know about their performance

yeah, it's good enough that people don't need to make viral marketing threads every fucking hour on Jow Forums to remind people that the performance-per-dollar ratio is better than Nvidia's
>green with envy

Mhm, show me a Linux "xxxgamer" app for AMD that has a proper GUI and is either made officially by AMD or open source.
>interface AMD bundles with their drivers
Again, no such thing outside of Windows.

Linux desktop has barely any market share and as such is irrelevant. But, I mean, if you were competent enough to set up a usable distro, you're competent enough to change one hex number to set an offset in a Polaris BIOS.
There's also some guy on OCN, Vento20 or something like that, who's written a Polaris BIOS editor in Java; you could probably run that.
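If anyone's wondering what "change one hex number" actually looks like, here's a minimal Python sketch. The offset and value are made-up placeholders (finding the real voltage entry in the PowerPlay table is exactly what the GUI editors do for you), and the checksum fix-up follows the usual ATOM BIOS layout, image length in 512-byte units at byte 0x02 and a checksum byte at 0x21, so treat that as an assumption too. Flash at your own risk.

```python
# Patch one byte in a Polaris BIOS dump, then fix the ATOM checksum.
OFFSET = 0x1234  # hypothetical location of a voltage entry -- NOT a real offset
VALUE = 0x41     # hypothetical new value

data = bytearray(open("polaris.rom", "rb").read())
data[OFFSET] = VALUE

# The image (length = byte at 0x02 * 512) must sum to 0 mod 256;
# adjust the checksum byte at 0x21 so that holds again after the edit.
size = data[0x02] * 512
data[0x21] = (data[0x21] - sum(data[:size])) & 0xFF

open("polaris_mod.rom", "wb").write(data)
```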

Two options:
>A: they up the CU count to 40 and have a reasonable 1440p gaming card for a very competitive price
>B: it's just a dumb refresh again and they lose face completely

I heard you can do that by editing some plaintext file in the driver package, pretty much the same stuff just without the GOOEY
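It's not a plaintext file shipped with the driver; on Linux the amdgpu kernel driver exposes the clock/voltage table through sysfs. A minimal sketch, assuming a reasonably recent kernel, root, and the OverDrive bit enabled via the amdgpu.ppfeaturemask boot parameter; the clock/voltage numbers are purely illustrative:

```python
# Undervolt a Polaris card through amdgpu's pp_od_clk_voltage interface.
PP = "/sys/class/drm/card0/device/pp_od_clk_voltage"

def sysfs_write(path, value):
    with open(path, "w") as f:
        f.write(value)

with open(PP) as f:
    print(f.read())  # dump the current sclk/mclk states and voltages

sysfs_write(PP, "s 7 1340 1050")  # sclk state 7: 1340 MHz @ 1050 mV (example numbers)
sysfs_write(PP, "c")              # commit the new table
```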

A 40- or 44-CU card on 12nm might be neat, but bandwidth is still a problem. They need a better memory controller and/or faster memory.

>lose face completely
>compared to a company that releases drivers that purposely gimp the previous generations of cards
yeah ok

Nvidia has apple tier fans.

They could start offering cards on a subscription, "hardware as a service", and people would happily eat that up and ask for more. People will buy anything they sell, no matter what.

definitely in agreement there

It's so silly how visual marketing of chips works.
>here's a fan
>this one has 2 fans
>oh here's 3

It's like watching heat spreader advertisements.

>puts more fans on
>fans are cheap shit
>heatsink is also shit

tl;dr no GUI solutions for AMD, therefore it's shit. If there were, you wouldn't need to unfuck Vega. Buying an RX 580/570 is a far superior option.

Not that user, but what's wrong with HBM?

It's called 580 cuz it's equivalent to the GTX 580

Honestly Vega is just fucking right.
I have one and it's fine.

OCing and fine-tuning one does allow some marginal benefit, but isn't really critical in any measurable way.

low supply
high cost
inferior to fuckloads of GDDR5 chips
GDDR6 exists

I have a 3-slot, 3-fan Vega 64 and it's massively cooler and quieter than my old R9 380X, which was 2-slot, 2-fan.

With cooling, bigger is always better, and more fans give you the same airflow at lower RPM, therefore lower noise.
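Back-of-envelope version of that claim, using the usual fan affinity approximations: a single fan's noise scales roughly as 50·log10 of the RPM ratio, n identical fans add about 10·log10(n) dB, and airflow scales with n × RPM. The reference numbers are made up; only the ratios matter:

```python
import math

def total_noise_db(n_fans, rpm, ref_rpm=1500.0, ref_db=30.0):
    # One fan: ~50*log10(rpm ratio) dB change; n identical sources: +10*log10(n) dB.
    per_fan = ref_db + 50.0 * math.log10(rpm / ref_rpm)
    return per_fan + 10.0 * math.log10(n_fans)

# Same total airflow (airflow ~ n_fans * rpm): 3 fans can spin at 1/3 the RPM.
print(total_noise_db(1, 1500))  # ~30 dB
print(total_noise_db(3, 500))   # ~11 dB -- far quieter for the same airflow
```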

If you had read the post instead of tl;dr-ing it, you'd see there is a solution with a GUI. You simply want to make things more difficult for yourself and not put any effort into solving any issues, for the sole reason of complaining about it in an anonymous thread.

It depends; as long as the whining is low-pitched, it's fine.

Stop saying that ironically, user. I'm sure there are Nvidia brainlets who would believe it.

More fans = lower RPM = lower pitch

>300W
>just fucking right
Not everyone wants to use their computer case as an oven.

A BIOS editor is not a safe solution, especially not one requiring Java, and especially if it requires the user to manually edit hex values.

I still don't really understand why we don't use similarly large, ultra-quiet CPU coolers for GPUs.

>contrarian faggot uses GNU+Linux and complains about editing text files
>absolute state of GNU Jow Forums

>inferior
Is it? Why did AMD even use it then? Both Vegas use HBM? I'm asking because I'm considering a Vega 56 for my 1440p gayming station.

GDDR5X prices were through the roof when they were released.

Both Vega cards are way too power-hungry (300W+), and at that price it makes more sense to buy the Nvidia counterparts.

It's expensive.

AMD is stuck in a perpetual state of catchup. People want the best in a market where even the best is not good enough.

>The Vega 56/64
Why are AMDrones this stupid? What kind of retard is going to pay more for worse performance and worse efficiency?
Probably the kind of person who buys AMD GPUs.

Attached: 1537784224481.jpg (921x640, 100K)

>Both Vegas use HBM? I'm asking because I'm considering a Vega 56 for my 1440p gayming station
It's OK. A 56 will give you all the 1440p you want, even some 4k.
I personally use my 64 for 4K gayming and I'm pretty satisfied.
Everything other than Kingmaker and Battletech runs just great. PF:KM and BT are broken games made by retards, so they don't count.
>Is it?
Funnily enough, Vega 56 offers performance close to Vega 64 exactly because using just two HBM stacks gives lower memory throughput than 12 GB of GDDR5 (why do you think Nvidia uses so much? For bandwidth); rough numbers below.
>Why did AMD even use it then?
At this point I really believe Raja Koduri sabotaged the project before running off to Intel.
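Rough numbers for the bandwidth point, peak bandwidth being just bus width × data rate per pin (spec-sheet rates for the reference cards; actual boards vary):

```python
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin  # bytes per transfer * transfers per second

print(bandwidth_gbs(2048, 1.6))   # Vega 56, 2 HBM2 stacks @ 1.6 Gbps  -> 409.6 GB/s
print(bandwidth_gbs(2048, 1.89))  # Vega 64, 2 HBM2 stacks @ 1.89 Gbps -> 483.8 GB/s
print(bandwidth_gbs(352, 11.0))   # 1080 Ti, 11 GDDR5X chips @ 11 Gbps -> 484.0 GB/s
```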

Vega isn't good enough for what it is. The chip is larger than the 1080 Ti's, but it competes with the much smaller 1080; a fuckload of hardware features just don't work, and using HBM like that is just dumb. What the fuck HBCC is, and why they wasted die space on it, is unclear.

Still better than paying to get fucked over by an anti-consumer corporation and supporting their anti-competitive practices that are ruining the market.

Does anyone have ANY idea when this thing is supposed to launch?

I wanna build another PC soon with an R7 2700X, and I would prefer not to get an RTX 2070 for the GPU.

Whatever it is, it most certainly won't compete against the 2070, which is roughly on the 1080's level.

If you want that kind of performance get a 1080 or Vega 64.

I hear that, I really do. But you need to understand that buying AMD in protest does nothing to actually solve the problem; you're just not helping nVidia. AMD simply does not have the resources to be competitive in both the GPU and CPU markets, so Su decided to go all-in on CPUs.
I think the issue is a cultural one. As long as the majority of first-world citizens think nothing is wrong with the current business atmosphere, nothing will really change.

does amdgpu-pro for Linux support forcing anti-aliasing and anisotropic filtering?

Except AMD owns the mid-range market, and I won't have to upgrade every time a new series comes out and my card gets gimped.

The midrange market can suck this meat and the sales figures reflect that.

Now that Zen is complete, they can spend their resources on incrementally updating their CPU arch and making the fucking Navi.

And BTW, there were some rumors about a "Project Zen" inside RTG, and there's that "Navi - leapfrogging development SCALABILITY" roadmap picture.

The best-case scenario is that Vega was pretty much a fixed Fury, a low-effort filler product based on already-mature GCN, while the real effort went into developing an MCM GPU akin to Epyc.

Considering they have a technological advantage in the MCM field and started developing in that direction before Nvidia, they might have the real deal coming.

If they make an MCM GPU, Nvidia will get shit on worse than Intel got by Epyc.
>small dies, MAD YIELDS
>dirt-cheap manufacturing due to basically making just one module for different products
>whole-market coverage: 1 die - low end, 2 dies - midrange, 4 dies - high end, 6 dies - a ridiculous card existing just to claim benchmark leadership, 8 dies - data centers and AI
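The MAD YIELDS line has real arithmetic behind it. A back-of-envelope sketch with the common Poisson yield model; the die sizes and defect density are made-up but plausible numbers:

```python
import math

def die_yield(area_mm2, defects_per_cm2=0.1):
    # Poisson model: fraction of good dies = exp(-D * A)
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

small, big = 120.0, 480.0  # hypothetical chiplet vs monolithic die, in mm^2
print(die_yield(small))  # ~0.89 -> ~89% of small dies are good
print(die_yield(big))    # ~0.62 -> ~62% of big dies are good
# Four good 120 mm^2 chiplets waste far less silicon per working product
# than one 480 mm^2 monolith, and one small design covers the whole stack.
```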

AMD has gotten hold of a superior technology ahead of their competition and holds a real technological advantage; now they'd better fucking use it.

I agree with you. Still, I will not give another dime to Nvidia ever again.

Well, I only game at 1080p and I play a ton of old games (I wait to buy GOTY-edition titles on Steam when they're at $20 or lower), so I don't need that much power. But I'd also prefer to get the most up-to-date GPU so it can last me a while; I only upgrade when I need to, AKA when my current GPU dies.
I would rather not get something that's already 1 to 2 years old when something new is right around the corner.

Bro, if AMD actually can compete at the high end with their signature lower prices then I'm all about that; even if power consumption is a bit higher. The problem is that right now all we can do is wait and speculate.

Before the Turing launch, discounted Vega wasn't a bad deal compared to a 1080.

Right now? Unless you consider the Turing bullshit a good deal, it still isn't that bad; the 1080 Ti exists, but it costs more.

Right now, targeting the sweet spot where the most money is made would be a good choice, if the 590 isn't just a refresh but an actual scale-up to 40 CUs.

Hell, they could scale it up to 64 CUs and use GDDR5 to make it into a Vega that actually makes sense, and sell it for $400.

I'm assembling a PC now and the 580 looks nice.

But one thing bugs me: are Radeon drivers just drivers? No shit bolted on like Nvidia's? They have a few nice tools for recording etc., but do those also come with the cancer ecosystem?

What does Radeon bring? Drivers, FreeSync, and that's all?

No, there's also an overclocking utility, screen recording, an overlay, game profiles, etc.
But it's all optional; you can install just the driver alone.

It certainly was a bad deal. It had worse performance and cost more than hardware that was a year older. It's still a bad deal today because we're still in the same situation we were in when it launched. There is no good card in the $300 range because we've only regressed since Pascal. I have a friend who got a 1070 for $330 shortly after they launched. Here we are two years later, and the cheapest 1070 on Newegg is $385. We're paying *more* for older hardware! There's not even a shortage of supply, just an increase in demand, as we all know. It would not make business sense for nVidia and vendors to pass up raising prices to take advantage of those willing to pay.

That's perfect; I just wanted the drivers and a few things, no mandatory marketing or logins.

Cheers

>There is no good card in the $300 range because we've only regressed since Pascal.
Vega 56 costs $380 right now on Newegg.
That is a good deal.

Nothing new to compete with 1070-1080 until 7nm?

Is it possible to do a 12nm Polaris with GDDR6 and get improvements from it?

MCM GPUs suck for gaming right now, though. Maybe MCM for compute, with a monolithic die for gaming in 2H 2019? That wouldn't give them an edge, but hopefully they can price it right and use GDDR6 instead of overpriced memory.

Sure, it's possible, but the probability of them doing it is next to nothing.

The fucking Raja fucked RTG really hard.

The Polaris architecture itself is good and a big success in the mid-range market, but why the flying fuck did they target ONLY the midrange with it?
They should have released a 480 AND a 490 back then, and they would be golden. Even their naming scheme implies there was supposed to be a 490.
They shouldn't have used HBM in Vega.
They should have made a powerful gaming APU to dominate the dirt-cheap market, but they didn't.

So many giant fuck-ups in management, it's unbelievable.
None of that was something AMD couldn't do; it was something they chose not to do.

>MCM GPUs suck for gaming right now
MCM GPUs don't exist right now.
I'm not talking about Crossfire or SLI; I'm talking about a modular architecture like Epyc that works as ONE GPU.

What about undervolting and overclocking a 56?

I love how everyone just decided to pretend that an undervolted, overclocked Vega 56 isn't faster than both a 1070 and a 1070 Ti.

You can basically get a Vega 56 to be ~100% as fast as an air-cooled Vega 64, which means it's not a terrible value.

That said, what they really need to do is fix the 4-triangle-per-clock front-end geometry bottleneck in GCN. Vega at 7nm will only be faster by the percentage it clocks higher, due to that bottleneck; see the sketch below.
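That bottleneck is easy to put numbers on: at 4 triangles per clock, geometry throughput moves only with clock speed, no matter how many CUs you bolt on. A sketch with illustrative clocks:

```python
TRIS_PER_CLOCK = 4  # GCN front-end limit

def tri_rate_gtris(clock_mhz):
    return TRIS_PER_CLOCK * clock_mhz / 1000.0

print(tri_rate_gtris(1590))  # Vega 64-ish boost clock -> ~6.4 Gtris/s
print(tri_rate_gtris(1800))  # hypothetical 7nm Vega   -> ~7.2 Gtris/s
# +13% geometry throughput: exactly the clock bump and nothing more.
```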

I don't understand why they aren't focusing on a killer APU lineup instead of making dirt-shit ones that get 40 FPS in Fortnite.

A midrange-power APU with R3-R5 CPU performance would be a killer product for laptops.

The low end is worthless, imo. If they had hit midrange, they could have created a crazy-good-value APU that would have sold like crazy during the Fortnite phenomenon early this year.

>Vega at 7nm will only be faster by the percentage it clocks higher due to that bottleneck.

Judging by its physical size, it has more than 64 CUs and more memory stacks, so more bandwidth.

Heat density.

An underclocked RX 470 is the limit of what could potentially be cooled in an APU package.
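Rough power-budget arithmetic behind that; every number here is an assumption, but it shows why an underclocked 470 is about the ceiling:

```python
APU_BUDGET_W = 95.0  # what a typical desktop APU socket/cooler sustains (assumed)

cpu_w = 45.0           # 4-core CPU portion under game load (assumed)
rx470_stock_w = 120.0  # RX 470 board power, roughly spec
rx470_under_w = 75.0   # underclocked/undervolted RX 470 (assumed)

print(cpu_w + rx470_stock_w)  # 165 W -> far past any APU package budget
print(cpu_w + rx470_under_w)  # 120 W -> still over a 95 W budget; only a
                              # further-trimmed 470-class GPU squeaks in
```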

AIB Vega 56s are priced well now. Crossflash one to a Vega 64 BIOS with a higher power limit and you'll get GTX 1080 performance.

The problem with APUs is that once you start gaming, they kick the CPU to near-max TDP and can thermally throttle the whole thing.

Now, that's not to say they aren't a viable option for laptops. I had an E545 with a 32nm Richland A10 quad-core, and it would handle Borderlands 2 at 25fps, which is definitely punching above its weight. All my Source games ran at a million FPS, and I gamed hard on it for a while before I built my Threadripper.

I do agree. AMD needs to focus on their APU lineup; they have a unique product and need to develop it further.

I doubt the 7nm Vega part is a substantial redesign, given that it's only ever going to be an enterprise part. It isn't worth $200 million to design a 7nm FF part that's only going to find niche sales in the enterprise/workstation space.
TSMC probably didn't have the IP blocks ready to offer ideal area scaling. It's probably a patched-together port with a couple of hardware fixes, not an entirely new design.
Adding more CUs to Vega would be pointless. The arch needs a better front end to push more triangles, and a better back end, because its pixel rate is poor compared to the competition. Just adding more CUs while those bottlenecks still exist would bring nothing but huge diminishing returns.
Spending millions hacking together an improved Vega is pointless when Navi will be their bread-and-butter line, with a different arch and driver support from Vega.

>The problem with APUs is that once you start gaming, they kick the CPU to near-max TDP and can thermally throttle the whole thing.

Works in the consoles, with pretty shitty cooling, too.

They could kill the cheap-ass market by releasing the equivalent of a console with a 4-core Ryzen CPU.

Maybe Microsoft told them not to do that or something.

thanks. What is the advantage of HBM for AIOs, though? If the die is large enough, is it possible now, or only with 7nm?

AMD's desktop APUs have some microcode throttling behavior that you can't disable even if you set the cTDP to the highest state.
If you fully load the GPU, your CPU clocks will drop below the standard base clock. It's been that way since Kaveri. It's just a holdover from them being designed as laptop parts; seeing use in a desktop is an afterthought.
The console chips do have similar behavior, but it isn't as extreme.
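You can watch this happen yourself on Linux through the standard cpufreq sysfs interface; start something iGPU-heavy, then run this (a sketch, assuming cpufreq is exposed for your CPU):

```python
import time

def cpu_mhz(cpu=0):
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_cur_freq"
    with open(path) as f:
        return int(f.read()) / 1000.0  # sysfs reports kHz

for _ in range(10):  # clocks dropping below base clock = the throttling above
    print(f"{cpu_mhz():.0f} MHz")
    time.sleep(1.0)
```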

STFU!
I want a gaming GPU from AMD!

I would think 7nm will be a godsend for the APU market. I have a theory that APUs are precisely the reason AMD has pushed 7nm development so hard. They know where their bread and butter is, and it's not gaymen; despite all the Quake Champions and Unreal 4 I like to play, from a marketing standpoint their APUs are much more profitable when they can get in bed with OEMs.

I don't give a shit. AMD is doing dumb shit with their GPU department. Why can't Mommy Su unfuck this mess?

Attached: 1529567124804.jpg (757x627, 87K)

I haven't really experienced issues with TDP throttling on my Ryzen 2500U laptop; however, you would be entirely accurate in saying it was an issue with Llano, as I distinctly remember being limited to the base clock the moment any 3D application loaded.

Nobody needs more than RX 470 4GB

Warning: I'm an idiot on this topic.

Would HBM on an APU replace the normal RAM sticks you'd have to buy? Like, a 7nm APU with HBM could save some money by sharing the memory between CPU and GPU in a single package, right? Versus a CPU with its RAM and a GPU with its RAM?

I'm just wondering what the structural advantages/disadvantages of an APU vs. separate parts are.

There are several advantages, but for the most part they're negated by the lack of RAM bandwidth on conventional systems. Even Vega 11 on the 2400G is severely crippled by having only two channels of DDR4, as evidenced by how it performs pretty much identically to Vega 8 on the 2200G despite being 37.5% more powerful on paper (rough bandwidth numbers below). So you need either more RAM channels or some workaround such as the eDRAM used by Intel's Iris Pro, both of which ultimately increase total system cost.
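For scale (spec-sheet numbers, and remember the iGPU shares this pool with the CPU):

```python
def bandwidth_gbs(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s / 1000.0

print(bandwidth_gbs(128, 2933))  # 2 x 64-bit DDR4-2933     -> ~47 GB/s
print(bandwidth_gbs(256, 7000))  # RX 570, 256-bit @ 7 Gbps -> 224 GB/s
```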

You could have the HBM be the sole memory pool for the system; there's no issue with that.
A couple of years back, AMD amended the HSA spec to allow for handling two memory pools without breaking HSA compliance, so they could have HBM and standard DRAM as well. The system would be aware of where the shared page space for compute workloads was stored, so nothing would break.

Adding channels brings complexity everywhere: the pin-out of the package and socket, as well as the routing on the motherboard. We're probably never going to see quad-channel become standard, at least not for the foreseeable future.
DDR5 will bring a sizable uplift in bandwidth, though. Aside from on-package wide-IO memory like HBM, that's the only hope IGPs have. Big slabs of eDRAM are too hot, draw too much power, and are too expensive to ever be practical.

MCM adds some latency, no? People complain and whine about multi-GPU dropping frames because balancing the load is hard, and also about Zen latencies. And these are GPUs, where a 4 ms difference in frame time can make your benchmarks look mid-range at best.
On top of that, the vice president of Radeon said that they've explored the idea of MCM and that it's really useful for compute, meme learning and all that stuff, but not an actual improvement for gaming workloads. Radeon might be able to challenge Nvidia in enterprise and research, two big profit segments for Nvidia, but gaming GPUs are probably going to be monolithic till the end of time.

Even DDR5 will "only" have about double the bandwidth compared to DDR4. When it becomes mainstream, I would expect APUs to perform on about the same relative level as they do now - on par with entry-level GPUs.
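Same formula as above. DDR5-6400 as the mainstream speed is an assumption, but even a doubled dual-channel figure stays well short of a mid-range discrete card:

```python
def bandwidth_gbs(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s / 1000.0

print(bandwidth_gbs(128, 3200))  # dual-channel DDR4-3200 -> 51.2 GB/s
print(bandwidth_gbs(128, 6400))  # dual-channel DDR5-6400 -> 102.4 GB/s
print(bandwidth_gbs(256, 7000))  # RX 570-class GDDR5     -> 224 GB/s
```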