Vega VII

>less than 5000 cards
>no custom cards
>AMD making almost no money on them

tweaktown.com/news/64501/amd-radeon-vii-less-5000-available-custom-cards/index.html

Attached: 2019-01-09-image-16.jpg (2400x1350, 134K)

tweaktown is biased towards nvidia and intel

>low supply
>no customization

That basically means this is a stopgap and Navi is right around the corner.

we already knew that

>

Attached: ahhhhhhhhhhhh.png (396x408, 249K)

Attached: 1547405897008.png (1712x284, 24K)

yeah no, look it up on Jow ForumsAMD they've been exposed several times now

Attached: nQYG097.jpg (1038x847, 271K)

guess I'm waiting for navi

Attached: 1546111597182.png (854x640, 609K)

fuck off faggot

Attached: nvidiot.jpg (1278x770, 159K)

i want this design back

Attached: .jpg (920x718, 63K)

>look it up on Jow ForumsAMD

Attached: 1536678083431.png (1300x599, 312K)

So it's a stop gap until Navi. Cool.

Attached: images.jpg (266x190, 7K)

Good Bait

>There will be apparently 20k units available for launch and an additional 40k units made later on (likely depending on demand, according to the wording used)

redgamingtech.com/amd-vega-based-radeon-vii-for-consumers-and-gamers-exclusive/

It’s supposed to be our year bros

>redgamingtech
that guy is a massive amd shill

>Just wait(tm)!!!!!!

The cornerstone of cope for AMD

so AMD presented absolutely nothing at this year's ces. neat.

AYYMD IS FINISHED & BANKRUPT

AYYMDPOORFAGS CONFIRMED ON SUICIDE WATCH

DOA

so no Sapphire Nitro+ model?
that sucks

>one week after ces
>original release of 5k units
>no OEM yet
>amdrones BTFOD1!!111!

Attached: rtx_on.webm (640x800, 416K)

Attached: 1503848584264.jpg (1000x1000, 253K)

Reminder that AMD always wins.

AMD doesn't give a shit. AMD hasn't even tried to make a high end GPU since the Fury.

Exactly as expected. They will make only as many as they have suitable reject enterprise chips (why sell these things for $600 or whatever it is when they can sell them for $2000 as a datacentre card)

Wait for the 7nm Vega 10 based cards.

>so AMD presented absolutely nothing at this year's ces. neat.
Aside from a Qualification Sample of the R5 3600 non-X at 65W TDP beating a 9900K in multicore and, by deduction, at least roughly tying an 8700K in single core, at 3.7GHz/4.5GHz clocks, which is 100MHz better base and boost than the specs in the 'too good to be true' supposed leak from AdoredTV, you mean?

Attached: download (1).png (280x180, 20K)

navi won't change a thing. it will be polaris 2.0, a bit faster than the 2060, maybe by 3%-5%, at a same-ish price
what will save RTG is what Ryzen did - modular scaleable chiplet GPUs. the dream.
so calm down please, let's not overhype an AMD product for once.

It means this is a stopgap because Navi is delayed

q3 2019 - everything that is happening is going according to the investor call at the end of last year.
why is nobody paying any attention to what Lisa says, instead jumping to retarded conclusions from conspiracy-tier lunatics who were wrong 7 times out of 10?

If AMD finally fix GCN's front end to be able to handle more than 4 triangles per clock, then Navi could easily deliver pretty significant performance increases relative to Polaris and Vega.

Basically, it would allow high CU count GCN GPUs to not bottleneck like crazy in gaming workloads, allowing much better perf/watt, and *that* would mean they could do 36/40/44/48 CUs in the mid-range without such cards revealing how bottlenecked the high-end ones are (as is a 48CU Vega would perform way too close to a 56 or 64 CU version at the same clocks).
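
Here's a rough back-of-the-envelope in Python if you want to see why the CU counts would converge. The per-CU demand number is made up, it's just there to illustrate the 4 tris/clock cap:

```python
# Toy model of the GCN front-end cap at 4 triangles per clock.
# TRIS_PER_CU_PER_CLK is an invented number, purely for illustration.

FRONT_END_TRIS_PER_CLK = 4      # front end processes at most 4 tris/clock
TRIS_PER_CU_PER_CLK = 0.10      # assumed geometry demand per CU (made up)

def effective_tris_per_clk(cu_count: int) -> float:
    """Geometry the card can actually push per clock: shader demand
    grows with CU count, but the front end caps it."""
    shader_demand = cu_count * TRIS_PER_CU_PER_CLK
    return min(shader_demand, FRONT_END_TRIS_PER_CLK)

for cus in (36, 40, 44, 48, 56, 64):
    eff = effective_tris_per_clk(cus)
    bound = "  <- front-end bound" if eff >= FRONT_END_TRIS_PER_CLK else ""
    print(f"{cus} CUs: {eff:.1f} tris/clk{bound}")
```

With those toy numbers everything from 44 CUs up lands on the same 4.0 tris/clock ceiling, which is exactly why a 48CU card would perform way too close to a 56 or 64 CU one at the same clocks.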

>bottleneck like crazy in gameworks titles
fixed that for you

youtu.be/ak_1udMxgLU?t=665
oh no AMD BROS, how could this happen? AMD is also trying to make a profit, I thought they were our special friends who would save us from intel and nvidia!

The very same tweaktown that told us AMD was underproducing Fiji (when they made so many of them that the binned non-X Fury was going for $300 before reaching EOL) and losing money on Vega (when they made plenty)?
Good source.

No. 4 triangles per clock is not enough for a GCN based GPU to saturate the rest of its rendering pipeline in most games, because most games are geometry heavy.

This is the same bottleneck high CU count GCN cards have had since Fury, and which Raja tried to hand-wave away with vaporware magic drivers that were never actually developed.

Navi isn't a Tonga shrink like Polaris was.
>what will save RTG is what Ryzen did - modular scaleable chiplet GPUs. the dream.
Just wait 7-10 years, right?
It's a stopgap because they have some Vega20 to sell you.

>Just wait 7-10 years, right?
worked with Zen.

Zen is a CPU core, and ~5 years are the usual R&D cycle length there.
MCM GPUs are just impossible for gaming anyway.

Huh? How many times are we going to tell you to stop expecting high end shit from AMD? They are doing mid-range dies for the foreseeable future. It's going to be exciting whenever they make the GPU a chiplet and kill the entry level market with it.

>I have no idea what I'm talking about
Wang himself said it's not feasible to do this for consumers due to games not taking advantage of multiple GPUs, a.k.a. no one knows how to code true Vulkan/DX12. They will definitely do it for the enterprise, but not anytime soon for the average consumer.

from that interview you are talking about
>So, is it possible to make an MCM design invisible to a game developer so they can address it as a single GPU without expensive recoding?
>“Anything’s possible…” says Wang.
it won't happen with Navi, but we might see it on their next-gen 7nm+ architecture

those will be the new midrange radeons you dindu

If they can make it, it's pretty much a win for everyone. Scaling up from an efficient, small (and cute) GPU all the way to a behemoth monstrosity is the ideal. They could do it like Zen2 and reuse the dies with bad spots, just put two together and make whatever CU count they wish.

AMD released what is basically a cut-down version of the enterprise card, they don't expect to profit off of them...

Those, what? They won't be making a Vega 10 on 7nm. Shrinking to a new node still costs money, money RTG would love to have. Navi will be midrange dies. Moreover, expect the strongest die to be at most Radeon 7 parity when it comes to average gaymes. We'll start to see AMD compete again in the high end whenever they figure out MCM so that the average potato programmer can code for it.

Of course. Better to sell them off at a loss than to throw them away altogether.

Getting Windows support for MCM alone is going to be a pretty slow start.

I don't foresee Navi being any stronger than the Vega VII either, but I also don't think we'll be getting a Navi that strong any time soon. Whatever is going in the PS5 will probably fall roughly between a Vega 56 and 64, and I'd expect the first generation of cards to be +/-20% from that target.

mid-range Navi is obviously coming first, but I think we might see a monolithic high-end Navi die about a year from now. Radeon VII's die size is 331mm^2; Vega 64 was 487mm^2. A high-end Navi part at the size of the Vega 64 die or above with GDDR6 would probably pack quite the punch for a cheaper price. AMD needs to get GPU chiplets appearing as one GPU, otherwise Nvidia's 7nm cards are going to tear them a new asshole

Computer is obese.

Those Vega 10 based 7nm cards will start coming out this year as midrange cards

user is obese.

Well, those are just MI50 cards they were unable to sell, so they gimped them, repackaged them and pushed them to the general public. They had to show SOMETHING at CES or people would lynch them lol.

soon brother

b-b-but vega wuz good!!

nice try jim

i have good sources on this

>AMD
>making money
those things are mutually exclusive nigga

mcm = more or less SLI/crossfire, and we all know how those performed

Hyping AMD GPUs is a fool's errand. Unless they reveal MCM, they're always going to be second place. That being said, give me SR-IOV on the 7 and I will jump ship easy.

is mcm really a viable option? wasn't "multi chip gpus" a thing of the past (i.e. gtx 690/590 and crossfire/sli)?

They've been making money for many quarters in a row.

Or they will just make a better architecture.
Sounds insane, right?

>Unless they reveal MCM, they're always going to be second place.
I don't know why I waste my time trying to explain the same shit over and over again, since apparently no one listens.

The problem with GCN for gaming is that GCN is presently designed to have four shader engines, limiting it to processing four triangles per clock at the front end, which bottlenecks the shit out of high CU count GCN cards in gaming.

This was the problem with Fury, it is the same problem with Vega 56/64 and is still the same problem with Radeon 7.

AMD does not need MCM gpus to fix that problem, it needs to spend the time and money to revise GCN's front end to handle more than four damned triangles per clock so that high-end GCN cards can actually fully saturate their rendering pipelines.

If you don't believe me, feel free to go chart the performance of Fury, Vega and Radeon 7 (once it's in the hands of reviewers) against the sustained boost clocks those cards can achieve and you will see that their relative performance scales almost exactly with their relative sustained clocks, because the problem is a per-clock front-end bottleneck.
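
Something like this is all the charting takes (the clock and FPS numbers below are placeholders, plug in real review data yourself):

```python
# Sketch of the check suggested above: if the bottleneck is per-clock at
# the front end, perf ratios should track sustained clock ratios.
# Clock and FPS values are placeholders -- substitute review numbers.

cards = {
    # name: (sustained boost MHz, avg FPS in some fixed benchmark)
    "Fury X":     (1050, 60.0),
    "Vega 64":    (1450, 83.0),
    "Radeon VII": (1750, 100.0),
}

base_clk, base_fps = cards["Fury X"]
for name, (clk, fps) in cards.items():
    print(f"{name:11s} clock x{clk / base_clk:.2f}  perf x{fps / base_fps:.2f}")

# If the two ratios stay close for every card, relative performance is
# scaling with sustained clocks, i.e. the front end is the limiter.
```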

AMD should divide cpu and gpu divisions like they did with manufacturing.

its bretty much confirmed GPU chiplets wont be a thing for a long time
if theres R&D at all its likely a side project

It's weird they get so little synergy out of it. You'd think they'd dominate the high-end laptop market with beefy iGPUs, especially considering how terrible Intel's are, but it's never happened.

MCM is not like SLI/Crossfire. The problem was that they had to mirror memory pools. MCM would work similarly to how it does with a processor. You don't have pcie adding huge latency to die communication either.

user, that's not the point. We all know GCN needs to be reworked or just scrapped altogether. The MCM thing is for them to be able to scale without risking making too many chip designs, i.e. a mobile version, a small entry level desktop version, a mid range, a high end, enterprise, etc. With MCM, they can just make a very good mobile version and cover all of the desktop market with it. Zen2 and beyond possibly accepting GPU chiplets makes it even better for them. Of course, this is assuming they can pull it off seamlessly for nu-programmers.

Why would you think that? Mobile is all about performance/power and AMD suck at that.

GCN isn't designed to have 4SEs, it can have whatever, it's just engineering and load balancing algos.
The very concept of SE is a problem, it dates back to R600 and isn't exactly scalable.
Beefy iGPU needs bandwidth, also what's the point?
You can have discrete graphics in laptops.
Dumbass.

>tweaktown
didnt he say literally the same shit about the radeon pro ssg? that amd would make literally no money from it despite its almost 10k pricetag?

seems like he doesnt know how to shitpost anymore

>kills nvidia
nothing personnel kid

Attached: gpusin2019.png (960x273, 128K)

inshallah

>Demos actual 7nm chip and it destroys i9-9900k using 50 less watts
>Intel just holds up 10nm chip and says "it works we promise"

>(why sell these things for $600 or whatever it is when they can sell them for $2000 as a datacentre card)
yea i agree, i feel like it was totally mismarketed...
could be pushed as the best VR card for the money (16gb), best deep learning card for the money, best mining card for the money etc... anything but gayming... where nvidiots will say "OMFG 300WATT HOUSEFIRE" no matter how good the performance is...

>v2 rocket die size is 331mm^2
>rtx 2080 is 525mm^2
>same performance
imagine if vega 7nm got a 525mm^2 die lol

is this shopped???

>imagine if vega 7nm got a 525mm^2 die lol
500W is the limit, yeah.

If this thing really is the best VR card then it'll be very appealing. Does it have the one-cable USB-C VR output?

True MCM GPU would work like CPUs do now. Several chiplets connected via a short, high frequency bus to a memory controller that links everything to a single memory pool.

For a GPU this would solve the issue of ballooning chip sizes. You could make a 1500mm^2 GPU out of six 250mm^2 chiplets and it would still be seen by software as a monolithic GPU.

What they were doing years ago was crossfire on a single card.
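
Rough toy numbers for the contrast (all illustrative, nothing measured):

```python
# Purely illustrative contrast between SLI-style mirrored pools and an
# MCM design where chiplets share one pool behind a memory controller.

CHIPLETS = 6
CHIPLET_AREA_MM2 = 250
POOL_GB = 16

# SLI/CrossFire: each GPU needs its own mirrored copy of the working set,
# so installed VRAM doubles but usable VRAM does not.
sli_installed_gb = POOL_GB * 2
sli_usable_gb = POOL_GB

# MCM with a unified pool: software sees one big GPU with one memory pool.
mcm_silicon_mm2 = CHIPLETS * CHIPLET_AREA_MM2   # 6 x 250 = 1500 mm^2
mcm_usable_gb = POOL_GB

print(f"SLI: {sli_installed_gb} GB installed, {sli_usable_gb} GB usable")
print(f"MCM: {mcm_silicon_mm2} mm^2 seen as one GPU, {mcm_usable_gb} GB usable")
```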

Obviously it'd have to be like the Titan V where it's got so many shaders that they have to lower the clockspeed. Probably won't happen until 7nm isn't so expensive though.

>True MCM GPU would work like a CPUs do now. Several chiplets connected via a short, high frequency bus to a memory controller that links everything to a single memory pool.
Actually wide and slow, and these would be duplicate chiplets with separate non-uniform memory pools.
It won't perform well either.

Retarded idea. The mainstream is moving in the direction of a CPU/GPU in a single chip. Why do you think Intel is making a GPU, or Novidya is shitting its pants, diversifying and investing into meme technologies?

I have a feeling they're doing this to get rid of stock.

Nooo. They truly made a full-length card. 2kW PSU required.

The real question is what did AMD mean by releasing this card on the five year anniversary of Google Schemer being shut down?
Is this anti-Semitic?

Attached: Google Schemer shutting down 7-feb-2014.png (1366x768, 111K)

It's actually super weird to think that AMD hasn't really competed in high end since like Fury or the 290 days. Hopefully Navi turns out well and isn't based on GCN. That architecture has got to go.

GCN is their SIMD ISA.

I don't think we'll see something competitive from them in the high end for a while. Their next midrange series should be compelling. I'm looking for something reasonably priced to replace my RX 580 that works well with Linux.

>recently bought a Vega64
>thought it was a mistake so short before CES
Was a good choice after all, I guess. Great card, too, once undervolted and overclocked.

tyty

>MCM would work similarly to how it does with a processor.
It's not exactly seamless there either, which is why Ryzen suffered from poor performance in some applications (games, emulators) and to some extent still does.

I guess this happened because AMD needed a gaming card to keep investors hanging until Navi is ready.
Though they could have ungimped the fp64 and just fused off a stack of the memory, allowing them to snipe at what the Titan was meant to be.