Yep we will give 16gb which nobody needs

>Yep we will give 16gb which nobody needs
>Yep it still draws more energy even with node shrink compared to competition
>Yep we finally caught up to 3 year old card and priced it $700
>Yep this is definitely not a Mi60
What went so wrong?

Attached: vegaVII.jpg (678x402, 33K)

Remember that Navi card everyone was shilling, the one that was supposed to be Vega 64 +25% at less than $300? It was fake.
That card is actually Vega 7 and costs the same as the overpriced RTX cards.

Attached: 1538049892545.jpg (627x421, 34K)

From what I heard around the Fury X release and Vega, GCN is actually really annoying to scale up to higher numbers of CUs.
The main way GPUs get faster with node shrinks is by finding the perf/watt sweet-spot frequency and just increasing the transistor count until cost or performance targets are met. If 7nm uses 50% less power at the same performance level as 14nm, you can double the transistor count and keep the same power consumption, effectively doubling performance.
AMD seems to have taken the route of relying solely on 7nm's performance improvement: they used the same 64 CU design as 14nm Vega, disabled a few CUs to improve yields, and pushed the clocks up by 30-ish%.
It's not a terrible plan, since 7nm is still new and very expensive and this way the die size is significantly smaller, but it leaves performance/watt in a pretty sorry state.
It does show that 7nm HPC is a pretty good node though; I bet Nvidia will absolutely destroy when they use it correctly.
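For anyone who wants the napkin math spelled out, here's a rough Python sketch of the two strategies. Every number in it is an illustrative assumption (the 0.5x node power scale comes from the 50% figure above; the voltage bump is a guess), not a measured spec.

# Back-of-the-envelope comparison of the two node-shrink strategies.
# All numbers are illustrative assumptions, not measured figures.

# Strategy A ("wide and slow"): if 7nm halves power per transistor at the
# same clocks, you can double the transistor count in the same power budget.
node_power_scale = 0.5             # assumed: 7nm power vs 14nm at iso-clocks
perf_wide = 2.0                    # ~2x units -> ~2x perf when shader-bound
power_wide = 2.0 * node_power_scale
print(f"wide-and-slow:   {perf_wide:.2f}x perf, {power_wide:.2f}x power, "
      f"{perf_wide / power_wide:.2f}x perf/watt")

# Strategy B (what the post describes AMD doing): keep the same 64 CU design
# and push clocks ~30%. Dynamic power scales roughly with f * V^2, and
# chasing clocks past the sweet spot needs extra voltage (1.15x is assumed).
clock_gain = 1.30
voltage_bump = 1.15
perf_clocked = clock_gain
power_clocked = node_power_scale * clock_gain * voltage_bump ** 2
print(f"clocked-up Vega: {perf_clocked:.2f}x perf, {power_clocked:.2f}x power, "
      f"{perf_clocked / power_clocked:.2f}x perf/watt")

Run it and strategy A comes out around 2x perf/watt over the 14nm baseline while strategy B lands around 1.5x, which is the "sorry state" being described.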

What a trainwreck. The only use case I can think of is a decent compute/mining card which doubles as an OK gaming card.

How much will the 8GB model be when it's inevitably announced?

>Yep we will give 16gb which nobody needs
RE2 will take it all on ultra.

The nvidea fags are just pissed they get constantly dicked over on RAM.

Navi is far away so they just put a great compute card out there for gaymers. It's okay for selling off subpar leftover dies.
Four stacks of HBM are required for 1TB/s of bandwidth, so that's why we end up with 16GB of HBM.
As far as I know there aren't any 2GB HBM2 stacks, so the only way to get 1TB/s of bandwidth is to put four 4GB stacks on the card, resulting in 16GB.
Vega 10 was 2x4GB at slower speeds, providing under 500GB/s of bandwidth.
Be glad it's that way, 8GB would've been underwhelming.
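The arithmetic is easy to sanity-check. A minimal Python sketch, using the commonly quoted per-stack HBM2 figures (treat the per-stack speeds as assumptions):

# Capacity and bandwidth both scale with the number of HBM2 stacks.
def hbm_config(stacks, gb_per_stack, gbps_per_stack):
    """Return (total capacity in GB, total bandwidth in GB/s)."""
    return stacks * gb_per_stack, stacks * gbps_per_stack

# Radeon VII: four 4GB stacks at ~256 GB/s each
cap, bw = hbm_config(4, 4, 256.0)
print(f"4 stacks: {cap} GB, {bw:.0f} GB/s")   # 16 GB, ~1 TB/s

# Vega 10 (Vega 64): two 4GB stacks at slower clocks, ~242 GB/s each
cap, bw = hbm_config(2, 4, 242.0)
print(f"2 stacks: {cap} GB, {bw:.0f} GB/s")   # 8 GB, ~484 GB/s

Halving the stacks for an 8GB card would also halve the bandwidth, which is the point being made above.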

>What went so wrong?

Navi got delayed and they had to shit out something.

>they didn't show it so It must be fake

I bet you think 16 core ryzens are fake too since they didn't show it lmao.

I wouldn't be surprised if they save 16 core for 7nm+

How can you be naive enough to think AMD could pull off Vega 7 performance at less than $300 within a year, just because it's a new architecture?
This card is Navi performance and it costs $700.
The custom chip they're making for the PS5/Xbox won't come out until the end of next year, and both Sony and MS paid for it. They won't give you that performance this year, or even next year, at such a bargain.

>m-m-muh delay
Don't tell me you believe everything the clickbaiters put out on youtube.

It's clear that R7 is just an MI60 rebrand; hopefully Navi is still coming out.

You do realize Navi is an actual thing right?

Attached: AMD-GPU-roadmap-900x507.jpg (900x507, 46K)

Because they have a node shrink. Vega 7 is way smaller than a 2080 and still matches it. The problem is that the Vega architecture is cucked by its need for HBM2 memory, so they can't sell it for cheap. The whole point of Navi is to fix that.

The way Navi is sitting in the second half of where 2019 would be is moderately concerning.

>Yep we will give 16gb which nobody needs
Gaymer faggots and shitcoin aren't everyone.
You need that much memory for
1.) Scientific computing
2.) Machine learning

*shitcoin miners*

And I feel really happy for the scientists and machine overlords but meanwhile Nvidia is off making a million billion dollars

And most libraries for both of those tasks use CUDA, making it a no-sell.

It doesn't match 2080

What went wrong? Trying to imitate Nvidia's Titan series. Only people who use them for both gaming and work/productivity would buy a Titan/Vega 7.

They are just Quadro and Instinct cards being marketed to gamers and lower budget productivity users. It worked for nVidia and it will work for AMD.

Attached: 1547193835805.jpg (970x647, 99K)

I'm gonna buy this when Zen 2 releases. It'll be a nice upgrade from my Vega 56. 144+ fps at 1080p maxed incoming!

They're selling you a $2-3k enterprise card for $700 and you're complaining?

I've got an RX 580 from when I first built my PC 18-ish months ago and was poor. Is it worth upgrading to an RTX 2070 or a Vega? My monitor is FreeSync, so I hope Nvidia will support it like they said.

>New architecture isn't ready yet, keeps releasing rehashes of current arch until it is done.
Reminds me of a certain competitor

GCN
The difference is AMD doesn't have the money to do otherwise.

Get the Vega 7, it's as fast as the 2080.

why are you lying?

Get vega64 only if it is comparatively a lot cheaper.

>stock drops by half

Yes but literally nobody expected navi to be this early until clickbaiters started clickbaiting.

Vega 7 actually looks cool, it will be very difficult for partners to improve.

Are they releasing a cheaper 8GB alternative? All I want is a card to replace pic related that gives me 144fps in new games at 1080p. I mostly play FPS games anyway, so I don't care much about resolutions beyond 1080p.

Attached: sapphire 290.jpg (1000x999, 105K)

nvidia.com/en-us/gtc/

Nvidia's next-gen 7nm GPU microarchitecture will launch at GTC 2019 in March, with CUDA Compute Capability 8.0 versus 7.0 for Volta and 7.5 for Turing.

It will succeed Volta for the HPC server market

that faggot who "predicted" AMD at ces and tried to say navi was gonna be there just made the same prediction as everyone else. zen 2 being 16 core is just natural, AMD themselves said they were planning to compete with intel 10nm, not a coffee lake refresh.


But you know he made it up, because Adored literally claimed they were going to soft-launch at CES, and that the 16-core was going to get a special release in May. But whoops, sike: no consumer Zen 2 until May at the soonest.


The leaks were completely fake and yet people are still sucking adoredTVs dick. fuck that guy and fuck amd fags that are this retarded.

I was tweaking the configs in the demo and saw that texture quality alone can go over 8GB of VRAM.

Texture streaming is the most demanding thing on VRAM at 4K, since 4K texture streaming can already take up to 6GB in games that are out now. The card is 16GB because it's a 4K card, and you need enough headroom not only to stream 4K textures but also to run all the other bells and whistles.

>This Vega 7 is way smaller then a 2080 and still matches it.
Isn't that because of the Tensor cores? The 1080 Ti was smaller than the 2080.

> What went so wrong?
It's cheaper than RTX 2080 and it matches RTX 2080 in price, so nothing? They wouldn't release at a loss if competitors were cheaper.

*in performance
I'm done.

>All I want is a card to replace pic related that gives me 144fps in new games at 1080p
That is not possible, since most new games are unoptimized as fuck.
Just pick any Ubisoft game and you won't get a constant 144fps even with a 2080 Ti.

OK, Navi is coming this year (maybe), but here's the thing(s):
#1 They launched the RX 590, which nobody asked for, for no fucking reason, just to finally beat the 1060 while costing less (and still using much more power). If they had shown Navi at CES, imagine how mad people who bought the RX 590 would be, and cannibalizing its own products isn't something AMD can afford right now.
#2 Radeon VII came out just to shut up the whinny fagots who wanted it, they put the price high just because they KNOW they won't sell so many of it.
#3 Navi technology will be present in the Xbox Scarlett and the PS5. Showing it right now would take away all the "magic and hype" of the new consoles coming maybe this Christmas, maybe next year. I believe MS and Sony are AMD's most important clients right now, and AMD doesn't want to piss them off.

Yes, because companies never change dates or previously released info.

lmao dude, if they had anything to say about navi, they would have said it WHILE HEADLINING CES.


you really think AMD wouldn't have done what they could to steal the fucking show???


The best they could do was run Cinebench and specify that it's an early engineering sample, and we didn't even get to know the clock speed.

Listen, about a week before CES that Nippon guy leaked the 12-core SKU. I think that was legit. Everyone said the clock speed was wrong and that the SKU was named differently, but actually they're all retarded, and it turns out AMD is only just now making Ryzen 3000 engineering samples.

The leaks were fake, pull your head out of your ass. Zen 2 is never going to clock to 5GHz on a stock 16-core chip. We know how 7nm is going to perform; they're not pulling magic out of the air. Radeon 7 only boosts 200MHz higher than Vega 64, it's not magic.

It is not cheaper, you fucking faggot

You can buy RTX 2080 on Newegg for $699 and it's faster than Pooga VII garbage

newegg.com/Product/Product.aspx?Item=N82E16814487415

newegg.com/Product/Product.aspx?Item=N82E16814932086

newegg.com/Product/Product.aspx?Item=N82E16814126261

newegg.com/Product/Product.aspx?Item=N82E16814133756

They showed that Intel will get destroyed by Zen 2 with that Cinebench run. Some quick maffs using the Vega clock difference shows that Zen 2 hitting 5GHz really isn't unlikely.

>literally claimed soft launch
>never said anything close to that

>nobody needs 16GB
>everyone ought to be happy with 8GB
Good goy

AMDownies will defend that shit.

I can buy a Vega 56 for 320 yuros despite an MSRP of 400, so unless there are cards and benchmarks in the wild, we won't get the full picture. 2080s are $700 on the Nvidia website.

Definitely strange that they didn't use 2 stacks and instead used 4; looks like an aborted MI60 rebrand.

> looks like an aborted MI60 rebrand
Again, there were rumors of Mike Rayfield pushing this before the 20xx announcement. AMD's internal tests showed it would cost about $850 to make and would be able to compete with the 1080 Ti. So it was bound to be a failure; apparently they've found a way to cut some costs to make it fare a little better.

>Mi60

It's an MI50 (the 60 CU version, not 64), you tards.

Shills still defend this

Attached: Screenshot_20190109-170254_Clover.jpg (1432x1879, 815K)

Vega and Fiji/Fury didn't need more compute/CUs; they needed more fixed-function graphics units: geometry/tessellation and rasterization front and back ends. AMD has been repeatedly trying to service both the HPC and gaming markets with a unified line of GPUs, and consequently skimping on the parts that offer no benefit to the pure compute markets.

Vega tried to emulate higher geometry throughput with a Primitive Shader/Discard engine that could throw away zero-pixel and back-facing triangles earlier in the pipeline, but AMD was unable to get it working on existing APIs and games, so the cards were stuck at the same anemic 4 triangle fragments per clock as Polaris, Fury, and older GCN parts.
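To put that 4-triangles-per-clock limit in perspective, here's a quick sketch. The per-clock figure comes from the post; the clock speeds are rough assumptions:

# GCN's fixed-function front end peaks at 4 triangles per clock, so geometry
# throughput only improves with clock speed, not with more CUs.
def tri_throughput(tris_per_clock, clock_ghz):
    """Peak front-end triangle rate in triangles per second."""
    return tris_per_clock * clock_ghz * 1e9

for name, clock_ghz in [("Fury X   (~1.05 GHz)", 1.05),
                        ("Vega 64  (~1.55 GHz)", 1.55),
                        ("Radeon 7 (~1.80 GHz)", 1.80)]:
    print(f"{name}: ~{tri_throughput(4, clock_ghz) / 1e9:.1f} Gtris/s")

Adding CUs does nothing for those numbers, which is why scaling GCN wider keeps hitting the same wall.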

"destroyed"

They got exactly the same score within margin of error.


Zen has better IPC than Coffee Lake, believe it or not; clock for clock, AMD generally gets the same work done faster at much less power. If they only matched a stock i9, their 8-core Ryzen 3000 chip probably only clocks to 4.7. Now I want you to imagine how hot that die would get; at CES it pulled 133W. Now imagine two of those dies side by side, both at 5GHz. A 16-core at 5.1GHz is going to make so much heat that liquid cooling is the only option, and even then it's pretty unrealistic. This isn't uncommon from AMD, but the whole thing about AdoredTV's leaks is that he got all of the dates wrong and he was still wrong about Navi. The sole fact that they didn't mention the clock speed at CES means it was probably overclocked, because if AMD could make their early Zen 2 chips do 4.8GHz from the factory, they would have just said so at CES for maximum hype.

Seriously, this was their biggest event ever; they would have shown us the glory of Zen 2 if it was as good as the leaks said. Imagine if Sony showed up to E3 with a bunch of games and no hardware news, and then went on to announce the PS5 at their own smaller press venue in the middle of the year with absolutely no warning. It's really simple sales 101: if you have a stellar product, you need to let customers know so that hype can build before release.


We saw all that AMD has to offer at CES

Attached: zen vs intel.png (1824x1026, 431K)

Less memory means fewer HBM stacks, which claps your memory bandwidth, so it's gonna be 16GB.

Radeon 7 is the kind of card you buy and hold onto for 7 years.

You will use 16GB in 2026.

If you buy a new card every 2 years, obviously get a 1080 Ti or a 2080/Ti at the cheapest price you can find.


But honestly, a 2080/1080 Ti/Radeon 7 is like double the power you need in a PC. Everyone says you need it for 4K, but the esports games you want high fps in already get like 150+ fps at 4K lol on a $300 GPU.

That was total system power, not socket power, you complete retard.
The CPU was pulling 75W.
>it was probably overclocked!
Yes, this non-final-silicon ES was totally overclocked. Pathetic.
You're probably a street shitting Indian shilling for rupees.

>We saw all that AMD has to offer at CES
Mommy Su basically confirmed after the show that they will deliver more than 8-core CPUs on Zen 2.

1080 performance for $250!

'Sup?

Attached: images.jpg (266x190, 7K)

Buy Radeon Loli edition

Attached: Radeon Loli edition.png (1800x1100, 721K)

Nothing. It's just a way of selling off stock that enterprise customers didn't want (it eats too much power).

Nvidia does the same thing with the Titan brand.

Not possible without making new silicon. That's one of HBM's drawbacks. It doesn't scale down on its own.

>which nobody needs
Lmao nigger try running the RE2 demo on all max, 2x R7 couldn't do it at 60fps@1080p

Ah yes, my masterpiece instinct radeon MI8!

Attached: radeon-instinct-100698420-orig.jpg (1574x1162, 479K)

Not exactly. You can laser off an HBM controller the same as anything else.
The reason AMD isn't releasing an 8GB variant to save money is that these are just reject enterprise parts. When they build these packages they don't test the die by itself and then add HBM afterwards. The interposer is fused to the package substrate, the die and HBM are fused to the interposer, and this completed package is qualified all together.
Removing two HBM stacks would introduce another step in the process, so it wouldn't save AMD any money at all. The HBM they removed would most likely be destroyed in the process anyway.

They aren't building these for the gaming market so they aren't going to divert some dies and start making specific 8GB packages for gaming cards. AMD's goal here is to reduce losses for the enterprise parts they're producing. Gamers aren't getting any special treatment.

>>Yep we will give 16gb which nobody needs
But my life needs tensor cores and gigarays. I can't go on another day without it. Just buy it.

but the jewish overlord said "WE LIKE GAYMEN AND WE LIKE GAYMING!"

Was in the same boat, but playing at 1440p/60.
Ended up ordering a Vega 64 today. Pretty good upgrade for 480€, and you get 3 AAA games along with it.

This. 7nm is expensive and this is a good way to make back some of that money. I know it's rumored that Mike Rayfield was fired for suggesting this, but he was proven right for the wrong reason. AMD could never have released this card and competed on performance/price if the 2080 hadn't been overpriced out of the gate. Think about it: the RTX 2080's release price is what made it possible for AMD to sell its 7nm MI50 surplus, which costs a lot of money to produce, to gamers at a good price.

It also amazes me how people assume this won't sell at all when RTX is out there. Even if very few gamers buy it, there is no way the prosumer market will ignore it. There is no way this won't sell when it has possibly the best ungimped double precision (1/2 rate) on an AMD card since Hawaii and is a monster at OpenCL compute. The only things missing are the higher-tier enterprise features: the XGMI link, professional driver support, and SR-IOV.