AMD Radeon VII launch allocation: 100+ units for UK, 20 units for Spain and France

videocardz.com/newz/amd-radeon-vii-launch-allocation-100-units-for-uk-20-units-for-spain-and-france

techpowerup.com/252269/uks-allocation-of-radeon-vii-a-grand-total-of-100-cards

cowcotland.com/news/65981/amd-radeon-vii-un-prix-de-lancement-de-739-euros-et-20-cartes-disponibles-pour-la-france.html

>AMD's initial production run for the card is set to ship just 5,000 pieces worldwide

AMD lose money with Vega

Attached: unboxing-03.jpg (800x600, 128K)


>vega 7 is a 300W piece of shit

No way, more news at 10

HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAH

Wtf it was false rumors amdbros......

God I can't wait for the trainwreck of reviews tomorrow.

Attached: 1541207947343.png (480x360, 185K)

>cowcotland.com/news/65981/amd-radeon-vii-un-prix-de-lancement-de-739-euros-et-20-cartes-disponibles-pour-la-france.html
>739€
>Only 20 cards available
That seems about right for AMD's sales expectations for their brand-new housefire

god damn that's a nice looking vidya card

>buying a 300W housefire
>buying a card with no ray tracing
>buying a card with no DLSS
>buying a "limited" card for gouged pricing
>buying AMD

Attached: fredgelion.png (1200x800, 891K)

BREAKING NEWS!

VEGA 7 SOLD OUT IN FRANCE AND SPAIN!
ONLY 80 UNITS LEFT IN THE UK!

>buying a card for proprietary meme features that will never be standardized and won't feature in any games beyond the initial hype train

Every time.

Sounds about right. The numbers are a good representation of AMD enthusiasts in those areas, without the crypto skew.

So buy one fast and sell it for more shekelz on ebay?

>in any games
Nvidia is also useful outside of games.

I hope Intel has good gpus next year. AMD is hopeless.

>300W
poorfag much?
>raytracing
pseudo-raytracing
>no DLSS
upscaler
>buying a "limited" card for gouged pricing
well, you can buy nvidia for gouged pricing without limited editions
>buying AMD
only sane option for Linux

>muh cuda

>meme tracing
There's a DLSS alternative in DX12 via DirectML; the only real reason to buy Nvidia is CUDA, since ROCm sucks.

>muh nothing but aa-less vidya
You know where you need to go now, correct?

Remember 6 months ago

>Vega can't be bad because it sold out!

OpenCL is a mess

Good thing DirectCompute exists.

>ok when nvidia does it

Attached: higherthanvega.png (822x399, 964K)

Nvidias are useful. AMD GPUs aren't.

goddamn you're retarded

fuck off jensen

>1.5% higher power consumption than Vega 64
>85% higher performance than Vega 64
I don't see the problem with that.

Fuck off /v/fag. Go turn off AA and play some vidya gaymes, queer.

...what's the problem? If it's true, what's the problem? In the worst-case scenario it's a GPU that ties with the 2080 for the same MSRP. I don't count the ray-tracing meme as a feature, the same way I don't count 16GB of VRAM. Both are useless and gimmicky.

>for the same MSRP
No CUDA, so no good. Bai bai now.

300W is about more than electricity costs. I don't like running a card and having it heat my room up, especially in summer.

Meme

Vega is a 7nm card and it draws 300W.

It's a filler product, just like the RX 590. Everyone is waiting on Navi.

Navi is also fucked by GCN.

TRG is Pic related

Attached: 756876876876.png (237x528, 168K)

>no DLSS
guru3d.com/news-story/amd-could-do-dlss-alternative-with-radeon-vii-though-directml-api.html

Attached: Screenshot_16.png (838x514, 90K)

>300W means 300W of heat

just wait bro

>Implying it doesn't

>AMD lose money with Vega
We already knew that; the only reason Vega 2/Radeon VII exists is that the stepping AMD was hoping would be final for Navi wasn't.

woah

Attached: 6a3471e4cd7b38be3fd4f856da071618.jpg (233x350, 17K)

>poorfag much?
this absolute dunce actually thinks it's about not being able to afford a higher wattage PSU and not the card heating up more than the surface of the sun

DLSS on Nvidia cards is processed by the Tensor cores, leaving the actual GPU cores almost unaffected performance-wise.
AMD has no such hardware, so the GPU would have to do both things in parallel, resulting in a solid performance drop, no matter what mental gymnastics drones go through.

>DLSS on Nvidia cards is processed by Tensor cores, leaving the actual GPU cores almost unaffected performance wise
>These extra cores definitely don't use power or contribute heat!

What the actual fuck are you even talking about?
Two separate sets of chips doing two separate tasks without one taking a performance hit from the other, that's all I said.
I never said anything about heat or power use, you fucking thick skull.

They're both powered by the same VRMs.
They both have the same heatsink.

You're an actual idiot mate.

lel, amdrones are already defending the price because "muh HBM2 and 7nm, think of poor AMD"
the same retards who were shitting on 2080 pricing 24/7

Attached: tidus.jpg (475x343, 30K)

>AMD lose money with Vega
No they don't. These cards are enterprise rejects. Instead of binning them, they recoup some of the sunk cost of the $Texas enterprise cards.

Losing $300 on a $1,000 card that would otherwise get binned is better than losing $1,000.

Besides, I am sure they break even on them.

Eh, good for them. They haven't had a halo card since the Vega Frontier Edition (which also had 16GB of HBM2) almost 2 years ago, and that was a workstation card as well.
I just wish it wasn't another Vega, especially with such low stock clocks and so little difference over 14nm.
Same power and not even a 20-40% performance jump is bad, and I say that as an AMD fan.
Why can't they get their GPUs back up to snuff? Last time they made anything decent was Polaris and Hawaii XT.
I also had a V56, so I'm fully aware of how lopsided Vega is.

Vegas are compute powerhouses. Their "GPU" part is what sucks. Tapping into that number crunching in some way would give Vega an edge.

The main reason GCN cards suck up so much juice is that their stream processors need to be powered, but they are never 100% utilized because of a relatively weak front end.
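The front-end argument above can be put into a toy model (all figures invented, nothing here is a real GCN spec): throughput is capped by whichever is lower, what the front end can issue or what the stream processors can execute, but power is paid for every SP that is switched on.

```python
# Toy model of the GCN front-end bottleneck argument. All figures are
# invented. Effective throughput is min(front-end issue rate, SP capacity),
# while power is drawn by the whole shader array regardless of utilization.

def effective_utilization(frontend_rate: float, sp_capacity: float) -> float:
    """Fraction of SP capacity actually fed with work."""
    return min(frontend_rate, sp_capacity) / sp_capacity

# A wide GPU whose front end can only feed 70% of its shader array:
util = effective_utilization(frontend_rate=7.0, sp_capacity=10.0)
print(f"SP utilization: {util:.0%}")  # 70% of the SPs do useful work,
                                      # but ~100% of them still draw power
```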

>Vegas are compute powerhouses.
I'm tired of this meme. Titan V crushes Vega, and it's still slower than Turing when the latter uses its specialized hardware.

>No they don't. These cards are enterprise rejects.

UNSOLD MI50* :^)

You could use "throw away" instead of "bin" there, since binning has a specific meaning in silicon manufacturing, my Brit user.

Are you an actual 40-IQ "human" being?
I said LITERALLY NOTHING about power consumption.
Tensor cores are used for one kind of computation.
Traditional GPU cores do traditional rendering.
DLSS works because the calculations for it are performed on the Tensor cores, leaving the normal GPU cores fully available for the task they're performing, without taking a hit.
AMD lacks the extra hardware for offloading the DLSS work, which is why it'd perform poorly.
Why the fuck are you even babbling about heat or VRMs if they're unrelated to what we're discussing?

>Vegas are compute powerhouses
not exactly powerhouses, but capable
thing is, DLSS would still have to be processed without hardware dedicated to it, resulting in a performance hit
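The serialization argument above can be sketched as a toy frame-time model (all numbers invented, just to illustrate the budget math): with dedicated units the upscaling pass overlaps rendering, without them both workloads queue on the same shader cores.

```python
# Toy frame-time model for the DLSS argument above. Numbers are invented.
# With dedicated hardware the upscaling overlaps rendering, so frame time is
# the max of the two; on shared shader cores the two workloads serialize.

RENDER_MS = 12.0   # time to render a frame at the lower internal resolution
UPSCALE_MS = 3.0   # time to run the upscaling pass

def frame_time(dedicated_units: bool) -> float:
    if dedicated_units:
        return max(RENDER_MS, UPSCALE_MS)  # runs concurrently
    return RENDER_MS + UPSCALE_MS          # serialized on the same cores

print(f"dedicated: {1000 / frame_time(True):.0f} fps")
print(f"shared:    {1000 / frame_time(False):.0f} fps")
```

With these made-up numbers the shared-core case loses a fifth of its framerate to the upscale pass; the real overlap behavior is of course far messier than a max/sum.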

WAIT FOR NAVI REEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Attached: 1516295748096.png (1228x1502, 1.07M)

do you think lisa would agree to be my mom?

>12.5 vs 15 TFLOPS fp32
>crushed

Jow Forums is for >100 IQ people. You have to leave.

Are you dense?
Are those Tensor cores drawing power from the ether? Some sort of zero-point energy module on-die?
No?
No.
They're drawing from the same VRMs as the CUDA cores, so if the CUDA cores are using all the available vcore power (which they will be, unless you disabled boost), there is no power budget left for the Tensor cores.
To run the Tensor cores you have to give the CUDA cores less power, which means less clock speed and less performance.
Your entire argument is that Tensor cores are somehow 'free' when they are not, since using them removes available power from the CUDA cores.

Again, you're actually retarded.
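The shared-budget argument can be sketched as a toy model in Python. Every number is made up, and real boost behavior is far more complicated than a linear scale; the point is only that a fixed board power split two ways means lower CUDA clocks when the Tensor cores light up.

```python
# Toy model: a fixed board power budget shared between CUDA and Tensor cores.
# Assume (crudely) that CUDA clock scales linearly with the power left over
# after the Tensor cores take their share. All numbers are illustrative.

BOARD_POWER_W = 250.0        # total power budget for the whole package
BASE_CUDA_CLOCK_MHZ = 1800.0 # clock when CUDA cores get the full budget

def cuda_clock(tensor_load_w: float) -> float:
    """CUDA clock (MHz) when the Tensor cores draw tensor_load_w watts."""
    cuda_power = BOARD_POWER_W - tensor_load_w
    return BASE_CUDA_CLOCK_MHZ * (cuda_power / BOARD_POWER_W)

idle = cuda_clock(0.0)   # Tensor cores idle: full clock
busy = cuda_clock(50.0)  # Tensor cores drawing 50 W: CUDA clock drops 20%

print(f"tensor idle: {idle:.0f} MHz, tensor busy: {busy:.0f} MHz")
```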

techpowerup.com/252283/radeon-vii-priced-739eur-in-the-eu-france-and-spain-only-have-dozens-of-cards

HAHAHAHAHAHAHAHAHAHAHAHAHA

Attached: disassembly7.jpg (1356x600, 278K)

After you
pcper.com/reviews/Graphics-Cards/NVIDIA-TITAN-V-Review-Part-2-Compute-Performance/Rendering-and-Compute-Perfor

And now, for fuck knows what reason, you're bringing CUDA into this. How do you tie your fucking shoelaces without your parents' help?
VRMs are DESIGNED to handle full-capacity loads. You can run everything on the package at full load and there'll be no power shortage; you're actually more likely to hit thermal issues if your cooling system is poorly designed.
Please don't try arguing about things you know fuck-all about, you utter bumbling moron.

You really are clueless.
Have a nice day marketeer.

Wait for navi

thank god you at least stopped pretending to have a leg to stand on and went
>m-m-marketer!
even though I've been using an RX 480 for the past two years

Are you fucking retarded? Here's an explanation for brainlets: it's the same as trying to stream video off your GPU without a dedicated ASIC while playing a game. You would take a significant performance hit without the dedicated hardware, retard.

ah yes, because Nvidiot RTX has dedicated RT cores and those don't impact fps by 50% when RTX is ON

moron

>Paying 900 dollars for an overpriced piece of shit that has a 40% failure rate.
>Muh ray tracing at 40 fps after spending 1500 dollars
No thanks.

Attached: 1529068991411.jpg (234x221, 12K)

wow, it's almost like those RTX ON things use the whole package instead of JUST the Tensor cores, I feel defeated

>completely avoids mentioning dlss and redirects to rtx
you're retarded

>$700 card trades blows with $3400 card
>lol crushed

none of this meme shit would be necessary if nvidia hadn't sabotaged FXAA 4.0 in the first place

Attached: fxaa4.0.png (1147x1177, 1.17M)

>1080 Ti like performance for 1080 Ti like power consumption
Why is this a problem again?

Why would that make a difference if it's using the whole package?
The Tensor cores are separate from the rest of the package.

>moving the goalposts
just wait for the "untapped" potential of the dumpster fire GCN

Can't they check the performance of a chip before gluing on the HBM?

FINE WINE

Attached: 1543807117686.jpg (640x480, 61K)

>trades blows
lol

Attached: vray.png (602x318, 99K)

FXAA is such a blurry cancer tho

Anti-aliasing and blurring are the same logical operation; you cannot get one without the other.

It's blurry compared to other methods, I meant.

it's only "blurry" compared to methods that don't work. FXAA 4.0 also was nothing like 3.0 and would have been much more flexible.

why are we comparing a GPU that costs over 3 times as much as another one?

The fucking cope from these Nvidia fanboys is funny as fuck, despite them knowing next to nothing about it.

I own an Nvidia product and I know damn well that they have zero 7nm gaming GPUs.
The fucking cope, lol.

The argument is that vega is comparable to titan v in compute, not price.

It isn't, in either category.

Because, contrary to Nvidia's marketing, RTX effects don't run just on tensor cores

Can't see any benefit from 7nm on Radeon VII; maybe only that at 12nm it would have drawn like 600W or some shit, kek.

The original argument was that Vega would magically do something that even the Titan V struggles to do, because "compute powerhouse".

for the price of a Titan V you could get multiple Vegas and have much better parallel processing

>the amdrone flails ineffectively
Says a lot that AMDead's first 7nm GPU only offers the same performance as the 1080 Ti, a 16nm part released two whole years ago, whilst consuming MORE power.

What a fucking disaster of a company.

>multi 300W housefires
no thank you
just a 2080 would suffice if you badly need DLSS
or a Titan RTX if you really need something for work

I just want to know the exact date and time for the NDA to lift. FUCK.

Not "waiting" for untapped potential. Merely pointing out that GCN _is_ leaning towards compute. A "dumpster fire", if you will. Everybody has known that since it was introduced, and it has gotten worse.

For the price, it is a very powerful compute arch.

_IF_ AMD were able to get something graphical out of that compute, _THEN_ that thing would be, essentially, free.

any developer can use compute any way they wish to

I _guess_ they can, but not all chips make the cut anyway. Some might not hit power targets, and once they already have the chips, which they paid for, binning the rejects into consumer cards that would otherwise get trashed is a logical choice.

This also kind of explains why there are just 5k, or thereabouts, of those cards. They are not making them specifically; but if crumbs drop out of MI50 production, it's logical to use them.
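The binning logic described above is just a sort into buckets; a minimal sketch (the product names are real, the power threshold is invented for illustration):

```python
# Sketch of the binning argument above: chips that miss the MI50 power
# target get sold as Radeon VII instead of being thrown away.
# The threshold below is a made-up number, not a real spec.

MI50_MAX_POWER_W = 300.0  # hypothetical power target for the enterprise bin

def bin_chip(measured_power_w: float) -> str:
    if measured_power_w <= MI50_MAX_POWER_W:
        return "MI50"        # hits the enterprise power target
    return "Radeon VII"      # misses it; recoup the cost as a consumer card

chips = [285.0, 310.0, 295.0, 330.0]
print([bin_chip(p) for p in chips])
# → ['MI50', 'Radeon VII', 'MI50', 'Radeon VII']
```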

Delicious cherries.

You, sir, are deliberately being a tard. It _is_ a powerful arch. Just because it does not crush absolutely everything does not make it any less powerful.

The 9900K is a powerful processor.
> LOL NO EBYN 64core be faster

That is you. Kys.