Will they destroy Nvidia in 2019?

Nvidia? Definitely not.
Intel? Yes.

Nvidia fucked up with the RTX series. They'll get a good beating.

Especially with those prices. AMD has really dropped the ball on Vega pricing, though.

We sure about that? the 2080ti is completely sold out and the 10 series is still selling at or near MSRP. Nvidia, as much as I hate to say it, appears to be stronger than ever.

>Nvidia fucked up with the RTX series

Not as bad as AMD fucked up with Vega. At least Nvidia still outperforms its competitor with an older lineup.

AMD has NOTHING; their fastest card is the Vega 64, and they use the same chip for their pro, ai and compute cards.

Their next lineup will DESTROY the competition.

On PC? No. Everywhere else? Probably. The PS5 and new Xbox are already confirmed to use AMD SoCs, Apple products are getting Vega GPUs, and servers are getting Instinct for compute.

AMD has a great opportunity to put some serious hurt on novidya if they do it right.
If 7nm Vega comes to gaming and can come close to the 2080 Ti at, let's say, a $700 price point, it would sell exceptionally well.
That would put the next card down, the one that matches the 2080, at roughly $600, which would make both cards great value compared to RTX.

I want them to do this but they simply won't. If they offer similar performance they'll charge at least $1k for the 2080 Ti competitor and $750 for the 2080-comparable card.

Novidya is so deeply entrenched that the only way AMD can steal market share is to either offer the same performance way cheaper or be way faster (unlikely).

>novideo
>reddit spacing
You have to go back

I've been calling it novidya since like 2005.

For AMD's Instinct to be useful, they need something people can actually run on it.
Last time I checked it was all about CUDA.
Does AMD even push the devs of major open-source projects using CUDA to switch to something else?

They have a tool (HIP/hipify) that converts CUDA code to run on ROCm.
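Roughly, the ported code ends up looking like this. Minimal sketch I typed up myself, not the tool's actual output and not from any real project; the vec_add kernel and variable names are made up, but the hip* calls (hipMalloc, hipMemcpy, hipLaunchKernelGGL, hipFree) are the real HIP API, each basically a rename of its cuda* counterpart:

// Sketch only: what a hipified vector-add looks like. The kernel body is
// unchanged from CUDA; the host-side cuda* calls become hip* calls.
#include <hip/hip_runtime.h>
#include <vector>
#include <cstdio>

__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // same indexing as CUDA
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);
    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));       // was cudaMalloc
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);  // was cudaMemcpy
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipLaunchKernelGGL(vec_add, dim3((n + 255) / 256), dim3(256), 0, 0,
                       da, db, dc, n);               // was the <<<grid, block>>> launch
    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);                    // expect 3.0
    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}

You build it with hipcc, and as far as I know the same source also compiles for Nvidia cards through HIP's CUDA backend, which is kind of the whole point.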

Do you faggots understand that most AMD GPU development is done in CANADA and Shanghai? You're putting your trust in leafs and chinks.

To be fair, AMD gambled on HBM and lost when the supply didn't pick up like they thought it would. That's what killed the price for Vega.
There's some speculation that there will be a 7nm Vega refresh with GDDR6. It would bring costs down by quite a bit if true, but no concrete evidence of a discrete product has come out yet.

But leafs are chinks?

fpbp

You fuckheads, AMD will totally win , just wait faggots.

>and they use the same chip for their pro, ai and compute cards.
you realize volta was the first time nvidia stopped doing this, right? it's not an outrageous thing

No, it wouldn't bring costs down, it would just put them right back where HBM prices started.

Literally, by burning down the world

There's a good chance Nvidia isn't going to seriously push the 20XX series until they get rid of the old 10XX stock that got returned; that's thought to be part of the reason the prices are that retardedly high.

At MSRP, great value; after the miner bullshit, not so much.

It converts CUDA code at the source level, and then it needs manual fixing by experienced programmers.
So it's a porting tool; devs have to actually want to port their stuff to HIP in the first place.
But will they? I've been thinking about buying an AMD GPU for my new PC.
However, I'd like to be able to do something with it besides playing vidya and re-encoding video.
I'd only be worried about the Shanghai part; I'm not sure how well development would go there.
There's a chance that low Chinese wages could affect the work quality. Dunno what you have against Canadians, really, they're okay.

>the 2080ti is completely sold out
It's a massive chip, which probably means horrific yields; no shit it's sold out when they can't make all that many of them (and they'll be hoarding the best dies anyway for the Quadro cards that sell for 3x the price of gaymur shit).

>nvidia release shitty stopgap cards that their loyal fanboys will eat up and sane people will continue to buy pascal which had overstocking issues
genius marketing, really

AMD has never really competed with Nvidia on pricing; pricing usually tracks die size rather than performance, and AMD hasn't really changed what that costs, despite Nvidia trying to charge people more for less.

As for 7nm Vega: even if they change nothing, they could put out something between 1080 and 1080 Ti performance for around $200-300, and above that for roughly $400, at the most conservative performance-uptick numbers for 7nm.

It's not going to be hard for AMD to make a solid mid-range GPU, and if they do fucking anything to alleviate the bottlenecks, maybe even surpass a 2080 Ti, though that last bit is a big if... fuck knows if they can.
The more... out-there numbers have the 7nm process bringing AMD's performance up 90%. I don't believe those ones, but AMD getting both a die shrink and bottleneck fixes... could be possible. Unlikely, but possible.

get fucked mutt burger hahahaa

t.leaf

Jow Forums -------->

>AMD has a great opportunity to put some serious hurt on novidya if they do it right.
yeah, ignore the top end of the market and work on being far better everywhere else

they're working on that HIP thing or whatever to basically become open sores CUDA, which will either work directly with CUDA code or require minimal work to convert CUDA stuff

opencl is a pain in the ass to work with compared to cuda because cuda embeds the gpu shit into regular c or c++ code and handles much of the management behind the scenes (so there's no making 50 function calls to create and fill buffers, compile and load your "program" onto the gpu, execute it, wait for results and then clean everything up)
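to make that concrete, here's a throwaway saxpy i wrote just for this post (not anyone's real code); the kernel sits in the same file as the host code and the launch is one line. the comment lists the actual opencl entry points you'd be juggling to do the same job:

// Complete CUDA host + device saxpy. The same job in plain OpenCL means a
// chain of host calls just to get going:
// clGetPlatformIDs -> clGetDeviceIDs -> clCreateContext -> clCreateCommandQueue
// -> clCreateProgramWithSource -> clBuildProgram -> clCreateKernel
// -> clCreateBuffer / clEnqueueWriteBuffer -> clSetKernelArg
// -> clEnqueueNDRangeKernel -> clEnqueueReadBuffer -> release everything.
#include <cstdio>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged((void**)&x, n * sizeof(float));  // unified memory: no explicit buffers/copies
    cudaMallocManaged((void**)&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);    // launch is one line
    cudaDeviceSynchronize();
    printf("y[0] = %f\n", y[0]);                       // expect 5.0
    cudaFree(x); cudaFree(y);
    return 0;
}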

the chip is nearly twice as big, i can see why they'd want a fuckload more money for it

plus you don't want to price it too low and have cunts snatching them up for AI research and shit instead of the more profitable tesla and quadro cards

fuck off Jow Forums

Yes. I hope I'll see HIP in wide use and won't have "tfw you want to run CUDA but you bought AMD".
Also, if Intel made their own runtime for HIP, and if there were also a CPU-only runtime, that would be a huge gift for anyone who wants to write compute code.
Anyway, I think I'll buy myself an AMD card; if I want to run anything CUDA-only I can always try asking my brother nicely.

Just wait (tm)

Why is AMD such utter shit?

well, here is what I see happening.

First off, Nvidia has pulled the trigger on chiplet GPUs. They've been using essentially the same architecture since Maxwell, so when they hit the wall on what it can do, they changed the way the game is played and brought in ray tracing.

The most taxing shit to do on a GPU is shadows and lighting, and Nvidia can now offload that to a new section of the GPU and free up resources to go faster.

Turing is a stopgap GPU: it's right after Pascal but it's hitting a wall on what the shaders can do, as they're bottlenecked or close to it, and there are no games yet to justify going full steam ahead on chiplets.

Now, in traditional games, the shader and rasterization work sucks all kinds of dicks for parallel processing, requiring game devs, who I hope we all know are fucking garbage with few exceptions, to write code that utilizes it perfectly. Ray tracing, however, takes next to nothing to parallelize.
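To illustrate (toy sketch, nothing to do with how Nvidia or AMD actually wire anything up, and trace_pixel is just a placeholder I made up): every pixel's ray is computed independently of every other pixel, so you can carve the frame into bands and hand them to however many workers you have, threads here, chiplets in principle, with zero communication between them.

// Toy sketch of why ray tracing parallelizes so easily: every pixel is
// independent, so the frame can be cut into bands and farmed out with no
// shared state and no locks.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

static uint32_t trace_pixel(int x, int y) {
    // placeholder: a real tracer would shoot a ray through (x, y) and shade the hit
    return static_cast<uint32_t>((x * 31 + y * 17) & 0xFFFFFF);
}

int main() {
    const int width = 1920, height = 1080;
    std::vector<uint32_t> framebuffer(width * height);
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            // each worker owns its own set of rows; nothing is shared between them
            for (int y = static_cast<int>(w); y < height; y += static_cast<int>(workers))
                for (int x = 0; x < width; ++x)
                    framebuffer[y * width + x] = trace_pixel(x, y);
        });
    }
    for (auto& t : pool) t.join();
    return 0;
}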

Now here is the important bit: ray tracing being chiplet-able is a big step forward. Nvidia could make a fuckload of 50-100 mm^2 chips for cheap, MCM them together to brute-force ray tracing, smooth the result out with the AI denoising, and probably get to something passable. Possibly they'll find a better, more efficient way to ray trace, but brute force is more likely up front.

Now, here is where AMD comes in. AMD could also do this, and could also brute-force it or find a better way, but because the consoles are locked-down platforms, they'll probably be able to go full ray tracing before Nvidia, and with Sony/Microsoft/possibly Nintendo footing the research bill they may get ahead of Nvidia in this regard.

But make no mistake: whether AMD is better or not, people will still buy Nvidia exclusively.

fuck you /v/idiot

They can't even beat Intel; Nvidia is like the final boss.

I haven't played with HIP or CUDA yet (beyond reading a couple basic cuda examples online) but my work mostly doesn't go near that sort of thing. I have messed around with OpenCL back in the day and it was as annoying as OpenGL to get anything done so I can see why CUDA dominates. Hopefully HIP delivers a similar experience so there's no real excuse for not using it.

everyone here will buy nvidia because it has hardware waifu2x

Whoa, user, look at this. Btw, you can already run waifu2x on AMD hardware, just use the Caffe version.
I hope HIP does deliver a similar experience on all platforms that matter, so devs like me can accelerate compute stuff without the speedup being limited to Nvidia buyers.

>Nvidia could make a fuckload of 50-100 mm^2 chips for cheap, MCM them together to brute-force ray tracing
Does Nvidia have the tech ready to go for proper MCM like AMD though?
>Now, here is where AMD comes in. AMD could also do this, and could also brute-force it or find a better way, but because the consoles are locked-down platforms, they'll probably be able to go full ray tracing before Nvidia, and with Sony/Microsoft/possibly Nintendo footing the research bill they may get ahead of Nvidia in this regard.
Considering their push for ray tracing in the Vulkan suite and support for DXR mentioned in their recent show slides, they're likely going to do it with Navi and/or as part of the Vega shrink.

AMD drivers are garbage, cards run hot as fuck and the price difference is not worth it if you’re not poorfag tier.

t. upgraded from r9 290 to 1080ti

Nvidia has as many MCM patents as AMD does, and they do a fuckload of research in that area because they'll get to it eventually. We're getting to a hard limit on transistor size, and unless Nvidia wants to keep making ~800 mm^2 dies, they need to go chiplet; ray tracing is just the simplest way to do it.

Most professional math and pro render loads parallelize beautifully, so AMD and Nvidia doing MCM chiplets for those more traditional shader loads is already doable; the problem is game devs can never get their shit together. The ray-trace load is also likely a derivative of the tensor cores, if not exactly that, so by moving those off the main GPU they could pair a low-power GPU with a fuckload of chiplets, effectively brute-forcing performance while taking advantage of chiplet yield numbers.

As far as full ray tracing goes, it's not going to be Navi, unless AMD pulls something Nvidia-crushing out of their ass. And make no mistake: if AMD could do full ray tracing while Nvidia is struggling to get even partial ray tracing to work, that would kill Nvidia in gaming. Full ray tracing itself could potentially halve the cost of game production if the shit I've seen is correct, but it's not going to happen right away, though brute force could very well get it to passable now if chiplets happen plus AI denoise (personally I don't mind the noise so long as it's not too bad; most Brigade demos I found had an acceptable level of noise).
It could be a generation or two out.

I imagine that through optimization alone they'll get the ray-trace areas to be 2-4 times more efficient, if not more, as it's a relatively new kind of shader core (I believe they work on small FP16 matrices), and once that happens, chiplet the thing out to 10-12 chiplets and you'll have passable real-time ray tracing. Currently I believe the ray-trace area is about 1/6th of the full die, so around 120 mm^2, and with 7nm and economies of scale a chiplet like that could probably be had for less than current 2080 Ti prices.
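For reference, and this is just my own scalar sketch, not how the hardware is actually laid out: Nvidia's Volta/Turing material describes a tensor core as a small-tile fused multiply-add, D = A*B + C on 4x4 FP16 tiles (accumulate optionally in FP32), done as a single operation. Spelled out as plain loops it's just this:

// Scalar reference for what one tensor-core op computes: D = A*B + C on a
// small tile. The hardware does this in one instruction; the loops here are
// only to make the math concrete.
#include <cstdio>

constexpr int N = 4;

void tile_fma(const float A[N][N], const float B[N][N],
              const float C[N][N], float D[N][N]) {
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            float acc = C[i][j];
            for (int k = 0; k < N; ++k)
                acc += A[i][k] * B[k][j];   // multiply-accumulate
            D[i][j] = acc;
        }
}

int main() {
    float A[N][N] = {}, B[N][N] = {}, C[N][N] = {}, D[N][N];
    for (int i = 0; i < N; ++i) { A[i][i] = 1.0f; B[i][i] = 2.0f; C[i][i] = 0.5f; }
    tile_fma(A, B, C, D);
    printf("D[0][0] = %f\n", D[0][0]);   // identity * 2 + 0.5 -> 2.5 on the diagonal
    return 0;
}

(Strictly speaking, the RT cores do BVH traversal and ray-triangle intersection rather than matrix math; it's the denoising that leans on the tensor cores, so "derivative of tensor cores" is more of a guess.)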

Unironically yes. The only reason Nvidia rushed RTX is that they know something. They needed to rush to market so people would buy their overpriced cards before AMD got there.
AMD cards have always been superior in raw compute; that's why they were so desirable for mining. The only reason Nvidia was leading in gaming is optimization, drivers and deals with developers. If there were some kind of gaming task that could be brute-forced, like, you know, ray tracing, AMD would get on top of the game.

Yes.

I really hope AMD can pull a Ryzen with their GPUs
>much cheaper than competition
>Pushes tech forward, stopping Intel from dragging their feet
What are the chances of AMD coming out with a ray-tracing card in the near future?

Nope, AMD will never destroy Nvidia even if they decimated them on every front.
Why? Because Nvidia fanboys are the worst fanboys in the PC world alongside Apple

AMD has been pretty transparent with their recent product launches, the original Vega launch aside, and they're currently suggesting about a 35% bump in performance on the pro cards. That puts it pretty close to a 2080 if you do some basic math (Vega 64 sits roughly at GTX 1080 level, and a 2080 is somewhere around 35% faster than a 1080, so the numbers line up). Now, I know it may or may not scale like that, but we also don't know if driver optimizations will be a big deal like on the 7xxx cards.

The 35% is based solely on the node uptick, and because it's a node uptick, it's 35% across the board.

However, that Vega 20 or whatever it is had a leaked figure of 90%. I'm very skeptical of this, but I can see it happening with a mix of the node jump plus fixing the bottleneck-alleviation features that didn't work in Vega because they were fucked at a very early stage.

I want to lean toward 35% being the more realistic number, but AMD has another ~30% they could make up in gaming performance via bottleneck alleviation alone.

I want to say the 90% figure was fake, but it could also be a beefed-up card, since pro applications don't hit the same bottlenecks. Still, I can't think of a single reason to do that: it's ~100 million just for the optics, it increases the die size, and it wouldn't sell as much because Nvidia kind of crushes it in most applications that count as pro loads, and in the areas where it would have been great, the ray tracers shit on it too.

With proper bottleneck alleviation, AMD could have a solid gaming GPU, but in terms of FP16, Nvidia is able to pull ~100 TFLOPS out of an area that's sub-200 mm^2. In other words, AMD would need to get something like 50 TFLOPS out of 200 mm^2 to compete, and Nvidia has two clusters of these on their GPU, one being the ray-trace cores and one being the AI tensor cores, which I believe are matrix shaders.

In the near term, it seems likely that AMD intends to die shrink Polaris and refresh it, giving Nvidia stiff competition in the mid-range.

Right now, Nvidia is virtually free to do whatever they want and charge whatever they want, hence why Pascal still costs MSRP or more, and Turing is overpriced to hell. They're going to ride that gravy train as long as they can.

>Because Nvidia fanboys are the worst fanboys in the PC world alongside Apple
For fanboyism nothing even compares to apple.

>For fanboyism nothing even compares to apple.
this
afaik people don't get nvidia logo tattoos

With the cpu? Maybe. With the gpu? Nah

Intel already got BTFO by AMD, so hopefully AMD will release their new RX cards in November.
