When will nvidia release Ampere?

Is it worth the wait or should I just get RTX 2060s?

Attached: NVIDIA-Ampere-Feature.jpg (1920x1080, 230K)

You should stay with GTX 1060. Wait... You DO have one, right?

Rule #1 of computer upgrades: You'll always find a reason to wait if you try to.

Attached: 1555255013894.jpg (4032x3024, 2.28M)

If you can wait for Ampere you might as well wait for the gen after that.

I have another rule: live 5 years in the past and tech costs nothing.

Rule #1 when researching a GPU to purchase:

Always wait for the "tock".

The "tick" represents the first new fabrication (12nm)

The "tock" represents the the revision of the original fabrication, meaning the kinks have been ironed out.

PS Ray tracing is way too early, just get the 1660 Ti and FUCK SLI

Wait and see how much more powerful it is. Even if it's not that impressive, the prices of the 20 series will drop when it launches.

Same thing for CPUs really. Zen+ (GF 12nm is practically a GF 14nm+ node) had virtually no launch problems compared to Zen's, and now Zen 2's, issues. Zen 3 is probably gonna be smooth.

My question is simpler. Will they release PCIe 4.0 cards or will they skip the current gen?

>Ray Tracing is way too early
I wouldn't say too early, but with current processing power it makes sense only in the top flagship cards. RTX 2080 Ti makes sense, 2080/Super maybe can be justified too, but anything below that? Just not powerful enough.

Why would they skip it? I mean, technically they can make their cards "PCIe 5.0" but they won't be able to saturate even 4.0
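
Rough numbers, if anyone wants them (assuming an x16 slot and 128b/130b encoding; this is napkin math, not a benchmark):

# Back-of-envelope PCIe bandwidth for an x16 slot.
def pcie_x16_gbps(transfer_rate_gts):
    lanes = 16
    encoding_efficiency = 128 / 130          # 128b/130b line code (PCIe 3.0+)
    # GT/s is gigatransfers per second, 1 bit per transfer per lane
    gbit_per_s = transfer_rate_gts * lanes * encoding_efficiency
    return gbit_per_s / 8                    # convert to GB/s

print("PCIe 3.0 x16: ~%.1f GB/s" % pcie_x16_gbps(8.0))    # ~15.8 GB/s
print("PCIe 4.0 x16: ~%.1f GB/s" % pcie_x16_gbps(16.0))   # ~31.5 GB/s

Games barely move that much data over the bus per frame, which is why going from 3.0 to 4.0 changes almost nothing in benchmarks.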

Not even important.

I don't get why raytracing is better if it's real time. Why not just prerender it completely?

You can't prerender a video game

Yes but you can prerender the environment and every object that is static

Then you don't know shit. Stop talking about real time ray tracing then

Get ayymd or wait for it

The necessity of the real-time part is what I don't get. Are moving shadows and reflections that important? I think you can have a real-looking environment without rendering it in real time. Textures are far from reality and would be a greater improvement than RTX; most games have terrible plant models, and hair still doesn't look good in anything...

"prerendering" in the sense you're suggesting is what was being done up until now. But the fact that the lighting was "baked" in means that it can't be changed, can't react to any object. There's of course tricks that allow dynamic "lighting" and "shadows" but it's just that, tricks. If you compare these tricks to proper ray-tracing you'll always see there's something wrong with them. There's lots of comparisons out there.

Isn't RTX just another trick, and is it a better one at that? In what way is it better than dynamic lighting, for example? They are missing the point: the environment doesn't make the game more real. Just like in real life, the environment is the background. If you had completely realistic-looking character models the game could look like Minecraft and you would still feel like you are interacting with real people.

Zen 3 will supposedly be a big change architecturally, so it'll be even worse than Zen 2.

>Isn't RTX just another trick
Not really, it is real raytracing, it's just that the number of rays that are traced is fairly small, so you either need some post-processing to hide the grainy side effects of the small ray count, or you mix RT with the old tricks and only use RT where it really makes a big difference.
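
Rough sketch of why a low ray count looks grainy, if you want the intuition (the occlusion test is a stand-in and every number is made up):

# Estimate soft-shadow coverage by shooting N random rays at an area light
# and counting how many get through.
import random

def occluded(ray_target):
    # Stand-in for a real scene intersection test: pretend 30% of the
    # light's surface is blocked by some occluder.
    return ray_target < 0.3

def shadow_estimate(num_rays):
    hits = sum(1 for _ in range(num_rays) if not occluded(random.random()))
    return hits / num_rays          # fraction of the light that is visible

random.seed(1)
for n in (1, 4, 64, 4096):
    print(n, "rays ->", round(shadow_estimate(n), 3))

With 1-4 rays per pixel the estimate jumps around from pixel to pixel (that's the grain); with thousands of rays it converges to ~0.7. RTX-class hardware shoots the low-count version and leans on a denoiser to hide the variance.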

>They are missing the point environment doesn't make the game more rea
You forgot to add "In my opinion"

Well then... fuck. I wonder how many months AMD customers will be paying beta testers when Zen 3 comes out then.

2020
It's gonna be 7nm vs the 12nm we have now, so a bretty big deal.

But the Next Big Thing is always around the corner. The metric to judge by is if your system is doing what you want right now. If it is, wait it out.

I'm gonna feel sad for AMDrones when nvidia goes 7nm. They were so happy when their 7nm Navis were able to get into the same neighbourhood 16nm Pascals and 12nm Turings live in. Poor guys.

Turing is overpriced shit built around early adopter memes. Ray tracing is brand new, costs a pretty penny, the game support is lackluster, and the performance will be absolutely blown the fuck out by the next generation and the one after that once the technology matures a little more. I don't know why they glue all those fucking deep learning cores onto consumer cards either.

AMD will just slash the price with 7nm+ and have RDNA2 out by that time.

Maybe in the future they will make something impressive with it, but at this point it's just a small upgrade. And I've seen similar results using "tricks"

I don't know about that, they could've priced Navi better but chose not to.

anything below 3x performance of 1080Ti is not worth purchasing right now. just keep waiting.

t. comfy at my 1080Ti moved from 970 from 7850 from 4670

Cool. Story braw

thanks breh, don't forget to like and subsescrib

>Well then... fuck. I wonder how many months AMD customers will be paying beta testers when Zen 3 comes out then.
Just buy like 6 months after launch when prices start to go down. I bought a Crosshair VI and 1700 at launch and it took about half a year for the BIOS to become fully stable. Even then I had to RMA it for the segfault issue

...

ALT 128077

That's the real meaning of AMD FineWine™. At the beginning it's not even wine, but grape juice.

Why would they price it better when Nvidia overpriced so hard? They are easily beating Nvidia right now on both price and sales. I expect Nvidia to overprice their 7nm as well.

So... Why would they price RDNA2 better if 7nm Nvidia is gonna be overpriced again? Nope, RDNA2 will be just as overpriced, AMD is very comfy following Nvidia's shitty pricing strategy.

>ALT 128077

The original point is AMD being in trouble against Nvidia 7nm, which they won't be because they will always have the lower price.

The GTX 1060 is the most used card according to the Steam survey; what card will be the next people's choice?

1070

There's virtually nothing stopping Nvidia from lowering their prices besides greed. They're not like Intel, which needs high prices to maintain their fabs and offices everywhere.

Shadows and reflections based on objects in game and how light behaves is far better than prebaked garbage.

Coping nvidiot. Bet you're an Incel fanboy also

590

Both 2060s are pointless due to the 5700.

>I wouldn't say too early
>but it's just not powerful enough
Retard spotted.

so that faggots with 8 cards can run deep learning sims and bitcoin mining on it even faster

nvidia.com/en-us/gtc/

>See you at GTC 2020 in San Jose, March 22 - 26.

Ampere 7nm will be announced to succeed Volta; consumer GPUs probably won't use Ampere's microarchitecture but a modified one, just like Turing is modified Volta

>the necessity for [...] real time [...] is what I don't get
You don't get it because you don't understand the physics behind shadows and reflections.
If you did you wouldn't be saying stupid bullshit.
If it were possible to prerender raytraced video games *then people would be doing it*.
Unfortunately no one likes video games that aren't any different from movies.

If you want accurate shadows and reflections, they must be raytraced. If you want raytraced video games, it has to be done in real time.
Raytracing does not have specific steps for shadows and reflections. You do not raytrace to make shadows and reflections. Shadows and reflections are FREE and a BYPRODUCT of raytracing.
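
To make the "byproduct" point concrete, here's a toy sketch: a shadow in a ray tracer is literally just a second ray from the hit point toward the light (the sphere scene and all numbers are hypothetical):

import math

def ray_hits_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0
    # (direction is assumed normalized, so the quadratic's 'a' term is 1).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-4 else None

def in_shadow(point, light_pos, spheres):
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(x * x for x in to_light))
    direction = [x / dist for x in to_light]
    for center, radius in spheres:
        t = ray_hits_sphere(point, direction, center, radius)
        if t is not None and t < dist:
            return True        # something sits between the point and the light
    return False

spheres = [((0.0, 1.0, 0.0), 1.0)]                            # one sphere hanging above the floor
print(in_shadow((0.0, -1.0, 0.0), (0.0, 5.0, 0.0), spheres))  # True: the sphere blocks the light
print(in_shadow((3.0, -1.0, 0.0), (0.0, 5.0, 0.0), spheres))  # False: clear line to the light

No separate "shadow algorithm" anywhere, it falls out of the same intersection test used for everything else.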

buy last year's tech for 1/2 price

Don't ever do this; new games are designed for newer GPU architectures, and Pascal, without concurrent execution, will simply fall flat

Always buy the newest GPU microarchitecture

nvidia.com/en-us/geforce/news/geforce-gtx-1660-ti-advanced-shaders-streaming-multiprocessor/

Attached: geforce-gtx-1660-ti-concurrent-float-and-int.png (3336x1704, 290K)

I'll be upgrading to a 3080Ti from my 1080Ti when it comes out. It's alright for 1440p high detail but it could be better. I have a 144hz monitor

We don't know shit about it. No, you should not wait for future products which haven't even been announced officially.

nvidia Biot-Savart when?
fucking marketing idiots

user literally said that the 2080 Ti, and in some cases the 2080S, are powerful enough, just not the rest, kek

Don't listen to this guy either. Always wait for the revision. Do not buy bleeding edge tech.

Look at what happened to R9 Fury.

A dynamic light map even 80% as good as ray-tracing, for a single level (forget open world), would be gigabytes large (rough numbers below). What you're asking for is to take any single game and make it 10x the size, and take 100x the time to process the lighting.

Real-Time-Ray-Tracing is the only solution to better visuals moving forward. But like how high quality 4K only became truly viable in the last 12 months, compared to 4K being available for 5+ years, RTRT will only truly be viable after a few more years.
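
Napkin math behind the "gigabytes large" claim above (every number is a made-up but plausible assumption, just to show the orders of magnitude):

# Why a "dynamic" light map blows up in size.
surface_area_m2   = 50_000        # total lightmapped surface in a mid-size level
texels_per_meter  = 50            # 1 texel every 2 cm
bytes_per_texel   = 6             # RGB, 16-bit float per channel
lighting_states   = 16            # e.g. times of day / light configs to blend between

texels      = surface_area_m2 * texels_per_meter ** 2
static_bake = texels * bytes_per_texel
dynamic     = static_bake * lighting_states

print("static bake : %.1f GB" % (static_bake / 1e9))    # ~0.8 GB
print("'dynamic'   : %.1f GB" % (dynamic / 1e9))         # ~12 GB

One static bake is already hundreds of megabytes; the moment you want it to respond to changing lights you have to store many variants of it, and the size multiplies.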

>RDNA2
Navi is RDNA-lite, an amalgamation of "classic" GCN with some streamlined units.

The real RDNA is going to be Ray-Trace capable (to some degree) and built on 7nm EUV; expect September of next year.

No lol. Retarded post. I'd bet money all day AMD won't make a "ray tracing" marketed card.

You mean don't buy AMD lol. Fury was trash from day 1

>2060s
lmao

12nm is a tock of a 16nm, retard.

samsung's 7nm process is fucked up
expect it to come out later

Example

OK, that is a problem and a good point. You win.

Ampere won't show up for at least another 8 months.

You have no idea what you're talking about. Pre-calculated lighting will always be infinitely higher quality than any real-time solution, because you only have to render the lighting a single time. That means you can crank up the realism to absurd levels, and let your PC run the calculations for as long as you want. You obviously can't do that in real-time, so RTX uses extremely simplified ray tracing algorithms to roughly simulate what pre-baked lighting does.

No real-time algorithm will ever come close to achieving the realism of pre-baked, because no matter how good your algorithms are you still have to render it in real-time, whereas static lighting needs to be baked only a single time, over any period of time. You'll never get those beautiful accurate indirect soft shadows that you find in V-Ray or UE4 bakes with real-time algorithms, because it took those ray tracers many hours just to calculate a single pass. It's impossible to render such accurate lighting in real-time, not even supercomputers could do it.

The main disadvantage of pre-baked is that it cannot be modified during run-time. Still, it is the preferred choice for most linear games where the main lighting does not need to be changed during gameplay. The gains in quality outweigh the loss of being able to dynamically alter the lights at run time.

Most popular engines like Unreal 4 and Unity use hybrid solutions, where the global illumination is statically baked in order to achieve extremely realistic lighting, but the direct lighting is dynamically calculated to allow for some artistic freedom as well as accurate specular reflection. Dynamic objects can be added into the scene and are dynamically lit using lighting data from a point grid made during the bake, they seamlessly blend in with the statically lit surroundings. You can even add fully dynamic lights into the scene.
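
A minimal sketch of that "point grid" idea, since people keep asking: dynamic objects sample baked lighting from a regular grid of probes and blend the 8 nearest ones. Grid spacing, layout and probe values here are all hypothetical, it's the interpolation concept, not any engine's actual API:

import math

PROBE_SPACING = 2.0                 # metres between probes

def probe_color(ix, iy, iz):
    # Stand-in for data produced by the bake; here just "brighter near origin".
    d = math.sqrt(ix * ix + iy * iy + iz * iz)
    v = max(0.0, 1.0 - 0.1 * d)
    return (v, v, v)

def sample_probes(x, y, z):
    # Position in probe-grid space.
    fx, fy, fz = x / PROBE_SPACING, y / PROBE_SPACING, z / PROBE_SPACING
    ix, iy, iz = int(math.floor(fx)), int(math.floor(fy)), int(math.floor(fz))
    tx, ty, tz = fx - ix, fy - iy, fz - iz
    color = [0.0, 0.0, 0.0]
    # Trilinear blend of the 8 surrounding probes.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((tx if dx else 1 - tx) *
                     (ty if dy else 1 - ty) *
                     (tz if dz else 1 - tz))
                c = probe_color(ix + dx, iy + dy, iz + dz)
                color = [a + w * b for a, b in zip(color, c)]
    return tuple(color)

print(sample_probes(3.0, 1.0, 0.5))   # lighting for a dynamic object at that spot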

regular Fury & Nano were based, and aged better than the 980

Of course you can fake lighting with a higher quality. The problem is, in modern games you have too many dynamic objects. So, you have to use cheap lighting imitations a lot. They don't look good. Overall ray tracing gives you a more complete image that doesn't fake anything, it's there for every single object if you want. Now, sure, you can go back to static one time of day bullshit with no movable anything except characters, but that doesn't look so much better that you'd want to sacrifice game world interactivity.

I have a Radeon VII and I'm waiting on that

That's pretty much my situation.

Sold my GTX1070 for $450 a month ago, waiting for Ampere XX80 (Ti)

Attached: HWiNFO64_2019-09-14_11-59-11.png (2560x2308, 248K)

I never said it is going to be a ray tracing marketed card. I said
>is going to be ray trace capable (to some degree)

You'sa motherfuckin retard.

No, I got a 390X ;) XD

This is what I'm going to do. I'm tired of waiting for fair prices on RTX: get a 1660 Ti and get rid of my 760, then wait a couple of years for RTX 3xxx or 4xxx and see what happens. Right now it's wasted money.

Attached: Sieg.png (3336x1704, 297K)

This is what I did and I have no regrets.

With the 1660 Ti you are getting all the sweet rasterization optimizations without overpaying for RTX that you will never use properly before the new gen of cards anyway.

Just a reminder, the whole real-time "ray-tracing" that Nvidia is trying to employ with its RTX cores is just a long-term gambit on keeping discrete GPUs relevant to the masses.

iGPUs are starting to become good enough for the masses. RDNA's real form will come in next-generation iGPU solutions. Intel will play catch-up. Nvidia is left in the dark due to a lack of an x86 license.

Discrete GPUs will undergo demand destruction and become a niche like other internal peripherals (NICs, HBAs, audio cards) that used to be discrete only.

>he really believes this
Come back when APUs have 4-channel memory at 14 GT/s instead of anemic 2ch at 3.2-3.6 GT/s
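
For reference, the gap those numbers imply (standard DDR4/GDDR6 figures; the 4ch @ 14 GT/s case is the hypothetical APU setup being asked for):

def bandwidth_gbs(channels, bus_bits_per_channel, gigatransfers):
    # channels * bytes per transfer per channel * transfers per second
    return channels * (bus_bits_per_channel / 8) * gigatransfers

print("2ch DDR4-3200      : %.0f GB/s" % bandwidth_gbs(2, 64, 3.2))    # ~51 GB/s
print("2ch DDR4-3600      : %.0f GB/s" % bandwidth_gbs(2, 64, 3.6))    # ~58 GB/s
print("4ch @ 14 GT/s      : %.0f GB/s" % bandwidth_gbs(4, 64, 14.0))   # ~448 GB/s
print("256-bit GDDR6 14GT : %.0f GB/s" % bandwidth_gbs(1, 256, 14.0))  # ~448 GB/s, RTX 2070 class

Today's APUs are stuck with roughly an order of magnitude less memory bandwidth than a mid-range discrete card, and that's what caps the iGPU.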

That's another 2-3 years at most. Once AMD can figure out the sweet spot with active interposers and GPU chiplets, we're set.

Intel is already going this route with Foveros. Nvidia's lack of an x86 license is going to murder them in the future.