RTX is a failure

youtu.be/jaUP4LucmZM
$1399.99 USD for these grainy-ass reflections, at around 14 minutes into the video
Dice also fucked up and didn't enable it for all reflective surfaces

Attached: Screenshot_2018-11-21-00-53-00-56.png (1920x1080, 1.8M)

What are you talking about? 1 sample per pixel JUST WORKS

EVERYTHING JUST WORKS

Attached: 2018-11-20 10_25_38.jpg (3840x2160, 1.7M)

Just imagine how grainy this shit is gonna look at higher resolutions and refresh rates

You should post the reflections part with the camera moving to show how awful the ghosting is, as the samples are built up over multiple frames.

the absolute state of nvidia

LMAO

NVIDIA FANBOYS ON SUICIDE WATCH

The film grain shit? Yeah, the noise looks insane.
No way in hell is this gonna take off until they can push 2-4+ spp at 4K, and even then it's diminishing returns visually. I mean, we have branched ray tracing and tonnes of more advanced tech than this.
Nvidia is absolutely fucked betting the future of gaming graphics on this shit and then charging 500-1500 USD for GPUs that can't even muster 1080p 60fps.
Flop

I don't really see the issue with RTX. Nvidia is charging a premium for bleeding edge hardware that can be considered in an alpha or beta stage right now.
If you don't want it then you can just buy Pascal, right? Not like AMD has anything competitive with that.

I don't care about noise, it just doesn't look natural to me

Doesn't water act like a mirror? I don't see that at all

RTX IS LITERALLY WORSE THAN SCREEN SPACE REFLECTIONS

ENJOY YOUR $1K 20FPS PIECE OF SHIT

Attached: absolute state.png (1440x810, 1.67M)

hideous

It's the rays building up.
When the camera is moved as they build up, they "ghost". You can see here in this picture how the windows are being dragged along as the camera moves forward or to the side.
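For anyone wondering what "building up" means mechanically, here's a minimal sketch of temporal accumulation, assuming the usual exponential-moving-average style blend that 1 spp denoisers lean on (the function and blend factor are my own simplification, not DICE's actual code):

def accumulate(history, new_sample, alpha=0.1):
    # alpha = weight of the newest 1 spp sample; the rest comes from the history buffer.
    # Small alpha -> smooth but laggy (ghosting); large alpha -> responsive but noisy.
    return (1.0 - alpha) * history + alpha * new_sample

pixel = 0.0                # accumulated reflection value for one pixel
for frame in range(16):
    noisy_sample = 1.0     # pretend the true value is 1.0 and this frame's sample hits it
    pixel = accumulate(pixel, noisy_sample)
    print(f"frame {frame:2d}: {pixel:.3f}")
# It takes dozens of frames to get close to the true value, and each frame the pixel
# still carries ~90% of stale history, which is the smear you see on the windows.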

>youtu.be/jaUP4LucmZM
The noise is precisely why it doesn't look like a mirror.

The window one was the funniest, with the wood over it missing from the reflection. But you need to see that in the video; it doesn't show as well in a screenshot.
>It Just Works

But yeah, it really is worse than screenspace reflections.
Sure, SSR isn't accurate, but it's fast and isn't all grainy and full of artifacts.
Combining planar with SSR or cubemaps is the best method currently and will remain so for at least 2 years.

Attached: firefox_2018-11-20_09-21-21.jpg (2084x1174, 430K)

Enjoy your static shadows.

>screenshots of youtube videos

Attached: 1507569501548.png (678x563, 380K)

>some interns' rushed implementation of ray tracing means RTX is bad
???

Holy shit, Nvidia couldn't fuck up more even if they tried.
Literally what the fuck were they doing since Pascal? They had no real competition on the high end, they sell a fuckton of cards and have tons of money. How does one fuck up so badly?

Just watch the YT video in 4K.
The grainy effect of RTX is plain to see.

Attached: rtx.png (2559x1079, 3.13M)

Jensen Huang said that it just works, because the developers don't need to do anything, because it just works. Everything just works. Global illumination just works. Reflections just work. Everything just works.

GameWorks & PhysX 2.0 all over again.
BFV is THE FIRST game launching with RTX; they'd better have had something kickass to entice everyone, but instead we got this turd.
DICE is already one of the largest studios working for EA, one of the largest publishers. If they can't implement this shit right, who will?

Dead on arrival tech.

DICE is a fucking leader in game development and optimization. What the fuck are you talking about?

Even if it did work what's the point when the games are bland shit.

>game was already in the works before RTX was announced
>they had to work on adding RTX features before they could even test out the features on an actual RTX card
Yeah nah. Both BFV and Tomb Raider have pretty bad implementations of the tech, simply because they are trying to bolt it onto something that wasn't made for it. Only a few parts of the game actually showcase the technology properly, and they obviously couldn't optimize it for the same reasons.

youtube.com/watch?v=nYLLvOFSHCU
Based Digitalfoundry showcasing how Dice fucked up the ray tracing.

youtu.be/nYLLvOFSHCU
DF spend 10 minutes dicksucking the game and barely talk about RTX at all, but they did say there are improvements coming according to the devs they spoke to, some weird hybrid screen-space + RTX DXR reflection stuff as well.
Tbqh a 1300 USD GPU should not have such issues, especially with dogshit performance and obvious architectural bottlenecks. FFS, the card is bottlenecked by RTX when you enable it; it's essentially only running at half speed and the CUDA cores aren't even being fully utilized.
Hardly interns, this is Nvidia's poster child for the tech and the only playable game atm.
That's what many oldfags have been saying since the late 00s.
While I agree it's early days, I still don't think DXR and ray traced reflections are ready for release.
Especially after what we saw in Remedy's demo as well as Atomic Heart; the hardware simply isn't ready yet.
4K 60fps+, or 1080p with drops below 60fps with the RTX crap on... Not really a hard choice.
Well, DF said it's far from done, but no way in fuck is RTX gonna improve in performance anytime soon.
Yep, it's confirmed.

How did the atomic heart demos showcase that the hardware isn't ready?

They have a still frame to counteract (YOU)tube compression

Same grainy noise/denoising issues BFV had.
Plus let's not forget DLSS is dead in the water with no games supporting it.

>game was already in the works before RTX was announced
Do you honestly believe that DICE only found out that RTX exists when Huang announced it on-stage? The developers of a game that Nvidia are an official partner for? Are you really that dumb, or are you being paid to pretend so?

And they still had no ways of testing out RTX properly because the cards didn't fucking exist yet.
I'd rather have slightly noisy images over shitty SSR garbage and broken mirrors.

I still use a 770 gtx.

Didn't they have it working on a Volta-based GPU server cluster? Time constraints aside, it's nowhere near ready for prime time.
1080p 30fps on a 2070 and 50-ish on a 2080.
I just turn SSR, AO and all that garbage off in MP games; they're all visual clutter and inaccurate as fuck anyway.
And? I had one of those, the 4GB model; it was slow as balls by 2015, basically an overclocked 680.
Have a 1080 now and won't be getting a GPU for a very long time at this rate; Nvidia can shove 2k AUD/EUR up their fucking ass.

They are literally charging customers to test RTX for them, NVIDIOTS BTFO!!!!!

Attached: Fullcircle.jpg (1599x533, 167K)

>And they still had no ways of testing out RTX properly because the cards didn't fucking exist yet.
Haha, you're a fucking moron. Hardware exists long, LONG before it's actually launched. Even final retail hardware is in production many months before it actually launches, let alone prototypes and development hardware. The fact that you honestly believe that DICE's first access to RTX hardware was when their pre-order arrived is fucking hilarious. Imagine being that stupid.

Back to /v/ with the other brainlets, kiddo.

Attached: vinny mac.gif (306x230, 1.45M)

RTX is literally just the AMD finewine meme turned into an actual product.
Using the RT cores for optimizing performance and improving image quality is what they are good for, but Nvidia rushed out the cards and heavily promoted ray tracing way before it's consumer ready.

>t. brainlet
you are literally unable to derail this thread with your shitty b8, just like Nvidia is unable to put out a non-faulty 2080Ti :^)

not the other poster, but okay.

youre pretending to be retarded, everyone can just move along now

DLSS would have actually been pretty cool, since it doesn't assfuck performance for useless shiny reflections.
It's DX10/11 all over again.
Nvidia even bungled DX12 in Pascal and somehow fixed it in software/drivers as well.
Can't wait for DXR to sit there doing nothing like Mantle and every other niche API till the hardware actually catches up.
We won't see fully ray traced, non-rasterized games this decade, let alone in the next several.
Maybe in 5-10 years, when this tech matures and ARM/Intel get in on it, this shit will be worth looking at for games, but for now it only really benefits developers.

... Dice has some of the best programmers in the game development industry. I have learned so fucking much from them in doing my own graphics programming.
Way to make yourself look stupid.

id Software is the only company that's notably well ahead of them.

>they still had no ways of testing out RTX properly because the cards didn't fucking exist yet.
Wow, you're really dumb.
Turing does nothing special. It has accelerators for things we've already been doing for years. Things we've already tested for years.
You can denoise on general compute. In fact, Vega does it really well with its double-rate half precision.
You can ray trace with general compute. Just detecting intersections is a little slow.
Many of these DXR implementations were developed on Volta. Turing is a very minor arch update to Volta. Like holy fuck, how can you be so ignorant yet post with such confidence?

Turing, even with this massive die size and its RT ASIC dedicated to ray tracing, is only about 2.5-3.5x faster at path tracing than Vega 64. It's not the massive leap forward that Nvidia claimed.

>optimizing performance
Then why does it look and run bad? Hur dur.

>bleeding edge
llollolololol

Speaking of id, doesn't Vulkan have RT functions now? Maybe Doom Eternal will use them and be a poster boy for this kind of RTX effect? I bet it will run better as well.

Imagine how badly normal devs will fuck up DXR though; it's gonna run and look even worse than DICE's stuff.

This shit really needs to be turnkey, doesn't it?
Yeah, makes my eyes bleed lul

Turing having DLSS, CAS and async compute should ultimately boost its performance compared to the 10xx series by a significant margin, but those first need to be better supported by the games coming out.
I'm not talking about ray tracing, I'm talking about pure rasterization performance, where the RT cores can be used to offload certain tasks that can be solved with the AI to increase performance.

>using machine learning to fill in the blanks of a subsampled physical process like ray tracing
>a good idea
anyone who believed this pile of buzzwords would be functional is retarded or incredibly ignorant about ray-tracing

The point of the RT cores is to run concurrently with the normal compute cores. Turing can trace twice as many rays as Vega and still have a whole GPU left to run the rest of the game, instead of sitting at 100% occupancy.

Attached: NVIDIA GEFORCE RTX TECHNOLOGY.jpg (2560x1440, 840K)

in theory, but what really happens is that your GPU catches fire and melts
so from an actual reality point of view, no, RTX doesn't achieve that performance over vega

Right looks like shit compared to left

How much of a difference would it make to have 2 or 3 samples per pixel rather than just one? I know that'd require 2x or 3x the ray tracing cores, but would it make a difference?

the minimum number of samples is actually a function of the geometry's narrowness
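Ballpark answer, assuming the grain behaves like plain Monte Carlo noise, where the standard deviation falls off as 1/sqrt(N). These numbers are illustrative, not benchmarks:

import math

# Relative noise (standard deviation) versus samples per pixel,
# assuming ordinary Monte Carlo convergence: sigma_N = sigma_1 / sqrt(N).
for spp in (1, 2, 3, 4, 8, 16):
    print(f"{spp:2d} spp -> noise at x{1.0 / math.sqrt(spp):.2f} of the 1 spp level")
# 2 spp only cuts the grain by ~30% and 4 spp merely halves it; getting it visually
# clean needs an order of magnitude more rays, which is exactly why the denoiser exists.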

Without a dedicated accelerator ray tracing is too taxing. It's not something that can be half-assed with low precision and low sample counts.
Losing that much performance just for some reflections you'd never notice in the heat of gameplay isn't something anyone would actually do. It's a gimmick. It's something a consumer tries and then turns off, because the smooth frame rate matters more.

For the discrete GPU market ray tracing is still a long way off. I really wish someone had licensed Imagination Tech's ray tracing ASIC design.

Half the perf of what we have seen in RTX currently.
Turing is just shit at RT in games.
I'd love to see 7nm Vega and Arcturus let loose on this kind of work, but AMD said they can't be fucked.
Both look shit; the left is blurry grainy shit which DF/GN said is missing effects in the reflections, just like classic SSR and render-to-texture like in Hitman 2018 lol

Turing runs cooler on BF5 with RTX than without because of the low cache hit rate and warp occupancy of shading the ray traced samples.

It runs cooler because the RT cores are bottlenecking the CUDA cores.

>Dice also fucked up and didn't enable it for all reflective surfaces
so you want even less frames?

This.
Hypothetically, even if 7nm Turing has 4x the RT perf, would there still be a bottleneck? Is there a way to overclock the RT core portion of the card? Has anyone looked into that, or is it just fixed function/clock rate?
Well, it's not even used AT ALL for some maps.
What's the fucking point of enabling it at all?

RT cores run concurrently. Unless you think memory bandwidth or integer compute is the limiting factor. I doubt both of those. Bad cache coherence and warp occupancy is a problem for any GPU shading sparse samples.

>Speaking of id doesn't vulkan have rt functions now
Yes. Gaijin are using it in their new game for the global GI, basically VXGI. Runs amazingly well, getting like 100+ fps at 4k on the 2080Ti and still running well on Vega and 1080ti.
youtu.be/AvXra6yqEZs?t=152
Actually this might be running on V100, I don't remember.
>Doom Eternal
Probably not. It's completely unnecessary given how good the game's lighting already looks with raster graphics, while running amazingly well.
They surely DO ray trace already in the engine, it's just pre-baked ray tracing like in Unity.

>i'm talking about pure rasterization performance where the RT cores can be used to offload certain tasks that can be solved with the AI to increase performance.
This has largely panned out to be bullshit.
DLSS looks similar to 1800p upscaled to 2160p.
It runs about the same as 1800p as well.
You're better off just running the game at 1800p instead of adding frame delay with the tensor cores.
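For scale, the pixel counts behind that comparison (plain arithmetic, not a measurement of DLSS itself):

# Rendering at 1800p is ~69% of the pixel work of native 2160p, which is roughly
# the cost bracket the post above puts DLSS in.
native_4k = 3840 * 2160
internal_1800p = 3200 * 1800
print(f"2160p: {native_4k:,} px")
print(f"1800p: {internal_1800p:,} px ({internal_1800p / native_4k:.0%} of native)")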

The Quadro RTX cards definitely have some use in workstations, but for games the "RTX" parts of the architecture are COMPLETELY useless.

Both look pretty bad, but I'll take the SSR.
If you want good reflections running well, look at the new Hitman and Gears of War. They use a combination of methods.

Denoising does work as a concept. It's just you still need at least 10 rays per pixel to start with and more performance overall at all parts of the GPU.
RTX as a concept isn't exactly flawed (except maybe the tensor cores... you can denoise on generic compute fine), it's just not nearly powerful enough.
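Toy illustration of the "denoising works as a concept" point: averaging each noisy pixel with its neighbours trades noise for blur, which is roughly the trade the real filters make, just with much smarter guide data. A crude box filter for illustration only, nothing like the actual RTX denoiser:

import random

# 1D "image": the true value is 0.5 everywhere, corrupted by heavy per-pixel noise
# (standing in for 1 spp ray traced radiance).
random.seed(0)
noisy = [0.5 + random.uniform(-0.4, 0.4) for _ in range(64)]

def box_denoise(img, radius=3):
    # Plain box filter: each output pixel becomes the mean of its neighbourhood.
    out = []
    for i in range(len(img)):
        window = img[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

smooth = box_denoise(noisy)
mean_error = lambda img: sum(abs(v - 0.5) for v in img) / len(img)
print(f"mean error before: {mean_error(noisy):.3f}, after: {mean_error(smooth):.3f}")
# With only 1 spp the filter has to be this aggressive, which is where the smearing
# and lost reflection detail come from; more input samples would allow a gentler filter.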

Attached: firefox_2018-11-20_10-27-18.jpg (2039x1054, 178K)

Based knowledgeable poster.
Can't wait for this shit to get deprecated like the DirectX gimmicks of old.
At best it's tacked onto engines and APIs now; we won't see ground-up stuff till DX13 and Vulkan 2.

Yes. 4x isn't enough.
The bottleneck is in SHADING the rays.
The RT ASIC detects the ray-triangle intersections fairly quickly, at up to 10 gigarays/s apparently on the 2080Ti. But by the time you run a shader function on each intersection, that's dropped to 2.5-3.5 gigarays/s.
You need double the compute.
You need even more performance from the RT ASIC.
And, I'm not sure about this part and am just assuming here: you need lower latency between detecting the intersection and running the shader.
So just the die shrink which presumably will come next year for an overall 25-35% performance increase isn't going to be enough. It'll be at least 2 years before we start seeing real time path tracing without awful artifacting. Probably more like 6 years.
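Back-of-the-envelope version of that shading bottleneck, using the numbers above and assuming the per-ray intersection and shading costs simply add up (that serial model is my assumption, not an Nvidia figure):

# Claimed vs observed ray throughput on the 2080Ti, per the post above.
intersect_rate = 10e9   # advertised ray-triangle intersection rate, rays/s
combined_rate = 3e9     # observed rate once each hit is shaded (~2.5-3.5e9), rays/s

# If per-ray time = intersect time + shade time, the implied shading-only rate is:
shade_only_rate = 1.0 / (1.0 / combined_rate - 1.0 / intersect_rate)
print(f"implied shading-only rate: {shade_only_rate / 1e9:.1f} gigarays/s")
# ~4.3 gigarays/s: even a much faster RT unit barely moves the total until shader
# throughput (and whatever latency sits between the two stages) goes up as well.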

I wonder if the RT ASIC are even in each shading module of the architecture, or if it is its own separate thing. We haven't seen a proper die breakdown of turing, as far as I'm aware.
I imagine when AMD adds their own RT acceleration, it's going to be part of each CU and share the cache with that CU.

I wasn't talking about DLSS there, you gnome.

Sounds like it's Nvidia's answer to GCN/Vega, in that it's bottlenecked by its own arch lol

How would they even solve that? Besides brute force.
So why not go with a dedicated RT card? It didn't make sense back in the day, but now why not? Hell, even NVLink the thing.

You know RT technology will change lighting in gaming forever right?

Day/night cycles will look better and more natural than ever with LESS effort by the developer.

Meaning indie games can now compete among the best of them. The playing field is evening out. Games in general will look and play a lot better.

tm

>I wonder if the RT ASIC are even in each shading module of the architecture, or if it is its own separate thing.

Attached: NVIDIA-GeForce-20-Series_Official_Turing_SM.png (2560x1440, 735K)

>Buy my faulty GPUs!
>t. Jensen
you are pathetic

Ray tracing isn't going to fix dark scenes, that limitation is fundamentally always going to be your display. Even when panels claim infinite contrast ratio they still lack the granularity between dark and light segments. They can't replicate what a moonlit night should look like to the eye, no matter what the GPU can render.

Ray tracing isn't magic, and it's going to take years to mature to the point that it's even halfway usable. Right now it's only being used for a couple of laughable reflections per scene, and it's crashing frame rates by 50% or more.
And go fuck yourself for that reddit spacing.

You are very stupid; he is talking about global illumination.

>So why not go with a dedicated rt card?
Latency.

I really really really hope you're pretending to be a single cell organism...

Meaningless buzzword. Ray traced lighting doesn't make a convincing night scene. The limiting factor is your display: its gamma, its contrast ratio, the color space it can cover without banding.
The issues you dipshits are talking about are not being held up by your GPU, the engines games use, or anything else on the software side. The issue lies between your monitor and your eyeballs.

you poorfags are still butthurt because you couldn't get one?
nvidia always launches their new cards with some new bullshit technology that cripples FPS and doesn't look that good.
>tessellation
>HairWorks
>RTX
I didn't get a 980Ti for tessellation, or a 1080Ti for HairWorks; I got them all for the better raw power.
And that's why I got a 2080Ti: 4K runs great on it, and I'm happy. My 2x 1080Ti didn't run better BTW, so stop with the "1080Ti is enough" bullshit.

Attached: 1514474423123.jpg (480x482, 51K)

Is a photograph taken at night convincing enough for you? How about a movie with a scene set at night? If your computer could render that in real time would you not consider that good?

Movies aren't actually filmed at night with ambient lighting. They're filmed with massive studio lamps, and lighting is corrected in post processing, AND STILL movies with night scenes look nothing like the real thing. Even here there is still, on top of everything else, the issue of the display only poorly replicating the source material.

You're too dimwitted to understand what this issue actually is.

So you're saying a perfect real time imitation of an image taken at night or a movie set at night is still not sufficient? Do you not think that would be an improvement over current rendering? Do you think every nighttime photograph or movie set at night is ugly? I don't think anyone else would agree with you on that.

quit whining, just buy it :^)

These diagrams aren't necessarily accurate. I meant an actual analysis of the die under a microscope.

But yeah, that would imply it's separate.
Does it at least have a direct connection to each shader module's register file?

DLSS is an example of the "AI cores" usage and shows that, so far, every attempt to implement them in games is merely equal to general compute usage. They are worse than the generic compute units of Vega.

>So why not go with a dedicated rt card?
I just said in my post you replied to that likely part of the problem is the latency between the ray-triangle intersection detection and the shader call which follows. The bottleneck is on the shading.

Nvidia's "10 gigarays per second" is like saying 2,000,000,000 [unshaded] triangles per second. No game is going to look good just rendering 2 billion unshaded triangles each second. The looking good comes from the shading. Without shading the rays, they're not picking up, reflecting, and bleeding their light information and they don't know where to bounce to next.

>Sounds like it's nvidias answer to gcn/Vega in that it's bottlenecked by its own arch lol
Well Vega is bottlenecked by poor raster performance compared to its compute. I guess it's somewhat similar, sure.

>tessellation
Brainlet. Also, ATI had tessellation first. Nvidia just made theirs better at useless x64, which looks the same as x16 and just runs worse, because they are absolute scum.

... they use longer exposure times and post-process editing. Things you can do, and do do, without ray tracing. That's what "eye adjustment" is in almost every AAA game nowadays.
But you still don't get very good detail at a realistic level of darkness or brightness without a 10-bit monitor, which will become standard in a few years.
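Rough numbers behind the 10-bit point, assuming a plain gamma-2.2 display encoding (illustrative only; real sRGB and HDR transfer curves differ a bit):

# How many distinct code values land in the darkest 5% of display luminance,
# assuming luminance = (code / max_code) ** 2.2.
def codes_below(luminance, bits):
    max_code = 2 ** bits - 1
    return sum(1 for c in range(max_code + 1)
               if (c / max_code) ** 2.2 <= luminance)

for bits in (8, 10):
    print(f"{bits}-bit: {codes_below(0.05, bits)} code values below 5% luminance")
# 8-bit leaves only ~65 steps for the entire near-black range, which is where the
# banding in dark scenes comes from; 10-bit gives roughly 4x the granularity.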

Attached: tesselation.png (678x400, 29K)

Er, I should add: whether it's directly connected to the L1 of each shader module.
I'd guess so really. That's why the L1 is larger.

But it'd make more sense to me to have an RT accelerator for each shader module, where it directly dispatches to the FPUs and AGUs in its module.

Nvidia can be REALLY misleading with those diagrams to hide their trade secrets, though, which is understandable, so they may already do as I described and there is no real latency issue.

Attached: uda164wy0cz11.jpg (542x500, 37K)

proper ray tracing needs at least 1000 rays per intersected triangle

>That's why the L1 is larger.
It's not really larger; L1 and shared memory are just a single configurable slab of SRAM now.

Am I the only one who thinks the fire reflections in the characters' eyes in-game are actually pretty good and very accurate to the demo?

Who here also honestly believes that "it's too expensive" is a valid criticism? Cost is relative. You were not expected to be buying the absolute highest-spec GPU and hardware for gaming even 5 years ago, but this meme gamer culture has every single person expecting to be able to afford the highest-specification hardware.

Component manufacturers played you as consumers so hard over the last 5 years.

Oh, that seems strange. I'm not an expert on semiconductors, though.

Uhh, the denoised reflections of brightness on matte or worn surfaces look nice, except that it's running at like 5fps, which looks bad and artifacty.
The mirror reflections look fucking awful. Did you watch the video of it in motion?

>Who here also honestly believes that "its too expensive" is a valid criticism?
I'd drop $3000 on a GPU right now if it was seemingly worth the money. I wouldn't even pay $900 for the 2080Ti, though. Besides the driver issues, I'd rather just have half the performance for $400 instead.

There is no reason that the enthusiast end should have such a markup. You aren't doubling the RAM. You aren't doubling the traces. You aren't doubling the mosfets, chokes, and other components. The GPU die itself, although the yields are lower at higher sizes, is actually not the majority of the cost. So simply doubling the size of the GPU should be giving you better price to performance, not worse.
You could try and argue that lower volume is the reason, but if that was the case there wouldn't be dozens of different models of 2080Ti AIB cards. There are more different models of 2080Ti than there are Vega 56 and 64 combined, which increases cost.

That is not a good metric. Triangle size varies. Rays/bounces per pixel makes more sense.

But that's a fucking lie. The game was specifically optimised for Nvidia. Their own fucking site said so.

The 770 still gets the job done for me OK. I don't need 4K or VR, and there are plenty of older games for me to try because I'm a slowpoke. Was thinking about going from 8GB to 16GB of RAM though; I have an i5 2500K, so that's doing fine as well.

They stopped being good after BF3.

Yeah nah, mine started to struggle at 1080p 60fps 3 years ago lol.
It was a good card for older games no doubt, but if you have the 2GB model, a fucking RX 570 or 1050Ti is faster.
Good for you tho, 16GB of RAM is a huge upgrade; get 2133 RAM tho.

Right, so if latency is the big killer, why did they design Turing in such a way that it's still a fucking huge issue? As you said, hardly 1/3 of those gigarays are actually being used, so is their gigarays measurement for gaming bullshit, or is it just the arch?
My knowledge of new GPU tech like this is very limited, but hopefully they will solve the latency issues like AMD did with Zen 2+.
Will GPUs go MCM once latency is a non-issue? What's holding up Infinity Fabric tech from being used for that, with very fast low-latency interconnects and VRAM like HBM2 and the like? I also found it fucking odd that Quadro RTX gets HBM2 but the RTX 2080Ti doesn't; what the fuck is going on there? Is it that the card doesn't need HBM2, or that GDDR6 is faster/cheaper/easier somehow? Pretty weird when Fury/Vega have HBM and they are a whole tier below Turing RTX GeForce.

Imho I don't see ray tracing taking off for years. Even if hardware gets cheaper and way more powerful, won't the cards still need to do raster-based hybrid rendering? Or does DXR on Vulkan/DX12 give devs the option to just do away with raster entirely and go fully ray traced?

"Muh Vega" is literally all you say, dumb AMD poorfag.

Wow, who would have thought that RTX (ray tracing) has all the same drawbacks and issues ray tracing does?
Shocking.

Literally every problem that existed in BF2 still exists.

Vega is arguably more interesting than Turing and Pascal tho; a lot of the tech AMD introduced will make its way into Navi and Arcturus.
RTX, DLSS and all the other memes on Turing flopped; pricing and shit performance aside, those cards are garbage.
>muh AMD poorfag muh muh Vega
And? He was only talking about Vega because I brought it up, you dumb fuck.
As someone who owned a 56 and has a 1080 now, Vega definitely has more legs and will age better, not to mention it's 1/10th of the price of the 1300 USD garbage.
Me
Anyone with half a brain and knowledgeable in CGI knows this shit; hell, Nvidia had some ray tracing demos all the way back in Fermi/Tesla times with similar issues.
Pretty sure UE4 had voxel/hybrid traced effects and it got dropped in favour of the RTX crap. This tech is nothing new and it's hilarious to see real-time graphics still hitting the same roadblocks it's been hitting since 2006.

Yeah, I'm still using 1600 and think I can go to 2200 or something like that, DDR3. But it would also cost more because I'd have to replace the 1600 RAM sticks too.

Meh, your main bottleneck is gonna be that GPU and CPU.
Most games eat as many threads as you can throw at them; currently the sweet spot is around 16+.
I had a 4690 and 16GB of 1600 RAM; it was great until most games fucking hammered it to 100% usage.
Even 6-core/12-thread CPUs can struggle depending on the game.
Sadly DX12/Vulkan solves a lot of the draw call issues and bottlenecks, but barely anything really uses it.
Anyway, getting back to RTX/DXR: I can't wait until the tech is basically in every game and runs well at 4K 60fps+, but it won't happen overnight. I'm just so sick of Nvidia fanboys who've never seen it in action and think it's flawless based on marketing BS.

>Will gpus go mcm once latency is a non issue?
The problem is not latency, but interconnect bandwidth, power and the very way modern GPUs work.

>Dice also fucked up and didn't enable it for all reflective surfaces
Nah, that's actually a performance optimization. If they enabled it everywhere the performance would tank even more.
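Presumably the gate looks something like the hypothetical sketch below: only spend rays where the material is shiny enough and covers enough of the screen to matter, and fall back to SSR or a cubemap everywhere else. The thresholds and names are invented for illustration, not DICE's actual heuristic:

ROUGHNESS_CUTOFF = 0.3       # rougher than this -> the reflection is blurry anyway, skip RT
MIN_SCREEN_COVERAGE = 0.01   # tiny surfaces aren't worth the rays

def reflection_technique(material_roughness, screen_coverage):
    if material_roughness <= ROUGHNESS_CUTOFF and screen_coverage >= MIN_SCREEN_COVERAGE:
        return "ray_traced"
    return "ssr_or_cubemap"

print(reflection_technique(0.05, 0.20))   # calm water under a bridge -> ray_traced
print(reflection_technique(0.60, 0.20))   # rough concrete -> ssr_or_cubemap
print(reflection_technique(0.05, 0.001))  # distant puddle -> ssr_or_cubemap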

Nah, DF/GN pointed out there are very obvious bits of maps that should have RTX but don't, like rivers under a bridge etc.
RTX Ultra basically ray traces everything and dips down to 30fps or less at 1080p depending on the scene.
DF admitted DICE's current implementation is broken, by their own devs' admission lol, so perf will improve.
Same was said about AMD's Bulldozer and CPUs 7 years ago, and look at them now.
It will happen, it's just a question of when; we can't keep throwing bigger, hotter, lower-yielding dies at the problem.

Yeah, if/when I get real money I'll eventually get to that point, but to be honest my current system has lasted a hell of a lot longer than any other system I've had in the past. Like I said, I don't "have" to be playing the newest of the new stuff when there are still plenty of older games. But I find it so hilarious what an utter fuck-up these 20xx series cards have been. The last time I found something from Nvidia this hilarious was the wood screws incident. Not sure if this beats that in terms of funny, but it's pretty close imo.

This is arguably worse than Fermi though.

Nvidia completely fucked up Turing when it comes to games; the cards just are not fast enough at RT for 1080p 60fps+ and have a fuck tonne of hardware and software issues.
Sure, Pascal, Maxwell, Kepler and Fermi had these problems as well, but I think Turing will age like milk.
I feel sorry for anyone who owned a 7xx series card; they got absolutely fucked, so glad I offloaded mine.

Again: I can't confirm that latency is an issue, I just suspect it.
And you also missed where I said that even if latency was an issue and fixed, it simply doesn't have the raw shader compute and path tracing performance to do what was advertised.

>muh liking a more interesting general purpose compute architecture over limited ASICs cobbled onto existing hardware.
>muh buying from a company whose $400 card from 2011 is still just as good as the rival company's $700 card from 2013

Turning off things which should be there isn't an optimization. It's simply lowering the graphics.
MapleStory isn't "more optimized" than Crysis just because it's easier to run.

>The whole point of the gimmick is reflections
>NEVA BEEN DONE B4. THANKS BASED NVIDIA FOR BLEEDING EDGE TECH!
>yeah, they didn't fully implement it because of performance issues, AND IT'S BEAUTIFUL!!!

Oic, so it's still a raw power deficit.
I wonder where Nvidia will go from here? Surely not another fucking die 700mm²+ in size; even if they went to 7nm or smaller it's still a fuck-huge die.
Hahah, yep, the backpedalling is hilarious.
Remember the DX11 tessellation demos? Still barely used in games, it's all snake oil.

>Remember the DX11 tessellation demos? Still barely used in games, it's all snake oil
It's used. BF (both Battlefront and Battlefield) are using it. Probably CoD too. Surely RDR2 as well.