Nvidia make RTX obsolete

so now that ray tracing works on non-RTX Nvidia cards, have Nvidia basically made RTX cards obsolete?
I've tried DXR on my GTX 1080 and at 1080p the performance hit isn't that bad.

Attached: RTX BTFO.jpg (1227x682, 53K)

Other urls found in this thread:

techpowerup.com/reviews/MSI/GeForce_GTX_1660_Gaming_X/28.html

RTX cards have a raytracing core, so they should theoretically run smoother (still like shit)

Attached: Nvidias-RTX-Architecture-as-extended-with-the-introduction-of-Turing-and-new-types-of-cores.png (1023x432, 20K)

they literally only enabled it to make RTX cards look better.

wow! it's almost as if dedicated hardware runs better than generic hardware!

Attached: file.png (925x429, 533K)

>raytracing core
yes goyim, it's a """ray"tracing" "core""

my Radeon HD 6570 can raytrace. why should i buy an RTX?

The Geforce RTX 2080 etc. is the S3 Virge of raytracing.
It's innovative and compatible with the newest Direct3D, but it's not actually fast at it, and if a "3dfx of raytracing" gets released, the 2080 will be too slow to run the popular games.

My Trident 512KB can run 3D games, why should I buy a 3D accelerator?

>RT Core
>CUDA
>Tensor Core
It's the era before unified shader units all over again, and then just like now prices were high and performance was plateauing. Unified shaders brought major performance boosts and lower costs. When are they gonna unify all this crap again?

ray tracing is a computing algorithm.
RTX is just a pile of FP16 ALUs bolted onto the side of the GPU, the kind Google designed for its TPUs, built to chew through intense FLOPs.
novidia marketed it as "ar teehee ex" dedicated ray tracing, people bought the scam that special ray tracing hardware exists, and then wasted their money on novidia for another iteration of meaningless, useless tech.
once you port the algorithm to something anyone can use, e.g. OpenCL or Vulkan, or D3D makes such API calls mandatory, games will oblige and then you can use any FP unit for it.
You could do it before too, but you had to write the shaders yourself; now that it's an API call or a library, it's either in the driver or implemented by the engine.
...and that's how the rtx scam works. novidiots haven't figured it out yet.
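
To illustrate the point above that ray tracing is just a computing algorithm: below is a minimal sketch in plain C++ (names and scene are made up for illustration) of the core FP math, a ray-sphere intersection. Nothing in it needs special hardware; a compute shader, an OpenCL kernel or a CPU core can all run it, which is exactly why DXR can be enabled on non-RTX cards at all.

#include <cmath>
#include <cstdio>

// Minimal sketch: the floating-point core of ray tracing is intersection
// math like this. Any FP unit can execute it; dedicated hardware only
// makes it faster.
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Distance along the ray to the first hit, or -1 if it misses.
// Assumes dir is normalized.
float raySphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = sub(origin, center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;                 // quadratic discriminant
    if (disc < 0.0f) return -1.0f;          // ray misses the sphere
    float t = -b - std::sqrt(disc);
    return (t > 0.0f) ? t : -1.0f;
}

int main() {
    float t = raySphere({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0f);
    std::printf("hit at t = %.2f\n", t);    // expect 4.00
}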

I did, because most of my 3D software refuses to run at all unless my hardware supports "DirectX".

>When are they gonna unify all this crap again
when novidia can't figure out any new way of milking the idiots.
do you remember how long they milked the same pipeline with the "dx11 is enough for everyone" meme? it took them 4 years to update their barely-works drivers to dx12.
do you remember how long they milked the physx meme?
do you remember how long they milked the titan scam?
do you remember how long they milked their dual-gpu incompetence?
do you remember how long they milked 3d glasses and 3d displays?
do you remember how long they milked the subpixel anti-aliasing fiasco?
they are still milking the "3GB of VRAM is enough for mid tier and 6GB is enough for performance tier" line.

Yes, several years later, when developers started to drop support for software rendering because there were plenty of cheap and really powerful 3D video cards on the market.
Right now, ray tracing acceleration is not very good and there aren't many options, but I imagine it will get to a point where an integrated Intel video chip will be able to do hardware ray tracing faster than the 2080.

>the performance hit isn't that bad.
>30% of the fps
So I guess if you'd be getting over 200 fps on the RTX equivalent, DXR on a non-RTX card wouldn't be so bad; 30% of 200 fps is still 60 fps.

>Right now, ray tracing acceleration is not very good and there aren't many options, but I imagine it will get to a point where an integrated Intel video chip will be able to do hardware ray tracing faster than the 2080.
ignorance is bliss.
the same shit Intel said 15 years ago about their future 10GHz CPUs, yet those idiots couldn't imagine that there are limits imposed by physics.

I kinda wanna try out 3D Vision actually after finding out there's a community that manages implementation and optimization for over 800 games. Fuck VR goggles.

Attached: images.jpg (226x223, 8K)

Ray tracing is one of those things you can easily parallelize.
Just pile up enough RT cores to get good performance and Bob's your uncle.
And by pile up I mean literally manufacturing a standard small RT core chip and stacking them on top of each other.
It's not like with CPUs, where you run into hard limits on how much you can multithread a program.
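
For what it's worth, the reason it parallelizes so well is that every pixel's ray is independent of every other pixel's. A rough, hypothetical sketch in C++ (toy names, CPU threads standing in for GPU cores) of that structure:

#include <cstdio>
#include <thread>
#include <vector>

// Sketch of why ray tracing parallelizes so easily: every pixel traces its
// own ray with no dependency on its neighbours, so the image can be split
// across as many workers (CPU threads here, RT/shader cores on a GPU) as
// you can throw at it.
static float shadePixel(int x, int y) {
    // placeholder per-ray work; a real tracer would intersect the scene here
    return x * 0.001f + y * 0.001f;
}

int main() {
    const int width = 640, height = 360;
    std::vector<float> image(width * height);

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            // each worker takes an interleaved slice of rows; no locking needed
            for (int y = (int)w; y < height; y += (int)workers)
                for (int x = 0; x < width; ++x)
                    image[y * width + x] = shadePixel(x, y);
        });
    }
    for (auto& t : pool) t.join();

    std::printf("traced %zu pixels on %u workers\n", image.size(), workers);
}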

I was going to buy a new videocard anyways. All these salty faggots that spent their life savings on 1080/1080ti's and still too poor to afford to move on need to justify keeping it for another ten years. Based Nvidia for pissing off these poor fucks.

GeforceMainLoop() {
    if (cardid < RTX_2060) run_dxr_slowly();
    else use_rt_cores();
}

Attached: GTAPEP.jpg (400x560, 187K)

>It's not like with CPUs, where you run into hard limits on how much you can multithread a program.
well, this shows how much of a meme-faggot you are.
first, there is no such thing as RT cores, those are plain ALUs.
secondly, GPUs have hard limits on how finely you can divide the algorithm; it changes from generation to generation, but the SIMT width on most novidia GPUs is 32 threads per warp.

this is like
if ( genuineINTEL ) {
use SSE;
}
else {
use x87;
}

The RT core just has dithering algorithms, the important part is the tensor core which handles the actual calculations for ray tracing.

The RT core handles calculations for BVH traversal. The tensor core handles calculations for neural nets.
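
For anyone wondering what BVH traversal actually is: it's walking a tree of bounding boxes so a ray only tests the triangles it could plausibly hit. A simplified CPU-side sketch (my own toy structures, not Nvidia's actual hardware path) of the loop the RT cores are said to accelerate:

#include <algorithm>
#include <cstdio>
#include <vector>

// Simplified sketch of BVH traversal, the work described above: walk a tree
// of axis-aligned boxes and only test primitives in leaves whose boxes the
// ray actually enters. RT cores do (a far more optimized version of) this
// in fixed-function hardware.
struct AABB { float lo[3], hi[3]; };
struct Node { AABB box; int left, right; bool leaf; };   // leaf: left = primitive id

// Classic slab test: does the ray enter the box within [0, tmax]?
static bool hitBox(const AABB& b, const float o[3], const float invD[3], float tmax) {
    float tlo = 0.0f, thi = tmax;
    for (int a = 0; a < 3; ++a) {
        float t0 = (b.lo[a] - o[a]) * invD[a];
        float t1 = (b.hi[a] - o[a]) * invD[a];
        if (t0 > t1) std::swap(t0, t1);
        tlo = std::max(tlo, t0);
        thi = std::min(thi, t1);
        if (tlo > thi) return false;
    }
    return true;
}

static void traverse(const std::vector<Node>& nodes, const float o[3], const float invD[3]) {
    std::vector<int> stack = {0};                        // start at the root
    while (!stack.empty()) {
        int idx = stack.back(); stack.pop_back();
        const Node& n = nodes[idx];
        if (!hitBox(n.box, o, invD, 1e30f)) continue;    // prune the whole subtree
        if (n.leaf) { std::printf("test primitive %d\n", n.left); continue; }
        stack.push_back(n.left);                         // descend into both children
        stack.push_back(n.right);
    }
}

int main() {
    // tiny tree: a root box split into a left and a right leaf
    std::vector<Node> nodes = {
        {{{-1, -1, 0}, {1, 1, 10}}, 1, 2, false},
        {{{-1, -1, 0}, {0, 1, 10}}, 7, -1, true},
        {{{ 0, -1, 0}, {1, 1, 10}}, 9, -1, true},
    };
    // ray from (0.5, 0, -1) along +z; 1e30f stands in for 1/0 on the flat axes
    float o[3] = {0.5f, 0.0f, -1.0f}, invD[3] = {1e30f, 1e30f, 1.0f};
    traverse(nodes, o, invD);                            // only the right leaf gets tested
}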

Actually they showed everyone why RTX cards cost so much, because even the 2060 performs better than the 1080 Ti.

Is there any hope that AMD cards will be able to run current ray-tracing-enabled games?
PhysX and HairWorks work on AMD hardware too; even if they run like shit, they still work and give everyone some idea.

dedicated RT processor when?

As long as they use DXR, then AMD just needs to provide a DXR implementation in their driver.

It literally is

Because you might want more than 2 fps

So they need to work with Microsoft?
I think they tried, found out it runs like shit even on a Vega 64, and aren't even bothering.

AMD made nvidia obsolete years ago ;)

We're comparing average ray tracing performance, not space heater simulations.

> 3D Vision
There's AMD HD3D. The difference is, AMD supported passive 3D (progressive screens output interlaced images on odd and even lines), while Nvidia supported active 3D with glasses whose shutters flicker very fast. You may still find LG 3D TVs, but I doubt you'll find the whole Nvidia setup for cheap.

>3 GB mid tier
>6 GB performance tier
3 GB is budget, 6 GB is mid. You go up to 2070 for performance tier.

With AMD you get both with the added benefit of value

>value
Until the electric bill comes

>RTX: ON
>FPS: OFF

Never; the raytracing cores are fundamentally different and are super-wide computing units unto themselves.

Tensor cores straight up aren't used for anything in games except DLSS, which is just fake supersampling. Even the denoising is done by a conventional temporal denoising method (basically a Photoshop-style filter) on the CUDA/SM cores. It's well documented.

They need to implement DXR drivers, but it won't be a good experience until AMD puts physical accelerators on their GPUs. RT cores have something like an order of magnitude higher throughput for BVH traversal than the CUDA cores.

why not just have a separate ray tracing card a la physx? why do we have to spend extra money on shit like the 2080ti that has like a ~10% improvement over the 1080 ti?

They're part of the shader pipe. A separate card would have a lot of latency

>RAM bus speed without timings

>so now that ray tracing works on non-RTX Nvidia cards,
Raytracing is older than Nvidia.

Raytracing is older than rasterization

>have the slower cards made the faster cards obsolete?
it has never worked like that.

"We know how frustrating it is when you have to pay a premium for extra 3D glasses that will become paperweights should your screen die. So we've introduced our kit, where you can swap screens as much as you want and use the same 3D glasses! It costs only as much as a new 3D screen (special 3D screen equipped with our licensed technology not included*)"

>and at 1080p the performance hit isn't that bad.
lol

>dedicated hardware
Yeah, and how did that end with G-Sync?

Yeah, because you're judging it based on DirectX. If it were Vulkan, the difference would be light years.

Vulkan can't make raytracing cores appear from nowhere.
It's still a gimmick, just like 3D acceleration was at the time of the S3 Virge.
The 2D performance of the S3 was just fucking awesome, but the 3D was a gimmick.

FreeSync was always going to win simply because of the cost of G-Sync modules. My instinct is Nvidia thinks Intel will support FreeSync at some not-too-distant point in the future, which WILL kill off any use for G-Sync outside of the really expensive screens.

You didn't get the point

objectively superior to freesync?

>being this autismo
The fuck dude, the point was to highlight that it wouldn't be the first time nvidia makes useless hardware solutions. Dedicated hardware is cool n shiet, but sometimes a clever software implementation totally does the trick, at which point the hardware solution becomes redundant, pricey, and even obsolete in some respects.

it's not useless if it's the best solution.

Not the same guy but only if people are willing to pay for it. The consumer decides what's best

>raytracing core
LMAO

It depends on how big a deal the problem it's trying to solve really is. That's why you don't buy a fucking shotgun to kill rats.

Actually a 1080 Ti significantly outperforms a 2060.

somehow you made that stream of text unpalatable

what is gsync laptop?

jfyi the rtx scam works by doing the equivalent of ~40 TFLOPS of what plain FP units could do, and there's no card with 40 TFLOPS out there, so it really works, but it's still not enough for high quality real-time ray tracing at all.

That's an unjewy amount of quotes.

no, go ahead and try ray tracing on non-RTX cards, it's barely playable

The best-looking raytracing title (a Minecraft mod) can run at 720p/30 FPS on non-RTX cards.
It actually doesn't even use the RTX hardware at all.

>720p/30FPS
>playable

If I told you that people played games at 600p/25 FPS on the last console generation, would you believe me?

no

Well, people did, especially Sony fanboys.

So what happens to traditional graphics when all gpus in the future are designed for ray tracing with specific hardware? Will performance suffer in older titles?

For a single guy's mod on the fucking java version it's impressive. I imagine if it could be implemented into the better running and more modern Bedrock edition it could realistically target 1080p/60 no problem on modest non-RTX hardware.

If we consider ray tracing the next step for video cards, I would say no.
2D acceleration never actually got worse when games jumped to 3D, but video chips don't try to deliver much better performance than 2D applications need.

2060 is not budget, it's mid-tier

AMD didn't even have a 30-120hz freesync monitor that worked without breaking/blacking out for 2 years. G-sync enjoyed 3 years of absolute market dominance and nvidia's been enjoying 60% product margins on everything they do, while AMD was in the red year after year until just recently. You seriously expect me to believe freesync won? The software based gsync that works with freesync monitors is just the nail in RTG's coffin. There's 0 reason to buy AMD now

Why broadcast that you're an idiot?

The 3 GB version is

>rx580 still beats 1660 in everything but power consumption
correction: there's 0 reason to buy AMD unless you're spending less than $600 on a build

When cards start doing INT32, FP32, FP64 and FP16 in the same cycle.
Every DX12-capable card can do DXR. The ray tracing path is heavy on INT32 while everything else is FP32, and these cards can do INT32 and FP32 at the same time; what the RT cores take over is the INT32 side, so the pipeline flows much better.
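
Rough illustration of the INT32/FP32 mix being described (hypothetical toy code, not any vendor's pipeline): a single traversal step constantly interleaves integer index work with floating-point tests, which is why hardware that can issue both at once, or offload one side, keeps the pipeline fed.

#include <cstdio>

// Hypothetical toy example of the INT32/FP32 mix described above: one step
// of descending a tree interleaves integer index/address work with a
// floating-point comparison. Hardware that can issue both kinds at once
// (or hand one side to a dedicated unit) keeps loops like this fed.
struct Node { int left, right; float split; };

static int descend(const Node* nodes, int idx, float x) {
    const Node& n = nodes[idx];          // INT32: index/address arithmetic
    bool goLeft = (x < n.split);         // FP32: compare against the split plane
    return goLeft ? n.left : n.right;    // INT32: pick the next node index
}

int main() {
    Node nodes[3] = {{1, 2, 0.5f}, {-1, -1, 0.0f}, {-1, -1, 0.0f}};
    std::printf("next node: %d\n", descend(nodes, 0, 0.25f));   // expect 1
}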

That's a cunt move.

>fake supersample
It is more like a shitty TAA.

It depends on the games and what's on screen at the time. BFV isn't too bad, whereas Metro is a catastrophe on Pascal. And the frame dips are very much related to what you're actually seeing. Moving into an area with raytracing cuts the frames noticeably.
You basically need a Titan Xp just for 1080p and even then the experience isn't consistent. The frames are wild. So the RT cores range from "pretty useful" to "necessary" depending on the situation.

>get ready
for what? the same fucking products in a new box. AMD has a dumb-as-rocks marketing department.

Attached: hgfxqy2wfwt21.png (938x563, 349K)

DXR is part of DirectX 12 (specifically, it's some new structures, a pipeline, a method to actually generate the rays, and some HLSL shader types).

DirectX itself is just a hardware-agnostic API that will run on ANY hardware the vendor provides an implementation for. If you want to say something is obsolete you would have to actually benchmark: 1) the actual DirectX implementations, and 2) the delta in performance based on the ASICs in the GPU card you actually buy (e.g. what these *cores* are actually doing).
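
As a concrete illustration of "the vendor provides an implementation": with a D3D12 device in hand you can just ask the driver what DXR tier it exposes. Minimal sketch below (Windows-only, link against d3d12.lib, error handling kept crude); a GTX card on the newer driver can report tier 1.0 here even without RT cores, because the API only reflects what the driver implements.

#include <windows.h>
#include <d3d12.h>
#include <cstdio>

// Sketch: ask the D3D12 driver whether, and at what tier, it implements DXR.
// Link against d3d12.lib; requires a Windows SDK new enough to know OPTIONS5.
static bool ReportRaytracingSupport(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5)))) {
        std::printf("driver/runtime predates the DXR feature query\n");
        return false;
    }
    bool supported = opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
    std::printf("raytracing tier: %d (%s)\n",
                static_cast<int>(opts5.RaytracingTier),
                supported ? "supported" : "not supported");
    return supported;
}

int main() {
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("no D3D12 device available\n");
        return 1;
    }
    ReportRaytracingSupport(device);
    device->Release();
    return 0;
}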

techpowerup.com/reviews/MSI/GeForce_GTX_1660_Gaming_X/28.html

???

Attached: relative-performance_1920-1080.png (500x850, 50K)

What's the last "safe" driver for a 10 series?

Attached: 1546826451562.jpg (846x1200, 169K)

>so now that ray tracing works on non-RTX Nvidia cards
Ray tracing has always worked on all GPUs. The difference is RTX has specialised hardware.

1 Terahertz Indium Antimonide?

that's some strong delusion you have going on there.

if something happens to my current 144Hz monitor, my next one will absolutely include G-Sync.

no, rtx was already obsolete

the performance hit is god awful