RTX a potential flop

I have a hypothesis that the improvement in regular benchmarks between the 20 series and the 10 series will be marginal. Ray tracing has been in development for a while and was slapped on in the last few years as a gimmick/experiment; while it may be the future in 5-10 years, it will take a long time to be refined.

I predict that RTX will flop under the following conditions:

> If the RTX 2070 performance doesn't beat out the GTX 1080
> If the RTX 2080 performance doesn't beat out the GTX 1080 Ti

Ray tracing will also probably have many bugs and issues in the initial few years, and competitive/multiplayer gamers will be the first to disable it even if it's available. It will mostly be for cinematic-type quality in some single player titles.

This would mean that fewer gamers will buy the 20 series, and the 10 series may see discounts and more buyers due to the current overstock.

But I guess we will soon see with benchmarks.

Attached: nvidia-turing.png (1214x420, 412K)

Other urls found in this thread:

blogs.msdn.microsoft.com/directx/2018/03/19/announcing-microsoft-directx-raytracing/
youtube.com/watch?v=mtHDSG2wNho
tsmc.com/english/dedicatedFoundry/technology/16nm.htm
anandtech.com/show/13249/nvidia-announces-geforce-rtx-20-series-rtx-2080-ti-2080-2070/2
twitter.com/SFWRedditGifs

>AMDead is this desperate
pathetic

The RTX 2070 will beat the GTX 1080 by 5%, maybe 10%.

Same for the RTX 2080 and the 1080Ti. Though we might see this stretch as far as 15-20%.

>I have a hypothesis
pls kill yourself

The 2070 is 15% slower than the 1080 and the 2080 10% slower than the 1080 Ti, judging by the number of CUDA cores and boost clock, which have historically translated well into benchmark performance.

The RTX 2070 was already mentioned to be faster than the Titan Xp

Keep trying to FUD though

2070 is so far behind 1080 it isn't even funny.
2070: 2304 CUDA cores * 1620MHz boost clock * 2 = 7.465 TFLOPS
1080: 2560 CUDA cores * 1733MHz boost clock * 2 = 8.872 TFLOPS
The 2080 Ti, despite its price, is the only competitive product among the three announced, as it's almost 20% faster than the 1080 Ti
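(If you want to sanity-check those numbers yourself, here's a rough sketch in plain Python using the core counts and boost clocks quoted above; the 2 FLOPs per CUDA core per clock assumes one FMA per clock.)

# Rough FP32 throughput from published specs (2 FLOPs per core per clock, i.e. FMA)
def fp32_tflops(cuda_cores, boost_mhz):
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

print(f"RTX 2070: {fp32_tflops(2304, 1620):.3f} TFLOPS")  # ~7.465
print(f"GTX 1080: {fp32_tflops(2560, 1733):.3f} TFLOPS")  # ~8.873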

Even the 2080 is still slower than the Titan Xp. Though the 2080 Ti Founders Edition is only 5% slower than the Titan V.

How so?
GTX 1080 has 2560 CUDA cores on 16nm
RTX 2070 has 2304 CUDA cores on 12nm

GTX 1080 is 16nm, RTX 2070 is 12nm, that WILL make a difference.

What are you gonna do?
Buy an AMD GPU?
:^)

Number of cores and their frequency are enough to determine performance. It has nothing to do with 16nm vs 12nm.

how much of the chip is the ray tracing module? it's way bigger despite being on a smaller process. i wonder how much the ray tracing module adds to the price.

Jen-Hsun himself said that in the UE4 Infiltrator demo at 4K, the GTX 1080 Ti only gets 30 FPS while the RTX 2080 Ti gets more than DOUBLE the FPS

You dumb faggots should know by now TFLOPs rating is MEANINGLESS

AMD claims over 12TFLOPS but Pooga can't even outperform the GTX 1080

bullshit.

The GTX 980 Ti has 2816 CUDA cores; think it will be better than the RTX 2070 too?

I have 12 Nvidia GPUs across all my workstations. 4 of them Titan Vs.

2070, even the founders edition, clocks slower than 1080.

Just ignore the retard. We had threads of some idiot claiming vega did 200+fps in Prey based on analysis of youtube video frames. Surprisingly he vanished when benchmarks came in.

Nope, I have an NVIDIA card atm, don't care about companies only about the product.

The 980 Ti is clocked at 1202MHz while the 2070 is at 1620/1710MHz. The 1080 is at 1733MHz and has more CUDA cores than the 2070.

by leatherman
He said it's faster in gigarays
Doesn't mean shit in real world applications

It's already a flop because it costs much more than the previous generation. I predict the 2070 will be at 1070 Ti levels at best in games without the RTX meme enabled. They didn't show ANY normal benchmarks/comparisons except for some weird RTX-OPS numbers, for a reason.

A few factors to consider:
IPC almost certainly won't drop and will probably rise
448 vs 352 GB/s memory bandwidth
Titan V already has better Vulkan and async performance
possibly fatter caches to support the RT cores and concurrent int/fp
lower on-chip latency on 12nm vs 16nm
concurrent int/fp (might not matter in graphics + won't matter in current games)

>4 Titan Vs
post proof
Also why would you care if the Titan V is already shitting on literally anything RTX

IPC/cache/latency won't matter in parallel processing because the latency is hidden

I care about Nvidia

Attached: Untitled.png (1857x736, 159K)

My main question is why they would have such a huge bandwidth increase without the flops to use it.

Is your CPU okay?

It's useful in quite a few games and for mining certain crypto.

Because of Ray Tracing?

>IPC ... won't matter in parallel processing because the latency is hidden

IPC is not a factor in the latency.

You using this as a little render farm?

>Surprisingly he vanished when benchmarks came in.

I'm sure he's still here, shilling more retarded AMD bullshit.

>he thinks smaller die size means better performance
kek

Google latency hiding. When core utilization is at 100%, IPC/cache/latency stop being the bottleneck.
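(To put the latency-hiding point in back-of-the-envelope terms, here's a tiny illustrative sketch; the latency and issue-rate numbers below are assumed round figures, not actual Turing specs.)

# Little's law: work in flight = latency x throughput
mem_latency_cycles = 400  # assumed global memory latency, in cycles
issue_rate_per_sm = 4     # assumed warp instructions issued per cycle per SM

# independent warp instructions that must be in flight per SM so the schedulers
# always have something ready while memory requests are outstanding
in_flight_needed = mem_latency_cycles * issue_rate_per_sm
print(in_flight_needed)   # 1600 -> thousands of threads per SM; once occupancy is that
                          # high, per-thread IPC/latency matters far less than cores x clock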

Yes, it just reports the wrong value

It's for machine learning.

Do you do deep learning research?

Neat-o
What sort of work do you do?

I do.

It's the same 16nm process on its 4th iteration, slightly improved again. There's no density improvement.

proof?

Might be true for Volta, but has anyone actually had a good look at these cores yet?

Can I ask why you use an AMD cpu instead of intel (for deep learning)?

More random questions:
What will everyone be using the integer performance for? Is there any use in graphics?
Will the overclocking headroom be better or worse than Pascal? Bringing a 2080 Ti to 2GHz would be very impressive.

Are you fucking retarded? TSMC 16nm has been renamed to 12nm with the normal leakage improvements as a process matures. It's the same shit. You may clock slightly higher but there's no density improvement. You're talking about possible architectural changes but that has nothing to do with the process.

Indexed arrays in GPU computing: that's the amazing innovation of the Volta SM.

My last one is Intel. Training runs on the GPU only; only data preprocessing/generating training data on the fly is done on the CPU.

Attached: Untitled.png (498x458, 30K)

How will it flop? There is no competition. AMD won't be releasing a consumer GPU until next year, and Intel's is at least 2 years away. They've been saying they are limiting production because they know the mining market isn't sustainable. They even held off on the release to sell off remaining pascal cards.

They can do whatever they want right now.

Fewer people will buy the 20 series and will stick to the overstocked 10 series, due to prices and performance (if it turns out to be true), or will simply wait another 2 years.

Yes, I agree; for the high end AMD doesn't make much sense.

It's all speculation at this point until benchmarks and reviews are out. But notice that no benchmarks were released during the presentation, and the main focus was a lecture on how big of a leap it is in ray tracing. The general public doesn't care that much about it, in my opinion. Soft shadows and reflections are nice, but so are PhysX and HairWorks, and they are not a necessity for most.

BTW, some level of ray tracing could already be done 10 years ago. Quake Wars: Ray Traced showed us that with a working sample; if it was so revolutionary, why did few if any devs actually implement it? I don't think it was really due to a lack of hardware. I think they could have done it on certain textures using various techniques compatible with non-dedicated RT hardware. It's just that high fidelity can be achieved without it using various tricks.

the most popular card on Steam right now is the 1060, which was, what, 300 bucks at launch? Also most games are meant to run fine on even lower hardware, I'm talking 1050 (non-Ti).
Novidya niggas need to throw a bone in that price range (300USD). Cards that go above 500USD just don't sell that much; today's thingy was cool and all, but those cards are, as always, luxury products.
I'm waiting to see what they'll do in the area of the market that actually moves plenty of cards. If they get something with 8GB of RAM that will kill the 1060/RX 580 for the same MSRP, I'm in. If not, I'm not buying shit.

Another possibility is that Intel releases something cheap in the 1060/1070 tier in 2020; I'm talking about the $100-$150 range, which would be the perfect card for all the casuals.

The majority of games are still targeting 480/580/1060 mid/high-tier performance at 1080p, and that will continue for the next few years.

>the man trying to sell me stuff said so

I braved myself into enemy territory in order to bring this back to you guise:
Core count x 2 FLOPs per core per clock x boost clock (MHz) / 1,000,000 = TFLOPS (quick sketch reproducing this after the table below)

Founder's Edition RTX 20 series cards:

RTX 2080Ti: 4352 x 2 x 1635MHz = 14.23 TFLOPs
RTX 2080: 2944 x 2 x 1800MHz = 10.59 TFLOPs
RTX 2070: 2304 x 2 x 1710MHz = 7.87 TFLOPs
Reference Spec RTX 20 series cards:

RTX 2080Ti: 4352 x 2 x 1545MHz = 13.44 TFLOPs
RTX 2080: 2944 x 2 x 1710MHz = 10.06 TFLOPs
RTX 2070: 2304 x 2 x 1620MHz = 7.46 TFLOPs
Pascal GTX 10 series cards:

GTX 1080Ti: 3584 x 2 x 1582MHz = 11.33 TFLOPs
GTX 1080: 2560 x 2 x 1733MHz = 8.87 TFLOPs
GTX 1070: 1920 x 2 x 1683MHz = 6.46 TFLOPs
Some AMD cards for comparison:

RX Vega 64: 4096 x 2 x 1536MHz = 12.58 TFLOPs
RX Vega 56: 3584 x 2 x 1474MHz = 10.56 TFLOPs
RX 580: 2304 x 2 x 1340MHz = 6.17 TFLOPs
RX 480: 2304 x 2 x 1266MHz = 5.83 TFLOPs
How much faster from 10 series to 20 series, in TFLOPs:

GTX 1070 to RTX 2070 Ref: 15.47%
GTX 1070 to RTX 2070 FE: 21.82%
GTX 1080 to RTX 2080 Ref: 13.41%
GTX 1080 to RTX 2080 FE: 19.39%
GTX 1080Ti to RTX 2080Ti Ref: 18.62%
GTX 1080Ti to RTX 2080Ti FE: 25.59%
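(Quick sketch reproducing the uplift percentages above in plain Python; the core counts and boost clocks are the ones quoted in this post, 2 FLOPs per core per clock assumes FMA, and the results can differ from the listed percentages by a few hundredths of a percent because those were computed from rounded TFLOPS values.)

# (cores, boost MHz) pairs as quoted above
SPECS = {
    "GTX 1070": (1920, 1683),
    "GTX 1080": (2560, 1733),
    "GTX 1080 Ti": (3584, 1582),
    "RTX 2070 Ref": (2304, 1620),
    "RTX 2070 FE": (2304, 1710),
    "RTX 2080 Ref": (2944, 1710),
    "RTX 2080 FE": (2944, 1800),
    "RTX 2080 Ti Ref": (4352, 1545),
    "RTX 2080 Ti FE": (4352, 1635),
}

def tflops(cores, boost_mhz):
    return cores * 2 * boost_mhz / 1e6  # cores x 2 FLOPs x MHz, scaled to TFLOPS

PAIRS = [
    ("GTX 1070", "RTX 2070 Ref"), ("GTX 1070", "RTX 2070 FE"),
    ("GTX 1080", "RTX 2080 Ref"), ("GTX 1080", "RTX 2080 FE"),
    ("GTX 1080 Ti", "RTX 2080 Ti Ref"), ("GTX 1080 Ti", "RTX 2080 Ti FE"),
]
for old, new in PAIRS:
    uplift = tflops(*SPECS[new]) / tflops(*SPECS[old]) - 1
    print(f"{old} -> {new}: {uplift:.2%}")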

Actually I do think that Vega will find renewed competition against Pascal in ray tracing territory. Compute is Vega's sole advantage over Pascal, and DXR relies heavily on it.

blogs.msdn.microsoft.com/directx/2018/03/19/announcing-microsoft-directx-raytracing/
>You may have noticed that DXR does not introduce a new GPU engine to go alongside DX12’s existing Graphics and Compute engines. This is intentional – DXR workloads can be run on either of DX12’s existing engines. The primary reason for this is that, fundamentally, DXR is a compute-like workload. It does not require complex state such as output merger blend modes or input assembler vertex layouts. A secondary reason, however, is that representing DXR as a compute-like workload is aligned to what we see as the future of graphics, namely that hardware will be increasingly general-purpose, and eventually most fixed-function units will be replaced by HLSL code. The design of the raytracing pipeline state exemplifies this shift through its name and design in the API. With DX12, the traditional approach would have been to create a new CreateRaytracingPipelineState method. Instead, we decided to go with a much more generic and flexible CreateStateObject method. It is designed to be adaptable so that in addition to Raytracing, it can eventually be used to create Graphics and Compute pipeline states, as well as any future pipeline designs.

Against the new RTX though? Probably gonna lose, but not as hard as nvidiots are expecting it to.

Your comparison is probably not that far off, but TFLOPS don't mean all that much anymore due to various optimizations.

We saw that with the 480 and the 1060: the 1060 generally performed better despite being the weaker card on paper.

In fact the entire AMD lineup didn't scale all that well when compared to NVIDIA on pure TFLOPS.

Pooga can only do 300M rays a sec

Turing does 10 BILLION rays a sec

That's 33 times slower than Turing; the gulf is massive, and it will only increase further as Nvidia moves towards its 10 TRILLION rays a sec goal
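(Taking both quoted figures at face value, the ratio checks out; whether the 300M number for Vega is accurate is another question.)

# claimed ray-casting rates, as quoted in this thread, not independently verified
turing_rays_per_sec = 10e9   # 10 gigarays/s claimed for Turing
vega_rays_per_sec = 300e6    # 300 megarays/s claimed for Vega
print(turing_rays_per_sec / vega_rays_per_sec)  # ~33.3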

your meth is sound.
indeed.
consider laying off the pipe.

people who actually believe they've been working on this for 10 years are idiots

they've worked on this technology for the last 2 years, if that.

that wasn't mine.
quoting from. well, you know.

KILL YOURSELF NOW, FAGGOT

POOGA IS GARBAGE AT RAY TRACING

Attached: Pooga.png (1920x1080, 273K)

They probably ran the Quake Wars ray tracing demo 10 years ago and consider it dev work.

youtube.com/watch?v=mtHDSG2wNho

Are you creating an ex machina sex bot?

Attached: 1df2573555.png (627x519, 103K)

Attached: dd84031ebd.png (633x329, 80K)

And the GTX 580 was the full die; the recent x80s are essentially what the x70 would have been in 2011

pretty much this.

Attached: 1534798479981.png (825x981, 346K)

3B vs 18B transistor count

>pound value plummets because brexit
>dollar inflation
wooow it's more expensive

10 Gbps GDDR5X vs GDDR6 at 14 Gbps. Yeah, RTX is gonna kill the 1000 series

Full-fat flagship die (xxx102) vs a cut-down xxx104, third in line from the mid-tier of dies, plus a sequential price increase, while you shout THANK YOU BASED NVIDIA along the entire ride, shouting even louder with each incremental price increase paired with a SKU-stack downgrade.

Besides, this is technically a paper launch.

let's all take a deep breath and laugh at the retards that actually preordered this
>t. 1080ti owner bought new at lowest possible price before mining price hikes

>proof
On TSMC's site, 16nm and 12nm are listed as the same process; take a look yourself
tsmc.com/english/dedicatedFoundry/technology/16nm.htm

Node size makes no difference except to power draw, frequency, and transistor density. Higher transistor density means you can fit more on the same chip size; we know the 2070 has fewer CUDA cores than the 1080, so that doesn't factor in. We know what the clocks are: the 2070 is clocked lower than the 1080. The only thing left is power draw, which isn't really relevant for the average user.

I really wanna see the core diagram breakdown for this GPU. I suspect nvidia might be double-dipping on the RT core and CUDA core counts.

>No HDMI 2.1
>No HDMI 2.1
>No HDMI 2.1
>No HDMI 2.1
>No HDMI 2.1

Nvidia wants to guarantee that your only options for a gaming TV are their "BFGDs" instead of all the 4K 120Hz TVs coming out next year from literally every manufacturer.

for the*

Attached: 206.gif (291x386, 607K)

They're going to milk the shit out of HPC people and HEDT/enthusiast kids; it seems like Nvidia will net shitloads of profit unless AMD suddenly makes ROCm really good and shits out really cheap Navis.
TFLOPS are a pretty good measure when architectural changes are minimal or nonexistent, and when memory bandwidth/latency isn't bottlenecking you
noice

IDK, considering the sheer number of CUDA cores I'm inclined to sell both my 1080s for a 2080 Ti, despite the price.
SLI these days is mostly a miss; even games with SLI support are getting less than 50% scaling. If with the 2080 Ti I can land 60-70% performance over a 1080, I'm getting one.

>he fell for the SLI meme
That's probably not a bad decision though, if you can get ~$300-400 for each GPU.

@adored
Go off pajet

I would consider a 1080 Ti. I had a 1070 first, and it was a night-and-day difference. I may buy the new Ti if I have money left over and get at least 50% extra performance, but I'll first wait for the actual prices after release. Maybe they drop a bit.

When I bought the second 1080 there wasn't a 1080 Ti, and still, I never sold both for a 1080 Ti because in most games I was getting better perf than a single 1080 Ti. The problem is more and more recent games just don't have SLI or have very shitty SLI support. With 160-170% of the performance of a single 1080 I use less energy and can ramp up perf in games where I can't get SLI to work properly.
Theoretically 2x 1080s are 5120 CUDA cores, but I can't get anywhere near 80% scaling, with the notable exception of well-scaling engines like Hitman's.

A 30% performance increase isn't good enough to change from a 1080 to a 1080 Ti when a good deal of games still let me get 50% scaling. This was the only reason I never bothered with the 1080 Ti.
I'm still waiting for benchmarks, but considering the numbers already shown for the Titan V with a similar CUDA core count, the 2080 Ti probably gets Titan V-binned cores, so it's probably ~70% over 1080 performance.

Depends on the resolution. At 1080p it can be only 10-20%; at 4K it's more like 50%. I wonder how the 2080 Ti performs in 4K, but it's likely to be outside a reasonable price/performance ratio. And OC: if the new Ti does 2.5GHz it may be better, but its clocks are lower than the 1080 Ti's. I run mine at 2000/6150.

They don't need to do shit as they have no real competition. Poolaris and Pooga are both worse in performance/watt and perf/€. They can continue jewing the fuck out of everybody until AMD unfucks their GPU game or lolIntel actually manages to release something resembling a consumer GPU.

> I wonder how the 2080 Ti performs in 4K
same here.
If there were a generational leap for driving 4K resolutions, I'm sure they would have at least dropped a footnote or offhand comment in between the awesomeness of all things ray tracing. No mention of it, and no benchmarking whatsoever in tangible metrics/games from which we could infer the overall perf increase, does not bode well.

I just use 1440p; 4K is still a meme and I think it's much cheaper to run 1440p @ 120-140Hz.
4K 100+ Hz displays are insane, way too pricey. If I can drive most games at 1440p at at least 120Hz, I'm getting a 2080 Ti

Don't believe random youtube (((reviewers))) regarding Vega. My testing shows it sits squarely between a 1080 and a 1080 Ti. It starts getting within 10% of a 1080 Ti when undervolted, overclocked, and with the power target maxed out.

The Asus Strix model is obviously a dumpster fire, probably at nvidia's request to keep it from competing with the 1080 Strix. Isn't it curious how that's the only Vega most (((reviewers))) will touch?

Attached: nitroplus64large.png (1024x1024, 740K)

They showed that benchmark with the soldiers, and Jen said the 1080 Ti runs it at 38 FPS while this was outputting 68, so the 1080 Ti is almost 50% slower if that's confirmed.
I will only decide when real benchmarks arrive

that's pretty and all, but what's the point if you can't actually buy one? they're gone

It's the Infiltrator tech demo.
It was released in 2013, but it's also constantly updated with the newest stuff for Unreal Engine. So, not sure if it already had ray tracing or not. Most likely it did.

>uses TFLOPS to compare cards with similar architectures, particularly with shader core design
>"but TFLOPs don't mean all that much anymore due to various optimizations."
>proceeds to compare NVIDIA cards vs AMD cards with completely different architectures
brainlet

> I wonder how the 2080 Ti performs in 4K
Look at benchmarks for the Titan V

oh, and he said 78, but the display was v-synced and locked to 60Hz. The FPS remained at 59.9/60, unless it had a hiccup when I wasn't paying attention.

>this is how amd shills respond
holy shit

Oh yeah. Didn't they have full 64-bit float perf?

It's weird that they didn't say a word about performance compared to the previous gen, except RTX OPS and PEDOFLOPS of course

anandtech.com/show/13249/nvidia-announces-geforce-rtx-20-series-rtx-2080-ti-2080-2070/2

>At any rate, with NVIDIA having changed the SM for Turing as much as they have versus Pascal, I don’t believe FLOPS alone is an accurate proxy for performance in current games. It’s almost certain that NVIDIA has been able to improve their SM efficiency, especially judging from what we’ve seen thus far with the Titan V. So in that respect this launch is similar to the Maxwell launch in that the raw specifications can be deceiving, and that it’s possible to lose FLOPS and still gain performance.

IGNORANT RETARDS TRYING TO USE TFLOPS AS PERFORMANCE MEASURE BTFO

I cannot wait to see those performance per dollar benches.

why are you speculating on this when you don't actually know anything about the hardware

>78 FPS
>Stuck at 59 FPS
>Drops to 47 FPS during demo run
Wew lad!