Possible RTX 2080 3DMark Leak!

Beats out the 1080ti but is it the 2080ti or the 2080?

Nvidia's "50%" claim is looking like a lot of bullshit.

Source: 3dmark.com/compare/spy/4293425/spy/4251750

Attached: RTX2080score.png (1178x843, 91K)

Other urls found in this thread:

wccftech.com/nvidia-geforce-rtx-2080-3dmark-timespy-score-leaked-clocked-at-2ghz-and-beats-a-gtx-1080-ti-without-ai-cores/
twitter.com/NSFWRedditImage

It's 50% faster in highly specialized GPGPU workloads--which games are not.

But there's zero GPU competition against Nvidia currently, and they have an overwhelming amount of 1xxx series cards in inventory because they overbet on the cryptomeme market and fucked up. In order to not cannibalize their 10 series inventory, they are competing against themselves. And everyone in the market knows that this kind of behavior leads to unethical practices like non-compete agreements and regional monopolies.

In Nvidia's case, they are adopting the practice of generational monopolies: the 1xxx series retains its price bracket, and the RTX cards move into an ultra-premium price bracket reserved for the top 1% and 0.1% of the market.

But Jow Forums, /v/, reddit, everyone here is the reason for this. Even when AMD/Radeon put out better GPUs that outperformed Nvidia in proven benchmarks, people STILL bought Nvidia cards. Anyone who buys these cards is a massive faggot and is contributing to the problem of non-competition, API consolidation, and bullshit like this.

Meh, looks like just the regular one-tier step-down, same as the 1070 matching the 980 Ti; there might be ~10% more clock headroom.

Other than that, the new "50%" might only show up in ray-traced games. Still very doubtful.

It's also a 742mm^2 GPU die. It's going to run hot and consume a lot of fucking power. Big dies always do; the laws of physics dictate as much. Further, and this is the most important point: 2/3rds of the chip is useless in any game that doesn't use the RTX API for rendering. In other words, every game released in the last 10 years is only going to rely on the shader cores in the GPU. This means it's entirely possible that 2/3rds of the chip will be active, draw power, and do literally nothing.

Think about that for a second: you're paying $1200 for a product where 66% of the on-chip hardware logic is not utilized. GTAV, Fallout 4, Doom, Assassin's Creed, Mass Effect Andromeda, Fortnite, etc. All these games were created and released before the 2xxx series launched, before the "RTX" API existed. None of these games and their render paths do ANYTHING that requires an RT core or tensor cores. It's possible that RT cores are just more shaders and can be repurposed to do shading with some driver flags. But tensor cores are fundamentally and physically different from shaders--these are FIXED FUNCTION UNITS. THEY CANNOT DO ANY SHADING.

This means that even in the best case scenario, 33% of the chip is useless at $1200. That's an insane cost for early adoption. That's like buying an Oculus Rift or HTC Vive and being told that half of the screen in each eye is disabled at all times--because all games released before the HMD don't natively support it. People would have lost their fucking shit--so Nvidia should not be given a pass for this. At all. Anyone who tries to defend this is essentially defending a monopoly that hurts the consumer and allows Nvidia to release a product that would have cost ~$800 at a ~$1200 base MSRP: $400 more because of that extra 1/3rd of silicon in Tensor cores.
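Spelling that math out, a minimal back-of-envelope sketch (every number in it is the post's own assumption, not a measured figure):

```python
# Back-of-envelope math for the claims above. All figures are the post's own
# assumptions (MSRP, hypothetical shader-only price, idle-silicon fractions).
msrp        = 1200    # claimed RTX price
shader_only = 800     # what the post guesses a shader-only part would cost
idle_worst  = 2/3     # worst case: tensor AND RT cores both sit idle
idle_best   = 1/3     # best case: RT cores repurposed as shaders, tensors idle

for label, frac in [("worst case", idle_worst), ("best case", idle_best)]:
    print(f"{label}: ~${msrp * frac:.0f} of the ${msrp} card is buying idle silicon")

print(f"premium over the hypothetical shader-only card: ${msrp - shader_only}")
```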

Old news:

wccftech.com/nvidia-geforce-rtx-2080-3dmark-timespy-score-leaked-clocked-at-2ghz-and-beats-a-gtx-1080-ti-without-ai-cores/

It's the 2080 without using 2/3rds of its cores

>6% faster
>30% increase in price

Even Intel didn't price gouge their own market this hard.
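Quick sanity check on what those two numbers mean for value, taking the leaked score and the MSRPs at face value (both are assumptions at this point):

```python
# Relative value check. The ~6% and ~30% figures come from the thread and the
# leak, so treat them as assumptions rather than confirmed numbers.
perf_gain  = 1.06   # ~6% faster than a 1080 Ti in the leaked Time Spy run
price_gain = 1.30   # ~30% higher launch MSRP

perf_per_dollar = perf_gain / price_gain
print(f"relative perf per dollar: {perf_per_dollar:.2f}")              # ~0.82
print(f"i.e. ~{1 - perf_per_dollar:.0%} worse value than a 1080 Ti")   # ~18%
```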

I bought four R9 280X cards. Don't blame me. I just pulled the trigger on a ROG Strix 1080 Ti OC for $540 shipped using the fleabay coupon code yesterday. If AMD releases a Navi-based GPU that beats the 1080 Ti for a reasonable amount of money, I'll sell my 1080 Ti and buy that.

Either way, people who are overpaying for these 20 series cards with shitty price/performance ratio are retarded.

Why did that person disable 2/3 of the cores?

>1xxx series retains its price bracket
Just bought a new GTX 1080 for $348. Prices have been dropping like crazy *because* nvidia overbet on cryptomeme. Used cards are even cheaper.

>disabling

But that's wrong. RT cores (assuming they're fixed function) don't shade, and Tensor cores are INCAPABLE of shading. 1/3rd of the card is Tensor, 1/3rd is RT, and the last 1/3rd is shader. When rendering a frame, only the shaders are necessary.

Any game released up until about 6 months ago, going back to the dawn of computer gaming, will only utilize 1/3rd of the card AT ANY GIVEN TIME.
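For what it's worth, tensor cores really are only reached through explicit matrix-math kernels (GEMMs via cuBLAS/cuDNN, or features like DLSS built on top of them), not through ordinary shading work. A minimal sketch of the difference, assuming a Volta/Turing card and a CUDA build of PyTorch:

```python
import torch

# FP16 matrix multiply: on Volta/Turing, cuBLAS dispatches this to
# tensor-core GEMM kernels. This is the kind of work tensor cores exist for.
a = torch.randn(4096, 4096, device="cuda", dtype=torch.half)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.half)
c = a @ b

# A "shading-like" per-element workload: scale, bias, clamp. This runs on the
# ordinary FP ALUs (the shader cores); the tensor cores never touch it.
pixels = torch.rand(1920 * 1080, 4, device="cuda")
shaded = (pixels * 0.5 + 0.25).clamp(0.0, 1.0)
```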

They're probably dropping cause Novidea realized that no one's gonna buy their 2xxx series cards at prices that high, but they need to get rid of excess inventory and they also need to recoup the R&D costs of the Turing uArch. Since AMD is busy in the console space with semi-custom and Navi isn't until next year, they might as well drop prices and make some money quickly.

>They're probably dropping cause Novidea realized that no one's gonna buy their 2xxx series cards
Probably, but a 1080 for $350 is a pretty good deal, shitty monopolies aside. For 1440p I can't think of a currently released game that I won't be able to max out at over 60 fps (and most at 100+), provided I don't go overboard with 4x MSAA.

Enthusiasts will still eat up the 20x0 series cards, though. Won't be a 50% increase, but assuredly they'll be about in line with each new card generation's step up, performance wise. That's enough for the people who always need to have the latest/greatest. My cousin's friend is one of those types-- he makes about $80k a year, lives in a really cheap area, doesn't have a social life, and spends all of his money on excessive PC builds. Like 3-way SLI Titan Xp builds. People like him are Nvidia's enthusiast bottom line before they get their yields in order and push price drops.

>MSAA

The memory bandwidth cost outweighs the benefits now that SMAA exists.

---

Honestly speaking though, I think Nvidia fucked up by leveraging Tensors for ray tracing in consumer GPUs. They should have held it off for a 5nm or 3nm uArch and matured it internally in their R&D labs. I think there's potentially MUCH larger value in leveraging tensors for realtime geometry deformation, for practically free performance in screen space.

When GPUs go MCM, we're going to see fixed function return to the market: one die purely tensor, another purely ray-tracer, one purely shader, a fourth purely physics, a fifth could be purely AI, and a sixth could be a master controller for scheduling/post-processing, etc. Ironically, this is exactly what the PS3 did with its Cell + Nvidia GPU. The GPU originally handled much of the rendering workload, but as time went on, the Cell's SPEs--basically vector units (like GPU shaders) clocked sky-high at 3.2GHz each--massively outclassed any GPU on the market in capability, and were flexible enough to also run some traditional CPU logic. This let many studios do tessellation, borderline GI, and other rendering techniques that weren't possible even on the most powerful PC GPUs at the time. Hell, physically based rendering & tessellation happened on the PS3 and 360 before they officially arrived PC side.

That's essentially where Nvidia is going. NVLink is essentially Infinity Fabric. They're just closing out and milking the last bit of the monolithic-die pie ($ and performance) while they can. AMD is too, for that matter, as they have yet to show any movement towards opening Infinity Fabric up to the outside world like they marketed they would. GeForce 20 is remarkable in that you're getting a taste of what's to come in a relatively affordable package--the nearest comparable card is the $3000 Titan V. Sure, this will get cheaper down the road.

1/3rd of the die isn't fucking tensor cores
nor is 1/3rd of the fucking die ray trace cores. Absolute fucking brainlets think the slide Jensen showed is the actual die layout. The fucking tensor/ray trace cores are a small percentage of each SM, you absolute retards.

Attached: turing_sm_equals_volta_redux.png (2517x3333, 355K)

Bruh, the 2 tensors per SM, compared to the INTs and FP32s & 64s across the entire GPU surface area, would approximately add up to 33%. It may not be explicitly partitioned as 1/3, 1/3, 1/3, but when it comes down to it, it's ~1/3.

Please stop trying to save face. Your flippant 1/3rd comment was based on the marketing slide Jensen used during the Turing launch, not on any actual knowledge of how an SM is populated. I've included a shot of a Volta die. SMs take up about 40% of the chip, not 100%; the other 60% is all of the I/O, interconnects, and supporting logic for the SMs. So, no... of the portion of the die dedicated to SMs (40% max), Tensor cores probably take up 1/6th and ray trace cores 1/8th. So 1/6th of 40% and 1/8th of 40% are what those cores probably take up. If you have no use for them, buy a fucking 1080ti/1080/1070/1060. So much stupid bitching about these cards w/o any appreciation for the vast amount of compute they unlock.

Attached: derp_derp.png (1105x745, 856K)
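Working those fractions through (the 40%, 1/6th, and 1/8th figures are eyeballed assumptions from the die shot above, nothing more):

```python
# Rough die-area math using the fractions claimed above (all of them are
# eyeballed assumptions from the Volta die shot, not measured numbers).
sm_fraction     = 0.40               # assumed: SMs occupy ~40% of the die
tensor_fraction = sm_fraction * 1/6  # assumed: tensor cores ~1/6 of SM area
rt_fraction     = sm_fraction * 1/8  # assumed: RT cores ~1/8 of SM area

print(f"tensor cores: ~{tensor_fraction:.1%} of the whole die")   # ~6.7%
print(f"ray trace cores: ~{rt_fraction:.1%} of the whole die")    # ~5.0%
print(f"combined: ~{tensor_fraction + rt_fraction:.1%}, nowhere near 66%")
```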

Fan boys will eat up these cards.
Cost vs performance conscious buyers will avoid.
Move on.

>a significant % of the GPU die is active but doesn't fucking do anything in 99.99999999999999999999% of all existing titles on the market
>good

Faggot.

Why did they use an OC 1080 Ti and an OC 1080 vs a golden-sample, highly OC'd 2080 Ti?

>without using 2/3 of its cores

I mean, people BELIEVE this shit? If that were right, the 2080 would be 27% faster than a 1080 Ti and the 2080 Ti would be a massive 73% faster.
Come on, let that shit die already.

>I fucked you all and don't feel bad about it one bit

Attached: DXfH-5bU8AAVVru.jpg (980x700, 122K)

Because it's not general-purpose compute, which makes it worthless for GPGPU. It's highly specialized compute for narrowly defined tasks. And good luck trying to do any general-purpose compute with a gamer card from Nvidia--they don't unlock shit until you step up to the prosumer Quadros.
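The "gamer cards are gimped for general-purpose compute" part is easy to check yourself: GeForce parts run FP64 at a small fraction of their FP32 rate, while the big compute parts don't. A rough timing sketch, assuming a CUDA build of PyTorch (exact ratios depend on the card):

```python
import time
import torch

def time_matmul(dtype, n=4096, iters=10):
    """Average time for an n x n matmul at the given precision."""
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return (time.time() - start) / iters

# On GeForce cards the FP64 number should be dramatically worse than FP32,
# because double-precision throughput is cut way down on consumer parts.
print("fp32:", time_matmul(torch.float32))
print("fp64:", time_matmul(torch.float64))
```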

>Nvlink is essentially Infinity Fabric
No it's not, and it never will be for consumers.

NVLink is basically a retarded version of IF that just takes the PCIe lanes and slaps every single one of them directly into the CPU, which you know can't happen for consumers.

Plus, no, Nvidia won't go MCM for consumers. AMD's new RTG head already explained that this is impossible unless game engines radically change their design to accommodate seeing those kinds of GPUs as one--and you know devs barely take the time to develop engines anymore, let alone implement something like this.

>tfw live in third world and 1080/1080TI prices only went up due to crypto and didn't come down and 2080TI is almost 2000$

Attached: 1527862525146.jpg (366x271, 37K)

>in today's episode: prices that only exist on Jow Forums
>up next: vega 64 scores 736 fps in 8k with 8x AA? and here's the kicker.. it wasn't even overclocked!!?
>and did this fa/g/ actually buy a pallet of 1080ti's brand new for 50$? find out.. after the break
>shit-weeb-music.mp3