The Turing architecture also carries over the tensor cores from Volta...

anandtech.com/show/13214/nvidia-reveals-next-gen-turing-gpu-architecture

>The Turing architecture also carries over the tensor cores from Volta, and indeed these have even been enhanced over Volta.

>Alongside the dedicated RT and tensor cores, the Turing architecture Streaming Multiprocessor (SM) itself is also learning some new tricks. In particular here, it’s inheriting one of Volta’s more novel changes, which saw the Integer cores separated out into their own blocks, as opposed to being a facet of the Floating Point CUDA cores. The advantage here – at least as much as we saw in Volta – is that it speeds up address generation and Fused Multiply Add (FMA) performance, though as with a lot of aspects of Turing, there’s likely more to it (and what it can be used for) than we’re seeing today.
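The claimed benefit of splitting INT off from the FP CUDA cores is that integer work (like address generation) no longer steals issue slots from FMAs. A toy Python throughput model of that idea — my own illustration, not NVIDIA's pipeline, though the 36-INT-per-100-FP instruction mix is a figure NVIDIA has cited for typical shader workloads:

```python
# Toy issue-slot model: why a separate INT pipe helps.
# Illustrative only -- not a real simulation of Volta/Turing SMs.

def cycles_shared(int_ops, fp_ops):
    # Pascal-style: INT and FP instructions contend for the same issue slots.
    return int_ops + fp_ops

def cycles_split(int_ops, fp_ops):
    # Volta/Turing-style: INT and FP dual-issue, so the streams overlap.
    return max(int_ops, fp_ops)

# Roughly 36 integer instructions per 100 FP instructions in typical shaders.
int_ops, fp_ops = 36, 100
shared = cycles_shared(int_ops, fp_ops)  # 136 issue slots
split = cycles_split(int_ops, fp_ops)    # 100 issue slots
print(f"speedup from splitting: {shared / split:.2f}x")  # 1.36x
```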

So many uneducated morons on Jow Forums and the Internet were saying it was nothing but a Pascal refresh, enjoy eating crow now

Attached: Turing.jpg (3316x2206, 1.9M)

who cares? That's going to be obsolete once 7nm gpus come out early next year
enjoy your soon-to-be shit

AMD is too busy with CPUs to care about 7nm GPUs, and Nvidia won't release a new arch a year after they launch.

>early next year
Aww that's cute.

Never lose your purity my sweet user

Attached: 1454266910117.png (2472x2164, 1.62M)

The 2080 Ti will most probably only launch next year on 7nm, so the possibilities open up.
Remember that AMD doesn't intend to go after the high-end GPU segment.

New process node =/= new arch

The hell is going on? Did they skip Volta?

>Volta
Umm, they revealed it last year.

Attached: NVIDIA-Titan-V-2060x1159[1].png (2060x1159, 1.24M)

I thought that was only Volta-lite or something. What about actual gaming cards?

>Volta-lite
There is no such thing.
Volta is 2017 tech.
Turing is 2018 tech right now.
>The Turing architecture also carries over the tensor cores from Volta, and indeed these have even been enhanced over Volta. The tensor cores are an important aspect of multiple NVIDIA initiatives. Along with speeding up ray tracing itself, NVIDIA’s other tool in their bag of tricks is to reduce the amount of rays required in a scene by using AI denoising to clean up an image, which is something the tensor cores excel at.
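The "fewer rays + denoise" idea is easy to sketch: a cheap render with few samples per pixel is noisy, and averaging information across pixels recovers most of the quality. A toy Python version, with a plain box filter standing in for the learned denoiser that would actually run on the tensor cores:

```python
import random

# Toy "fewer rays + denoise" demo on a flat 1-D image.
# The box filter is a stand-in for a real learned denoiser.
random.seed(0)
TRUE_VALUE = 0.5  # ground truth for every pixel

def render(n_samples, width=256):
    # Monte Carlo estimate per pixel: average of n noisy samples.
    return [sum(random.random() for _ in range(n_samples)) / n_samples
            for _ in range(width)]

def box_filter(img, radius=4):
    # Average each pixel with its neighbors to suppress noise.
    out = []
    for i in range(len(img)):
        lo, hi = max(0, i - radius), min(len(img), i + radius + 1)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def rmse(img):
    return (sum((p - TRUE_VALUE) ** 2 for p in img) / len(img)) ** 0.5

noisy = render(4)  # cheap render: only 4 "rays" per pixel
print(rmse(noisy), rmse(box_filter(noisy)))  # error drops after denoising
```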

Oh, so they did skip Volta, at least on consumer cards, and only used it for AI and specialized shit.

> What about actual gaming cards?
It can play games.

>die size
>754mm2
12nmFFN confirmed

520mm2 2080 incoming

The Radeon group is independent of the CPU division.

Not what I asked.

>Not a single Volta-based Geforce out there.
>It's 2017 tech

>Here's Turing, guys!
So GTX2080 will be what?

Nah bra they'll be on the 3080 Ti easy by then

turing

>Remember that AMD will not intend to go after the high-end GPU section.
Current RTG CTO said the opposite.

The real questions we should be asking: is it possible to utilize these tensor cores for current lighting models? Are they openly programmable, or usable only through the Nvidia raytrace solution? Can the tensor math units be utilized by mining programs? And is the performance per watt of the new GPUs so attractive that mining businesses fuck the hobbyist consumer even more?

Coin mining valuation and profitability have historically tracked the growth of GPU performance, i.e. a plateau effect occurs once the mining market is saturated with the new technology.

Let's be honest here. These GPUs are very large and built on a custom Nvidia-only 16FF derivative. They are going to be expensive from the start, and the better they perform at coin mining, the more retailers are going to charge to meet market demand.
I would not be surprised if the 2080 is a $799 MSRP part (a similar situation to the 680 > 780), but you'll be lucky to get one for less than $1100

Volta was Nvidia's Vega. It had no features that could improve gaming performance, so there was no reason to release gaming cards. It performed very close to Pascal in games.

>So many uneducated morons on Jow Forums and the Internet were saying it was nothing but a Pascal refresh, enjoy eating crow now

oh yeah, like repurposing matrix cores and branding them a ray tracing engine is something innovative

literally only 7% more CUDA cores and no change to the topology whatsoever
hmmm

You faggets. Turing is part Volta but better and isn't made to be a one trick pony like volta was.

>nvidia kike shill cuck damage control

still 70% faster than AMD's best card so it's alright

I will wait for the game benchmarks before I splurge on a new card.

I'd take architecture over process

Correct.
Volta will never come to Geforce or even Quadro. Turing looks to be derived from Volta, with additional improvements.

>literally only 7% more cuda cores
What are you talking about?
RTX 8000 is at 4608 cores, GV100 is at 5120. That's 10% fewer, yet the RTX 8000 manages a 7.5% increase in FP32 TFLOPS.
>no change to the topology
What is that? Do you mean the uarch? Because we already know it's significantly tweaked.
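A quick check of the arithmetic in that post, plus what it implies about per-core throughput (using the post's own 7.5% TFLOPS figure):

```python
# Quadro RTX 8000 (4608 CUDA cores) vs GV100 (5120 CUDA cores).
rtx8000_cores, gv100_cores = 4608, 5120

core_deficit = 1 - rtx8000_cores / gv100_cores
print(f"{core_deficit:.1%} fewer cores")  # 10.0% fewer cores

# If FP32 TFLOPS is nonetheless up 7.5%, per-core throughput
# (clocks and/or IPC) must have risen by roughly:
tflops_gain = 1.075
per_core_gain = tflops_gain / (1 - core_deficit) - 1
print(f"~{per_core_gain:.1%} more throughput per core")  # ~19.4%
```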

Amd is finished

gay as fuck

Maybe Lisa can sell her body on the side. Jensen clearly enjoys fucking her.

>Volta will never come to Geforce or even Quadro.
Quadro GV100 is a thing.

I doubt the 2080/1180 is going to be based on the 754mm2 chip; that's Titan RT, $3000 tier.

But the actual 2080/1180 is still going to be large and that expensive. My guess is a ~1080 Ti-sized chip with 256-bit GDDR6, a suitable clock bump, and the ray tracing gimmick attached. This close to 7nm I don't think there will be that many versions, though; lower-tier cards could even be rebranded GP104, as TSMC has already confirmed Nvidia is a producing customer right now.

just wait

Then source? I'm sure that Navi is not a high-end GPU.

When is AMD seriously getting into the high-end GPU market? Prices are out of hand now.

Ew, incest

I read that Navi will come out in 2021. I don't think it's a high-end GPU; more like the replacement for Vega (mid-range GPU).