How EXACTLY does DLSS work?

Attached: rtx.jpg (1209x680, 143K)


it's just an upscaler, like consoles have been using for years to make a 4K output from a lower-resolution framebuffer
enjoy your smudged textures and low-quality "4K"

It doesn't

This. We have the horsepower, just use supersampling; every GPU these days supports it natively, so there's no need for AA hacks. The only reason AA hacks are still in development is consolefags.

>its just an upscaler
That's not how AA works, idiot

>2D RTX ON vs 3DPD RTX OFF
AMD POOLARIS BTFO'D ETERNALLY

Cherry picking

>we have the horsepower for 64x SSAA
what resolution do you play at?
320x240 on games pre-2005?

>Just render 16K and scale it to 4K dude
Epic. So intelligent.

The 4K image is rendered at 1440p.
The 1440p image is then upscaled to 4K.
The AI algorithm on the tensor cores fills the gaps.

The AI algorithm is created at Nvidia's HQ, where they have a supercomputer called SATURNV.
They feed it images rendered at 64x supersampling so it can learn how to do the job better.

The result is a 40% increase in performance, and a better AA than TAA.

Nvidia will also release DLSS 2X. That mode doesn't change the rendering resolution and only uses the AI for AA purposes, so there will be no performance gains.

Attached: nvidia dlss benchmark.jpg (856x293, 36K)
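Roughly, the flow looks like this. A toy sketch in Python; every name here is made up and the "model" is just a nearest-neighbour resize standing in for the actual network:

```python
import numpy as np

def render_frame(scene, width, height):
    # Stand-in for the game's rasterizer: returns an RGB frame as floats.
    return np.zeros((height, width, 3), dtype=np.float32)

def fake_model(frame):
    # Placeholder "network": nearest-neighbour resize from 1440p to 2160p.
    h, w, _ = frame.shape
    ys = np.arange(2160) * h // 2160
    xs = np.arange(3840) * w // 3840
    return frame[ys][:, xs]

def dlss_upscale(low_res_frame, model):
    # Stand-in for the tensor-core inference step: the trained network takes
    # the 1440p frame and outputs a 2160p frame with AA applied.
    return model(low_res_frame)

# "4K with DLSS" = render cheap at 1440p, let the network fill in the rest.
low_res = render_frame(None, 2560, 1440)    # done by the normal shader cores
output = dlss_upscale(low_res, fake_model)  # done by the tensor cores
print(output.shape)                         # (2160, 3840, 3), presented as 4K
```

The expensive rasterization only ever happens at 1440p, which is where the performance gain comes from.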

Good explanation but it feels more like
1. Upscale image
2. ???
3. Magic
4. AA at 40% improved performance

Sufficiently advanced technology is indistinguishable from magic.

DLSS isn't perfect: the plates and the logo of the car above them are worse than the original 4K image, but the car itself is better, and the window transparency with DLSS is far superior to traditional TAA, with +40% performance on top.

I imagine that using DLSS2X (no performance gain) would keep the number plates crisp, while still improving the jaggies and the window transparency dramatically.

Attached: nvidia dlss carplates.jpg (1516x906, 282K)

You don't need that; double (8K) on a 4K monitor will already give you far better results.
Learn how DLSS works.

does it remove shimmering like TAA does though?

The new $1400 GPUs can barely handle 4K at 60fps; in many games they can't get anywhere close to 60fps at all.

How are you going to supersample 8K?

Don't be ridiculous, depending on the game you can easily run it at 70-140 FPS on a last gen GPU. It helps if it's Vulkan.

Like this: the interesting thing is that at 4K a 2080 with DLSS can reach the same performance as a 2080 Ti with TAA. DLSS is the thing I'd be most hyped about if I had a 4K screen.

It depends on the game; you can do 4K60 on Battlefield 1 with a GTX 1080 Ti or an RTX 2080, but in many other games it's not so easy.

Attached: 100916[1].png (600x250, 28K)

8K is 4x more pixels than 4K. You are insane if you think any GPU can render modern games at that resolution at 60fps.

it's not AA, stupid. it's literally an upscaler
keep falling for novideo naming memes

You mean it uses a neural network to upscale the image (which adds fidelity, unlike traditional algorithms that extrapolate), then they downsample that upscaled image to native res using bicubic downsampling or a similarly fast algorithm? If so, that's kinda brilliant.

>it's better for shit to be rendered at lower resolution and upscaled with "Neural Networks" than for it to be rendered at high resolutions in the first place

It's better to get "neural network" upscales at 60fps than native res at 40fps
Especially when the image is so good you can't tell the difference

>How EXACTLY does DLSS work?
First of all, there are 2 modes of DLSS, though so far we've only seen one of them.
>"regular" DLSS
DLSS in this mode basically does upscaling + AA. The game renders an image below native resolution (say 2560x1440), then the neural network running DLSS on the tensor cores upscales it to native resolution (say 3840x2160) while also applying AA. It does this using per-game data trained on NVIDIA's server farm. The training is done by feeding the neural network 1440p images and comparing its output to 4K images rendered with 64x SSAA, then adjusting parameters until the output is as close as possible to the real 64x SSAA image. Those parameters are then sent out, as I mentioned, on a per-game basis via GeForce Experience.

The practical result is a sizable performance increase (it's rendering at low res) and an upscaled image. The AA provided is nice, but the image is still very obviously an upscale. It looks blurry and detail is lost, since the input is still just 1440p. I've been playing games at 4K for ~3 years now and I'd never use DLSS; it kills the whole point of 4K right off the bat (sharpness & detail).

It might be useful in conjunction with ray tracing though, i.e. you trade sharpness for ray tracing since ray tracing is too slow otherwise.

>DLSS 2X
DLSS 2X works in much the same way as "regular" DLSS, except it does not actually upscale. DLSS 2X essentially just adds AA. You feed it a native 4K render of the game and its output is a 4K image with AA applied. It still does this based on training data using 64x SSAA images.

DLSS 2X will not have any sizable performance boost since the game is rendering true, native 4K. It does however have the potential to provide high-quality AA while also maintaining excellent sharpness and detail (no upscaling), so this actually looks like very interesting tech. This could essentially be high-quality AA with no performance penalty.
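If it helps, the training part described above is basically a standard supervised loop. A minimal sketch, assuming a toy PyTorch conv net and random tensors standing in for the real 1440p / 64x-SSAA-4K frame pairs; none of this is NVIDIA's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        # learns a residual on top of a plain bilinear upscale
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        up = F.interpolate(x, scale_factor=1.5, mode="bilinear", align_corners=False)
        return up + self.refine(up)

# stand-in data: in reality these would be the game's 1440p renders and the
# matching 4K frames rendered with 64x SSAA
training_pairs = [(torch.rand(1, 3, 144, 256), torch.rand(1, 3, 216, 384))
                  for _ in range(8)]

model = ToyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for low_res, ssaa_target in training_pairs:
    pred = model(low_res)
    loss = F.l1_loss(pred, ssaa_target)  # "as close as possible to the 64x SSAA image"
    opt.zero_grad()
    loss.backward()
    opt.step()
# the trained weights are what gets pushed out per game via GeForce Experience
```

The heavy lifting happens once, offline, on the server farm; your GPU only ever runs the cheap inference step per frame.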

If you have a 1080p monitor, you can't display a higher-fidelity picture. So the only way to reduce jaggies is to use traditional AA methods, or to use supersampling solutions. You can render at 4K and downsample to 1080p today; both AMD and Nvidia support it. But that is very taxing for the GPU.

DLSS uses the tensor cores in the GPU to perform the upscaling, so it's much less taxing. It's basically upsampling in hardware. Then it uses the normal compute cores to downsample to 1080p, which is not that compute intensive. The result is similar to normal supersampling AA but much less taxing, so you can get the same result with better FPS.

Make no mistake, I hate Nvidia and I have AMD stock, but I still think this is pretty cool use of the technology. The 2080s are after all just the binned trash that's left over from their compute card production, so they figured they could use the meme learning cores etc for consumer stuff as well.
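The downsample half of that really is cheap: going from 4K to 1080p is just averaging 2x2 pixel blocks. A rough numpy sketch, not whatever the driver actually does:

```python
import numpy as np

def box_downsample_2x(frame):
    """Average each 2x2 block of pixels: 3840x2160 -> 1920x1080."""
    h, w, c = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

frame_4k = np.random.rand(2160, 3840, 3).astype(np.float32)  # stand-in 4K render
frame_1080p = box_downsample_2x(frame_4k)                    # supersampled 1080p output
print(frame_1080p.shape)  # (1080, 1920, 3)
```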

checkerboarding where the missing detail is added in by a neural network

>Especially when the image is so good you can't tell the difference
yeah just like post-processing anti-aliasing right
upscales will never look as good as the real thing

>which adds fidelity unlike traditional algorithms that extrapolate
neural network is extrapolating
you can't 'add' fidelity

And triangles will never look as good as quads.
We're constrained by the technology of today and today can't render 16K @ 60fps to downscale to 4K.

>neural network is extrapolating

They are actually sampling from a huge data set of images, putting in data that wasn't there in the original image. So it actually increases the signal-to-noise ratio; however, the data it puts in is guesses that only approximate what was there to begin with. Picture related.

Attached: google-brain-pixel-recursive-super-resolution-1.jpg (616x347, 48K)

>And triangles will never look as good as quads.
You can make a quad with two triangles

They're sampling from a huge data set of images that aren't the image being upscaled. You aren't adding fidelity, you're making an educated guess - extrapolating

>You can make 4 verts with 6 verts
braindead.

no you can share 2 of them

Index buffers my dude.
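To spell it out: a quad is 4 unique vertices plus 6 indices, and the two triangles share two of those vertices. Quick sketch with made-up coordinates:

```python
import numpy as np

# 4 unique vertices (x, y)
vertices = np.array([
    [0.0, 0.0],   # 0: bottom-left
    [1.0, 0.0],   # 1: bottom-right
    [1.0, 1.0],   # 2: top-right
    [0.0, 1.0],   # 3: top-left
], dtype=np.float32)

# 6 indices but only 4 vertices: the two triangles share vertices 0 and 2
indices = np.array([0, 1, 2,   0, 2, 3], dtype=np.uint16)

for tri in indices.reshape(-1, 3):
    print("triangle:", vertices[tri].tolist())
```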

>You aren't adding fidelity, you're making an educated guess - extrapolating

Well you're getting an order of magnitude or so closer to the ground truth than traditional extrapolation methods. But because you are downsampling the whole image afterwards, you're getting an even better signal to noise ratio. But the important factor here of course is the edges, which will appear smooth.

>the plates and the logo of the car above them are worse than the original 4K image
it's not only that, the whole image looks worse, how can you not see this?

not him, but not only is the DLSS one sharper in the background, it also removes aliasing better on the car

>traditional algorithms that extrapolate
You're a dumbfuck that likes to use words that you don't know the meaning of.

extrapolation is the method of using the sample set to create a new sample that is outside of the sample set
interpolation is the method of using the sample set to create a new sample within the sample set
I bet you're the retard from last week who said "higher resolutions add more aliasing"
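Concrete toy example of the difference, since people keep mixing them up (numbers obviously made up):

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])        # sample positions
ys = np.array([0.0, 2.0, 4.0, 6.0])        # sample values

inside = np.interp(1.5, xs, ys)            # 3.0 -> interpolation: new sample inside the set
slope = (ys[-1] - ys[-2]) / (xs[-1] - xs[-2])
outside = ys[-1] + slope * (4.0 - xs[-1])  # 8.0 -> extrapolation: new sample outside the set
print(inside, outside)
```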

>extrapolation is the method of using the sample set in order to create a new sample that is outside of the sample set
that's what is happening, yes
the fact that it's using an ML algorithm trained on other images doesn't change that

I can't see jaggies in motion, so I prefer no AA at all; it looks clearer. Honestly, both look terrible.

No, you're creating new pixels within the boundaries of the frame. That's interpolating. You're not creating new images that are outside of the frame boundary. The boundaries of the sample set are the desired resolution.

Ad hominem instead of attacking the facts suggests that you're the brainlet here, not me.

Some neural net upscaled images I've seen had sea shells in the skin of a person, because part of the training data set had images with sea shells on sand. How do you explain that?

>Make no mistake, I hate Nvidia and I have AMD stock, but I still think this is pretty cool use of the technology. The 2080s are after all just the binned trash that's left over from their compute card production, so they figured they could use the meme learning cores etc for consumer stuff as well.
I'm trying to think of other use cases for the deep learning hardware they've invested so heavily into the RTX cards. How about shipping them with a colorimeter and light sensor that calibrate your monitor and game color palettes in real time, based on measured readings from the screen or a LUT, in conjunction with the G-Sync module, with adjustments constantly rebalanced against ambient room lighting and wallpaper color?

>fallacy fallacy

you're extrapolating frame data to interpolate pixel data. The images the ML is applied to aren't in the data set it was trained on. The difference is that extrapolation can create errors, interpolation doesn't.

Asking the real question, how long till people can use the tensor cores to make faster deepfakes

Upscaling algorithms tend to have numerous attributes that make them different from each other: the amount of haloing, sharpness, ringing, etc. That one had an algorithm that produced sea shells along with all the other shit. It still did not create new information outside of the frame. It just moved the existing samples farther apart and made new pixels in between them.
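To be explicit about that last part, here is "spread the samples apart and fill in between" on a single row of pixels (plain linear interpolation, made-up values):

```python
import numpy as np

row = np.array([10.0, 200.0, 50.0, 120.0])      # 4 original pixels
old_pos = np.linspace(0.0, 1.0, row.size)       # where they sit in the row
new_pos = np.linspace(0.0, 1.0, 8)              # 8 output pixels, same row width
upscaled = np.interp(new_pos, old_pos, row)     # new pixels are blends of their neighbours
print(upscaled)  # every value stays within the range spanned by the originals
```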

>you're extrapolating frame data to interpolate pixel data
word salad from someone who doesn't understand big words. That's not how these algorithms work. What you're saying is "creating pixel data outside of the frame to create pixel data inside the frame", which doesn't make sense at all.

Attached: pepper processing.png (640x400, 26K)

you are the /tv/ pedo poster

The data set used to interpolate the image is an extrapolation of what all possible video game frames will look like

>The data set is an extrapolation
The data set is the data set used to create an algorithm for interpolation. Nothing is being extrapolated. Just stop posting.

Yes, the data set was arrived at via extrapolation, which is the ultimate flaw in the algorithm

>the

>the data set was arrived at via extrapolation
holy fuck, no that's not what it is. The data set is the data you start with. The data set is not the result of extrapolation. My library of cat pictures that I use to train a neural net is the data set. Nothing is extrapolated to magically create a library of cat pictures in my computer. THE DATA SET IS THE SAMPLE SET. THE SAMPLE/DATA SET IS THEN FED TO WHATEVER PROCESSING BULLSHIT THAT YOU WANT TO USE IT FOR.
You're just throwing big words at the wall to see what sticks.

>extrapolation
>the action of estimating or concluding something by assuming that existing trends will continue or a current method will remain applicable.
The data set is that conclusion. The algorithm doesn't work if you start rendering images that don't relate to the images the data was derived from.

>be nvidiot
>use 4k for high quality
>turn on dlss
>see everything worse
>hurr durr performance is improved!
LMAO

No the real question is when will they be able to hardware accelerate waifu2x + DLSS2x

Attached: slide.png (1058x2382, 1.19M)

>The data set is that conclusion
You have absolutely no idea what you're talking about. The data set is the data you start with. You use that data to train a neural net to produce an algorithm. That algorithm is then distributed for whatever purpose it serves.
Here are examples of what a data set is: analyticsvidhya.com/blog/2018/03/comprehensive-collection-deep-learning-datasets/
Also, your description clearly points out how the predicted data is outside of the available sample set. You're predicting a trend; you want to see how it goes in the 4th quarter. Your sample ends at the 2nd quarter of the current year. That's your sample boundary.
Likewise, you have pixel A1 and pixel A2. That's where the boundary is. You want to predict a theoretical pixel A3: that's extrapolation.
For interpolation, you remap pixel A2 to pixel A3 and then use the current pixel data A1 and A3 to find a new pixel A2.

Again, you're just throwing big words at the wall to see what sticks.

>pedo

consider a bullet to the head.

>You're predicting a trend
that's what extrapolation is, you fucking dumbass
like I said, you're interpolating based on an extrapolated ML configuration

>extrapolated ML configuration
ABSOLUTE FUCKING DUMBASS CAN'T ADMIT IT AND STILL CONTINUES TO THROW WORD SALAD
No trend is being predicted, you fucking dumbass. A neural net is trained to create an algorithm. The data set is used to create that algorithm. Your game produces a frame, and the algorithm is applied to that frame. No new data is being produced outside the data set that is made to resemble the data set. No new data is being produced outside of the frame. The ML Nvidia used for training isn't meant to create new data; it's not meant to extrapolate. It's not pulling frames out of thin air the same way that one neural net is creating faces out of thin air. It's not doing that. It's creating a scaling algorithm (either an equation or a set of equations) to apply to newly rendered frames that exist outside of the data set.
Fucking /v/tards thinking they know it all.
Fucking /v/tards thinking they know it all.

Show me how to do it.

>No trend is being predicted
you're predicting that all images the algorithm is used on will be related to the data you used to train it

question:

Does Nvidia use a separate training data set for each game, or one general purpose data set which can be used in any situation?

general purpose. you think they're going to personally train it with every game that comes out?

That's not a prediction from a neural net and the complex mathematics it produces. That's an "intended use case" for the algorithm.

and the intended use case is an extrapolation

>intended use case is an extrapolation
HOLY SHIT! /v/kiddies are really this retarded. That's not a fucking extrapolation. The intended use case is a boundary set by whoever the fuck sets it. That's not fucking data produced by the ML code. You can stop posting any time you want to save yourself from further embarrassment.

>The intended use case is a boundary set by whoever the fuck sets it.
The intended use case is a general-purpose AA algorithm made based on existing game data, with the assumption that future game data will be similar
So if you change your rendering style it might fuck with it

>you think they're going to personally train it with every game that comes out?

I thought DLSS only works on a handful of games anyways?
General purpose is far more impressive of course.

DLSS is per game, more will be added in driver updates. Kinda expected I guess since they need access to the engine to even get it to do 64x supersampling for training.

Attached: 1700x660-25games[1].jpg (1700x660, 363K)

>with an assumption that future game data will be similiar
That assumption is an assumption made by programmers during the planning phase. It's not data extrapolated by the algorithm produced by the neural net.

Correct

It works on a game only if the developer bothers to program it in. Of course you could apply the algorithm to anything you want, and it'll be far more accurate if it's trained on all games and not just the one you're playing.

>being an amdrone without the chance to play anything maxed out on 4k at all is clearly the superior choice

>it'll be far more accurate if it's trained on all games and not just the one you're playing

I don't see why.

If a game has a very specific look, say futuristic science fiction, then I don't see how training on a wild-west Indians/cowboys game style would help.
And the total amount of information stored in the algorithm will always be limited.

That's a good point, I guess: a specific solution is better than a general one. The problem is that then you have to specifically train it for every game.

It takes years to make a game, sometimes with hundreds of people.
Surely a few weeks on a supercomputer isn't going to add much to the budget, especially with Nvidia willing to sponsor the supercomputer so they can sell more cards.

The only problem there is that a lot of developers like to add fine details to game models (like hair highlights, clothing accents, etc) at the last possible stage so you'd be lengthening the developer lead time since it can't be done concurrently.

well, that's a few weeks you didn't have to spend before
and a general solution is probably going to work fine; I mean, this algorithm already exists without any machine learning involved, just a hand-tweaked kernel
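For reference, the no-ML version is literally just upscale plus a fixed sharpening kernel. A rough scipy sketch; the kernel values are just a common hand-tweaked choice, not anyone's shipping filter:

```python
import numpy as np
from scipy.ndimage import convolve, zoom

img = np.random.rand(270, 480).astype(np.float32)    # stand-in low-res luminance
upscaled = zoom(img, 2.0, order=1)                    # linear 2x upscale

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float32)  # the hand-tweaked part
result = convolve(upscaled, sharpen, mode="nearest")
print(result.shape)  # (540, 960)
```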

>text becomes unreadable when DLSS is used
b-but it works through glass!!!!!!!!!!!!!!!!!!!

delet this you fucking AMDiscount moran!!!!!!!!!!

>not using the superior temporal anti-aliasing + sharpening and get 0% performance loss without jaggies

Attached: temporal anti-aliasing and sharpen.jpg (3900x2430, 1.79M)

>temporal anti-aliasing
laggy

>temporal anti-aliasing
>posts screenshots
>when the biggest problem with TAA is artifacts and ghosting while in motion
It's like comparing the upper body of two sprinters when one is in a wheelchair and the other has legs.

Looks fine in a static image, then goes to shit the second you move the camera. How useful.

>traditional TAA
>actually going along with the idea that TAA is 'traditional' AA and not a retarded buffer using approximation that they're using for comparison to make it look like regular AA has meaningful disadvantages compared to this or produces artifacts

For fuck's sake,

So I'm making my own engine with Vulkan and have full control to implement anything I want.

What do I use? I was thinking of implementing just MSAA + FXAA.
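(By FXAA I mean roughly this idea, heavily simplified: detect high-contrast luma edges and blend them with their neighbours. The real shader does directional edge searches; this numpy version is just to show the gist.)

```python
import numpy as np

def toy_fxaa(rgb, threshold=0.1):
    # luma from RGB (standard BT.601 weights)
    luma = rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    # local contrast against the 4-neighbourhood (np.roll wraps at the borders,
    # which is good enough for a sketch)
    up, down = np.roll(luma, 1, 0), np.roll(luma, -1, 0)
    left, right = np.roll(luma, 1, 1), np.roll(luma, -1, 1)
    contrast = np.maximum.reduce([up, down, left, right, luma]) - \
               np.minimum.reduce([up, down, left, right, luma])
    edge = contrast > threshold
    # blend only the pixels flagged as edges with their neighbours
    blurred = (np.roll(rgb, 1, 0) + np.roll(rgb, -1, 0) +
               np.roll(rgb, 1, 1) + np.roll(rgb, -1, 1) + rgb) / 5.0
    return np.where(edge[..., None], blurred, rgb)

frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in rendered frame
smoothed = toy_fxaa(frame)
```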

the real question is what kind of Code of Conduct are you implementing?

I've got a cock already

>What do I use? I was thinking of implementing just MSAA + FXAA

>MSAA in a modern partially deferred rendering engine
Uh-huh, ok genius. Solve a problem that seven-figure engineers can't figure out

I like how all of the AAA modern games suck ass anyways

>MSAA
LOOOOOOL
TAA and SMAA :)

At 4K and above, how important is anti-aliasing? In my experience anti-aliasing loses importance at 1440p. Don't get me wrong, I still see jaggies at 1440p and use anti-aliasing. But as soon as I play on my 4K TV at native 4K@60, aliased edges are barely noticeable... It even seems worse with anti-aliasing on... The image gets blurrier...
Maybe I'm looking at this the wrong way... If people are running ray tracing games and they are stuck at 1080p... They might need all the cheap antialiasing they can get...

Lucky OP

It renders at a lower resolution, then the tensor cores in the GPU upscale it using a neural network trained specifically for the game


Basically, it's Waifu2X for videogames in realtime

You're running 1440p that looks comparable to heavily anti-aliased 4K at a lower hardware cost, and judging by the shots people were spamming to "prove TAA at 4K looks better", it sometimes looks sharper on foliage and handles complex jagged edges like hair way better

>which adds fidelity unlike traditional algorithms that extrapolate
And how do you actually add fidelity without extrapolating???

>you think they're going to personally train it with every game that comes out?

That's literally how it works though. The devs have to apply for gimpworks, then Nvidia gives them the NGX API which allows the game to use DLSS, then the supercomputer runs the game until it gets the best result possible, and the data is then downloaded via GeForce Experience to be used by the neural network on the tensor cores.

The fact that you need gimpworks and shit means that only some AAA titles will get it.

>pay out of the ass for a good 4k monitor
>pay out the ass for Nvidia's shill cards
>what you end up doing with both of them is stare at an upscaled 1440p image that might as well have been passed through a blur filter
Money well spent

Both DLSS and TAA are fucking garbage. How you morons actually put up with this crap, AT FUCKING 4K NO LESS, is beyond me.

Just disable AA altogether at 4K, or if you're an AA addict then keep the resolution at 4K and use 2x MSAA or possibly SMAA. You'll have a MUCH better image and good performance.

Stop falling for Nvidia's trash.