Oh holy shit: deepsense.ai/using-deep-learning-for-single-image-super-resolution/

Nvidia basically took this: github.com/nagadomi/waifu2x

And implemented it as API/driver logic on its Tensor Cores with convolutional networks.

Required Reading: deepsense.ai/using-deep-learning-for-single-image-super-resolution/

Attached: waifu2x.png (1058x2382, 1.19M)

Secondary reading: arxiv.org/ftp/arxiv/papers/1707/1707.05425.pdf

Can I use this to upscale Naruto to 4k?

Technically, if SVP added functionality to take advantage of DLSS while engaging in frame interpolation, you could theoretically waifu2x Naruto to 4K60 (anime) with very low noise and artifacts.

So yes, needs some work, but yes.

Too bad we'll never see it outside of some expensive proprietary software for professionals

No, there's a good chance we'll see it in the consumer market. Waifu2x was released in late 2015/early 2016, back in the GTX 980 generation. If mid-range and high-end consumer cards have access to fixed-function Tensor hardware, there's a much greater incentive to push this kind of stuff out. Also, SVP is more optimized for Nvidia than for AMD.

High chance we'll see DLSS with video interpolation to higher framerates within the next 3-5 years. Nvidia will never go back from this point to a purely shader-based uArch. It's now going to be Shader + RT + Tensor + any other future fixed-function hardware they can leverage to do more things at once.

I can't believe there are weebs that are actually good for something.

x264 was led by a weeb that eventually became a trap

waifu2x is shit

mentally ill people make some good programmers.

Except that's exactly what DLSS is doing, just taken a bit further. It takes a low-resolution image and then, using convolutional networks (DNNs), scales it up by analyzing the color boundaries and expanding them.

Basically: the image is fed into the DNN, which uses Tensor Cores to do elder-god-tier fast analysis of all color boundaries and stores the result in memory. Then, with shader operations in parallel, it draws a supersampled version of the data while maintaining the color boundaries and addressing any noise & artifacts as they crop up.

The end result is an image that, at the resolution and viewing distance of the player's focus (screen size, distance between the eyes and the display, ability to discern pixels), comes out at significantly higher perceived quality. Pic related is the primary example; I'll upload a secondary one & its comparison shortly after.
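If you want to see the shape of the idea in code, here's a bare-bones SRCNN-style net in PyTorch. To be clear, this is not Nvidia's actual DLSS model (nobody outside Nvidia has that); it's just the classic upscale-then-convolve layout from the super-resolution papers the article covers:

[code]
# Minimal SRCNN-style sketch (PyTorch) of "upscale with a conv net".
# NOT Nvidia's DLSS; layer sizes follow the classic SRCNN layout.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.extract = nn.Conv2d(3, 64, kernel_size=9, padding=4)  # patch extraction
        self.map     = nn.Conv2d(64, 32, kernel_size=1)            # non-linear mapping
        self.recon   = nn.Conv2d(32, 3, kernel_size=5, padding=2)  # reconstruction

    def forward(self, lowres, scale=2):
        # Classic SRCNN upscales first (bicubic), then lets the net restore detail.
        x = F.interpolate(lowres, scale_factor=scale, mode="bicubic", align_corners=False)
        x = F.relu(self.extract(x))
        x = F.relu(self.map(x))
        return self.recon(x)

net = TinySRCNN()
fake_frame = torch.rand(1, 3, 270, 480)   # pretend 480x270 low-res frame
upscaled = net(fake_frame)                # -> 1x3x540x960 (garbage until trained)
print(upscaled.shape)
[/code]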

Attached: monarchCollage.jpg (1000x1000, 164K)

Source image.

Attached: 59ccf688-0417-49fd-c9c6-835f3c9b0057.png (800x1130, 702K)

Comparison image.

This 2nd picture went through the following process:

>Waifu2x @ 1.5x magnify & denoise
>2-D illust (Y Model)
>png
>output depth 8 bits
>split size: 512

Each output image was run through HoneyView3's sharpening/blurring filters and saved, then fed back into Waifu2x for another one to three passes at 1.25-1.5x, with each subsequent pass getting more sharpening/blurring of boundaries to assist the DNN running in CUDA.

The final image was then fed BACK into Waifu2x and down-sampled to a <4MB file size.
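A rough sketch of that loop in Python, with Pillow doing the sharpening; the upscale step here is just a bicubic stand-in, since the exact waifu2x CLI/API you'd call depends on which fork you run:

[code]
# Rough sketch of the multi-pass workflow above: upscale -> sharpen -> repeat.
# upscale_pass() is a stand-in (plain bicubic via Pillow); in the real workflow
# that step is a waifu2x run, whose flags differ between forks.
from PIL import Image, ImageFilter

def upscale_pass(img, scale=1.5):
    # Stand-in for one waifu2x magnify + denoise pass.
    w, h = img.size
    return img.resize((int(w * scale), int(h * scale)), Image.Resampling.BICUBIC)

img = Image.open("source.png")
for _ in range(3):                                    # one to three extra passes
    img = upscale_pass(img, scale=1.5)                # 1.25-1.5x per pass
    img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=80))  # sharpen boundaries
img.save("final.png")
[/code]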

Everything I did above is essentially what DLSS is doing at the driver level: a whole lot of upscaling, reshading, blurring & sharpening across multiple passes to get a final image that, at a distance, looks superior to the original.

Also, when you view this image at 100% and compare it to the original, you can see a decent loss in quality. But when you view it as a thumbnail (which is still larger than the original picture's resolution), you get a "final" image of SIGNIFICANTLY higher quality.

Attached: yoruichi.png (2176x3074, 3.39M)

Where and when can I use it?

>1+ years old articles
>nothing about Tensor Cores in them
The fuck is this bullshit thread?

>2-D illust (Y Model)
Can you explain what Y Model is? I just slap UpRGB on everything.

Another good effect that I noticed from using waifu2x is debanding. How does that work?

You can already use these meme networks with madvr or mpv.

>smudge2x
It's alright but pretty overrated

Wait a minute wait a minute
hold on

Does this mean the old and widely mocked "enhance the image" trope will eventually work in real life?

yes, use mpv
github.com/igv/FSRCNN-TensorFlow/releases
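For example (the shader filename is whatever you grab from that releases page, so treat the one here as a placeholder), launching mpv with one of the FSRCNNX shaders looks roughly like this:

[code]
# Hypothetical mpv launch with an FSRCNNX shader, wrapped in subprocess.
# Shader filename and video path are placeholders; versions on the releases
# page may be named differently.
import subprocess

subprocess.run([
    "mpv",
    "--glsl-shaders=FSRCNNX_x2_8-0-4-1.glsl",  # shader downloaded from the repo above
    "--profile=gpu-hq",                        # optional quality preset (newer builds call it high-quality)
    "episode01.mkv",
])
[/code]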

No, not really. It works best with drawings relying on clearly defined line art (chink cartoons).

Only if you have a supercomputer, as one frame takes several seconds or even minutes to render.

You drank the novideo kool-aid hard, dude. You can train smaller networks that work just fine for real-time use RIGHT NOW.

I have no idea what you just said.

That's on CPU, on GPU it takes several milliseconds.

There are many upscaling convolutional neural network models, some of which can already run in real time on cheap consumer graphics processing units. Despite the lies Papa Jensen fed you, you do not need a supercomputer for this shit.

Read the damn article you linked. FSRCNN (which is a fast, optimized implementation of SRCNN that the article discussed) is already mature and is used in NGU / FSRCNNX in madVR/mpv for real-time frame upscaling in video. Whatever novideo is cooking up might eventually be "better" but it's definitely too slow for real-time.
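The reason FSRCNN is the "fast" one is structural: it runs its convolutions on the low-res frame and only upscales at the very end with a transposed convolution, instead of blowing the image up first like SRCNN. A toy PyTorch sketch of that layout (layer sizes are illustrative, not the exact FSRCNNX config madVR/mpv ship):

[code]
# Toy FSRCNN-style layout: all conv work happens at low resolution, the
# transposed conv at the end does the upscaling. Illustrative sizes only.
import torch
import torch.nn as nn

class TinyFSRCNN(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.feature = nn.Conv2d(3, 32, kernel_size=5, padding=2)  # features on low-res input
        self.shrink  = nn.Conv2d(32, 8, kernel_size=1)             # channel shrink (cheap mapping)
        self.map     = nn.Conv2d(8, 8, kernel_size=3, padding=1)   # non-linear mapping
        self.expand  = nn.Conv2d(8, 32, kernel_size=1)             # channel expand
        self.deconv  = nn.ConvTranspose2d(32, 3, kernel_size=9, stride=scale,
                                          padding=4, output_padding=scale - 1)

    def forward(self, x):
        x = torch.relu(self.feature(x))
        x = torch.relu(self.shrink(x))
        x = torch.relu(self.map(x))
        x = torch.relu(self.expand(x))
        return self.deconv(x)                                      # upscaling happens only here

frame = torch.rand(1, 3, 360, 640)                 # pretend 640x360 input
print(TinyFSRCNN(scale=2)(frame).shape)            # -> torch.Size([1, 3, 720, 1280])
[/code]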

Nah, with Tensor Cores it should be fast, but it will be nvidia only.

These are actually plenty fast on current GPUs without tensor cores.
I really think the tensor cores shit is a mistake.

>Nah, with Tensor Cores it should be fast, but it will be nvidia only.
The article literally said the problem with SRRNs is that they're slow as shit, but whatever you say, man.

Care to give a link to some actually working ones for madvr? I tried a literal implementation of waifu2x in avisynth once and it was unusable. Also, I don't know why the fuck you're acting like I'm OP.

The article is 1 year old and not about nvidia or tensor cores, OP is a fag.

install mpv

madVR uses one under the hood. All variants of NGU are basically FSRCNN.

Oh well, it's not even close to waifu2x quality. I don't know what this thread is about anymore.

You tend to notice these things less when you're not pixel peeping between static images and actually watching something in motion. If it's going to be used in gaymes it has to be fast. And even then the ones around are already quite good compared to most linear scalers.

>not even close to waifu2x quality
It's the same shit only smaller, waifu2x is a huge network.

Anon pls. Tensor Cores specialize exclusively in convolutional network math; that's their entire purpose. Nvidia's been experimenting with tensor hardware in their Quadros for years now. But it was too expensive to scale into the consumer market, on top of needing several years of research into designing and writing software that could leverage it within a 16 ms-or-less frame budget.

Tensor = DNN. Prior to this HARDWARE, all DNN work was done IN CUDA using traditional shaders. A shader doing DNN math versus a tensor core doing DNN math is a night-and-day difference in performance: the former is trash and the latter is practically magic. Just because the article doesn't mention Tensor Cores in no way means it's bullshit. Way to fail to see the forest for the trees.
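If you want to see what kind of op the tensor cores actually accelerate, it's dense low-precision matrix multiplies, which is what every conv layer boils down to. A crude PyTorch probe comparing fp32 vs fp16 matmul on whatever GPU you have (timings are hardware-dependent, and the fp16 path only hits tensor cores on Volta/Turing-class cards, so treat this as an illustration, not a benchmark):

[code]
# Crude fp32 vs fp16 matmul timing probe. The fp16 path is the one that can be
# dispatched to tensor cores on hardware that has them.
import time
import torch

if torch.cuda.is_available():
    a = torch.rand(4096, 4096, device="cuda")
    b = torch.rand(4096, 4096, device="cuda")

    for dtype in (torch.float32, torch.float16):
        x, y = a.to(dtype), b.to(dtype)
        torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(50):
            x @ y                                  # dense matmul, the core DNN op
        torch.cuda.synchronize()
        print(dtype, f"{time.time() - t0:.3f}s")
[/code]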

>Explain Y Model

I honestly don't know the details here, but my best guess is that he took various types of rasters and vectors, fed them into his DNN model, and optimized that model until he got the best possible outcome he could at the time with the hardware he had; the end result is the models we see. That said, I believe the Y model refers to a vector/raster combination model, and I've found it to generate the best results amongst all the available options. Take this explanation with a grain of salt; googling the topic further might help, or just parse through the source code, hope there are comments, and try to understand what he wrote.
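For what it's worth, another reading floating around (not confirmed by anything in this thread) is that Y refers to the luma channel of YCbCr, i.e. the model upscales only luminance and the chroma just gets a cheap resize. If that's the case, the pipeline would look roughly like this, with bicubic standing in for the actual network:

[code]
# Luma-only upscaling sketch, under the ASSUMPTION that "Y" means the YCbCr
# luma channel. Bicubic stands in for whatever neural upscaler you'd plug in.
from PIL import Image

def upscale_luma_only(path, scale=2, sr_model=None):
    y, cb, cr = Image.open(path).convert("YCbCr").split()
    new_size = (y.width * scale, y.height * scale)
    # sr_model would be the neural upscaler applied to luminance only.
    y_up = sr_model(y) if sr_model else y.resize(new_size, Image.Resampling.BICUBIC)
    cb_up = cb.resize(new_size, Image.Resampling.BICUBIC)   # chroma gets a cheap resize
    cr_up = cr.resize(new_size, Image.Resampling.BICUBIC)
    return Image.merge("YCbCr", (y_up, cb_up, cr_up)).convert("RGB")

upscale_luma_only("source.png", scale=2).save("luma_upscaled.png")
[/code]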

The whole point of the article was to give an explanation of what DLSS is, you massive faggot, not what it's running on or supposed to run on. Jesus Christ.

>random bullshit again
I'm telling you that these articles have nothing to do with Nvidia developing super resolution. Where did you get that info?

>night and day
>fucking trash
Nvidia's own marketing charts give your tensor cores a 3x gain (at best) over the previous generation. I don't think you know how slow any of these DNN shits are; even with a 3x speedup they're nowhere close to doing anything fast. They trained SRRN for 7 days on multiple Titan Xs for that minuscule gain over a network that's already usable in real time (SRCNN). Cut that to 2.5 days and it's still shit.

nvidia.com/en-us/data-center/tensorcore/

Dude, do you ever watch anime? It's static pictures 80% of the time. Waifu2x provides a nearly perfect upscale from 720p while NGU is blurry. NGU might be better than other upscaling algorithms, but so is waifu2x compared to it.
Here is NGU.

Attached: [ANE] Soredemo Machi wa Mawatte Iru - Ep01 [BDRip 720p x264 FLAC].mkv_snapshot_03.01_[2018.08.30_08. (1920x1080, 229K)

And here is waifu2x. The difference is like between a YouTube video and a BD rip.

Attached: [ANE] Soredemo Machi wa Mawatte Iru - Ep01 [BDRip 720p x264 FLAC].mkv_snapshot_03.01_[2018.08.30_08. (1920x1080, 277K)

source pic?

you absolute moron

it's only slightly sharper. You need a 10 times bigger network for that? Just turn on a sharpener in madvr. Pathetic.

where is the source pic, retard

And so is ngu compared to more lightweight algorithms.

Attached: [ANE] Soredemo Machi wa Mawatte Iru - Ep01 [BDRip 720p x264 FLAC].mkv_snapshot_03.01_[2018.08.30_08. (1280x720, 113K)

no, it also removes artifacts like compression, ringing, aliasing.

You really picked the wrong subject to shill your Nvidia garbage

>SVP
SVP uses motion vectors. A better solution is to use a convolutional neural net, e.g.
web.cecs.pdx.edu/~fliu/project/ctxsyn/demo.mp4
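To be clear about what "use a convolutional neural net" means here: the toy sketch below just concatenates two neighbouring frames and has a small conv net predict the in-between frame. The ctxsyn demo above uses a far more involved context-aware method, so treat this as the general shape of the idea, not their implementation:

[code]
# Toy CNN frame interpolation: stack two RGB frames (6 channels) and predict
# the middle frame. Illustrative only; not the ctxsyn method from the demo.
import torch
import torch.nn as nn

class ToyInterpolator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame_a, frame_b):
        # 6 input channels = RGB of frame_a stacked with RGB of frame_b.
        return self.net(torch.cat([frame_a, frame_b], dim=1))

f0 = torch.rand(1, 3, 720, 1280)
f1 = torch.rand(1, 3, 720, 1280)
middle = ToyInterpolator()(f0, f1)   # nonsense until trained on real video
print(middle.shape)                  # torch.Size([1, 3, 720, 1280])
[/code]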