Waifu2x

Has anyone here made this work? I can't get it to work. The online version has resolution limits.

github.com/DeadSix27/waifu2x-converter-cpp

>"C:\Program Files\Waifu2x\waifu2x-converter-cpp.exe" --processor 6 -j 12 --scale_ratio 3 --noise_level 3 -o "C:\Users\User\Desktop\Tokyo_4.jpg" -i "C:\Users\User\Desktop\Tokyo.jpg"
CUDA: GeForce GTX 1080 Ti
Operating on: C:\Users\User\Desktop\Tokyo.jpg
It doesn't produce an output file.

Attached: 1533628837023.png (1058x794, 388K)
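
For what it's worth, a minimal sanity check, assuming the DeadSix27 build ships a --list-processor flag (check --help on your copy): list the devices first, then rerun with a processor ID taken straight from that list, since --processor 6 may be pointing at a device that can't actually run the models. The 0 below is a guess; use whatever ID the list prints for the 1080 Ti.

>"C:\Program Files\Waifu2x\waifu2x-converter-cpp.exe" --list-processor

>"C:\Program Files\Waifu2x\waifu2x-converter-cpp.exe" --processor 0 -j 12 --scale_ratio 3 --noise_level 3 -i "C:\Users\User\Desktop\Tokyo.jpg" -o "C:\Users\User\Desktop\Tokyo_4.jpg"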

kys

Attached: 1537381548380.jpg (960x536, 44K)

github.com/lltcggie/waifu2x-caffe/releases

>download release
>get cudnn library from nvidia
>put library in same folder
>run program
oh gosh, that was so hard

Attached: [gay silence].jpg (480x480, 17K)
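
For anyone else setting it up, the folder should end up looking roughly like this (the DLL name varies with the cuDNN version you grab from developer.nvidia.com; cudnn64_7.dll is just an example from that era):

waifu2x-caffe\
  waifu2x-caffe.exe      (GUI)
  waifu2x-caffe-cui.exe  (command-line version)
  cudnn64_7.dll          <- copied in from the Nvidia download
  models\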

Where does what go? Also, what does split size do? All the documentation is in Japanese.

Attached: 1518879046524.png (716x466, 35K)

>he can't read japanese

Attached: kimoi.jpg (588x506, 56K)

I figured out the cuDNN library part, but what is split size for?
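
Not sure about waifu2x-caffe specifically, but in the builds I've seen, split size is the tile edge length the image gets chopped into before hitting the GPU: at 128 it processes 128x128-pixel blocks, and going to 256 roughly quadruples the per-tile memory (16,384 vs 65,536 pixels). Bigger tiles are usually faster; drop the value if you get CUDA out-of-memory errors.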

Why did you delete it?

Attached: 67955600.png (1080x360, 45K)

ignore it

Because the GUI is at least in English now, but none of the documentation is.

>Has someone here made this work? I can't get it to work
Download the caffe version.

Attached: weebfu.png (1008x485, 41K)

welp, even though it has photography models, it sucks for real photos.

Attached: 1527154559521.jpg (1800x1596, 324K)

Works fine for me, are you sure you're selecting the right processor? CUDA doesn't work, it says so in the release. This is actually a useful find for me. I had some waifu2x version which was CPU-only, and suffice to say that was slow as shit compared to this.

CUDA is a must for me, but this user's suggestion is better than what I had (in the OP).

>it sucks for real photos
I used it for real photos. It's not bad unless you're trying to do too much with it. It's decent for 2x, but more than that starts to look weird.

Yeah. waifu2x for drawn content, github.com/alexjc/neural-enhance for photos

I dunno, I can't read it either.

I see, I'm honestly happier not having to download another library. What does CUDA do better? Is it just faster than OpenCL? I can't imagine it having different output.

Use waifu2xcaffe
Use Cuda cores
Use TTA if you're pushing it

the results look great, but it seems like a pain in the ass to install and use.

>What does CUDA do better? Is it just faster than OpenCL?
Mostly it just comes down to Nvidia making it very easy to use CUDA. It's almost like Nvidia hand-holds you through implementing it.

>Use Cuda cores
do I need to set it up, or does it use my GPU automatically?

Click App Setting
Check CUDA
Click OK

Yeah, but what advantage is there in this case, when you're just running a waifu2x binary? What I was really curious about is why you'd want the binary to use CUDA instead of OpenCL to do its job.

Unless you install it via Docker, the usage is pretty much the same as waifu2x. Model type and zoom factor are the two most important options.
Installation is pretty straightforward on Linux. No idea about Windows.
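
To make that concrete, a sketch of the neural-enhance CLI as I remember the README (treat the exact flag names as assumptions and check python3 enhance.py --help):

>python3 enhance.py --type=photo --zoom=2 file1.jpg file2.jpg

>python3 enhance.py --type=photo --model=repair --zoom=1 broken.jpg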

>Waifu2x
>Using x2 and then x2 on the result pic gives better results than directly using x4

This. I've been using this one with CUDA. You have to download the cuDNN DLL from Nvidia yourself because of idiotic licensing on it, but it's worth it. Even with my shitty GT 730M it's way faster than running it on the CPU.

If you want to train a neural net for 4x directly, go right ahead. Until then we'll just use 2x twice.
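
In practice that's just two invocations. A sketch with the converter from the OP (flag spellings copied from the OP's command; adjust to whatever your build's --help actually lists), denoising only on the first pass so the second doesn't amplify artifacts:

>waifu2x-converter-cpp.exe --scale_ratio 2 --noise_level 1 -i in.png -o tmp_2x.png

>waifu2x-converter-cpp.exe --scale_ratio 2 -i tmp_2x.png -o out_4x.png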

>github.com/alexjc/neural-enhance for photos
if only this had a simple gui and installer

thx, user. i was looking for a waifu2x gui.

Going to have to agree with this user here.

why the fuck are you on Jow Forums if you can't?

Well, the question is whether enhancing photos with AI and filling in things that were never there makes any difference when you compare it to the original.

After all, if they perfect it, you could never tell which is the original either way.

"The original is better" is some shitty fallacy either way. The future is looking bright.