Is it better to encode video using CPU or GPU nowadays...

Is it better to encode video using CPU or GPU nowadays? I want to get back into anime and other shit encoding after seeing such shit-tier stuff released this year, but I do wonder if video cards are better now. When I did this shit 9 years ago it was all about the CPU.

I also need to read up on whether this x265 shit is really worth it.

Attached: 1520874880319.jpg (1920x1080, 84K)

Other urls found in this thread:

youtu.be/ofdh-THQFpE?t=464

It's gotten far better since 2012. Most modern GPUs can do a faster encode, but they don't give you nearly as many options for trading off speed, quality, and file size as a CPU encode does. Nvidia Share (which uses NVENC) put FRAPS and Dxtory out of business for most casual game and desktop recording.
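If you want to see the difference yourself, here's a minimal sketch that encodes the same clip with software x264 and with NVENC at the same average bitrate so you can compare them side by side. It assumes ffmpeg is on your PATH and was built with NVENC support; the filenames and bitrate are placeholders.

import subprocess

SRC = "input.mkv"   # placeholder source file
BITRATE = "2500k"   # same target bitrate for both encoders

def encode(codec: str, out: str) -> None:
    # One encode per codec; audio is passed through untouched.
    subprocess.run([
        "ffmpeg", "-y", "-i", SRC,
        "-c:v", codec, "-b:v", BITRATE,
        "-c:a", "copy",
        out,
    ], check=True)

encode("libx264", "cpu_x264.mkv")      # software encode (slower, more efficient)
encode("h264_nvenc", "gpu_nvenc.mkv")  # hardware encode (much faster)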

x264 is still where it's at, though 10-bit and x265 are on their way for high-quality releases.
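If you want to try the 10-bit x265 route, something like this is a starting point. It's just a sketch: it assumes your ffmpeg's libx265 was built with 10-bit support, and the CRF/preset/filename are placeholders, not release-group settings.

import subprocess

subprocess.run([
    "ffmpeg", "-y", "-i", "input.mkv",   # placeholder source
    "-c:v", "libx265", "-preset", "slow", "-crf", "20",
    "-pix_fmt", "yuv420p10le",           # 10-bit output (needs a 10-bit libx265 build)
    "-c:a", "copy", "out_x265_10bit.mkv",
], check=True)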

CPU encoding quality is much better.

It's 2018, how have you fucks not figured this out yet?

x264 looks the best to me

CPU is always better quality.

Why, though?

2-pass encoding
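For what it's worth, this is roughly what a 2-pass x264 encode looks like (a sketch, assuming ffmpeg with libx264; filenames and the bitrate are placeholders). The first pass only gathers stats, the second pass uses them to spend bits where the picture actually needs them, which is a big part of why CPU encodes win on quality per megabyte.

import os
import subprocess

SRC = "input.mkv"    # placeholder
BITRATE = "1800k"    # placeholder target bitrate

# Pass 1: analyze only, write the stats file, throw the video away.
subprocess.run([
    "ffmpeg", "-y", "-i", SRC,
    "-c:v", "libx264", "-b:v", BITRATE, "-pass", "1",
    "-an", "-f", "null", os.devnull,
], check=True)

# Pass 2: real encode, bit allocation guided by the pass-1 stats.
subprocess.run([
    "ffmpeg", "-y", "-i", SRC,
    "-c:v", "libx264", "-b:v", BITRATE, "-pass", "2",
    "-c:a", "copy", "out_2pass.mkv",
], check=True)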

youtu.be/ofdh-THQFpE?t=464
as you can see, there is no quality difference at all

Of course there is no difference when uploading to streaming sites that have terrible bitrates ...

LTT did a comparison and CPU was obviously clearer.

The multimedia engines (like Quick Sync and NVENC) in most GPUs are essentially black boxes and very difficult to control. Most of the time they'll only offer very limited options for how the encoding is done. Not only that, but since you have absolutely no idea what's going on inside, the results aren't always going to be what you expect. If you want to tweak every single encoding option and be 100% sure of the output, you're better off using software. The only good hardware encoding is lossless, if the multimedia engine offers the option.
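As a rough illustration of the difference in control (a sketch, assuming ffmpeg with libx264 and NVENC; the parameter values are examples, not recommendations): with software x264 you can reach into individual encoder internals, while a hardware encoder typically only exposes a preset and a rate-control mode.

import subprocess

# Software x264: fine-grained control over the encoder internals.
subprocess.run([
    "ffmpeg", "-y", "-i", "input.mkv",   # placeholder input
    "-c:v", "libx264", "-crf", "18", "-preset", "slower",
    "-tune", "animation",                # tune for flat-shaded anime content
    "-x264-params", "ref=8:bframes=8:aq-mode=3:me=umh:subme=9",  # example internals
    "out_sw.mkv",
], check=True)

# Hardware NVENC: roughly all you get is a preset and a rate-control mode.
subprocess.run([
    "ffmpeg", "-y", "-i", "input.mkv",
    "-c:v", "h264_nvenc", "-preset", "slow", "-rc", "vbr", "-b:v", "2500k",
    "out_hw.mkv",
], check=True)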

Might as well ask here.
What would be necessary to make high quality encodes with VP8/VP9? And yes, I know that x264/x265 are better suited for the task. I'm still interested though.

CPU. GPU encoding is a meme for when quality doesn't really matter.

>All that aliasing on 264/265

x265 is the fucking worst
loses so much fine detail it's insane

probably the people that loved YIFY, which uses it

Video encoders rely on very complex, largely serial algorithms, which are not easy to execute on "dumb" massively parallel GPUs.

The comparison in OP's pic is garbage. The source is interlaced, the MPEG2 picture is a different frame or a crop, and x265 is way too blurred against x264 for similar settings and a recent encoder to have been used.

Huh. I thought most rendering was done by the GPU, but this thread taught me otherwise. Aren't CPUs super slow for this though? What kind of CPU would one even need to not die before the rendering is done?

Rendering the raw pixel data of a frame with all the fancy post-processing is done by the shader units on the GPU or by the rendering part of the multimedia engine. Compressing that frame to a specific codec requires either software or the video-encoding part of the multimedia engine.
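You can see the split directly in ffmpeg's encoder list (a sketch; which hardware encoders show up depends entirely on how your build was configured and what silicon you have).

import subprocess

# List every H.264 encoder the local ffmpeg build knows about.
# Typical output includes libx264 (software) and, depending on the build
# and hardware, h264_nvenc / h264_qsv / h264_amf (multimedia-engine encoders).
out = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if "264" in line:
        print(line.strip())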

Raster video encoding and 3d rendering are totally different operations.

It's because x265 can't really do interlaced coding (x264 at least supports it via MBAFF), and that's interlaced video.
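The usual workaround is to deinterlace before handing frames to the encoder, e.g. with ffmpeg's yadif filter. A sketch; the filename and settings are placeholders, and whether you prefer yadif or bwdif is a matter of taste.

import subprocess

# Deinterlace the interlaced source with yadif, then encode progressive frames.
subprocess.run([
    "ffmpeg", "-y", "-i", "interlaced_src.mkv",  # placeholder input
    "-vf", "yadif=mode=1",   # one frame per field; use yadif=0 to keep the original frame rate
    "-c:v", "libx264", "-crf", "18", "-preset", "slow",
    "-c:a", "copy", "deinterlaced_x264.mkv",
], check=True)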

Man, imagine 7 Mbps h.264 DVDs.