In fact, the newest Nvidia GPUs are outperforming the RX 480, 570, and 580 by more than 3x on encoding latency

blog.parsecgaming.com/new-nvidia-gpus-outperform-new-amd-cards-on-h-264-compression-latency-d32784464b94

>In fact, the newest Nvidia GPUs are outperforming the RX 480, 570, and 580 by more than 3x on encoding latency.

blog.parsecgaming.com/nvidia-nvenc-outperforms-amd-vce-on-h-264-encoding-latency-in-parsec-co-op-sessions-713b9e1e048a

>Nvidia’s NVENC is approximately 2.59 times faster than AMD VCE and 1.89 times faster than Intel Quick Sync. The median encoding latency for an Nvidia card is 5.8 milliseconds; whereas, the median encoding latency on VCE is 15.06 milliseconds.
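
The quoted ratio is consistent with the stated medians; a quick arithmetic check (the Quick Sync median isn't quoted directly, so it's back-calculated from the 1.89x figure and is an assumption):

```python
nvenc_ms = 5.8    # median NVENC encode latency from the quote
vce_ms = 15.06    # median AMD VCE encode latency from the quote

# VCE-to-NVENC ratio should land near the quoted "2.59 times faster"
print(round(vce_ms / nvenc_ms, 2))  # ≈ 2.6

# Quick Sync's median isn't quoted; derive it from the 1.89x figure
qsv_ms = nvenc_ms * 1.89
print(round(qsv_ms, 2))  # ≈ 10.96
```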

THANK YOU BASED NVIDIA

Attached: 1 fKgZJuB5KBTOAyOREIN2BQ.png (1732x840, 117K)

Attached: 1 5xUHXVgxKGO3J4_nL4v7tA.png (1444x836, 63K)

>new
>470
>1070
What year is it?

How does this translate into real world performance? Does it matter if it has high latency since networking is a much bigger concern?

> AMD 400 series Release date 29 June 2016
finally.
amd is what and what?

>encoding on GPU
it's like you hate picture quality or something

yes, it's completely useless in any common real-world scenario

How is it useless? I can't see how it affects any online stream.

>encoding on GPU
Retard here, why does encoding on GPUs result in worse image quality vs CPU encoding? Isn't encoding a task that should scale really well on GPUs since it's highly parallelized?

Attached: d27.png (645x729, 75K)

Not so much of an issue for regular encoding; it might take a bit more time, but that's it. The real worry is real-time streaming: high latency there means they either have to give up quality or put more GPU resources into encoding to make up for the weak encoder, thus consuming more power and leaving fewer resources for gaming/etc.

If you've got a Ryzen 1600/2600 or higher, you can just use the CPU for streaming. GPU streaming is secondary.

He meant encoding with the hardware encoders (not the actual stream processors) built into modern GPUs; using them you can't change compression settings, and the ones the encoder uses are usually poor because it's tuned for speed.

I'm not a gaymer streamer so why should I care?

Blocks, blocks everywhere.

>Isn't encoding a task that should scale really well on GPUs since it's highly parallelized?
it isn't.
a family of gpus that shares the same H.264 encoding block has the same throughput,
which means a 1060 is as fast as a 1080 Ti.
on the amd side of things, amd has stonk gpgpu; many of their features are brute-force OpenCL or something similar. in that case, the more SPs you've got, the more throughput you have.
the first part of your question is answered here video.stackexchange.com/questions/14656/why-processor-is-better-for-encoding-than-gpu

To encode video at 60 FPS you can spend no more than 16.6 ms on each frame.
AMD is within the limit and no one streams faster than 60 FPS so what's the problem?
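
The per-frame budget argument above can be checked directly; a minimal sketch using the median latencies quoted earlier in the thread:

```python
fps = 60
budget_ms = 1000 / fps  # ≈ 16.67 ms available per frame at 60 FPS

# median encode latencies quoted earlier in the thread
for name, latency_ms in [("NVENC", 5.8), ("VCE", 15.06)]:
    fits = latency_ms <= budget_ms
    print(f"{name}: {latency_ms} ms, fits in one frame: {fits}")
```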

Autism.

call me when nvidia actually reach the quality of amd on encoding

>latency on a bandwidth-intensive task
holy shit nvidia shills are pathetic

Are you retarded? Linus already tested and AYYMD has the WORST encoding quality while Nvidia is the best, Intel in the middle

>encoding latency
wow it's fucking nothing
All that means is that the video will appear about 0.5 frames later on your screen. But since you can only display whole frames, it will appear at the same time as Nvidia's.
What's even the point of this?
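
A toy model of the whole-frames argument above: at a 60 Hz refresh, what matters is which refresh interval a frame becomes ready in, not the raw millisecond difference (a sketch; it ignores capture and network latency):

```python
import math

REFRESH_MS = 1000 / 60  # ≈ 16.67 ms between displayed frames at 60 Hz

def first_refresh_slot(encode_latency_ms: float) -> int:
    """Index of the earliest refresh interval the encoded frame can be shown in."""
    return math.ceil(encode_latency_ms / REFRESH_MS)

# Both quoted medians fall inside the first refresh interval
print(first_refresh_slot(5.8), first_refresh_slot(15.06))  # 1 1
```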

well if LINUS tested them.....

forum.beyond3d.com/posts/2030018/

>The problem seems to stem from the fact that AMD's VCE performance has regressed across generations: comparing the R9 380/285, which use VCE3.0 and top out at 128fps at 1080p H264 Quality, the Fury X drops to 77fps and the RX 480/470 drop further to 55fps.

TOP KEK, REBRANDEON DECELERATOR

GARBAGE IN ENCODING QUALITY, GARBAGE IN LATENCY, GARBAGE IN PERFORMANCE

Few people use hardware encoders even to stream. It's a last-resort measure if your CPU is complete garbage and can't handle x264. Streaming services usually have bitrate limits (such as 3 to 6 Mbps for Twitch), so you need an encoder that can deliver good quality at low bitrate, and hardware encoders are garbage at that, especially for scenes involving a lot of movement, with their b-frames out the ass.
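
To put those caps in perspective, the bits-per-pixel budget at Twitch-style bitrates is tiny, which is why encoder efficiency matters so much (a rough back-of-envelope sketch; 1080p60 is an assumed stream format):

```python
width, height, fps = 1920, 1080, 60  # assumed 1080p60 stream

for mbps in (3, 6):  # Twitch-style bitrate caps mentioned above
    bits_per_pixel = mbps * 1_000_000 / (width * height * fps)
    print(f"{mbps} Mbps -> {bits_per_pixel:.3f} bits per pixel")
```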

I only use NVENC for local recording at 30+ Mbps because it has almost no performance impact on the game, and my 3570K can't handle good-quality 1080p x264.

>parsec
isn't parsec nvidia-sponsored?

>Can't accept bitter truth, must be sponsored

TOP KEK, AYYMDPOORFAGS

>amd consistently being on top on every streaming bench so far
>parsec comes saying otherwise

who would have thought of that

As said here, could you first answer why we should care instead of posting more shit, Shill-senpai?

Attached: doohickey-uncoupled-thingamabob-duct-tape-computer-repair-cat.jpg (500x361, 47K)

I think you're mistaking this for AMD vs Intel x264 encode streaming (specifically at higher quality and bitrates). This is completely different.

Are you fucking retarded?

Coders that already tested and documented how bad AYYMD's encoder is in performance every generation

github.com/Xaymar/obs-studio_amf-encoder-plugin/wiki/Hardware-VCE3.0#r9-285

github.com/Xaymar/obs-studio_amf-encoder-plugin/wiki/Hardware-VCE3.0#r9-380

github.com/Xaymar/obs-studio_amf-encoder-plugin/wiki/Hardware-VCE3.0#r9-fury-x

github.com/Xaymar/obs-studio_amf-encoder-plugin/wiki/Hardware-VCE3.4#rx-470

i bet you don't even understand what the numbers mean, you just copy paste from beyond3d and shitpost

>hardware encoding

This is like a gimped leper race; might as well add Intel's Quick Sync since it's the same garbage.

This is the only way Nvidiots can feel relevant since Nvidia doesn't have a CPU worth powering on, much less using.

judging by his wife, linus is the best for that job.

AMD cards also share fixed-function hardware for H.264 encoding

Thanks doc

Stop using shitty software/plugins, user.
AMD VCE and NVENC are fully customizable in OBS

Just because you can choose a profile doesn't mean it's customizable.
No scene group uses hardware encoding because the quality is pathetic compared to CPU encoders.

Nobody is talking about heavy-duty or professional encoding - we're talking about recording screen/game footage. Why are you even mentioning the scene here - of course they're not going to use it. It's for average Joes without good enough CPUs who either stream or record their stuff to make videos. It has reasonable enough settings to tweak for this kind of basic use.

Attached: nvenc.png (802x394, 13K)

That's what I said at the beginning.
Hardware encoders are tuned for speed; they produce a poor quality/size ratio compared to what you can achieve in software.
Of course you can record good-quality video with a hardware encoder, but you need 5-8x more bitrate.
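
Taking that 5-8x claim at face value, the storage cost of matching software quality with a hardware encoder adds up quickly (a sketch; the 6 Mbps software baseline is an assumption):

```python
sw_mbps = 6  # assumed x264 bitrate for a given target quality

for mult in (5, 8):  # the 5-8x bitrate multiplier claimed above
    hw_mbps = sw_mbps * mult
    gb_per_hour = hw_mbps * 1_000_000 * 3600 / 8 / 1_000_000_000
    print(f"{hw_mbps} Mbps -> {gb_per_hour:.1f} GB per hour of footage")
```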