Video codecs newer than H.264 and VP8

Do you encode video in VP9, H.265, Daala or maybe even AV1?

>av1
Enjoy your 1 fps

H.265/HEVC personally.
Daala doesn't really exist (to use a video format for anything, you need a bitstream freeze/spec), VP9 is bad (or the encoder is), and AV1 is not practical yet - frozen last month, no usable encoders on the horizon.

1 FPS? You must be joking. github.com/xiph/rav1e gives you at least 5 FPS.

Dude, 1 fps would be practical (just 36 hours to encode a movie), don't be funny.
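For the record, the 36-hour figure checks out; at 1 fps, encode time in seconds equals the frame count:

```python
# A 90-minute movie at 24 fps, encoded at 1 frame per second:
frames = 90 * 60 * 24        # 129,600 frames total
hours = frames / 3600        # 1 fps means one second per frame
print(hours)                 # 36.0
```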

>github.com/xiph/rav1e gives you at least 5 FPS
It only encodes intra blocks and does hardly any analysis or decision-making. Sure it can be fast, but it's also going to be useless, because you'll likely get better quality out of x264 ultrafast. Not to mention out of an x264 setting that runs similarly slow.

Sorenson Video 3 is the one true codec.

>github.com/xiph/rav1e
and I mean, """fast""" for AV1 as in, not glacially slow

>VP9 is bad (or the encoder is)
On my Phenom II desktop, VP9 encoding with FFmpeg is only 1.5x-2x slower than encoding H.265 with libx265. The bigger difference is in the CPU use during playback. Playing VP9 with mpv uses 100% of a core while H.265 only uses 40%. I can't play VP9 on my laptop without the fans spinning way up.

I didn't mean speed, I was talking about quality. x265 is a better encoder than libvpx (on top of HEVC being a better format). I would be okay with cutting streams into parts and encoding them in single-thread mode in parallel to get around the bad threading, for example, but quality is what matters. (I don't care about "freedoms" when doing copyright-iffy things anyway.)

>Phenom II
>libx265
Oh, actually, this didn't occur to me at first, although I have first-hand experience.
x265 is not really usable on AMD K10. It requires SSE4, else most assembly optimizations are disabled. So on Phenom II/Athlon II and the like, the encoding speed is about 30% of what it should be.
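A quick illustration of why (hypothetical helper function; K10 chips like Phenom II expose `sse4a` but not the `sse4_1`/`sse4_2` CPU flags that SSE4-gated assembly paths key on):

```python
# Hypothetical flag check: K10 (Phenom II/Athlon II) reports sse4a
# but not sse4_1/sse4_2, so SSE4-gated assembly paths stay disabled.
def has_sse41(flags_line: str) -> bool:
    return "sse4_1" in flags_line.split()

k10_flags = "fpu mmx sse sse2 sse3 sse4a 3dnowext"   # representative subset
print(has_sse41(k10_flags))                           # False
```

On Linux you'd feed this the `flags` line from /proc/cpuinfo.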

Encoding at 1 FPS was not uncommon for difficult source material in the heyday of fansubbing. When you take into account massive filter chains, IVTC, and some of the slowest encoding options (when it came to those quality autists), you were easily going to end up with 24+ hours of encoding time, once in a while.

I work in distribution for a major US television network. Ask me how we track each and every pirated file.

How is how you track each and every pirated file relevant to this thread?

I use slow settings so I'm always under 1 fps too.

how do you track each and every pirated file?

;)

>Jow Forums still does not support mkv, vp9 or webp

>mkv
Browsers don't support this container. The absence of VP9, though, is genuinely puzzling.

Webm is codec-limited mkv. It's great.

I really want to like VP9, I really do, but I just can't get satisfying results while using libvpx-vp9. I can make it encode as fast as libx265, but then the resulting file will be enormous. And if I aim for the same quality/size ratio it encodes at an unreasonably slow speed. I'm talking ~1-2 fps for SD footage without any filters, thanks to my shitty CPU and because I can't get it to use more than 50-60% of my CPU even when I use the -threads option.
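For reference, the knobs being juggled in that kind of libvpx-vp9 run, written out as an ffmpeg argument list (built but not executed here; the values are illustrative, not recommendations):

```python
# Sketch of a libvpx-vp9 invocation: -speed (a.k.a. cpu-used) trades
# quality for time, -row-mt 1 is the usual lever for better threading.
def vp9_args(src, dst, crf=32, speed=2, threads=4):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libvpx-vp9",
        "-crf", str(crf), "-b:v", "0",  # constant-quality mode
        "-row-mt", "1",                 # row-based multithreading
        "-speed", str(speed),
        "-threads", str(threads),
        dst,
    ]

print(vp9_args("in.mkv", "out.webm"))
```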

I use x265 for my TV rips.

Maybe with AV1 we'll get hardware-accelerated encoding.

If you want to encode faster you need to segment the file and encode each part concurrently. Then concatenate the two encoded parts together. That's how Youtube does it, albeit every segment is 7 seconds long or something.
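Roughly, that split/encode/concat pipeline looks like this (commands built as argument lists only; the ffmpeg segment muxer and concat demuxer are the usual tools for it, and the filenames are made up):

```python
# Step 1: cut the source into ~7 s chunks without re-encoding.
def segment_args(src, seconds=7):
    return ["ffmpeg", "-i", src, "-c", "copy", "-f", "segment",
            "-segment_time", str(seconds), "part%03d.mkv"]

# Step 2: encode each part%03d.mkv in parallel (job runner of your choice).

# Step 3: glue the encoded parts back together, again without re-encoding.
def concat_args(list_file, dst):
    # list_file contains lines like: file 'part000.enc.mkv'
    return ["ffmpeg", "-f", "concat", "-i", list_file, "-c", "copy", dst]
```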

I mostly use h265 because the encoder scales well with CPU cores
the vp9 encoder is a garbage fire in this regard

>our encoder sucks so much it blows, so segment your file like a retard
I'd rather see Google make something that sucks less

That would certainly be faster, but I'm not a fan of it. I usually try to achieve a good quality/size ratio while maintaining a somewhat reasonable encoding speed (but it's definitely my lowest priority). Hardware accelerated encoding isn't really suited for that.

This sounds tedious and I'm not going to do it (especially since libx265 does what I want without additional work), but I find it interesting. Why does it speed up the process? By how much? Are we talking about separating different scenes or simply splitting the video every n seconds, no matter the footage?

CUDA acceleration would be tuneable for quality. Very few compressors (lossy or lossless) support GPU acceleration right now, but I'd expect it to change, especially if AV1 remains popular for as long as VP8 has.

I'm only familiar with encoding H.264 / H.265 via NVENC and again, I wasn't really satisfied with the results. If speed is important, then it's great (personally I use it for screen recording, when capturing CPU heavy tasks). Otherwise CPU encoders just offer better compression.

Never tried hw accelerated VP8/VP9 encoding. I only know it's possible via VAAPI.

>AV1
>1 FPS
Do you own some quad Xenon setup or what?
On a decent mainstream CPU you won't see more than 8 FPM at 720p. Yes, per MINUTE.

RAV1E is useless for now: it currently has no configuration options, works only in constant-quantizer mode, and at reasonable bitrates the quality is terrible.

x264 has been shown to still be the best for quality over filesize, so I'm sticking with that until AV1 gets good.
>RAV1E is useless for now: it currently has no configuration options, works only in constant-quantizer mode, and at reasonable bitrates the quality is terrible.
Yeah, it has no interframe compression yet, lmao

Pretty much a fact. Youtube also does this because libvpx ratecontrol is utter shit that mostly doesn't work, so they bypass it completely and do some scripted hack on top of it with CQP-mode micromanagement.

And for that reason, Google never fixed the ratecontrol - classy.

Nice that the AV1 reference encoder is based on libvpx too. Ugh.

>webp
what's even the difference with single-frame webm?

not sure about lossy, but lossless is completely different from VP8. Probably the reason why it actually beats PNG and JPEG 2000 in compression ratio

google fucked up by first rushing it out with VP8, which is crap compression, and then refusing to upgrade it to VP9 "because of (nonexistent) support"

>which is crap compression
Crap? It's slightly worse than h264.

slightly worse than bad h264. x264 trashes it. In any case, vp9 is more advanced, so that would help make the image coding actually be meaningfully better and more likely to be adopted.

Although I guess it would be better if the next image format to spread were something thoughtfully designed and vetted by more parties, rather than a Google toy idea that they just dumped on the world with feedback ignored.

AVIF

Personally I think the way forward is probably container + codec split so that metadata and other wingdings don't have to be reinvented for every new compression format.

So I guess HEIF is the right path, either with HEVC or AV1, whichever format works better.

Neither. Daala is apparently better than both on static images.

How do you encode it?

I encode in H.265 because Intel QuickSync on my CPU does not support hardware acceleration of VP8

*VP9

>Daala
vaporware, unfinished, not a frozen format
Also, that's based on its creators' testing, usually with their favored metrics, many of which were used mostly only by them and many of which the codec was explicitly developed against.

It's like benchmarking an algorithm on a task it was explicitly trained on.

>vaporware, unfinished, not a frozen format
Which happens to be better at encoding still images than anything else.

Why is .wmv so slow to decode? When I'm watching a .wmv and need to skip to the "good parts", Media Player Classic or VLC takes a few seconds to render the frames. MP4 on the contrary is pretty fast.
Btw, I've got a 2011 AMD at 2.8 GHz and a 750 Ti.

maybe.
Also, I've followed the development of Daala since like 2010 and was really hyped for all the experimental features. I know it quite well.
Lots of its features ended up not working (intra prediction failed, for example, which is an extremely important part of an image codec).
You have to be careful when reading their demos; those are promo material and might raise unsubstantiated hopes.
Maybe you don't recall, but they were hyping Theora like this too. Many people fell for Xiphmont's marketing and thought it was comparable to H.264, while in fact it was an absolute joke. And they never even finished the release that was supposed to have all those great improvements; in the end they pretty much just abandoned the code (to focus on Daala and VP8, which was sound, but still - people hyped for the Theora improvements got left out in the cold. Similarly, Daala was mostly abandoned as they switched to AV1, which has a limited number of its features but might actually work better at other things).

long video with time-differential encoding and no keyframes?

Xiph are reputable.
>hype
Never.
>abandoned
More like, adequately choosing where to invest time on, for maximum result.

That's not how it works, retard. Splitting a video into small parts is a consequence of DASH and has nothing to do with the vp9 encoding itself.

nice super computer you got kiddo

In video compression, there are two ways to compress frames: intra-frame compression and inter-frame compression. Intra-frame compression is how one frame is compressed by itself; the result is essentially an image file (heck, if you really want, you can create a video format that uses JPEGs for intra-frame compression) and it is mainly used for key frames. Inter-frame compression basically uses adjacent frames. A very basic example is when you optimize an animated .gif and it subtracts colors in subsequent frames that are already present in previous frames.
Modern video codecs utilize both techniques. What WebP does is take the intra-frame part of the format and use it for a stand-alone image file.
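The inter-frame idea described above, in miniature (toy 1-D "frames" of pixel values):

```python
# Store a keyframe whole ("intra"), then only per-pixel deltas ("inter").
def frame_delta(prev, cur):
    return [c - p for p, c in zip(prev, cur)]

def apply_delta(prev, delta):
    return [p + d for p, d in zip(prev, delta)]

key = [10, 10, 10, 10]      # keyframe, stored as-is
nxt = [10, 10, 12, 10]      # next frame: one pixel changed
d = frame_delta(key, nxt)   # [0, 0, 2, 0] -- mostly zeros, compresses well
assert apply_delta(key, d) == nxt
```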

So you were not around for that disingenuous Theora pushing. Those claims of it being good enough and competitive were rubbish, and they had to know that, knowing the difference in technology level.

There is rooting for royalty-free video, and there is borderline lying. It was like politics, a situation where you can't tell the truth, nothing personel kid etc.
I don't bear a grudge, but you have to be aware of reality.

Clearly you don't know libvpx. Its threading is really bad, so you have to do this for real when trying to use a chip with many cores.

You don't need to with x265/x264 of course.

I haven't seen lies in xiph demos, ever.

Isn't libav supposed to be faster for decoding?

The demos usually just painted rosy pictures of future improvements that made uninformed people think it was catching up with h.264 (because +50% compression!!!1one), while conveniently not mentioning that the competition was much further ahead and also kept improving.

Laymen took away the impression that Theora was as good as the state of the art, and bloggers were raving about how it "makes tremendous strides", with the silent implication that it had to overtake the evil MPEG soon (because Stallman blessed it or something?)

And the Xiph guys didn't stop those myths. But if you ever tried to actually use Theora, oh boy. Simply put, it was outdated crap, completely out of its league.

For decoding, yeah, ffmpeg is much better (libav is dead iirc).
But for encoding, which was the question, you need libvpx.

I actually used theora. It was really good.
Almost h264 quality per bitrate, but way easier on the CPU.

How good was Theora?

You lie, are blind, or failed in your comparisons for whatever other reason.

mpeg2 level at best.

You've never used it.
No. It was really good.

fuck off with the baits/trolls (or worse, fanboy shit)

From what I recall, the actual format had a 16-pixel cap on vector length, and I'm not sure it even had half-pel vector precision (it might have). There were no B-frames, no in-loop deblocking that could prevent gross artifacting, no intra prediction (or just something very basic like DC) iirc, and no vector prediction (so it coded absolute values instead of deltas - ouch!). Obviously, its entropy coding also couldn't hold a candle to the CABAC that h.264 already had.

The encoder also lacked adaptive quantization, if I recall correctly. By the time they added that (1.2, the last release?), it was too little too late and VP8 was here.

The format was basically stone-age (it was just a source-code dump of On2's useless, obsolete 2003 technology, and five years later it was obviously even further behind) and hopeless.
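To make the "absolute values instead of deltas" point concrete (toy numbers; real codecs predict from several neighbours, not just the previous block):

```python
# Neighbouring blocks tend to move together, so motion vectors coded as
# deltas from the previous block are mostly small values, which entropy
# coding handles cheaply; absolute coding repeats the full magnitude
# for every block.
mvs = [14, 15, 15, 16, 14]                                # horizontal MVs
deltas = [mvs[0]] + [b - a for a, b in zip(mvs, mvs[1:])]
print(deltas)   # [14, 1, 0, 1, -2]
```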

It was based on vp3. It did quite well, for something based on vp3.

>for something based on vp3
Well yeah, that was the problem. The format just didn't allow anything very good, it couldn't possibly beat xvid (mpeg4 asp), it was likely even worse format than old mpeg2 from deep 1990s.

>it was likely even worse format than old mpeg2 from deep 1990s
vp3 was better than divx/xvid, which in turn were better than mpeg2 (by far).
Theora could have surpassed h264 quality if they put more effort on it. But, obviously, it made no sense to work on it anymore when vp8/webm happened. So they moved on.

>xenon
Go back to

>vp3 was better than divx/xvid
no
>Theora could have surpassed h264 quality if they put more effort on it.
complete bullshit, dreamer


Answer this: If it could have surpassed H.264, they would not drop it for VP8 which is inferior to VP8 - basically its limited copy. VP3 was the precursor to On2's VP8 and VP7 (crippled H.264 spec ripoffs), VP6 (an MPEG-4 ASP ripoff, not sure how good), VP5 and VP4. Yeah, makes sense for that technology to be able to compete with H.264. Absolutely.

Seriously, are you trolling or are you just unwilling to admit that some open source project can suck? Anyway, I think I'm done with you.

>they would not drop it for VP8 which is inferior to VP8
And I meant " they would not drop it for VP8 which is inferior to h.264" obviously

>Answer this:
Do you know what else is based on vp3? vp8. And thus vp9. And thus av1.
Do you unironically think av1 is worse than h264? Do you?

>vp3 was better than divx/xvid
Doom9's 2002 shootouts (foro.doom9.org/Soft21/Docs/codecs.rar) says otherwise.

>2002

What would be a good year to benchmark VP3?

The year theora forked it.

No point in using VP9 since AV1 is a superior format and will be fully useable by early next year.

H.265 is a legacy format by now, the patent pool mess killed off any momentum, and with AV1 being supported everywhere it won't even be able to hold on to the 4K niche.

Daala is dead, there was talk about turning it into a image format but with AV1 based AVIF coming, that's dead as well.

It's not that slow, but yes it is currently SLOW. Which is logical, the bitstream was frozen just a month ago and before that there's no point in doing heavy optimization.

As is the case with all other encoders worth using, the performance hot spots will be rewritten in hand-tuned assembly, eventually it will be ~10-20% slower than x265 due to it using more complex techniques.

so... what exactly is AV1 supposed to replace, in terms of quality and size?

Does anybody use that Eve VP9 encoder? I know you have to pay for it, I was just curious.

Shit b8 m8
Also VP9 is literally copied from draft HEVC specs. Things "taken over" from older VPx codecs made their way into VP9 because VP8/VP7 were copies of H.264, and HEVC has some things from H.264.

>Daala is dead
Might easily be AV2. It's just way too different and so it needs further research.

Spotted the MPEG shill.

>will be fully useable by early next year.
You'll be disappointed; writing a good encoder takes more like 3 years from when the format is finished (June/July 2019).

Or it also takes infinity if you are google/on2, but let's hope the other AV1 pushers fix that.

A lame pseudo-defense doesn't change the truth, kiddo.

It gives noticeably better compression than x265 and has less CPU demanding decoding.
Additionally it's free as in freedom, so potentially it could replace pretty much every video codec.

Everything in terms of video.

It beats the competition in everything except encode time, where it's expected to end up ~15-20% slower than HEVC. It's royalty-free, which means it will be the new de facto standard for video on the web; it has hardware support from all the major manufacturers; and all the streaming giants are members and have been active in the development (Google, Netflix, Amazon).

Eventually even the pirate scene releasers will adopt it and leave h264 behind.

what about quality, when compared to H264? will it be better at similar bitrates?

>less CPU demanding decoding
Doubtful.

>has less CPU demanding decoding.
Wat? On my quad-core, I need about 30-50% CPU to decode 1080p24 HEVC. Decoding AV1 gives 6 fps on a single core (threading doesn't work), so that makes AV1 at least 2x slower. And that's with HEVC not being well optimised in ffmpeg.

BTW the AV1 format is admittedly more complex (that was a design goal), so it can't decode faster.
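For anyone wanting to reproduce numbers like the above: a decode-only benchmark is usually done by throwing away the output (shown as an argument list only; `clip.mkv` is a placeholder):

```python
# -f null discards decoded frames; -benchmark prints utime/maxrss at exit.
# -threads 1 makes single-core comparisons like the one above fair.
def decode_bench_args(src, threads=1):
    return ["ffmpeg", "-benchmark", "-threads", str(threads),
            "-i", src, "-f", "null", "-"]

print(decode_bench_args("clip.mkv"))
```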

>the pirate scene releasers will adopt it and leave h264 behind.
for that they will have to first meet the promised quality targets, which is likely years away.

I remain unconvinced about the claims of >beats the competition in everything except encode time

Fully useable does not mean perfectly optimized, the first x265 release was awfully slow, but 6 months into development it was useable.

AV1 IS NOT OPTIMIZED AT ALL YET YOU DUMBFUCK, OF COURSE IT'S SLOWER THAN SOMETHING THAT GOT OPTIMIZED TO SHIT

Quality got better than x264 in ~2 years. Some usages sooner, some later.

Actually it is. Even before they froze the format, libvpx had maintained assembly optimizations. It would be EVEN slower without them.

>libvpx
I meant libaom. Fuck you ignorant fanboys, arguing with your made-up stuff is tiresome.

I'm not buying the 'less cpu demanding' decoding either. That said, it has been developed hand in hand with hardware manufacturers to be very effectively accelerated.

Also no point in drawing any conclusions from AV1 cpu decoding available today, like with the encoder, that shit is very unoptimized as of yet.

>AV2
Will there be one? How far are we from the ultimate video codec, where diminishing returns make it impractical to replace?

>That said, it has been developed hand in hand with hardware manufacturers to be very effectively accelerated.
Yeah, hardware decoding shall be no problem. It might need more transistors than VP9 but why not, there's space on chips.

For CPU decoding I'll probably need a new processor (hardware decoders will take time), but ffmpeg's decoder should actually end up decently optimised, because there are people rooting for VP9/AV1 there, while nobody feels like writing SIMD ASM for HEVC, sadly.

I'll keep encoding in x265 though until quality of whichever AV1 encoder is at least slightly better (doing high quality high bitrate stuff, no minifiles).

>(doing high quality high bitrate stuff, no minifiles).
When doing high bitrate stuff, I see very little quality difference between h264 and HEVC, not enough to warrant the extra time it takes to encode HEVC.