No AV1 Thread?

AV1 Thread

Attached: 1920px-AV1_logo_2018.svg.png (1920x1066, 59K)

Other urls found in this thread:

drive.google.com/open?id=1_YUQrIK-v3rLk0ScfxTKx-3BayAfWg8l
developer.android.com/guide/topics/media/media-formats
support.apple.com/en-us/HT207022
apple.com/apple-tv-4k/specs/
developer.amazon.com/docs/fire-tv/device-specifications.html#media-specifications
reddit.com/r/netflix/comments/9r75hm/netflix_starting_to_use_hevc_codec_for_hd_titles/
twitter.com/intelnews/status/1126251762657124358
aomediacodec.github.io/av1-avif/
github.com/Kagami/go-avif
joedrago.github.io/colorist/
engadget.com/2018/03/28/google-apple-intel-av1-netflix-amazon/
sonyrumors.co/sony-a9s-imx310-sensor-has-16-bit-global-reset-shutter-on-chip-dual-gain-hdr-96db-dr/
twitter.com/AnonBabble

is it usable yet?

Yay, codec. Exciting.
Sorry. That's all I got for now.
Youtube seems to think so.

It really doesn't seem so, it's been experimental for what, 2 years now? It's still a heap of garbage that requires server farms to encode in real time

pretty usable. failfox seems to fuck up decoding performance though.

best to playback locally. encoding is slow

It is, I'm about to post some samples.

Attached: 1531872285358.jpg (640x480, 67K)

well then, do it

I'll share my first comparison for now, with the x265 version being 93% the size of the AV1 one, meaning AV1 has a slight bitrate advantage.

Oops

drive.google.com/open?id=1_YUQrIK-v3rLk0ScfxTKx-3BayAfWg8l

pretty good
is that the latest encoder?

Yeah, in the past month.

I posted another sample at the same link, with AV1 being 96% the size of the x265 version. Meaning x265 has the bitrate advantage in Sample2.

drive.google.com/open?id=1_YUQrIK-v3rLk0ScfxTKx-3BayAfWg8l
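If anyone wants to sanity-check the size comparisons themselves, here's a throwaway Python sketch (filenames are placeholders, not the actual sample names):

```python
import os

def size_ratio(path_a: str, path_b: str) -> float:
    """Size of file A as a fraction of file B's size."""
    return os.path.getsize(path_a) / os.path.getsize(path_b)
```

e.g. running it on Sample2's AV1 and x265 files should come out around 0.96.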

x265 settings: -crf # (varies to match AV1 bitrate) -preset slow
AV1 settings: -crf 30 -cpu-used 4

-cpu-used 4 is kind of like medium in x264 or x265 as far as speed tradeoffs go. That being said, I still encode at around 0.2fps @1080p.
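For anyone wanting to reproduce: those settings map to roughly these FFmpeg invocations. This is a sketch, assuming an FFmpeg build with libx265 and libaom-av1 enabled; input/output names are placeholders:

```shell
# x265: CRF varies per sample to match the AV1 bitrate
ffmpeg -i input.y4m -c:v libx265 -preset slow -crf 24.5 sample_x265.mkv

# libaom AV1: -b:v 0 puts libaom in constant-quality mode so -crf applies
ffmpeg -i input.y4m -c:v libaom-av1 -crf 30 -b:v 0 -cpu-used 4 sample_av1.mkv
```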

Attached: __classic_sonic___expressions_by_suzyhadow-d58f2iw.jpg (1111x719, 117K)

libaom -crf 30 == x265 -crf 18 in terms of quality

Interesting, I was using CRF 24.5 and 27 for my two x265 samples to get the same bitrate as libaom -crf 30

AV1 = AVI

Think about it

it's useless, hw support will be like in 5 years

AV1 is a codec, while AVI is a container.
You could theoretically put AV1 inside of AVI, but I doubt anyone is going to spend the effort defining how that should look with such an old container.

Attached: brainlet4.png (657x539, 110K)

So is there a fast high quality encoder now?

AV1 is a format/standard but you are generally correct

SW support is pretty good and HW is not far off

no

From my testing, it's like 100 times slower to encode AV1 than H.264 using FFmpeg. They'd better come out with a hardware encoder/decoder chip/card.

Crap. Wasn't half the damn industry behind this codec? Didn't they take hardware into account & don't we have copious amounts of processing power on gaymen cpu/gpu or better?

Yes, it's fuckloads slower than that, especially because it's compressing it a fair bit more. But libaom (the reference encoder/decoder) is slow as fuck. Optimised software encoder/decoders are being worked on, and they're a lot faster. dav1d was the name of the decoder. I forgot what the encoders were called, but there were multiple.

It was released a year ago, so it takes time. H.265 came out in 2013, and it took until ~2016 for the first HW chips to make it into GPUs.

its DOA, HEVC is the standard

The other AV1 encoders are pretty much trash when it comes to compression, with them barely doing better than x264. Libaom is the only thing that is able to beat x265 and Libvpx (VP9).

>is it usable yet?
Decoding (dav1d) is starting to get good.
Encoding is still awful.
If you're Google or Facebook, it's probably worth using. Otherwise it's still a while away.

>I forgot what the encoders were called, but there were multiple.
The only ones I'm aware of are libaom (AOM), rav1e (Xiph) and SVT-AV1 (Intel).

>HEVC is the standard
Where? The patent situation around HEVC is so terrible that most companies won't go near it. Even if you're willing to pay to use it, there's no consensus on WHO you would need to pay - a bunch of different groups have competing claims over parts of it.

>Where?

On Netflix, Amazon, iOS, Android, AppleTV, FireTV

>Netflix
As far as I can tell, Netflix uses VP9 and are a major supporter of AV1. Do you have a source for them using HEVC?

>Amazon
I have no idea what codec Amazon Video uses

>iOS, Android, AppleTV, FireTV
What?

>android
developer.android.com/guide/topics/media/media-formats

>iOS
support.apple.com/en-us/HT207022

>AppleTV
apple.com/apple-tv-4k/specs/

>FireTV
developer.amazon.com/docs/fire-tv/device-specifications.html#media-specifications

>Netflix
reddit.com/r/netflix/comments/9r75hm/netflix_starting_to_use_hevc_codec_for_hd_titles/
Netflix only uses VP9 in web browsers

It also took years before x265 (or any other software encoder) was anywhere near usable speed.

Those are cases where it's supported at all - outside Apple's bubble HEVC is far from the dominant format.

It is quite satisfying to see MPEG et al. keep fucking up patent/licensing shit. I really hope AV1 wins out.

Yeah, OP here and the one posting the samples. I was encoding with the AOMENC releases back in 2017 and it was the slowest thing you have ever seen. The current encode speed of 0.2fps is blazing fast in comparison to the 0.02fps I remember back then. The encode speed has also improved greatly between Nov 2018 and now. It's still obviously slow though and still needs better multithreading and assembly support.

>failfox seems to fuck up decoding performance though.
Does your build use dav1d or aomdec for decoding?

>hw support will be like in 5 years
It's supposed to come in 2020.

Not them, but I have perfectly fine AV1 performance through dav1d using Firefox 66.0.4 on Linux.

And all the companies behind those are members of AOM.
Not sure if Netflix already experiments with AV1 video, but they were the first to publish AVIF test images.

Threadly reminder that avif lossless is the best image compression out there, killing flif and absolutely annihilating png

Adobe After Effects still exports to avi

You're the standard idiot

>It was released a year ago, so it takes time.
I am confused why this was released at all if the reference encoder was almost too slow to even encode a few minutes of video.

Were they completely betting on hardware encoders or something?

I thought commercial h.265 encoders were useful some years earlier?

Full fixed function hardware decoding will come in Nvidia's 7nm GPU and possibly Intel's Tiger Lake SoC

twitter.com/intelnews/status/1126251762657124358

AVI is usually still used these days for lossless content, so that does not surprise me. Along with NLE type stuff.

it would be satisfying to see mpeg and fraunhofer people burn along with their legal consult so that all of their predatory patenting would get released into public domain

> avif
> killing flif
I thought they didn't even want to cover 16-bit color spaces last time I heard anything about it. It's a fucking giant super complex image format (sure, with its advantages on the encoder side), but if they continue with that plan, it won't even properly store the images actual cameras have been producing for a while now.

Flif does.

Guess you don't remember the HEVC reference encoder in 2013.

People who need 16-bit per channel are probably just going to stick with TIFF or PNG. AVIF is just a still version of AV1, so I guess it's stuck with 8-bit, 10-bit, and 12-bit. Maybe they can add more later.

I'm encoding and decoding it way faster than flif right now. It takes less space (lossless) than flif right now. With some samples it even beats q99 lossy jpgs.

For my meagre needs it's the clear winner.

I don't. Never tried it. Was it also crappy?

Even if so, why would that be a good idea for AV1 to repeat - couldn't they at least be ready with a mediocre encoder for the format they've been designing?

Apart from that, I'm almost certain some commercial and research encoders were pretty good one year later already.

cont.
By decoding I mean the time in ms it takes IrfanView to load avif/flif.

aomediacodec.github.io/av1-avif/

THIS KILLS THE JPEG & WEBP GARBAGE

> People who need 16-bit per channel
This is already a feature of mass produced consumer cameras and it seems rather obvious to me that this will also make it into most smartphones.

Losing that color information is crappy; it's the same line of thinking that led us down the path of lossy-only image formats. Sucks ass. The main thing I want from a new image format is to not end up in that situation again.

> Maybe they can add more later.
I don't get why they fucked this up. HEIF and FLIF will do 16-bit colors.

> I'm encoding and decoding it way faster than flif right now
I guess that's certainly good.

> It takes less space (lossless) than flif right now.
Can't be lossless if it reduces 16-bit color information to 12-bit, unless you only work with lower bit depths.

> With some samples it even beats q99 lossy jpgs
Would be surprising if it wasn't; every modern format can do this for some samples.

What tools to use to create AVIF images, which accept RGB input?

For those interested, here's win binaries to encode avif:
github.com/Kagami/go-avif
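Quick usage sketch for go-avif - the -e/-o flags are what I remember from the project README, so double-check against the repo; filenames are placeholders:

```shell
# encode a PNG/JPEG source into an .avif
avif -e input.png -o output.avif
```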

>yet another fucking image encoding format
Alright, time for the "dumbass check"
does it have different formats for default lossless and strictly lossy that mimic the situation PNG/JPG are in currently?

colorist also has support
joedrago.github.io/colorist/

if you have an up-to-date Windows 10 and install the AV1 plugin from the app store, you can open avif files in Paint btw

No, it's all in one just like flif

>you can open avif files in paint btw
I just did. Insane.

Also if you change the extension to .heic it will show a thumbnail in Explorer.

Are you talking about RAW when you say 16-bit? As that's not even a picture in the traditional sense.

and stamped
You can go now

Attached: .png (479x180, 28K)

FLIF is inherently lossless though. Lossy FLIF is more akin to lossy optimized PNG.

>yet another fucking image encoding format
Part of the process of making a video codec is making an image codec, for intra-frame compression.
So considering they've already made it, they may as well allow it to be used separately.

Why would you split it though? It's not the same situation as with WebP, where they combined two completely different compression algorithms into one format.

>I don't. Never tried it. Was it also crappy?
It was just slow. Then x265 took over and started improving over the reference, took until 2015 for me to start regularly encoding with it but it was still fairly "cutting edge".

>Apart from that, I'm almost certain some commercial and research encoders were pretty good one year later already.
AOMENC (the reference encoder) has improved greatly over the year. I can probably crank out 1fps @1080p on my FX-6300 by having 3 encodes going at once. Still needs improving.

>AV1 around the corner
>4chen still using vp8

Attached: 1557309720742.jpg (607x607, 28K)

xvid

>This is already a feature of mass produced consumer cameras
My fucking sides.
Modern cameras produce like 8-bit gamma-encoded images and 14-bit linear images at most.

this pops up every thread about image encoding
default lossless + strict lossy is a perfect dynamic between png and jpg. You can know that the raster of a png will be conserved no matter what, and you know that jpg will always be lossy no matter what
you get an [extension of your format] and you can't be sure if the raster is lossless or lossy at a glance; you need to fucking dig into the metadata to know for sure.
It may seem senseless to think that this would matter, but it does. Immensely.
Not one fucking standard for images in the past decade addressed this, and not one fucking standard for images in the past decade has seen widespread use.

What I want to say is that you do not need anything more than 8 bits and gamma encoding to store a full-sized image from any existing camera cheaper than $5000.

>2019
>Xvid capper groups still release regularly
They are pretty much the Amish of the internet

>This is already a feature of mass produced consumer cameras
The only cameras I can think of that have native 16-bit are Hasselblad. Hardly consumer.

Why is this segregation so important for image but not video formats?
>Not one fucking standard for images in the past decade addressed this, and not one fucking standard for images in the past decade has seen widespread use.
What about FLIF?

Yeah I mostly work with Canon and I think they are completely 14-bit.

>bitch never heard of lossy PNG compression
Extension never says anything about whether encoding was lossy or not. Having a lossless encoder means nothing if data is lost beforehand.

>extension won't tell me if that png is compressed at 0 or 9 either
>extension won't tell me the colorspace used

extensions can go fuck themselves for all i care

So
>Intel Tiger Lake for mobile 2020
>nVidia next gen 2020
What about AMD? Zen 2 most probably not. Zen 2+? Zen 3?

A lossy encoder isn't the same as lossy encoding. You can edit and save a pngquant'd PNG without losing any extra quality.
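To illustrate: the quantization happens exactly once, and everything after that is ordinary lossless PNG round-tripping. A sketch using pngquant's CLI (filenames are placeholders):

```shell
# the lossy step happens exactly once, here:
pngquant --quality=65-80 --output image.quant.png image.png

# any later edit/re-save of image.quant.png with a PNG encoder is lossless
```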

engadget.com/2018/03/28/google-apple-intel-av1-netflix-amazon/

Attached: 1556669781846.png (854x8000, 3.47M)

>Why is this segregation so important for image but not video formats?
video formats are routinely reencoded in lower resolutions, losslessness isn't as important to the media itself because NORMIES REEEEEEEEEEE
it can be summarized as "raster equivalency dichotomy". PNG will preserve raster (unless you're an asshole) and JPG will not.
The format tells you if the raster from source was (default) preserved or not.
compressing a PNG doesn't change raster content either

>tfw lossless video is literally nonexistent
why live

It exists, it's just awkward and either horribly inefficient or takes aeons to encode. I'm not really sure why, given it doesn't seem to be that difficult a problem and it would be really handy for editing.

AVC is the standard. HEVC is a meme until AV1 becomes usable.

>video formats are routinely reencoded in lower resolutions, losslessness isn't as important to the media itself because NORMIES
That doesn't make sense. Images are far more likely to be recompressed, and normalfags don't give a shit about quality anyway (be it audio, video or images).
I understand being against an all-in-one format when it comes to WebP. They crammed a special lossless mode in there that has nothing to do with lossy WebP. Lossless AV1, on the other hand, is just an extension of the normal AV1 compression. Not offering lossless encoding would require them to go out of their way and force the usage of transforms and quantization under all circumstances.
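For reference, libaom's standalone encoder exposes that as a single flag. A sketch assuming aomenc is built and the input is y4m (names are placeholders):

```shell
# lossless AV1 with the reference encoder; skips quantization entirely
aomenc --lossless=1 -o lossless.ivf input.y4m
```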

>would require them to go out of their way
because their way is "all in one" formats that nobody fucking uses.
It's always the same arguments, always the same questions, always the same answers
DON'T fucking make all-in-one formats, if you want to replace JPG, make a better JPG. Nobody using JPG wants to use PNG.
Conversely, if you want to replace PNG, make a better PNG. Nobody using PNG wants to use JPG.

or, maybe, we could finally switch to lossless reality when it comes to images

never ever, sadly

All-in-one formats are common among new video formats, and since AVIF is based on AV1's intra-frame compression, it's simply an inherited trait.
>because their way is "all in one" formats that nobody fucking uses.
Everybody uses JPEG and the JPEG standard also specifies lossless compression (added in 1993, one year after the initial release). Yes, lossless JPEG is a rare sight, but the mere existence of its lossless capabilities didn't stop it from becoming popular. So why would AVIF fail because of it?
The real reason why new image formats don't get as big as JPEG and PNG is, because those two are simply good enough most of the time. It's difficult enough to replace a well established standard with a modern alternative, but when it comes to images pretty much nobody besides big corporations even cares about the benefits.

No idea why such a small and new project is part of my distro's repo, but it's nice nonetheless.

>It takes less space (lossless) than flif right now.
How low can you get OP's image?

You're right - I was off by 2 bits. It was 14-bit on the consumer end. 16-bit consumer sensors are only happening now.

sonyrumors.co/sony-a9s-imx310-sensor-has-16-bit-global-reset-shutter-on-chip-dual-gain-hdr-96db-dr/

Either way, 14-bit is more than 12-bit. And 16-bit colors will happen soon. It sucks that they didn't account for it.

>Decoding (dav1d) is starting to get good.
For 8-bit content, which is used by YouTube and anyone else serving AV1 streams, dav1d is pretty much as good as it can get. Hence both Firefox and Chromium shipping with it.

For 10-bit content, of which nobody has yet made any, it's far from ready.

>Either way, 14-bit is more than 12-bit. And 16-bit colors will happen soon. It sucks that they didn't account for it.
That's what new revisions of the spec are for. 14 bit could happen if it's deemed useful.

See for example HEVC Range Extension, defined in a revision of the spec, which include 12bit profiles.

is 0.3 in mpv yet? my windows builds haven't been updated since before it came out

Is there a broadcast network or satellite operator using it right now?
(that is where real money comes from)

DSLRs are 14-bit. Prosumer compacts are 12-bit, still better than your average smartphone camera.