Does Jow Forums like my idea for an image encoder?

Is it even original?

Attached: Untitled.jpg (3000x1600, 1.94M)

I don't quite understand from just the picture alone, could you elaborate?

What that's likely doing is this: the resize destroys the high-frequency components, making the image easier to compress, and the neural net interpolates some of them back.

If you want an image at the same resolution (i.e., you're not saving the downsized one and interpolating back up), it would be better to simply apply a filter to the full-sized one.
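
For instance (a rough sketch with Pillow; the file name and the blur radius are arbitrary picks, not tuned values):

# Sketch of the "just filter it" alternative: low-pass the full-size image
# instead of resizing it, then let JPEG compress the smoothed result.
from PIL import Image, ImageFilter

img = Image.open("input.png").convert("RGB")
smoothed = img.filter(ImageFilter.GaussianBlur(radius=1.5))  # kills high frequencies in place
smoothed.save("filtered.jpg", "JPEG", quality=95)            # compresses far better than the original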

Just downsampling, and then blowing it up with a neural network made for photo upscaling.
You could get an even smaller file size by downsampling, saving as a lossy format and letting the neural network remove the compression artifacts.

By that I mean that the encoder just downsamples it and the decoder blows it up again on the user's side of course.
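
As a rough sketch of both halves (Pillow; Lanczos stands in for the neural upscaler here, which would really be a trained super-resolution model, and the file names are made up):

# encode: downsample and save a small lossy payload.
# decode: blow it back up on the user's side.
from PIL import Image

def encode(path, out_path, factor=2, quality=95):
    img = Image.open(path).convert("RGB")
    small = img.resize((img.width // factor, img.height // factor),
                       Image.LANCZOS)              # throws away high frequencies
    small.save(out_path, "JPEG", quality=quality)  # the payload the user downloads

def decode(path, out_path, factor=2):
    small = Image.open(path)
    big = small.resize((small.width * factor, small.height * factor),
                       Image.LANCZOS)              # stand-in for the NN upscale
    big.save(out_path, "PNG")

encode("input.png", "payload.jpg")
decode("payload.jpg", "restored.png")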

I would rather download a few kilobytes more than have my hardware go on full load for a few seconds any time a new picture arrives.
The quality would be better too.

You aren't encoding, you're basically doing a form of VERY lossy compression where you hope that a neural network can magically fix it. You *will* lose data, and you will *not* get back the original image. Maybe something resembling it, sure, but not the original.

Neural network upscaling is trying to find data that makes sense as filler for missing information. You aren't getting information back, you're faking something somewhat sensible.
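
You can even put a number on the loss, e.g. PSNR against the original (NumPy/Pillow sketch; file names assumed from the round trip above):

# PSNR between the original and the round-tripped image.
# A truly lossless codec would give infinite PSNR; this pipeline won't.
import numpy as np
from PIL import Image

orig = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float64)
rest = np.asarray(Image.open("restored.png").convert("RGB"), dtype=np.float64)

mse = np.mean((orig - rest) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)   # higher is better
print(f"PSNR: {psnr:.2f} dB")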

It is an interesting idea for sure, but I do think that any size gains you get would be massively offset by the time you would need to upscale it. I might be wrong though?

>Is it even original?
Yes
But I don't think it would be that useful. You sacrifice computing power to gain some storage space. It could work on a server that just stores images, but it wouldn't be that popular on home computers.
That's my opinion; I may be wrong.

it's shit
because you can gain much more by just increasing the transform size and using better filtering, which is exactly what H.265/HEVC does

and how large is this neural network supposed to be? 50 terabytes?

Fuck off, faggot. Don't compress my shit, asshole

You mean the model? It's like 1 GB. I just had this idea half an hour ago while taking a piss, so don't bully me too much for it.

The right pic is actually what the upscaled image looks like. There is a massive loss of detail if you look at it, but it'd be much less noticeable with only 50% downsampling.

To be honest, I don't think anyone on Jow Forums gives a pearl-handled, silver-plated shit what you do.

For a side-by-side comparison, here's the original PNG. (1.84 MB)

Attached: Cheetah3.png (1024x768, 1.85M)

Quality 95 JPEG. (271 KB)

Attached: Cheetah3.jpg (1024x768, 271K)

his ear is missing

still missing

Original pic downsampled 50%, saved as a quality-95 JPEG, and then blown up 2x. (This would be 73.8 KB.)

Attached: Cheetah3neural.png (1024x768, 1.32M)
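
If anyone wants to reproduce the numbers (sketch; Lanczos again stands in for the NN upscale, and exact byte counts depend on the JPEG encoder you use):

# Compare payload sizes: full-res quality-95 JPEG vs. the half-res payload,
# then blow the half-res one back up 2x.
import os
from PIL import Image

img = Image.open("Cheetah3.png").convert("RGB")
img.save("full.jpg", "JPEG", quality=95)
img.resize((img.width // 2, img.height // 2), Image.LANCZOS) \
   .save("half.jpg", "JPEG", quality=95)

for name in ("Cheetah3.png", "full.jpg", "half.jpg"):
    print(name, round(os.path.getsize(name) / 1024, 1), "KB")

Image.open("half.jpg").resize(img.size, Image.LANCZOS).save("blownup.png")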

The text in the corner is completely unreadable.
Other than that it looks ok I guess

nice but what happened to his other ear

Poachers got it.

Nope. This shit has come up a dozen times; it's not even worth it. You might as well apply a pre-designated filter to the finished product instead, same quality output.
Not to mention, the decoder and encoder are even more bloated.

There's also that weird chromatic stuff happening on color edges that is the hallmark of neural nets.
I'm curious what it would do to a screenshot of this thread though.

You store all your media uncompressed?

Attached: neat.png (1168x782, 861K)

>Does Jow Forums like my idea for an image encoder?
It's retarded: you are purposely choosing the dumbest possible algorithm to compress an image and hoping that machine learning, which might deliver terrible results, will save your ass.

The point of JPEG is that it compresses "smartly," removing data that is hard for a human to make out in the first place.

Yeah but what about muh machine learning.

Attached: 3463457568568.jpg (638x629, 44K)

Nah, your logo looks like shit.

If you want machine learning, build a neural net which extrapolates from a highly compressed JPEG to a better image.
No clue about the results, but it would be pretty interesting.
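
Something in this shape, maybe (PyTorch sketch; untrained, and the architecture and names are just guesses):

# Minimal artifact-removal net: it predicts a residual correction
# on top of the decoded JPEG rather than regenerating the whole image.
import torch
import torch.nn as nn

class DeJPEG(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # input + predicted correction

net = DeJPEG()
tile = torch.rand(1, 3, 64, 64)   # stand-in for a decoded JPEG tile
print(net(tile).shape)            # torch.Size([1, 3, 64, 64])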

Reminds me somewhat of modern video coding formats. They manage to look decent even with low bitrates, but the way they mitigate more annoying compression artifacts alters the actual image and they aren't that useful for high quality encoding.

Look into neural network decoders.
clgiles.ist.psu.edu/pubs/DCC2019.pdf

>"smartly" removing data
it's just removing the higher-frequency components, which people are less sensitive to
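
You can see that in miniature by zeroing the high-frequency DCT coefficients of one 8x8 block (NumPy/SciPy sketch, not actual JPEG quantization):

# DCT an 8x8 block, keep only the low-frequency corner, invert.
# Natural-image blocks are smooth, so most energy sits in that corner.
import numpy as np
from scipy.fft import dctn, idctn

x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 100 + 10 * x + 5 * y              # smooth stand-in for an image block

coeffs = dctn(block, norm="ortho")
mask = np.zeros((8, 8))
mask[:4, :4] = 1                          # keep 16 of 64 coefficients
approx = idctn(coeffs * mask, norm="ortho")

print(np.abs(block - approx).max())       # small error, because the block is smooth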

Yes, that's what I meant by "smart": if you want to delete information from an image, it should be the less relevant information, so the JPEG approach is definitely "smarter" than the trivial approach of downscaling.

>Is it even original
see bellard.org/nncp/

>Is it even original?
I'm quite sure Nvidia has done something like this.

wh0a that's actually pretty cool. yeah, i'm all for it. I approve.

Exactly, it's lossy compression. It isn't an encoder.

I'm like 99% sure that's how google photos works.

Hey I like it. No more 4 MB pictures.

The ear was lost due to "lossy" compression that suffers from rotational velocidencity.