Why don't we distribute videos in tiny files like pic related and use a decoder to slow them down to normal speed? Wouldn't that save a lot of space?

Attached: 1556528189194.webm (294x125, 2.87M)

Very low framerate

You have two options when speeding up a movie like this
a) drop frames to have a reasonable frame rate
b) just speed up the video without dropping frames
With a) you'd end up with a very low frame rate once "decompressed" and while frame interpolation is a thing, going from 1fps to 30fps will only give you unwatchable results.
With b) you won't save any filesize unless you butcher the quality to such a degree that it's unwatchable when slowed down. While the movie would be shorter in the sped-up version, the more frames you have in a second, the higher the bitrate per second has to be, so that each frame gets roughly the same amount of bits as before.
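The bitrate argument checks out with basic arithmetic. A quick sketch (all numbers are made up for illustration, not from any real encode):

```python
# Sketch: total size = bits_per_frame * total_frames, regardless of playback speed.
# The per-frame cost and durations below are illustrative assumptions.

bits_per_frame = 50_000          # assume each frame costs ~50 kbit at a given quality
duration_s = 90 * 60             # a 90-minute movie
normal_fps = 30

# Normal-speed file: 30 fps for 90 minutes
normal_frames = normal_fps * duration_s
normal_size = bits_per_frame * normal_frames

# "Sped up" 30x without dropping frames: same frame count,
# just played back in 3 minutes at 900 fps
sped_up_fps = normal_fps * 30
sped_up_duration = duration_s // 30
sped_up_frames = sped_up_fps * sped_up_duration
sped_up_size = bits_per_frame * sped_up_frames

assert normal_frames == sped_up_frames
assert normal_size == sped_up_size   # exactly zero space saved
```

Same frames, same bits per frame, same file size; only the timestamps change.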

What if we stored movie frames as vertex images instead of bitmap images? Would that save space?

vector*

this is troll science tier shit

>he doesn't know how videos work

/a/ once did something like this, but it was different: it was one big video containing simultaneous scenes. You could watch an entire episode that way.

What if we had some universal pseudorandom number array that could be used as decoder?
computing is cheap right?

if i slow that down to be about 1:20.00 long, i get about a frame every 3 seconds
the speed you play the video back at has no bearing at all on the filesize/quality, it's all about quantization (how shitty each frame is) and how many frames there are.
those aren't magic either, they save nothing over a sped-up, high-framerate video, if anything they would be slightly worse, as they add additional motion

What if we resize them to 294x125 and then run the video through a blockchain AI neural network plugin in mpv to upscale it to 4k? Wouldn't that save a lot of space?

AHAHAHAHAHHAHAAHAHHAHAHAHHAHAHAHAHHAHAHAH!!!

the absolute state of Jow Forums

we have the technology

anime also has a lot of still frames because lazy japs lol

>thread where the limits of technology are to be explored
>someone asks questions
>HAHA RETARD IMAGINE BEING SO DUMB
>this same person complains that the board is too consumerist

Educate or shut up, faggots.

this has to be bait

The vector frame is going to be super-large compared to how much it takes to store the frame with modern video codecs.

This is not exploring limits, this is just asking honestly dumb questions caused by knowing nothing about the topic.

That's not how video is stored currently. A very basic form of video compression would be: encode the entire first image, then for each frame after that, only encode the differences from the previous frame. That means scenes with little motion can be stored really cheaply.
Vectorizing might work for some forms of anime, but it wouldn't work with a live action movie, for obvious reasons.
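A toy version of the delta idea, for the curious (this is not a real codec; frames here are just flat lists of pixel values, whereas real codecs work on blocks and motion vectors):

```python
# Toy delta "codec": store the first frame whole, then for each later
# frame only the (index, value) pairs of pixels that changed.

def encode(frames):
    stream = [("key", list(frames[0]))]
    for prev, cur in zip(frames, frames[1:]):
        diffs = [(i, p) for i, (q, p) in enumerate(zip(prev, cur)) if q != p]
        stream.append(("delta", diffs))
    return stream

def decode(stream):
    frames = []
    for kind, payload in stream:
        if kind == "key":
            frames.append(list(payload))
        else:
            frame = list(frames[-1])       # start from the previous frame
            for i, p in payload:           # apply only the changes
                frame[i] = p
            frames.append(frame)
    return frames

frames = [
    [0, 0, 0, 0],
    [0, 0, 9, 0],   # one pixel changed: stored as a single (index, value) pair
    [0, 0, 9, 0],   # nothing changed: stored as an empty delta
]
assert decode(encode(frames)) == frames
```

Note the third frame costs almost nothing to store, which is exactly why low-action scenes compress so well.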

This would save space at the cost of video quality, and people are already doing that...

The position in this sequence, which you would use in place of your data, is going to be as large as or larger than the data you are trying to compress.
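It's a counting argument: to single out one of all possible n-bit payloads somewhere in the "universal" stream, the index itself must be able to take at least 2^n distinct values, i.e. it needs at least n bits. A minimal sketch:

```python
# Counting argument: an index that can select any of the 2**n distinct
# n-bit strings must itself be able to take 2**n values, so it carries
# at least n bits. The "pointer" is never smaller than the data.

n = 16
distinct_strings = 2 ** n
index_bits = (distinct_strings - 1).bit_length()  # bits to number them all
assert index_bits == n
```

So on average you've replaced n bits of data with an n-bit (or longer) offset, gaining nothing.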

How about those webms that are split up into multiple screens and each screen plays a section of the movie? I haven't seen one on /tv/ in a long time.

that's called low framerate, retard

>A very basic form of video compression would be: encode the entire first image, then for each frame after that, you only encode the differences with the previous image. Which means that scenes with low action can be stored really cheaply.
basically every video codec does that, from simple old pixel-based deltas (which even fucking gif can do), to highly complex motion estimation and compensation techniques (allowing things which only moved to be encoded as motion vectors), which any remotely modern codec uses to great effect
in typical contemporary cases, a video only contains whole frames every 2 seconds or so (called i-frames or keyframes), with all the rest being predictive frames (a delta frame which describes what has changed since the last frame)
you can gain a lot of extra compression by simply spacing out keyframes more, but this comes at a cost. player buffers need to be larger (if you also use additional reference frames), and seeking to random points becomes slower (as to seek to a predictive frame, you must first decode the last keyframe, and all predictive frames from it to where you're seeking to)
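That seek cost grows linearly with the keyframe spacing. A rough sketch (assuming a fixed keyframe interval and simple forward prediction only, no B-frames):

```python
# Sketch: how many frames must be decoded to display frame `target`,
# assuming keyframes every `interval` frames and forward-predicted
# delta frames only (no B-frames).

def frames_to_decode(target, interval):
    last_key = (target // interval) * interval   # nearest keyframe at or before target
    return target - last_key + 1                 # the keyframe plus every delta after it

fps = 30
# keyframe every 2 seconds: seeking decodes at most 60 frames
assert frames_to_decode(target=10_000, interval=2 * fps) <= 2 * fps
# keyframe every 60 seconds: worst-case seek decodes up to 1800 frames
assert frames_to_decode(target=10_000, interval=60 * fps) <= 60 * fps
```

Double the keyframe gap and you double the worst-case amount of decoding a player has to do for a random seek.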

-- also, another potential issue with wider keyframe gaps is error handling
ever watched a damaged file or stream and you get weird artifacts which tend to disappear suddenly a second later? when it's on the screen, the predictive frames keep acting on it, causing strange effects, and once the next keyframe arrives, the picture is fully refreshed
if you have keyframes spaced every minute, say, then that damage could stay on screen for up to a minute, instead of a couple seconds
and yes, this is how those "vlc" glitchy looking clips work, what you're seeing is predictive frames acting on incorrectly decoded images
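The glitch behaviour can be mimicked in a few lines (a self-contained toy, not real codec behaviour: one "pixel" per frame, keyframes every 5 frames, one delta corrupted mid-stream):

```python
# Sketch: a corrupted delta frame keeps polluting the picture until the
# next keyframe fully refreshes it from clean data.

def play(deltas, keyframe_interval, corrupt_at):
    value, shown = 0, []
    for i, d in enumerate(deltas):
        if i % keyframe_interval == 0:
            value = sum(deltas[: i + 1])             # keyframe: decoded cleanly
        else:
            value += 999 if i == corrupt_at else d   # corrupted delta applied
        shown.append(value)
    return shown

clean = play([1] * 10, keyframe_interval=5, corrupt_at=-1)
glitched = play([1] * 10, keyframe_interval=5, corrupt_at=2)

# frames 2..4 show garbage, then frame 5 (a keyframe) snaps back to normal
assert glitched[2] != clean[2] and glitched[4] != clean[4]
assert glitched[5:] == clean[5:]
```

The wider the keyframe gap, the longer the garbage stays on screen, which is exactly the trade-off described above.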

desu my eye can't follow anything above ~15fps

Guys. Guys. (tokes) Like yeah like what if we just... You know, write down a movie WTF and then like see it in our minds and shit. Whoa.

That's called a "book" you fucking zoomer. Go to a library. Do you know what a "library" is?