Long videos crammed into a gif/webm

Does anybody have more like this?

I remember seeing something like this posted a while ago. It was a high-resolution gif with many small video tiles that moved left to right. If you kept your eye on one of these tiles you could watch the entire episode. The gif itself wasn't long either; it played smoothly thanks to the loop.

Attached: azumanga_e16.webm (640x360, 2.92M)

weebs OUT OUT OUT

Attached: 1510735357522.jpg (600x480, 87K)

Attached: 1142011023463.gif (129x78, 169K)

>green saber

absolutely disgusting

Attached: 1497553274907.png (777x441, 111K)

This is not the request board, this is Jow Forums. Fuck off.

ayyy lmao

Weebs are the most 'normie' subculture though

Attached: 1518464005475.jpg (897x445, 202K)

Neat.

Attached: bane.webm (1920x1080, 2.92M)

We called them ZoeTropes on /a/; the user even went and shared the script one time. Too bad I could never get it working on Windows 10, now that I've got 16 threads to throw at it.

Attached: 1533539809163.webm (1920x1080, 2.91M)

>the user even went and shared the script one time
Do you still have it? I can't find it in the archives.

that's pretty cool
don't have more (ask ), but it probably wouldn't be too hard to write a little shell script with ffmpeg to make 'em, something like the sketch after these steps:

>bake subs
>split video into equal-length chunks
>add an indicator for when it loops at the end of each
>tile videos into the final product
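A rough sketch of that pipeline (untested; episode.mkv, the 320x180 tile size and the 6x4 grid are placeholder choices for a 24-minute episode):
#!/bin/sh
# 1) bake the subs once and shrink to tile size
ffmpeg -i episode.mkv -vf "subtitles=episode.mkv,scale=320:180" -an subbed.mp4
# 2) cut into 24 one-minute chunks; a red square in the last second marks the loop point
for i in $(seq 0 23); do
  ffmpeg -ss $((i*60)) -t 60 -i subbed.mp4 \
    -vf "drawbox=w=12:h=12:color=red@0.8:t=fill:enable='gte(t,59)'" tile$i.mp4
done
# 3) stack the chunks 6 per row, 4 rows, and encode the mosaic as webm
ffmpeg $(for i in $(seq 0 23); do printf -- '-i tile%d.mp4 ' $i; done) -filter_complex \
"[0:v][1:v][2:v][3:v][4:v][5:v]hstack=inputs=6[r0];\
[6:v][7:v][8:v][9:v][10:v][11:v]hstack=inputs=6[r1];\
[12:v][13:v][14:v][15:v][16:v][17:v]hstack=inputs=6[r2];\
[18:v][19:v][20:v][21:v][22:v][23:v]hstack=inputs=6[r3];\
[r0][r1][r2][r3]vstack=inputs=4" -c:v libvpx-vp9 -crf 40 -b:v 0 zoetrope.webm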

meant to say ask but whatever

polite sage

That's video games.

boards.fireden.net/a/thread/133253121/#133267281
From 2015, and I got it working on my W7 laptop. Didn't get it working on W10, but I'm working at the level of a script kiddie, so you might be able to figure it out.

>not moetropes
missed opportunity

>madokami.com/t0jcfc.zip
>Server not found
Damn.

Attached: 1534216442090.jpg (337x290, 56K)

>shitting on Azumanga Daioh
You don't belong here, never post again.

So the secret is lost forever then

I'm not home right now but I still have my W7 laptop around. I'll check when I get home.
What's a good free file host now that all the pomf clones are gone?

Mixtape.moe?

Here. I think I had to install CCCP 'cause I can't into filters. I also remember that when I went to try this on my W10 desktop, I used the latest CCCP and Avisynth MT available, not the ones I used on my W7 laptop, so that might be why I failed.
>my.mixtape.moe/jyqdmx.zip
>Install Video filters or CCCP
>Install avisynth
>Replace avisynth.dll with the MT avisynth.dll
>Drag video (no tricky stuff in the filename) into Zoetrope.bat

>Replace avisynth.dll with the MT avisynth.dll
Mind explaining this step? I already had avisynth installed, so I just installed the CCCP in that folder, and when I drag a video file onto the .bat I just get a split-second command prompt, and nothing else happens.
I assume it's because of the dll, but I'm not sure where it's supposed to go.

That guy goes in System32 or SysWOW64. One of them should have avisynth.dll and you can replace that. Make a backup if you want.

>split video into pieces
>combine them in a mosaic
>convert the result to gif/webm
use ffmpeg (cli) or kdenlive (gui)

It was a request, yes, but I mostly wanted to start a discussion about how these are made.

sage

Some of the ffmpeg filters I'd guess are involved don't thread too well, so the performance will be sluggish. It's better to just use Avisynth, or even better Vapoursynth because of its cross-platform compatibility!

>kdenlive (gui)
You're not fooling me again, Jow Forums.

Why would you sage your own thread when you just said you wanted to start a discussion?

How does Vapoursynth compare to Avisynth and AvxSynth (Avisynth's Linux port)?

I got it working and I'll repack it after it finishes and I'm sure of the result. Don't use CCCP; sorry about that. I think Haali is interfering with the LAV splitter. If you can, try running the CCCP installer to remove Haali, or remove CCCP entirely and install the 32-bit LAV Filters. The rest should be the same.

If the readme seems condescending, know that I plan on sharing this with others, so it's not directed at anyone personally.
>my.mixtape.moe/xdioax.zip

Because I felt like bumping the thread just for this argument would be bad.

Attached: 1447106052749.webm (1920x1056, 2.92M)

Is this compression?

Lossy compression, yes.

I downloaded this movie years ago and I still haven't watched it.

well, now you have.

This is cool, never seen it before but I can imagine how it would be done in avisynth. You're just segmenting the video into clips, stacking the clips in a grid, and then scrolling the entire frame if you want to achieve that effect.
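For what it's worth, the scrolling part is doable in ffmpeg too: duplicate the finished grid side by side, then slide a one-grid-wide crop window across it with a time-based x expression so it wraps once per loop (a sketch; grid.mp4 and the 60-second loop length are placeholders):
# split feeds the same frames to hstack twice; crop's x is re-evaluated every frame
ffmpeg -i grid.mp4 -filter_complex \
"[0:v]split[a][b];[a][b]hstack,crop=w=iw/2:h=ih:x='mod(t/60*(iw/2),iw/2)':y=0" scrolled.mp4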

I have been curious about novel uses for Jow Forums-compatible webms. This kind of thing is the most interesting one I've come up with, though.

Attached: Dreamcast.webm (2048x1102, 2.38M)

I'm lazy, I haven't watched a single show in 8 months. help

Set Audio Renderer to MPC Internal and watch at 1.2x speed. Helps with my guilt over spending so much time on it.

Pretty easy with vstack and hstack on ffmpeg. I've done it.
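Something like this, going from memory (a rough sketch; a 2x2 grid out of a 4-minute in.mp4, names are placeholders):
# split the decoded stream four ways, trim each branch at an offset, then stack
ffmpeg -i in.mp4 -an -filter_complex \
"[0:v]split=4[a][b][c][d];\
[a]trim=0:60,setpts=PTS-STARTPTS,scale=480:270[q0];\
[b]trim=60:120,setpts=PTS-STARTPTS,scale=480:270[q1];\
[c]trim=120:180,setpts=PTS-STARTPTS,scale=480:270[q2];\
[d]trim=180:240,setpts=PTS-STARTPTS,scale=480:270[q3];\
[q0][q1]hstack[top];[q2][q3]hstack[bot];[top][bot]vstack" grid.webm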

You got it working too?

Attached: 1467194627646.png (691x724, 732K)

Vapoursynth's performance is generally slower, but there are a few features that make it more attractive than Avisynth to me, namely being able to run it standalone and not having to tinker with threading. Aside from AVS+, Avisynth has to be installed in system directories, which makes it impossible to run several versions of the software side by side. Vapoursynth can be had standalone, which makes it easy to share and to experiment with different versions. Vapoursynth also threads by itself, so there's no need to configure threading yourself with a custom build, unlike with Avisynth.
On the other hand, Avisynth has way more filters to toy with. AvsPmod works better and has more features than VSEdit (the GUI programs for writing and previewing scripts for each, respectively). Avisynth also supports RGBA, which lets you export transparent video, say to render subtitles and overlay them with some other program; Vapoursynth doesn't have that, as alpha is only used as masking for specific functions. Avisynth supports audio passthrough too, letting you reuse the input audio on output; Vapoursynth can only deal with video.

I would suggest investing time in both. For Avisynth, pick the AVS+ builds; you can run them standalone and minimize the threading horrors. If you're going to share your scripts, then maybe write them in Vapoursynth. For personal use Avisynth is just as attractive.

Interesting. I use Avisynth for little personal projects from time to time, but I just got a new computer for the first time in 8 years and was considering not installing it in favor of something newer. But if Vapoursynth still isn't all there, I guess Avisynth is still the most capable tool despite its age.

Vapoursynth is getting the new flashy stuff though, simply because it doesn't have the big-ass legacy baggage around it and runs on Python. Python is probably also more attractive than having to learn Avisynth's custom language.
Additionally, Vapoursynth supports high-bit-depth formats, compared to Avisynth's crummy 8 bits. AVS+ seems to have partially solved that and ported some of the plugins, but lots will remain incompatible.
avisynth.nl/index.php/High_bit-depth_Support_with_Avisynth#AviSynth.2B_Native_Deep_Color_Support

Could you do that with the audio?
>Splice into 4 second parts
>Overlap all of them

Yes, but nobody would be able to tell what's going on because you don't have as much spatial resolution with your ears as you do with your eyes.

Seems like fun. How do?

Split the audio into a bunch of parts and then overlap them, just like you said.
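E.g. with ffmpeg, squashing a 4-minute track into one overlapped minute (a sketch; names are placeholders):
# four branches, each trimmed to a different minute, mixed on top of each other
ffmpeg -i in.wav -filter_complex \
"[0:a]asplit=4[a][b][c][d];\
[a]atrim=0:60,asetpts=PTS-STARTPTS[p0];\
[b]atrim=60:120,asetpts=PTS-STARTPTS[p1];\
[c]atrim=120:180,asetpts=PTS-STARTPTS[p2];\
[d]atrim=180:240,asetpts=PTS-STARTPTS[p3];\
[p0][p1][p2][p3]amix=inputs=4" overlapped.wav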

You could do it in audacity.

Could you possibly use multiple audio tracks in the video? Dunno how you'd iterate through them, though.

Attached: 1534097024506.jpg (434x430, 40K)

That doesn't save anything, it's just the same amount of audio in that case. You're not doing the analog of shrinking down the frame and putting dozens of frames side by side. The closest thing for audio would be mixing dozens of tracks together (and the result would be mostly incomprehensible). Your ear can't pick out one at a time the way your eye can focus on different parts of the image.

Thinking on it, what you'd do is speed the audio up a huge amount and then slow it way back down on playback. The problem is that you won't have much audio bandwidth to work with on the output. The image in the OP has 24 cells. If you speed the audio up 12x in order to fit it into a 2-minute webm, you only have about a 4kHz effective sample rate on the output (assuming it was encoded at 48kHz, a bit less for 44.1kHz), i.e. roughly 2kHz of real bandwidth, so it will sound terrible.
Then you are doing something analogous to the tiled video: you are throwing away some information.
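The encode side of that is just resample abuse in ffmpeg (a sketch; 576000 = 48000*12, episode.mkv is a placeholder):
# speed the audio up 12x by relabeling the sample rate, then resample so the encoder gets 48kHz
ffmpeg -i episode.mkv -vn -af "asetrate=576000,aresample=48000" fast.opus
# undo it on playback: everything that was above ~2kHz in the original is gone for good
ffmpeg -i fast.opus -af "asetrate=4000,aresample=48000" slow.wav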

Yeah but I meant drag and drop

Whoever invented this was a genius! This is fucking black magic

Attached: 1zP7tHO.gif (700x600, 1.69M)

Kek

This is so fucking pleasing to watch

Attached: thisismyjam.png (197x245, 32K)

>That doesn't save anything, it's just the same amount of audio in that case. You're not doing the analog of shrinking down the frame and putting dozens of frames side by side. The closest thing for audio would be mixing dozens of tracks together (and the result would be mostly incomprehensible). Your ear can't pick out one at a time the way your eye can focus on different parts of the image.
Not exactly. The video version separates each stream spatially. It's not simply layering every clip on top of the others, as you suggest doing with the audio. The closer analog in my opinion would be to multiplex by giving each clip some bandwidth separate from the others.
Try band-limiting to 20Hz-10kHz (or something along those lines) and splitting the clip in two. Then raise the pitch of the second half 2x (at the same speed) and combine both. You should end up with somewhat more comprehensible streams, one at normal frequency and one at high frequency.
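In ffmpeg that would look something like this (a sketch; asetrate followed by atempo is the usual pitch-up-at-the-same-speed trick, first.wav/second.wav are placeholder 44.1kHz halves):
# band-limit both halves, shift the second one up an octave, then mix them together
ffmpeg -i first.wav -af "lowpass=f=10000" lo.wav
ffmpeg -i second.wav -af "lowpass=f=10000,asetrate=88200,aresample=44100,atempo=0.5" hi.wav
ffmpeg -i lo.wav -i hi.wav -filter_complex "amix=inputs=2" muxed.wav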

Is this doable on a systemd/linux distribution?

>The video version separates each stream spatially.
But it does so by subdividing the original frame, so each video clip loses the vast majority of its resolution.
>The closer analog in my opinion would be to multiplex by giving each clip some bandwidth separate from the others.
>Try band-limiting to 20Hz-10kHz (or something along those lines) and splitting the clip in two. Then raise the pitch of the second half 2x (at the same speed) and combine both. You should end up with somewhat more comprehensible streams, one at normal frequency and one at high frequency.
I see. So you are throwing away the same information as in my proposal, i.e. limiting each audio clip to a small bandwidth (about 2kHz per track to fit a 24-minute episode into a 2-minute webm, since 12 tracks would have to share the ~24kHz that a 48kHz encode gives you). The ear still couldn't pick out 12 individual voices occupying 12 different "channels" (in the radio sense, not the polyphonic audio sense), so you'd need a low-pass filter and a divider (analogous to a radio tuner) to make each clip comprehensible again.
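The "tuner" for the pitch-shift scheme above would be roughly (a sketch; muxed.wav as built a few posts up, 44.1kHz assumed):
# isolate the shifted band, then pitch it back down an octave at the same speed
ffmpeg -i muxed.wav -af "highpass=f=10000,asetrate=22050,aresample=44100,atempo=2.0" recovered.wav
# only what landed above 10kHz survives the highpass, so recovery is lossy either way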

Spreading the clips out temporally rather than in frequency space isn't as similar to spreading out video clips in the XY plane, but it's easier to reconstruct the playback.

Is this Perfect Blue?