1. Create a lookup table containing all possible 2 byte sequences
2. Take some input file and construct another file consisting of a pointer to the lookup table from step 1 for each pair of bytes in the input
3. ???
4. You have just compressed the file by 50% regardless of its actual contents (could be anything: music, videos, photos, documents)
5. PROFIT!

Pro tip: you can keep repeating step 2 to achieve ridiculous compression ratios

Attached: 1562336376914.jpg (750x669, 80K)
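Rough Python sketch of what OP describes, assuming the "pointer" is just a 2-byte big-endian index into the table (all the names here are mine, not OP's):

[code]
import struct

# step 1: every possible 2-byte sequence (65536 entries, 128 KiB of table)
TABLE = [bytes([hi, lo]) for hi in range(256) for lo in range(256)]
INDEX = {seq: i for i, seq in enumerate(TABLE)}

def compress(data: bytes) -> bytes:
    if len(data) % 2:                     # pad odd-length input so it splits into pairs
        data += b"\x00"
    out = bytearray()
    for i in range(0, len(data), 2):
        idx = INDEX[data[i:i + 2]]        # which table entry this pair is
        out += struct.pack(">H", idx)     # the "pointer": 2 bytes, no smaller
    return bytes(out)

def decompress(blob: bytes) -> bytes:
    return b"".join(TABLE[struct.unpack(">H", blob[i:i + 2])[0]]
                    for i in range(0, len(blob), 2))

original = b"anything at all"
packed = compress(original)
print(len(original), "->", len(packed))   # 15 -> 16: zero savings, plus padding
assert decompress(packed).rstrip(b"\x00") == original
[/code]

The index of a pair is just the pair itself read as a 16-bit number, so the "compressed" file comes out byte-for-byte identical to the (padded) input.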

How many bytes does it take to point to a particular 2 byte sequence in the lookup table?

Good luck mapping all possible 2 byte sequences without using 2 bytes or more.

110 gang where you at

wtf i hate math now

wait a second
what if I shuffled my lookup table and then created my file? wouldn't this basically give me unbreakable encryption while ALSO reducing the space taken by half?

epic

Attached: newEraInComputing.png (2092x1780, 1.14M)

What the fuck
400TB to 970B? lmao

text compresses easily

I don't think you can compress the library of congress to 970B or a 7GB movie to 668B

yes you can

hmm, so a single mp3 file is 419 KiB on average?

OKAY
What if we compress by taking a file, MD5 hashing it and then giving the MD5 cracking software a numbered offset of where it should start so it cracks it really fast, "decompressing" the MD5? :)

>a numbered offset of where it should start
wouldn't that mean you also need the source file?

>I call it, the shortcut

Okay then, you don't give it an offset, you just let the computer crack the hash until the sun consumes Earth.
Infinite compression.

>Watches Silicon Valley once

>unbreakable
Not even close. That's a simple substitution cipher, vulnerable to frequency analysis, aka the most basic of crypto attacks. Essentially an even shittier version of ECB block ciphers and worse than Enigma. Congratulations.

>the old md5 compression meme
Nice bait.

I'm 135, what does this mean?

Of course the MD5 checksum isn't really needed here. You're essentially generating random numbers, hoping eventually to get the file you wanted. However, ALL possible files can be recovered this way, including files much better than the one you compressed.
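Spelled out as code (a toy sketch with made-up names, only feasible for absurdly small "files"): keep the digest plus the original length, then enumerate every possible file of that length until one hashes to it.

[code]
import hashlib
from itertools import product

def md5_decompress(digest_hex: str, length: int) -> bytes:
    # brute-force every possible file of the given length until one
    # hashes to the stored digest; fine for 2 bytes, hopeless for 7GB
    for candidate in product(range(256), repeat=length):
        data = bytes(candidate)
        if hashlib.md5(data).hexdigest() == digest_hex:
            return data
    raise ValueError("no preimage found")

# "compress" a 2-byte file down to a 16-byte digest (already a net loss),
# then spend up to 65536 hashes getting it back
stored = hashlib.md5(b"hi").hexdigest()
print(md5_decompress(stored, 2))   # b'hi'
[/code]

And once files are bigger than the 16-byte digest, multiple files share each digest on average, which is exactly the "all possible files can be recovered" problem.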

Imbeciles

Attached: 1537591804574.jpg (2928x922, 588K)

I think it's something along the lines of: high iq and low iq get along and get each other, 110 (slightly above average) is suspicious that he's not included but doesn't know what's going on, and 120 (above average) knows he's not included but also knows he isn't smart enough or dumb enough to participate. I think it's supposed to describe this board.

ty wise sage

Thank you user, but I'm not certain, that's just my best guess

>I think
clearly not hard enough

>replace all 2 byte sequences with a pointer of 4 or 8 bytes
Ridiculous compression ratios of -200%

Can you restore your whole body from your fingerprint?

Yes.

If it was a text file full of AAAAAA,... maybe

How does compression like zip work?

>tfw 144 iq

So anybody who collects your fingerprint can clone you? how so?

You can further improve performance of your "compression" by not performing the lookups but treating the pointers as the actual data.

Um sweetie that's not how it works

>not packing extra data into the lower 3 bits and upper 16 bits of your 64-bit pointers.
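That one's a real trick (tagged pointers), for what it's worth. Python has no raw pointers, so this sketch just does the bit twiddling on an integer address; the assumption is 8-byte-aligned allocations, which is what leaves the low 3 bits free.

[code]
TAG_MASK = 0b111   # low 3 bits are always zero on an 8-byte-aligned address

def tag(addr: int, bits: int) -> int:
    assert addr & TAG_MASK == 0 and bits <= TAG_MASK
    return addr | bits          # smuggle the extra data into the low bits

def untag(ptr: int) -> tuple[int, int]:
    return ptr & ~TAG_MASK, ptr & TAG_MASK

p = tag(0x7F00_0000_1238, 0b101)
print(hex(p), untag(p))
# the "upper 16 bits" come from x86-64 only using 48 bits of virtual
# address; either way you have to strip the extra bits before dereferencing
[/code]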

>1. Create a lookup table containing all possible 2 byte sequences

why couldn't you have just made a thread about peephole optimizers of the superoptimized variety, OP?

lmao i remember that thread

Where is each 2-byte sequence going to be stored though?

>real time (no latency)

Attached: 1366839773726.gif (400x224, 1.1M)

2 :^)

Obviously not pointers in memory but offsets into the table

>1. Create a lookup table containing all possible 2 byte sequences
>2. Take some input file and construct another file consisting of a pointer to the lookup table from step 1 for each pair of bytes in the input
Is it just me or would this result in an exact copy of the file rounded up to the nearest 2 bytes

You know what the best file compression system is? Deletion.

Delete those fucking files nobody cares about and boom, free space!

High IQ post.

I hope you are using 64-bit pointers LIKE A BOSS

But each pointer could be up to 2 bytes long... and then you also have to store the entire value table. This might work in some circumstances, but it's by no means useful compression. Maybe on text it would give a slight reduction.

But it's a simple enough program, why not just write it?

I'm no expert in compression, but their website and patents make it look like they actually do have some serious tech. However, I seriously doubt some of the "benchmarks" and numbers they present. I hope some user can shed some light on how legit this really is.

Nice bait, I'll answer anyway. Compression is usually done by recognizing (repeating) patterns in data and using them to save space. Simple example: instead of "00000000" we just remember 8 zeros. That's lossless compression (what 7zip and almost every other archiver uses). Media, on the other hand, often uses lossy compression: a human doesn't care about extreme detail, so it gets thrown away.

Since the proposed algorithm works on arbitrary data, we have to assume it's lossless, because lossy compression would destroy most files. Sure, with lossy compression their numbers are possible, but that would be the equivalent of cutting off body parts in an extreme weight loss program: technically correct, but also extremely retarded. So lossless compression it is.

Practically speaking, compression ratios vary quite a bit and depend heavily on the data being compressed. Text files often contain a lot of redundancy, but movies and music already use compressed file formats, so there isn't much left to squeeze out. Either way, reducing data by more than 99% is very unlikely. To put their numbers into context, the image also lists results for Zip and 7z, two decent lossless compressors. File sizes of 1.2 kB or less are just unrealistic in comparison. The only sane explanation is that it's some kind of static link to the data, like a shortcut, which is not really compression.
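The "remember 8 zeros" example is basically run-length encoding. A minimal sketch (not what 7zip actually does internally, just the simplest possible illustration of pattern-based lossless compression):

[code]
from itertools import groupby

def rle_encode(data: bytes) -> list[tuple[int, int]]:
    # (byte value, run length) pairs: the "just remember 8 zeros" idea
    return [(value, len(list(run))) for value, run in groupby(data)]

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([value]) * count for value, count in runs)

data = b"\x00" * 8 + b"ABAB"
print(rle_encode(data))   # [(0, 8), (65, 1), (66, 1), (65, 1), (66, 1)]
assert rle_decode(rle_encode(data)) == data
[/code]

Long runs shrink and alternating bytes blow up, which is the point above: how much you save depends entirely on the data.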

BASED
AS
FUCK

>1. Create a lookup table containing all possible 2 byte sequences
that's gonna be one big ass file, but you only need one of those so that's ok

>2. Take some input file and construct another file consisting of a pointer to the lookup table from step 1 for each pair of bytes in the input
>a pointer
you mean many, many pointers
so a pointer for each possible byte combination
you've come full circle and the first step is useless now

Make the pointers variable length depending on how often they appear in the file and you've got Huffman coding.

It's a pretty pleb-tier compression method used only in Programming 102 classes and JPEG images.
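Bare-bones sketch of byte-level Huffman coding, since it keeps coming up (my own toy version: it only produces the code table, doesn't serialize the tree, and skips the one-symbol edge case):

[code]
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict[int, str]:
    # frequent bytes get short bit strings, rare bytes get long ones
    heap = [[count, n, {sym: ""}]
            for n, (sym, count) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, merged])
        tie += 1
    return heap[0][2]

codes = huffman_codes(b"mississippi")
bits = "".join(codes[b] for b in b"mississippi")
print(len(bits), "bits instead of", 8 * len(b"mississippi"))   # 21 instead of 88
[/code]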

>There are 65536 unique 2 byte sequences
>It would require 65536 pointers
>Which means you could turn every 2 bytes into a pointer that's 2 bytes long
Holy shit user you're a genius.