1. Create a lookup table containing all possible 2 byte sequences
2. Take some input file and construct another file consisting of a pointer to the lookup table from step 1 for each pair of bytes in the input
3. ???
4. You have just compressed the file by 50% regardless of its actual contents (could be anything: music, videos, photos, documents)
5. PROFIT!
Pro tip: you can keep repeating step 2 to achieve ridiculous compression ratios
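The scheme in the OP can be sketched in a few lines (a toy illustration, assuming 2-byte table indices, not anyone's actual product) to show why step 4 never happens:

```python
# Step 1: lookup table of every possible 2-byte sequence (all 65536 of them).
table = [bytes([hi, lo]) for hi in range(256) for lo in range(256)]

def compress(data: bytes) -> bytes:
    # Step 2: replace each pair of bytes with its index into the table.
    out = bytearray()
    for i in range(0, len(data), 2):
        pair = data[i:i + 2].ljust(2, b"\x00")  # pad a trailing odd byte
        index = table.index(pair)               # the index IS the pair's 16-bit value
        out += index.to_bytes(2, "big")         # ...and storing it takes 2 bytes again
    return bytes(out)

data = b"music, videos, photos, documents"
print(len(data), len(compress(data)))  # same size: 0% compression, not 50%
```

The "compressed" file is byte-for-byte identical to the input, because the smallest possible index into a 65536-entry table is itself 2 bytes.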
How many bytes does it take to point to a particular 2 byte sequence in the lookup table?
Aiden Reyes
Good luck mapping all possible 2 byte sequences without using 2 bytes or more.
Matthew Turner
110 gang where you at
Wyatt Gray
wtf i hate math now
Carter Taylor
wait a second what if I shuffled my lookup table and then created my file? wouldn't this basically give me unbreakable encryption while ALSO reducing the space taken by half?
I don't think you can compress the Library of Congress to 970B or a 7GB movie to 668B
Aiden Richardson
yes you can
Nathaniel Bell
hmm, so a single mp3 file is 419 KiB on average?
Jose Robinson
OKAY What if we compress by taking a file, MD5 hashing it and then giving the MD5 cracking software a numbered offset of where it should start so it cracks it really fast, "decompressing" the MD5? :)
Lucas Ortiz
>a numbered offset of where it should start
wouldn't that mean you also need the source file?
Isaiah Morales
>I call it, the shortcut
Dominic Rivera
Okay then, you don't give it an offset, you just let the computer crack the hash until the sun consumes Earth. Infinite compression.
Luis Young
>Watches silicon valley once
Ryder Reyes
>unbreakable
Not even close. That's a simple substitution cipher, vulnerable to frequency analysis, aka the most basic of crypto attacks. Essentially an even shittier version of ECB block ciphers and worse than Enigma. Congratulations.
Connor Watson
>the old md5 compression meme
Nice bait.
Anthony Anderson
I'm 135, what does this mean?
Dominic Davis
Of course the MD5 checksum isn't really needed here. You're essentially generating random numbers, hoping eventually to get the file you wanted. However, ALL possible files can be recovered this way, including files much better than the one you compressed.
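The brute-force "decompression" described above can be sketched as a toy (only feasible for tiny files; for anything real, see the sun-consumes-Earth estimate). Note the first match may not even be the file you started with:

```python
import hashlib
from itertools import product

def md5_decompress(digest: bytes, length: int) -> bytes:
    # Enumerate every possible file of the given length until one hashes
    # to the stored digest. This is just random search with extra steps;
    # the first hit is whatever file happens to match, not necessarily yours.
    for candidate in product(range(256), repeat=length):
        data = bytes(candidate)
        if hashlib.md5(data).digest() == digest:
            return data
    raise ValueError("no preimage of that length")

original = b"hi"  # 2 bytes: only 65536 candidates, so this one actually finishes
compressed = hashlib.md5(original).digest()  # "compressed" to a fixed 16 bytes
print(md5_decompress(compressed, len(original)))
```

For a 16-byte digest and anything longer than 16 bytes of data, this "compression" also expands the file.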
I think it's something along the lines of: high iq and low iq get along and get each other, 110 (slightly above average) is suspicious that he's not included but doesn't know what's going on, and 120 (above average) knows he's not included but also knows he isn't smart enough or dumb enough to participate. I think it's supposed to describe this board.
Jason Wilson
ty wise sage
William Wood
Thank you user, but I'm not certain, that's just my best guess
Julian Scott
>I think
clearly not hard enough
Grayson Johnson
>replace all 2 byte sequences with a pointer of 4 or 8 bytes
Ridiculous compression ratios of -200%
Jacob Cox
Can you restore your whole body from your fingerprint?
Jaxon Bell
Yes.
Jayden Hughes
If it was a text file full of AAAAAA,... maybe
John Lopez
How do compressions like zip work
Christian Garcia
>tfw 144 iq
Joseph Powell
So anybody who collects your fingerprint can clone you? How so?
Wyatt Reed
You can further improve performance of your "compression" by not performing the lookups but treating the pointers as the actual data.
Nathan Rivera
Um sweetie that's not how it works
Carter Harris
>not packing extra data into the lower 3 bits and upper 16 bits of your 64-bit pointers.
>1. Create a lookup table containing all possible 2 byte sequences
why couldn't you have just made a thread about peephole optimizers of the superoptimized variety, OP?
Jacob Martinez
lmao i remember that thread
Grayson Nelson
Where are each 2 bytes going to be stored though?
Obviously not pointers in memory but offsets into the table
Kayden Moore
>1. Create a lookup table containing all possible 2 byte sequences
>2. Take some input file and construct another file consisting of a pointer to the lookup table from step 1 for each pair of bytes in the input
Is it just me or would this result in an exact copy of the file rounded up to the nearest 2 bytes
Cameron Ward
You know what the best file compression system is? Deletion.
Delete those fucking files nobody cares about and boom, free space!
Grayson Gutierrez
High IQ post.
Andrew Carter
I hope you are using 64-bit pointers LIKE A BOSS
Jayden Nelson
But each pointer could be up to 2 bytes long... and then you will have to also store the entire value table. This might work in some circumstances but by no means acts as useful compression. Maybe on text it will result in a slight reduction.
But it's a simple enough program, why not just write it?
Evan Ross
I'm no expert in compression but their website and patents make it look like they actually do have some serious tech. However, I seriously doubt some of the "benchmarks" and numbers they present. I hope some user can shed some light on how legit this really is.
Owen Gutierrez
Nice bait, I'll answer anyway.

Compression is usually done by recognizing (repeating) patterns in data and using these to save space. Simple example: instead of "00000000" we could just remember 8 zeros. This is lossless compression (what 7zip and almost any other program is using). Media, on the other hand, often uses lossy compression: a human doesn't care for extreme details, so they are ignored.

Now, since the proposed algorithm works on arbitrary data, we assume it's a lossless compression, since lossy compression would destroy most files. Of course with lossy compression their numbers are possible, but that would be the equivalent of cutting off body parts in an extreme weight loss program: in theory correct, but also extremely retarded. So lossless compression it is.

Practically speaking, compression rates vary quite a bit; they depend heavily on the data being compressed. Text files often have a lot of redundant data, but movies and music already use compressed file formats, so there's not a lot to do. That said, reducing your data by more than 99% is very unlikely.

Now to put their numbers into context: the image also provides data for Zip and 7z, which are two decent lossless compression variants. File sizes of less than or equal to 1.2kb are just unrealistic in comparison. The only sane explanation is that it's some kind of static link to the data, like a shortcut, which is not really compression.
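The "8 zeros" example above is run-length encoding, the simplest lossless scheme; a minimal sketch:

```python
from itertools import groupby

def rle_compress(text: str) -> list[tuple[str, int]]:
    # Run-length encoding: remember each character once plus how often it repeats.
    return [(ch, len(list(run))) for ch, run in groupby(text)]

def rle_decompress(runs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in runs)

print(rle_compress("00000000"))  # [('0', 8)] -- "remember 8 zeros"
print(rle_compress("abcdefgh"))  # no repeats: the "compressed" form is bigger
```

Which also shows why compression depends on the data: input with no repeating patterns gets larger, not smaller.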
Ian Johnson
BASED AS FUCK
Luis Torres
>1. Create a lookup table containing all possible 2 byte sequences
that's gonna be one big ass file, but you only need one of those so that's ok
>2. Take some input file and construct another file consisting of a pointer to the lookup table from step 1 for each pair of bytes in the input
>a pointer
you mean many, many pointers, so a pointer for each possible byte combination. you've come full circle and the first step is useless now
Colton Smith
Make the pointers variable length depending on the frequency of their appearance in the file and you got Huffman Coding.
It's a pretty pleb-tier compression method used only in Programming 102 classes and JPEG images.
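Huffman coding as described — shorter codes for more frequent symbols — can be sketched in a few lines (a toy using Python's heapq, building the prefix codes bottom-up; real codecs pack the bits, this just shows the code assignment):

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict[str, str]:
    # Repeatedly merge the two least frequent nodes, so frequent symbols
    # end up near the root of the tree and get the shortest bit strings.
    # Each heap entry: [total frequency, tiebreaker, {symbol: code-so-far}].
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for sym in lo[2]:
            lo[2][sym] = "0" + lo[2][sym]  # left branch
        for sym in hi[2]:
            hi[2][sym] = "1" + hi[2][sym]  # right branch
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], {**lo[2], **hi[2]}])
    return heap[0][2]

codes = huffman_codes("aaaabbc")
print(codes)  # 'a' (most frequent) gets the shortest code
```

This is the "variable length pointers depending on frequency" idea in its standard form.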
Luke Cooper
>There are 65536 unique 2 byte sequences
>It would require 65536 pointers
>Which means you could turn every 2 bytes into a pointer that's 2 bytes long
Holy shit user you're a genius.
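The arithmetic above in two lines, for anyone still holding out hope:

```python
# 2 bytes = 16 bits, so there are 2**16 = 65536 possible 2-byte sequences,
# and an index that can address all 65536 table entries needs 16 bits,
# i.e. exactly 2 bytes. The scheme breaks even at best, by construction.
sequences = 2 ** (8 * 2)
index_bytes = (sequences - 1).bit_length() / 8
print(sequences, index_bytes)  # 65536 2.0
```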