Just compressed 75GB to 28GB. Feels good man

Attached: Screenshot_20180420-000544.jpg (1091x111, 42K)

Yes, nothing weird about "modern" compression.
Well, compression from the last two decades, anyway.

Pirates used to compress 5GB-6GB DVD games onto a 700MB CD without even ripping anything.

I invented an algorithm that can compress 2GiB to 8 bytes. It takes a huge amount of time to actually compress something that size, so if NAND flash prices are still being ground down once we reach the CPU singularity, I'll have a solution. The second flaw is that the set of 2GiB blocks it can compress into a single 8-byte unit is rather small, but compressing to a few hundred bytes instead would still be very impressive.

how in the fuck would that even begin to work?

You upload the file to a cloud service and store a code in those 8 bytes that tells you where to get the URL for the file.

I remember playing some Russian cracked version of Fallout 3 that was a 700 MB .exe file when I was in high school. It blew my mind.

technically correct is the best correct

The 8-byte unit contains two 4-byte integers: one for the expanded chunk's size, one for its offset from the first digit of pi. (Yeah, the maximum chunk size would be 4GiB; in a prototype implementation I used signed ints and was too lazy to change it afterwards, so now I always say 2GiB by accident.)
Fuck you kindly
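
For the curious, a minimal sketch of the decode side as described above, with two assumptions of mine: hex digits rather than decimal (so they map cleanly onto bytes; OP says "digit", probably meaning decimal) and mpmath as the digit source, since OP never posted code:

import struct
import mpmath

def pi_hex_digits(n):
    # First n hex digits of pi's fractional part, computed the naive way
    # at high precision. Slow; a BBP-style spigot would be the sane tool
    # for large offsets, this is just the simplest thing that works.
    mpmath.mp.prec = n * 4 + 64          # ~4 bits per hex digit, plus slack
    frac = mpmath.mp.pi - 3
    return '%0*x' % (n, int(frac * 16 ** n))

def decompress(unit):
    # The whole "compressed file" is 8 bytes: two signed 32-bit ints,
    # here assumed to be (chunk length, offset into pi's digit stream).
    length, offset = struct.unpack('<ii', unit)
    digits = pi_hex_digits(offset + 2 * length)   # 2 hex digits per byte
    return bytes.fromhex(digits[offset:offset + 2 * length])

The compress side is the killer, of course: it's a brute-force search of pi's digit stream for your exact chunk.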

so you're telling me I can give you a 2 GB file and you can squash it into 8 bytes and then reverse it? or theoretically reverse it?

that wouldn't work for arbitrary 2GiB inputs; the search space is too limited. and is pi even proven normal?
with a limited set of inputs, it's at best dictionary compression

Are you the πfs guy or is your project unrelated?

>one for the offset it has from the first digit of pi
Who's to say this offset wouldn't be just as big or larger than the file you are trying to compress?

kek kek i lost every time

I had a Russian archive of American McGee's Alice. It was less than 50 MB and extracted to a full 700 MB CD. It had some kind of auto-extractor that took ~1h to run on a 1.7 GHz Celeron.

Yeah, if said file can be found in the first "8 GiB" of pi, which, as I said, is unlikely. But theoretically it's possible.
As I explained, it only works for a limited set of chunks of a given size. As for pi being normal or not, I'm sure there's a better constant for this whole thing anyway.
Not the same guy, but I did get inspired by πfs. I figured a stream compression tool like gzip would be more useful than a whole FUSE fs.
Yes, a limited set of chunks, as explained multiple times above.

Okay, finally I get it. Funny and interesting, but nonetheless retarded.

a 31-bit number can only reference 2,147,483,648 different files, while there are 256^2147483648 possible 2GiB files (not sure what number that is, as my calculator can't handle numbers that large)
which is proof that your system cannot work for all 2GiB files (or even any significant fraction of them)
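
The scale of that mismatch, in plain Python (standard library only):

import math

offsets = 2 ** 31                 # distinct files a 31-bit offset can name
# 256 ** (2 ** 31) is the number of possible 2GiB files. Don't evaluate it
# literally: the integer alone would occupy 2GiB of RAM. Count its decimal
# digits instead:
digits = 2 ** 31 * math.log10(256)   # ~5.2 billion digits
print(f'{offsets:,} addressable offsets')
print(f'vs. a file count that is {digits:,.0f} digits long')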

Congratulations, you just assessed what I already said like 4 times now.

there are some slow-as-fuck compressors that do tricks like decompressing detected compressed streams and recompressing them with stronger algorithms, and using content-specific algorithms for particular kinds of data
recompressed stuff takes much longer to 'decompress', since getting the original file back involves decompressing the strong layer and then recompressing/repacking the inner streams
i've only ever seen these used in game repacks
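
A toy version of that idea in Python, assuming the whole input is a single zlib stream (real repack tools like Precomp scan for embedded streams across many formats and record the original encoder settings; this is the one-stream sketch):

import lzma
import zlib

def repack(deflate_stream: bytes) -> bytes:
    # Inflate the weak deflate layer, then squeeze the raw data with a
    # much stronger general-purpose compressor.
    raw = zlib.decompress(deflate_stream)
    return lzma.compress(raw, preset=9 | lzma.PRESET_EXTREME)

def unpack(packed: bytes) -> bytes:
    # "Decompressing" the repack really means decompress + recompress,
    # which is why installing these things takes forever.
    # Caveat: real tools store the original encoder's parameters so the
    # re-deflated stream comes out bit-identical; this toy just assumes
    # the source was plain level-9 zlib.
    return zlib.compress(lzma.decompress(packed), level=9)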

Can you compress all of a person’s memories down to 36K and send the data through a wormhole?

Attached: image.jpg (592x1000, 209K)

If they happen to think about pi a lot

haha

Attached: Tt8h5yvN_400x400.jpg (400x400, 18K)