
Hey guys. I posted here a couple of months ago about a project I'm currently developing, and it got a lot of interest. I'm back to announce it's almost done and should be released as an open-source project in the next couple of weeks.

github.com/MarkMichon7/BitGlitter/

It allows you to embed data inside of images and videos. If you can host a picture or a video somewhere, you can host a file there.

The color values are the carrier, not the byte data itself, so compression/distortion won't affect transmission. Each frame has numerous safety checks and mechanisms built in to ensure file integrity. Both picture and video output are supported. Five or six default color palettes come with it, but you can make your own custom palettes as well (and receiving computers will understand how to read them). Frame geometry is completely customizable. AES-256 encryption of payloads is optional.

I'll post a couple of videos next showing how it works. This will become a Python library soon, so anyone can use it either as a standalone program or integrate it into anything they want. Two functions are all you need to read and write streams.
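Roughly speaking (just a sketch; the parameter names here are placeholders and not necessarily the final API), usage looks like this:

    from bitglitter import write, read   # assumed package/function names

    # Encode any file into a BitGlitter video (or image sequence).
    write('my_archive.zip')              # optional parameters cover palette,
                                         # block size, FPS, encryption, etc.

    # Decode the payload back out of the rendered media.
    read('rendered_stream.mp4')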

Ask away if you have any questions.

Attached: 1.png (1500x1000, 70K)

The protocol has undergone a few major changes since I made this video, but here's an earlier test render with a real 4.4MB payload:

youtube.com/watch?v=qU9ID_tpqX8

And here's what the program looks like while it's running. The video doesn't show the function itself, but rather everything it prints out, so you get an idea of what it's doing.

youtube.com/watch?v=h_ergKX1VJc

Attached: logo_2.png (1564x322, 3K)

Here are a few example custom palettes.

Attached: paletteShowcase2.png (1060x160, 19K)

As for the default palettes that come included, it starts at 1 bit, much like the barcodes or QR codes you'd see. There are also 4-, 8-, 16-, 64-, and ~16.7M-color palettes, which offer increasingly higher data density; the last of those is dynamically generated as it's needed.
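If you want a feel for how density scales, it's just back-of-the-envelope math (this snippet isn't from the project, it only illustrates the relationship): each block carries log2(colors) bits.

    import math

    # Bits carried per block for each default palette size listed above.
    for colors in (2, 4, 8, 16, 64, 2 ** 24):
        print(f"{colors:>10} colors -> {int(math.log2(colors)):>2} bits per block")

    # 2 -> 1 bit, 4 -> 2, 8 -> 3, 16 -> 4, 64 -> 6, ~16.7M -> 24 bits per block.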

This will be my last post until or unless someone has a question. I just figured some of you might find this cool.

Attached: 1.png (300x300, 1K)

I remember your last post. It's an interesting project! I've got a few questions.
Have you run any tests on how much redundancy / resolution / color precision you need in a jpeg before it starts to corrupt?
Are there duplicated copies of information / are these copies generally next to each other or are they spread out?
What's the compression / decompression ratio you're getting on these images?

>watching a 30 second video for a 4.4 mb file instead of just downloading it in a fraction of a second
sounds useful dude

Is this meaningfully different from a wrapper of png based data interpretation? (this seems to be covered briefly in your OP) I think people used to call them snowcrashes; there was also a Jow Forums project a while ago that was embedding data in videos with people using youtube as a file host.

Hi there. I've only done some preliminary tests so far, since I don't (yet) have an automated way to run them. I feel like I'm giving a politician's answer here, but it really all depends on the configuration you used and how roughly it gets compressed afterwards. Color bit length is a factor, FPS is a factor, as is block size. In some of my earlier manual tests it looked like I could make 200KB/s work on Youtube. Those parameters are actually the default arguments for write, so people at least have a working starting point.
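To show how those knobs combine (purely illustrative numbers, not the actual defaults): raw throughput is roughly blocks per frame x bits per block x FPS.

    # Illustrative figures only; none of these are the project's real defaults.
    blocks_per_frame = 96 * 54   # hypothetical frame geometry
    bits_per_block = 6           # e.g. a 64-color palette
    fps = 30

    bytes_per_second = blocks_per_frame * bits_per_block * fps // 8
    print(bytes_per_second / 1024, "KB/s before headers and other overhead")
    # ~113.9 KB/s with these made-up settings; real rates also depend on how
    # hard the host site compresses the video.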

Right now, the information is only saved once. Here's the latest iteration of the headers the frames use. There's a series of conditions that trigger an "emergency stop" on the frame (or the stream in general); only once all of those checkpoints pass is the information validated and passed through. Repeat frames and other measures that barcodes like QR codes utilize are something I'm definitely interested in adding once this gets released to everyone. Aside from stream integrity, that functionality would also be needed for streaming, so people can join midstream and still collect pieces, not unlike a torrent client. I hope that answers your question.
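Conceptually (the checkpoint names below are placeholders; the real header layout is in the attached image), the per-frame gating works something like this:

    # Conceptual sketch only; the checkpoint names are invented for illustration.
    def validate_frame(frame):
        checkpoints = (
            frame.calibration_found,    # the block grid was located
            frame.header_checksum_ok,   # the frame header read back cleanly
            frame.payload_checksum_ok,  # the payload matches its hash
        )
        if not all(check() for check in checkpoints):
            return None   # "emergency stop": reject the frame / stream
        return frame.payload()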

Attached: 20190214_001220[1].jpg (2000x1125, 503K)

I remember you
Probably the most Dunning-Kruger thing I've ever seen on Jow Forums. You put so much effort into something that's such a simple and useless concept, acting like you've made some sort of discovery

this

Imagine seeing only the negative parts of life. Must be hell.

Well I wouldn't be seeing it if you didn't post the thread, but here it is

Hi there. It's not meant to be efficient; its theoretical advantage is the portability of data. I'm newish to programming and this was my first major project, so even if no one cares about this, it was enjoyable to make and I learned a lot about design and architecture.

Sure, so to repeat what I said in the OP so I can cleanly branch off from it: instead of the binary information of the file itself being the carrier of the information (like what you described), this encodes it in the actual colors on the screen. There are several benefits to this. For one, it's resistant to corruption. What you mentioned will break unless the image is of lossless original quality. I offer different palettes that give you different levels of tolerance to corruption. The read algo "rounds" colors to the nearest palette color (refer to image), so the more colors you have, the tighter the tolerances get. With that said, my project can still produce what you described. Without diving too much into the technical stuff here, a 1 pixel block width using 24 bit color approaches 1:1 efficiency in transmission; the only things preventing that are the meta-information in the image file itself and the frame headers I use to ensure file integrity.
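The rounding itself is just nearest-neighbor matching in color space; a minimal sketch (not the actual implementation) looks like this:

    # Minimal sketch: snap a decoded pixel to the closest color in the palette.
    def nearest_palette_color(pixel, palette):
        """pixel and palette entries are (R, G, B) tuples in 0-255."""
        return min(
            palette,
            key=lambda c: sum((a - b) ** 2 for a, b in zip(pixel, c)),  # squared distance
        )

    # Even a heavily distorted (240, 12, 6) still snaps back to pure red.
    palette_4 = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 0, 255)]
    print(nearest_palette_color((240, 12, 6), palette_4))   # -> (255, 0, 0)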

You're free to think that. Like I said, this was a learning experience first, and whatever happens after it releases isn't up to either of us. I have a small group of people regularly asking about updates, so let's see what happens.

Attached: gNMti.jpg (774x236, 24K)

I wouldn't flame you if you just called it a learning experience, but you're pretending this is a serious project of actual worth when it has none

Why the hell did you lock the docs?
>please log into google, goy

Why not use something less gay?
Check out this: embeddedsw.net/OpenPuff_Steganography_Home.html
Now use both the pics I'm fixing to post and the password (A) as PENISPENIS

Attached: fc-list.jpg (670x400, 284K)

2nd pic

Attached: halt.jpg (670x400, 220K)

Now what happens when you upload them to a website that reduces the quality of those images?
Did you even read the OP?

It's currently being worked on. That will be a higher priority when read functionality is complete.

Yep, but as I said above, that's relying on 100% integrity of the file, since the bytes are the data carrier. One of the main benefits of BitGlitter is that you can distort the image, resize it, watch the video (or download it) in a different resolution, and there are built-in tolerances to still extract the data.

>doesn't work on video files
>significantly less efficient
>isn't immune to compression
Gay

They're two different use cases: steganography is about hiding that there's a message, and OP's is like a more elaborate base64 encoding.

it can hide video files tho and it hides them in a way that isn't noticeable so it's way better, even if it's less efficient.

oh I don't understand the need for that, seems useless

steganography has a use, you can hide watermarks in images so you know where they came from
unlike coding data into youtube videos
did the OP bring in his entourage or something

>this encodes it in the actual colors on the screen
I see, so it's not that these 3 bytes make this colour, it's that these bytes will be represented by this colour / pattern. The resilience and configurability are the main advantages here (as compared to similar existing projects).

This would probably be most useful as an extension for web browsers; imagine a menu to pick an image or a file to make into an image, etc.

I hope it's been a fun project to work on (assuming that you're doing this for fun). It's a great feeling to make something interesting and useful (even if others don't understand the utility).

It's for transmitting files (read: arbitrary data) through channels that only support images / videos. You also gain the benefits of all the consumer gadgets / infrastructure built around those media types.

I believe he was questioning the usefulness of OP's project, not steganography.

From what I understand of that URL, you're still making bytes the data carrier. Put another way, you'll need to give someone the original, uncompressed, uncorrupted file. BitGlitter, with the right configs, can take serious losses in quality and still be readable. Social media sites, for instance, compress the heck out of files to save space (Facebook, Youtube, IG, etc.). This can survive that.

but couldn't a zip file do that too?

>even if others don't understand the utility
the only utility this has is outweighed by the utility of a QR code

QR codes have far less entropy unless you want to get silly.
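Ballpark it yourself (the video-side settings here are hypothetical; only the QR figure is from the spec):

    # QR version 40-L in byte mode caps out at 2,953 bytes per code.
    qr_max_bytes = 2953
    # Hypothetical BitGlitter-style video settings:
    blocks_per_frame = 96 * 54
    bits_per_block = 6            # 64-color palette
    fps = 30
    video_bytes_per_second = blocks_per_frame * bits_per_block * fps // 8
    print(video_bytes_per_second, "bytes/s of video vs", qr_max_bytes, "bytes per QR code")
    # ~116,640 bytes every second vs 2,953 bytes total, before even touching denser palettes.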

I'm not reading OP's posts because they're too long but does this let me share CP by printing images and leaving them on public places?

>and should be released as an open-source project
Miss me with that, faggot. I hate the OSM.

>QR codes have far less entropy
yes, that's why they're better

Yes, but not across mediums that only allow multimedia.

That's correct. But to reiterate a little so my point becomes clear: at the highest level of compression, it works out to the bytes becoming the carrier of the information. This is only applicable with 1 pixel blocks and 24 bit color, due to how pixel information is stored in PNG files. Making it into a web app is one direction I'd like to take it in, as well as a simple desktop app using something like Electron so people beyond programmers can use it. I've semi-limited what my 1.0 release has because it's a lot of stuff to figure out for one person, let alone for someone new to programming, and I'd like to get this out ASAP. There's that, and I think it's a good enough "prototype" that it will garner some interest from the open source community. There are already a few people I've been in contact with who want to contribute, and this is just from a single thread on Jow Forums, so I really don't care for someone trying to convince me that it's useless. Thanks for the kind words.

>Yes, but not across mediums that only allow multimedia.
seems like you'd be breaking the terms of service doing this, better to just upload files where you're supposed to

I'm not breaking the ToS of anything by releasing a script. End users of my project should follow the applicable rules for whatever services they wish to use it on. I don't condone troublemaking.

>breaking the terms of service
False. Even YouTube would allow this. But that doesn't matter because yt isn't the only video host.

the amount of code in your repo could be rewritten in about 200 lines of c++/cvimage.

>electron
please no

and what if the sites do color space conversions or reduce bit depth?

YouTube doesn't allow it; they've already shut down similar schemes, and if yours catches on they'll ban yours too
Still fucking pointless, seeing as we don't live in a world where you have to share files through videos; file hosting services exist

Point me to the page in their TOS that says they're against this.

look up the part that says they can delete any video for any reason because it's their servers.

>they've already shut down similar schemes
which ones?

This. You're basically hosting files on their server without their permission. People have obviously already done this, and if you do it, it gets deleted

Fucking brainlet you can hide information in the frequency domain of the image such that compression won't touch it.

I didn't post the thread, retard.

well it's kind of hard to find them because they've been deleted, like I said

It seems like a nice cross-platform choice for minimal work. Do you recommend something else?

I'm a little doubtful of that, it being a higher level language and there being a lot of little components that make this up, but I'd still like to port this over to C++ at some point, or even move some stuff over to Cython. Relatively speaking this program is "slow" because a lot of the heavy math and repetitive functions are done in pure Python, on a single synchronous thread. This can be optimized like crazy, but I'm going to release this first so I can start accepting help from people. Even though it's only a few thousand lines of code so far, I've put hundreds of hours into designing this. Every small internal movement was deliberately planned to seamlessly fit together with everything else.
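As one example of the kind of speedup I mean (this isn't code from the repo, and numpy would be a new dependency), the per-pixel palette rounding could be vectorized instead of looped in pure Python:

    import numpy as np

    # Sketch only: snap every pixel in a frame to its nearest palette color at once.
    def round_frame_to_palette(frame, palette):
        """frame: (H, W, 3) uint8 array; palette: (N, 3) array of palette colors."""
        frame = frame.astype(np.int32)
        palette = np.asarray(palette, dtype=np.int32)
        # Squared distance from every pixel to every palette color: shape (H, W, N)
        dists = ((frame[:, :, None, :] - palette[None, None, :, :]) ** 2).sum(axis=-1)
        return dists.argmin(axis=-1)   # index of the nearest palette color per pixel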

You simply use a lower bitlength palette and you're fine. Sorry I don't have a larger image to show you, but this is a demo output from the default palettes, excluding 24 bit. You can make the streams as simple or compressed as you want them to be. The write function alone has like ~20 parameters you can customize.

Attached: default palettes.png (694x134, 42K)

>Do you recommend something else?
Qt
>math and repetitive functions are done with pure python
Jesus fuck

I've heard good things about PySide, especially considering their licensing vs PyQt. And yeah... this was (and is) my first large solo project. It's a prototype more than anything. It's hard to choose what to do and what not to do when you're being pulled in 30 different directions over how to make a project well-rounded for people. I have a lot more respect for software developers after having made this.

This is really cool, great work OP.

Thank you.