Post some of Jow Forums's projects that you think are neat, but no one really cares about.
Let's try again yesterday's thread (>>67219256)
>g-guys my thread failed give me more (you)
no
I wrote this when I had to download very specific pages from websites for a client's project, but the only thing that "worked" for me was some buggy shit written in Java
We have /dpt/ and /wdg/ for posting projects.
Please FUCK OFF.
The project might not be something you're actively programming (/dpt/) or web-related (/wdg/). This thread could be for projects that are mostly complete.
Yes, what a waste of space. Imagine, we could've had yet another Intel/AMD/Nvidia shitposting thread instead.
fuck you go back to your headphones thread
Reposting from yesterday's thread, because why not?
>I wrote a bash script to create webms within a certain file size limit via ffmpeg, no prior experience or additional user input required. Video bitrate, audio bitrate, resolution, etc. all gets set based on the file size limit and video length.
github.com
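The bitrate math that script describes can be sketched like this (illustrative Python, not the anon's actual bash; the function name, overhead factor, and fixed 64 kbit/s audio track are my assumptions):

```python
# Given a target size in KiB and the clip duration, derive video/audio
# bitrates that should land the webm under the limit, then build the
# ffmpeg command line. In the real script, duration would come from ffprobe.
def webm_bitrates(limit_kib: int, duration_s: float, audio_kbits: int = 64):
    # total kilobits available, minus ~3% headroom for container overhead
    total = limit_kib * 8 * 0.97 / duration_s
    return max(int(total) - audio_kbits, 1), audio_kbits

v, a = webm_bitrates(3072, 60)   # 3 MiB limit, 60 s clip
cmd = ["ffmpeg", "-i", "in.mp4",
       "-c:v", "libvpx-vp9", "-b:v", f"{v}k",
       "-c:a", "libopus", "-b:a", f"{a}k",
       "out.webm"]
```

Resolution scaling would follow the same idea: pick a smaller output size once the computed video bitrate drops below what the source resolution needs.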
No Mr FBI man I will not dox myself with links to code
This youtube.com
Can't think of anything but Jow Forums to make this
Nice proj. Checked.
You don't need to be actively programming it to post it in either thread.
Please cut out the shit excuses.
kys cunt.
I pulled an onion routing protocol out of thin air in 6 months somehow and it is almost in a working state.
but no one really cares about it yet, thank fucking god.
Try Gitea
gitea.io
>THIS IS A TEST INSTANCE ONLY! REPOSITORIES CAN BE DELETED AT ANY TIME!
Hm.
holy quads
Now this is amazing.
github.com
Tools for a niche technical documentation specification called S1000D. Something I am developing for and use every day at my job, but there's very little open discussion of S1000D on the web, so I don't know if there's an audience (aside from myself) for FOSS, command-line tools like these.
What's the monkey supposed to represent?
I implemented the Zsync algorithm (similar to rsync, arguably better) in Python, and I'm adding support for asyncio and aiofiles so the process uses asynchronous IO and won't block when reading from a file. Google pyzsync if you want to check it out.
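The core trick in rsync/zsync-style delta transfer is the weak rolling checksum, which lets you slide a window one byte at a time in O(1) instead of rehashing the whole block. A minimal sketch (names are mine, not pyzsync's actual API):

```python
# Adler-32-style weak checksum: a = sum of bytes, b = weighted sum,
# both mod 2^16. Rolling the window updates both in constant time.
M = 1 << 16

def weak_checksum(block: bytes) -> tuple[int, int]:
    """Compute (a, b) over a block from scratch."""
    a = sum(block) % M
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % M
    return a, b

def roll(a: int, b: int, out_byte: int, in_byte: int, blocksize: int) -> tuple[int, int]:
    """Slide the window right by one byte without rehashing."""
    a = (a - out_byte + in_byte) % M
    b = (b - blocksize * out_byte + a) % M
    return a, b

data = b"the quick brown fox jumps over the lazy dog"
bs = 16
a, b = weak_checksum(data[0:bs])
a, b = roll(a, b, data[0], data[bs], bs)
assert (a, b) == weak_checksum(data[1:bs + 1])  # rolled == recomputed
```

Blocks whose weak checksum matches are then confirmed with a strong hash (MD4 in rsync, SHA-1/MD4 in zsync) before being reused.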
bump
>Jow Forums - Technology
>only two threads for programmers allowed
i don't remember that in honduras' flag
Nice
Would you guys be interested in a program which converts any file to an image?
Are you the guy who wrote github.com
Yeah user, that's me. I'm glad someone at least saved the link.
What do you think of it? I did a video version of it, so that I can use youtube for storing data. But it takes a relatively long time to encode the data.
What error correction does it do? Like if I converted the png to a jpg quality 50.
It goes to shit when you convert it to jpeg. I suppose it's possible, but you'll be sacrificing data capacity
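The basic file-to-image idea being discussed can be sketched in a few lines (this is my own illustration, not the anon's actual imager.py): pad the raw bytes to fill an RGB pixel grid and prepend the true length so decoding can strip the padding. JPEG breaks it, as noted above, because its compression is lossy and corrupts the pixel values.

```python
import math
import struct

def encode(data: bytes) -> tuple[int, int, bytes]:
    """Pack bytes into a square-ish RGB pixel buffer; return (width, height, pixels)."""
    payload = struct.pack(">Q", len(data)) + data    # 8-byte length header
    n_pixels = math.ceil(len(payload) / 3)           # 3 bytes per RGB pixel
    side = math.ceil(math.sqrt(n_pixels))
    padded = payload.ljust(side * side * 3, b"\0")   # zero-pad to a full grid
    return side, side, padded

def decode(pixel_bytes: bytes) -> bytes:
    """Recover the original bytes by reading the length header."""
    length = struct.unpack(">Q", pixel_bytes[:8])[0]
    return pixel_bytes[8:8 + length]

w, h, px = encode(b"hello /g/")
assert decode(px) == b"hello /g/"
```

A Pillow user could then write the buffer out losslessly with something like `Image.frombytes("RGB", (w, h), px).save("out.png")` — PNG roundtrips exactly, which is why the encode/decode pair above survives it.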
so that's a libcurl-based wget-like tool?
cool
>but the only thing that "worked" for me was a buggy shit written in Java
what do you mean?
what is it supposed to be doing?
I did
./imager.py -f 1534795953032s.jpg -o 1534795953032s.txt -d
./imager.py -f 1534795953032s.txt -o 1534795953032s.2.jpg
and I got picture related
shouldn't it be the same as OP pic?
also change shebang to "#!/usr/bin/env python3"
only in Arch is everything accessible in /bin (as everything symlinks to /usr/bin)
Don't use -d on the first command, that's the encode step. Also, the output should be a png file; the input can be any file. Then run it with -d and you should get the same file back.
I have no idea what to use it for, but it's still fun to play around with (both the image and video version).
./imager.py -f 1534795953032s.jpg -o 1534795953032s.png
./imager.py -f 1534795953032s.png -o 1534795953032s.jpg -d
indeed returns the exact same file
nice
maybe add something like that to the README
Thanks user.
Yeah, the README is lacking. I'll add.
crashes for me on saving the file unfortunately
What was the file size?
unrelated to filesize. it's PIL/Image.py, line 1662, in save: format = EXTENSION[ext]
But python was also in a different location from the shebang in your script so I changed that there.
Oh, okay. I will change that.
It crashes when the filesize is too big, so I thought maybe that was it.
Programmed a delta copy tool for super big files (>2 TB) to create backups efficiently. Didn't find anything similar.
github.com/Cloudnaut/HFFDC
Isn't that exactly what rsync-backup is used for? Not to be a dick I'm just curious
As far as I know rsync doesn't really work for my use case, since it always checksums the target destination fresh. It therefore has to read the whole file from the flash drive, which took ages
/dpt/ threads are always trap threads with trashtalk
Oh that makes sense, thanks. So you avoid this by assuming the integrity of the destination and copying only the delta?
I wrote this back when pywal wasn't a thing, then I adapted it to work with pywal. It's called wpgtk and it's a "ricing assistant" that helps you generate colorschemes from images.
These colors are then substituted into multiple configuration files with the help of templates, which you can easily create and edit.
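The template mechanism described there boils down to simple placeholder substitution — here's an illustrative sketch (not wpgtk's real template syntax or color names):

```python
# Colors generated from an image (e.g. by pywal) get substituted into
# per-application config templates. Placeholder names below are made up.
colors = {"color0": "#1d2021", "color1": "#cc241d", "color7": "#ebdbb2"}

template = (
    "background = {color0}\n"
    "foreground = {color7}\n"
    "accent     = {color1}\n"
)

rendered = template.format(**colors)
# The result would then be written to e.g. ~/.config/someapp/colors.conf
```

Keeping one template per application means regenerating a whole rice is just re-running the substitution with a new palette.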
Yes, you checksum the blocks of the file locally on your device, which shouldn't take that long since the read rate of internal drives is comparatively fast.
Afterwards you copy the file along with its checksum file to your external drive or flash drive. The initial copy takes long, of course.
After you've changed the file partially, you recreate the checksums with the same block size for the local file. In my case these big files are mainly virtual machine disk images.
When you initiate a copy, only the per-block checksums are compared. Differing block checksums indicate changes, and only those blocks get copied.
Data integrity checks are planned for the future, but those would take long because you have to re-read the whole file.