/dpt/ - Daily Programming Thread

Old thread: What are you working on, Jow Forums?

Attached: 1565029850659.png (850x1200, 938K)

Other urls found in this thread:

terminal.sexy/
youtube.com/watch?v=JOkQJm_UGM4
towardsdatascience.com/generating-pokemon-inspired-music-from-neural-networks-bc240014132
keras.io/examples/lstm_text_generation/

kneading my dick

AAAAAAAAAAAAAAAAA

Attached: IMG_1513.png (216x261, 35K)

>What are you working on, Jow Forums?
Working on an x86 fuzzer that I decided to rewrite in asm. I've never dealt with assembly outside of reversing and exploiting, so it is nice to have the freedom to use null characters and not have any size limitations.

%define mmap_size 0x3000

%define SA_SIGINFO 0x4
%define SIGSEGV 11

%define stdout 1

%define sys_exit 1
%define sys_write 4
%define mmap 90              ; old_mmap: takes a pointer to an argument block in ebx
%define munmap 91

%define MAP_SHARED 33        ; actually MAP_SHARED|MAP_ANONYMOUS (0x01|0x20)
%define PROT_RW 3            ; PROT_READ|PROT_WRITE

section .text
global _start
extern sigaction

_hello_world:
    mov edx, hello_world_str_len
    mov ecx, hello_world_str
    mov ebx, stdout
    mov eax, sys_write
    int 0x80
    ret

_segv_handler:
    push edi
    push esi
    push ebx

    call _hello_world

    mov esi, [esp+0x18]                   ; third SA_SIGINFO handler arg: ucontext_t *
    mov dword [esi+0x4c], _safe_return    ; set saved eip so we resume at _safe_return

    pop ebx
    pop esi
    pop edi
    ret

_start:
_mmap_memory:
    ; old_mmap wants its six arguments in a struct, so build it on the stack
    push 0              ; offset
    push -1             ; fd
    push MAP_SHARED     ; flags
    push PROT_RW        ; prot
    push mmap_size      ; length
    push 0              ; addr (let the kernel choose)
    mov ebx, esp
    mov eax, mmap
    int 0x80
    cmp eax, -1
    je _end             ; bail out if the mapping failed
    mov [memory_region], eax

_segv_handler_setup:
    mov dword [struct_sigaction], _segv_handler   ; sa_handler / sa_sigaction
    mov dword [struct_sigaction+132], SA_SIGINFO  ; sa_flags (after the 128-byte sa_mask)
    push 0
    push struct_sigaction
    push SIGSEGV
    call sigaction
    add esp, 0xC

_do_segfault:
    mov ebx, [memory_region]
    jmp ebx             ; jump into the mapped (non-executable) page and fault

_safe_return:
    call _hello_world
    nop

_munmap_memory:
    mov ecx, mmap_size
    mov ebx, [memory_region]
    mov eax, munmap
    int 0x80

_end:
    mov ebx, 0
    mov eax, sys_exit
    int 0x80


section .data

hello_world_str db 'Hello, World!',0xa,0x0
hello_world_str_len equ $ - hello_world_str

section .bss

memory_region resb 0x4
instruction resb 0xff
struct_sigaction resb 140

Attached: scrot.png (3840x1080, 368K)

Why don't more languages have this?
#include <iostream>
#include <string>

using namespace std;

string operator%(string a, string b) {
    int index = a.find("{}", 0);
    a.replace(index, b.length(), b);
    return a;
}

int main(int argc, char ** argv) {
    string a = "{} there, {}";
    string b = "hi";
    string c = "dude";
    cout << a % b % c << endl;
}

>all that unnecessary runtime work
disgusting.

wat

use const

Attached: 1560258429996.webm (1280x720, 1.93M)

how does one go from being a java fag to a c++ fag? i'm interested in learning c++. any books you guys can recommend?

Attached: 1564474423503.png (4000x2000, 2.17M)

because other languages have string interpolation instead, which is infinitely better.
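
e.g. in python it's just this (rough sketch, the variable names are made up):

greeting = "hi"
name = "dude"
print(f"{greeting} there, {name}")   # -> hi there, dude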

there are 5 different kinds of C++, you had better decide which one you want to learn before you start.

change "dude" to "X" and check the output.

install gentoo

C++ is too big for one programmer to wrap their head around the entire language these days. What do you want to use sepples for? Focus on that.

Site to shotgun upload illegal lolicon albums to multiple hosts + hotlink albums to whichever one is working.
Looking for reasonable hosts with anonymous upload API, does user know about some, aside from imgur/imgbb?

inb4 why javascript and not Rust+emscripten

Attached: notepad++_2019-09-01_07-23-21.png (1041x936, 53K)

>illegal
glow in the dark spotted

Attached: 1565044347535.png (557x625, 343K)

>he's not using aloonix
Go away, microsoft shill.

Attached: aloonix-de2.png (801x600, 49K)

why javascript and not Rust+emscripten

nonce

what?

Attached: image.webm (1152x648, 794K)

Your lolicon is still illegal though. Also, the crackme tutorial you're using is dated, nobody cares about i386 in current->tm_year.

Memes aside? Self hatred, obviously. Though I was initially considering gopherjs, but it turns out it shits out browser-crashing blobs just like anything emscripten does.

Based. I too think it's massively comfy when I don't have to deal with C language bullshit

>that hand

quality anatomy knowledge there

Do you have your dotfiles somewhere?

no, is there something you want in particular? It's default bspwm and lemonbar pretty much with a terminal.sexy/ color scheme I thought looks alright.

>Looking for reasonable hosts with anonymous upload API, does user know about some, aside from imgur/imgbb?
Picasa (now that moot ruined it, it's called photos.google.com) and yandex can be abused for hotlink img/video storage; they have * CORS headers too, so serverless uploads work. Porn streaming/gallery sites use those for massive archives of content they hotlink. However it still needs a fully botnet account to upload (phone number tied and all), so it's probably a bad idea for public access where anyone can upload anything using your user session token.

In C++ this is just:
cout << b << " there, " << c << endl;

>needlessly flushing the buffer
stop it

not the point

Why Rust+emscripten and not Rust+wasm32-unknown-unknown
unlike with c++, you don't need the heavy emscripten toolchain

>having a buffer

stop it

do you guys use intellij with java fx? does it work fine?
I'm using netbeans right now but I don't really like it

intel syntax is best syntax
mov eax, [trap_count]
inc eax
mov [trap_count], eax

Oh shit, you can increment a memory operand directly? neat
inc dword [trap_count]
mov eax, [trap_count]

>finally think of project idea I could possibly even use daily
>still too insecure to start, thinking I won't know how to do it or finish it
another day wasted

I FUCKING LOVE THE HASKELL TIDDIE MONSTER

AI question here. let's say i have a generative adversarial network. could i have different 'knowledge databases' for it? Like, let's say I'm making an AI that paints pictures similar to picasso using a database of his paintings. could i then use the same program and just select a different option and have it paint like rembrandt by switching the database to his paintings? would the AI have to be compiled again or is this possible to switch on-the-fly?

I FUCKING LOVE MILKERS

Threadly reminder:

"RAGIE WAGIE
STOMPS HIS FEET
RAGIE WAGIE
CANT BE NEET

IF HE DOES
HE WONT EAT
IF NO SLAVING
SLEEP IN STREET

RAGIE WAGIE
STOMPS HIS FEET
RAGIE WAGIE
SCARED OF NEET

WAGIE CRY AND WAGIE MOAN
WORKS HIS FINGERS TO THE BONE

WORK AND WORK NO TIME FOR FUN
ONLY EXIT IS A GUN"

asking again because i am certain that someone here knows lisp and i just need to catch them at the right time

can someone write a script in lisp/scheme that contains:
1) a list of numbers
2) a function that generates a random number, adds it to the list, and then overwrites its own script file with the new list and same function

Why do you want this, do it yourself

i don't know how, i'm not really a programmer. i've tried but i can't get it right. if you don't want to do it, a pointer in the right direction would be appreciated

#!/usr/bin/guile \
--fresh-auto-compile
!#
(use-modules
 (ice-9 match)
 (ice-9 pretty-print)
 (ice-9 format))

(define l (list))
(format #t "l is ~{~a ~}\n" l)

;; re-read this very script, append a fresh random number to l,
;; and write the rewritten source back over the file
(define f (open-input-file (car (command-line))))
(define out (open-output-string))
(format
 out
 "#!/usr/bin/guile \\\n--fresh-auto-compile\n!#\n")
(do ((e (read f) (read f)))
    ((eof-object? e))
  (pretty-print
   (match e
     (`(define l ,l)
      `(define l
         ,(append
           l
           (list (random 10 (random-state-from-platform))))))
     (_ e))
   out))
(close-input-port f)
(format
 (open-output-file (car (command-line)))
 "~a"
 (get-output-string out))

Look at the ones in the old thread and Google each line for lisp

hey thanks a bunch man, there's no way i could have done that myself.

> would the AI have to be compiled again or is this possible to switch on-the-fly?
For on-the-fly, you'd have to train both picasso and rembrandt for the model to learn to generalize for both (so in reality, you'd have to train on a *ton* of different painters).

Only after you have a sufficiently general model do you stick picasso in as the autoencoder input (which biases the model towards his style), and it spits out something new resembling picasso - but also somewhat influenced by other painters, because the model is general. Most notably, you could get the subject of the painting from one painter, drawn in the style of another (and the propensity for "domain mixing" behavior like this can be trained for too). However, if you don't want these "outside influences", simply train separate models for each painter.
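
And to answer the on-the-fly part directly: nothing has to be "compiled again" - switching style is just loading a different set of trained weights at runtime. Rough sketch, assuming keras; the weight file names and latent size are made up:

from tensorflow import keras
import numpy as np

# one trained generator per painter, picked at runtime
models = {
    "picasso":   keras.models.load_model("picasso_generator.h5"),
    "rembrandt": keras.models.load_model("rembrandt_generator.h5"),
}

def generate(style, latent_dim=100):
    z = np.random.normal(size=(1, latent_dim))   # random noise input
    return models[style].predict(z)              # a single forward pass, cheap

img = generate("rembrandt")   # switching the "database" is just a dict lookup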

thank you for your detailed response. would it be common for an AI-based device to have multiple different models like this? I don't know how much memory they would take up, or whether that could run on a small device. I was thinking having one main model and being able to select different 'styles' of generated data would be more efficient. The data I'm talking about is likely going to be in MIDI or in text, so not nearly as large and complex as pictures, for example, so having multiple models could be a simpler solution.

>The data I'm talking about is likely going to be in MIDI
With midi, there's usually no issue with throwing everything you can get your hands on at it, since restricting it to some "styles" severely reduces the corpus - which is already very small for midi (there are only a couple million tracks at best). In music, even if the styles are vastly different, the underlying musicality is still very much the same, and you'll struggle to accomplish something meaningful even in that regard with a corpus so small. So yeah, compute the model once, update it once in a while on a server, and users download the updated net (this is how modern ai mobile stuff does it already).

As for evaluating data through models, this is fairly cheap as long as the vertices (weights) of any single layer fit in RAM (layers can be paged from/to disk one by one). So it works even for large models built from an enormous corpus. A good example is the google translate offline model, which is just a few hundred MB and takes a few msec to evaluate an input. This "sparse" version is not what you initially get from training; the "end user" version goes through dimensionality pruning, which reduces accuracy by a few %, but downsizes the latent space a hundred times by cutting off the not-very-useful parts of it.

tl;dr: Training is very expensive, evaluation is cheap. You can pretty much forget about training anything on the device itself.

good post

Attached: 1544187794469.jpg (706x1000, 935K)

thank you, you've been a great help

>tfw this is my final day of NEETdom

Attached: 1546806927420.png (1271x868, 787K)

just want to add, is such a large amount of samples really necessary to get a 'coherent' output (by human standards)? I'm thinking of quantizing and encoding certain musical chords or phrases in some sort of custom database using unicode characters and then using the model to create characters instead of midi directly. it's a bit of a workaround, and in some cases it limits the creative output and effect of the neural network, but i think it would be the way to go if i want to be pragmatic about my project

holy shit user me too. i start tomorrow. weird

what if we're starting at the same place haha

Attached: sweat.png (604x613, 209K)

what if you're the same person? how come all your posts are more than a minute apart?

you know what makes this even weirder is that they told me there was another candidate that they really liked for the position and they had a hard time choosing between us

maybe we both got the job and we're going to meet tomorrow

I don't wanna go to work. still at least 5 months to go until I get enough money to pay back this shitty company and get a new job, and this first month already seemed like an eternity

>until I get enough money to pay back this shitty company
Is your line of work about pleasing old men or something?

dpttxt pls

...

youtube.com/watch?v=JOkQJm_UGM4
STOMP STOMP CLAP
STOMP STOMP CLAP

i hate coding

I have written an rpn calculator.

Attached: 1566307977491.gif (360x360, 1.96M)

good job chino

Any prerequisites before starting this? I only know precalculus and college algebra.

Attached: y.jpg (1024x1454, 820K)

>Any prerequisites before starting this?
being a huge fucking nerd

Attached: 1567053788490.png (300x300, 320K)

well done my cute daughter chino

I said I would pay 5eur to a working virtual doorbell for my house, the world is too big to find a porn star

Attached: 14740.jpg (900x600, 94K)

Come on, give me an inbox

Attached: _107156329_p07bp80s.jpg (1024x576, 176K)

desu

oh no no no

kagari slideshow bootloader

Attached: 2019-08-24_17-38-52.png (652x596, 67K)

webm rite fucken n o w

>x86 SMP initialization

Attached: 0dabcad4f1057a0717fb1dff498515ac.jpg (500x349, 28K)

>unicode characters and then using the model to create characters instead of midi directly
It's a good idea to filter the input into a reasonable vector space first, and do the same conversion on the output. Just split the MIDI into a sequence of discrete elements. That is, get rid of the note on/note off distinction and convert each such pair into a single "press" consisting of Instrument,Pitch,Duration,oNvelocity,ofFvelocity.
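
Rough sketch of that conversion, assuming the mido library (the token text below is just one way to spell "IPDNF"):

import mido

def midi_to_ipdnf(path):
    program = {}   # channel -> current instrument
    pending = {}   # (channel, pitch) -> (start time, on-velocity)
    clock, tokens = 0.0, []
    for msg in mido.MidiFile(path):   # iterating a MidiFile merges tracks; .time is in seconds
        clock += msg.time
        if msg.type == "program_change":
            program[msg.channel] = msg.program
        elif msg.type == "note_on" and msg.velocity > 0:
            pending[(msg.channel, msg.note)] = (clock, msg.velocity)
        elif msg.type in ("note_off", "note_on"):   # note_on with velocity 0 is a note off
            start = pending.pop((msg.channel, msg.note), None)
            if start is not None:
                t0, on_vel = start
                # Instrument,Pitch,Duration,oN-velocity,ofF-velocity
                tokens.append("{},{},{:.3f},{},{}".format(
                    program.get(msg.channel, 0), msg.note, clock - t0,
                    on_vel, msg.velocity))
    return " ".join(tokens)   # plain text, ready to feed a char/word RNN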

Once converted to this format - you can use a simple text representation of "IPDNF" - you can already throw it at an LSTM-RNN, even the stock char-rnn example by karpathy. The result is a thing called a discriminator, a music critic which just says whether the input is music or not.

Then you train a generator net, which can be weakly biased with a *few* of the pieces of music you'd like to hear, or left entirely random if you want a more general result. Finally, you evolve both the generator and the discriminator. A few hundred thousand convolutions later - you have a GAN. The reason you need so many samples is to get a sensible "music critic"; otherwise things will just stay hazy and fail to converge to a level which can be useful.
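
A minimal sketch of what such a "music critic" can look like over the IPDNF text, assuming keras (all sizes made up; the generator and the GAN training loop are omitted):

from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 512   # number of distinct IPDNF tokens after mapping them to integer ids

discriminator = keras.Sequential([
    layers.Embedding(vocab_size, 64),        # token id -> dense vector
    layers.LSTM(128),                        # read the whole sequence
    layers.Dense(1, activation="sigmoid"),   # "is this music or not?"
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")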

All of the above is implemented in towardsdatascience.com/generating-pokemon-inspired-music-from-neural-networks-bc240014132

As for the amount of samples, you can use a few, but it won't sound very good (still great for toy implementations though).

i picked up the art of electronics and i've been working through that

you're a fucking legend. thanks again, it seems i'm on the right track with everything. I was running this exact example earlier (keras.io/examples/lstm_text_generation/), which from what I understand is exactly what you are saying. the pokemon article you posted is a godsend also.

Attached: 1529450669760.jpg (626x615, 47K)

Then I guess I'm ready.

Attached: 1563645939486.jpg (747x751, 110K)

Nerdy frogposter

Attached: cirno.webm (320x180, 759K)

IMO there are several philosophies for solving architectural problems
1: not giving a shit and just using whatever has worked for you in the past, even if it produces a solution that's ugly and not optimal or expandable
2: trying to reduce the problem to its simplest essence and then producing a straightforward solution, even if it's not the most efficient or boilerplate-free solution possible
3: trying to find the most creative and expressive solution with zero boilerplate, but at the same time making the solution more complex to understand and possibly harder to extend later
bonus: trying to do 2 but not really understanding what the fuck you're doing and ending up with a bunch of obscure boilerplate (see inheritance, abundance of OOP patterns, etc - these are mostly enabled by bad languages)

Any others?

Attached: f9c8381ff2b3c7ef856843ec53bb557fe29c29e5.png (640x480, 264K)

>knuth is so autistic his book opens with a flowchart program of how to read it with branches depending on how tired you are

Attached: 1565694528008.jpg (1005x794, 162K)

The article I linked doesn't interface the LSTM to the GAN particularly well (though it's a decent base to build from, esp. with a small sample count), because it uses a CNN for the generator and an LSTM for the discriminator. While this is the simplest convolution pairing one can do, state of the art works the opposite way - the LSTM is the (evolving) generator, and a CNN seeded with very large datasets discriminates across distinct feature classes.

This needs one to design specific feature-class-extracting autoencoders so that the noise isn't flat, but henpecked into feature domains (think tempo, chords, progressions - all randomized separately instead of lumped together into one noise). The same authors wrote on the topic of such a complex network; I think it was used for market prediction.
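
A rough sketch of what that structured noise input can look like, assuming keras (domain sizes are made up; the autoencoder feature extractors are omitted):

from tensorflow import keras
from tensorflow.keras import layers

# separate latent vectors per feature domain instead of one flat noise blob
tempo_z = keras.Input(shape=(8,),  name="tempo_noise")
chord_z = keras.Input(shape=(32,), name="chord_noise")
prog_z  = keras.Input(shape=(64,), name="progression_noise")

z = layers.Concatenate()([tempo_z, chord_z, prog_z])   # each randomized separately upstream
h = layers.Dense(256, activation="relu")(z)
h = layers.RepeatVector(128)(h)                        # 128 time steps to generate
seq = layers.LSTM(128, return_sequences=True)(h)       # LSTM as the (evolving) generator
out = layers.TimeDistributed(layers.Dense(512, activation="softmax"))(seq)  # token probabilities

generator = keras.Model([tempo_z, chord_z, prog_z], out)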

Why do you think you can reduce all architectural decisions to a few sentences? It has infinite complexity

You're right. It's impossible to make generalizations of anything ever.

My latest project is to integrate the google vision api to do OCR.
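
Rough sketch, assuming the google-cloud-vision python client with credentials already set up via GOOGLE_APPLICATION_CREDENTIALS (the filename is made up):

from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("scan.png", "rb") as f:
    image = vision.Image(content=f.read())

response = client.document_text_detection(image=image)   # OCR pass for dense text
print(response.full_text_annotation.text)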

There is no one philosophy for solving architectural problems. There's an infinite number of them and they're all applicable somewhere. There are basic statements you can make like "don't optimize prematurely" or "don't add unnecessary complexity", but it's too complex to reduce down and say here's a discrete set of philosophies you can choose from. The only thing that's ever true is to choose the right one for the job

knuth is a cutie, no bully!

>don't optimize prematurely
This is hardly any different from the ones in that post, they were simply one step more generalized. But feel free to be autistic about this, I'm not gonna debate you.

I'm not just being pedantic, I think it's important to understand, if you're in software architecture, that you can't generalize it - that every problem has a unique solution which you can't reduce down to "do nothing / simple / complex" or anything else. It's way more complex than that, and if you understand that you'll be a better programmer. But yeah, even "don't optimize prematurely" is wrong some of the time; there are no real 100% truisms for anything

>Why do you think you can reduce all architectural decisions to a few sentences? It has infinite complexity
All computation can be reduced to a Turing machine or lambda calculus though.

Attached: 1566315229974.jpg (1280x720, 99K)

architecture isn't computation

It's an interesting question. At the level of a Turing machine, are architecture and computation synonymous? Or is the state automaton considered separate from the machine itself? One issue with that is that programs can emulate other programs inside virtual machines ad infinitum, so at some point it breaks down.

architecture is a human discipline, it's not a science and has nothing to do with equivalent computations. It's about how you organize your code

>falling for the church turing thesis

Thoughts on java spring + hibernate + angular?
I think I may have found a job which uses these 3...

If the organization is arbitrary and only for human readability then the best organization would be one that self-modifies using a genetic algorithm into something fast, rendering everyone unemployed.

Attached: 1566878626430.png (500x550, 268K)

what does that even mean? A compiler turns your code into something fast, but the source code still needs to be readable for humans

what if we're already unemployed?