/dpt/ - Daily Programming Thread

What are you working on, Jow Forums?

Attached: 1509660085219.jpg (800x720, 125K)

Other urls found in this thread:

donotturnoff.net/projects/digital/lorenz
hackerrank.com/challenges/reduced-string/problem
en.wikipedia.org/wiki/Diceware
twitter.com/AnonBabble

Doing a voice-driven chess game for a senior group project. So far I've got the voice recording and parsing working, and no one has done anything else.

In Python, I can do map(fun, arr) to do fun(arr[i]) for each element.
Is there something similar for arr[i].fun()?

map(lambda x: x.fun(), arr)

Is this better than using methodcaller?

Just use generators or list comprehensions:
[x.fun() for x in arr]
(x.fun() for x in arr)

A Lorenz system model using the rendering engine I built in Java. It's at donotturnoff.net/projects/digital/lorenz

map(type(arr[0]).fun, arr) also works.
Especially since most of the time you know what type(arr[0]) is beforehand.
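i.e. you pass the unbound method. A concrete instance of that pattern with a real type and method:

print(list(map(str.upper, ["a", "b"])))  # ['A', 'B']; str.upper(x) == x.upper()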

*An interactive

I'd say it's about the same, but using the lambda is more clear imo
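For comparison, a quick sketch of both forms (methodcaller lives in the stdlib operator module; arr and fun are the hypothetical names from the question):

from operator import methodcaller

results_a = map(lambda x: x.fun(), arr)            # lambda form
results_b = map(methodcaller("fun"), arr)          # methodcaller form
results_c = map(methodcaller("fun", 1, k=2), arr)  # calls x.fun(1, k=2) on each x

# with a real method, so it actually runs:
print(list(map(methodcaller("upper"), ["a", "b"])))  # ['A', 'B']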

Pascal compiler targeting LLVM

Forgot pic (I've updated it slightly since this screenshot, but not much)

Attached: lorenz_demo.png (799x798, 132K)

doesn't work if the list contains elements of different types

>java

Nothing because I can never think of projects to work on :^]

Yeah?

>non-anti-aliased lines
That's a 'yikes' from me.

I fucking love how non-anti-aliased lines look

I made my own rasterizer; it's still in progress, and frankly anti-aliasing isn't my main priority at the moment, as I have other loose ends to tie up. I'll get round to it though, thanks for the feedback

Gonna start a .NET project because it's the only thing people are hiring for here and I need a jerb.

are you a poo?

Is the built-in one too slow or is it some other problem? Your curves seem to have odd drawing artifacts in them like they're being converted into too few straight lines.

Will a Topre keyboard make me a programming god?

post some leetcode so I can laugh at it

No, not really. I made a framework for handling 3D objects because I was interested and wanted to learn more about 3D graphics (it's still in progress, hence the non-anti-aliased lines and other glitches). Then I thought it'd be fun to make a Lorenz system modeller, and figured it made sense to put my 3D framework to good use, so that's why it's not perfect.
The reason they look like they're being converted into too few straight lines is probably because they are. I was going to add a control for how many line segments to draw, but for the moment what I have seemed like a decent trade-off between speed and accuracy (given that my rasterizer could do with being sped up a little). So that's why it looks a little off.
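For anyone following along, the system being sampled is just three coupled ODEs. A minimal Python sketch of the point sampling; the classic parameter values and the Euler step size are my assumptions, not necessarily what the site uses:

# Lorenz system: dx/dt = s(y - x), dy/dt = x(r - z) - y, dz/dt = xy - bz
s, r, b = 10.0, 28.0, 8.0 / 3.0      # classic parameters (assumed)
dt = 0.01                            # Euler step size (assumed)
x, y, z = 1.0, 1.0, 1.0
points = []
for _ in range(10000):
    x, y, z = (x + s * (y - x) * dt,
               y + (x * (r - z) - y) * dt,
               z + (x * y - b * z) * dt)
    points.append((x, y, z))         # consecutive points become line segments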

JavaScript rocks!

Attached: js-rocks.png (1000x494, 369K)

kill yourself already

I want the easy conversion from pseudocode to actual code that Python gives but would like static typing. What language should I use?

so what's the preferred IDE for python? for c++ i use qtcreator and i'd like something similar, no meme editors like vim.

Attached: 045642808.jpg (1600x1200, 220K)

-- hackerrank.com/challenges/reduced-string/problem

import Data.List

reduceStr :: String -> String
reduceStr = concat . map (\x -> take (length x `mod` 2) x) . group

-- Recursively
reduceStrRec :: String -> String
reduceStrRec str
    | reduced == nextReduced = reduced
    | otherwise = reduceStrRec nextReduced
  where reduced = reduceStr str
        nextReduced = reduceStr reduced

main :: IO ()
main = getLine >>= putStrLn . format . reduceStrRec
  where format str = if null str then "Empty String" else str

Attached: 1517698532761.png (1500x2102, 2.63M)

emacs

(here you go senpai)

nigga, i said no meme editors

Attached: 1481742059881.gif (500x338, 3.35M)

rewrite reduceStr using do notation

?

What's a fun 200+ LOC program I can write in C++ that'll trick someone into thinking I'm worth hiring for a job?

Attached: 1540127321711.jpg (1500x1874, 165K)

There's literally nothing wrong with using C#, provided you don't use Visual Studio

A Qt frontend for dd

it's a fucking notepad.exe, i wanted a full IDE as in project tree, coloring, intellisense and other shit

Attached: u pretending or what.jpg (667x556, 46K)

>You're probably best off making some shit that just uses like binary trees, hash tables etc in some way that does some boring shit
gd advice or na

you can have all of this and more in vim

yeah? how about a usable mouse? just fucking don't reply to me retard

you could try sublime if you don't like vim, but either way you need to install plugins/packages to get that IDE level of functionality. if you need everything immediately out of the box, use VSC or pay money

You should take a look at OpenGL. It even has a mode, GL_LINE_STRIP, where you send it a bunch of sampled points and it will render the lines connecting them. It takes a little setup, but it's basically guaranteed to be faster and better looking than anything you can do just on the CPU.
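To illustrate the idea, a toy Python sketch (the PyOpenGL and glfw packages are assumptions, and legacy immediate mode is used for brevity, not the fast buffered path):

import glfw                          # pip install glfw PyOpenGL (assumed)
import numpy as np
from OpenGL.GL import *

# Stand-in curve; in practice these would be the sampled Lorenz points.
t = np.linspace(0, 8 * np.pi, 2000)
pts = np.stack([np.cos(t) * t, np.sin(t) * t], axis=1) / (8 * np.pi)

glfw.init()
win = glfw.create_window(800, 800, "line strip", None, None)
glfw.make_context_current(win)
while not glfw.window_should_close(win):
    glClear(GL_COLOR_BUFFER_BIT)
    glBegin(GL_LINE_STRIP)           # consecutive vertices get joined by lines
    for x, y in pts:
        glVertex2f(x, y)
    glEnd()
    glfw.swap_buffers(win)
    glfw.poll_events()
glfw.terminate()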

I'm doing an NN that will restore color to a black-and-white picture. My training set is a bunch of portraits of WW2 generals from the game HOI4. Initially the net was a bunch of 3x3 convolutions like in waifu2x, and it was somewhat OK, but it couldn't distinguish uniform color - for pic related it used a dark blue uniform color, because that's what the Germans wore and there were more of them than anyone else. I added some classification layers and plugged the result of the classification together with the input into my convolutions, and now it works well with uniforms. I'm going to train a denoising network as well, because a lot of the black-and-white pictures I find online are very noisy.

Attached: Untitled-1.png (721x275, 145K)

i'm gonna learn js now, thanks

I'm proud of myself for writing this all by myself
splitOn :: Char -> String -> [String]
splitOn _ [] = []
splitOn c (x:xs)
    | x == c = splitOn c xs
    | otherwise = takeWhile (/= c) (x:xs) : (splitOn c $ dropWhile (/= c) (x:xs))

filtered

pycharm

// it's important that the LUT is initialized after the lists are initialized or the lists will still be null
static Dictionary<string, NLC> LUT;
static EnumPopulator()
{
    LUT = new Dictionary<string, NLC>()
    {
        // VOID glBeginConditionalRender(UINT32 id, ENUM mode)
        { "glBeginConditionalRender -> mode", new NLC(nameof(ConditionalRenderMode), ConditionalRenderMode) },

        // VOID glBeginQuery(ENUM target, UINT32 id)
        { "glBeginQuery -> target", new NLC(nameof(QueryTarget), QueryTarget) },

        // VOID glBeginQueryIndexed(ENUM target, UINT32 index, UINT32 id)
        { "glBeginQueryIndexed -> target", new NLC(nameof(QueryIndexedTarget), QueryIndexedTarget) },

        // VOID glBeginTransformFeedback(ENUM primitiveMode)
        { "glBeginTransformFeedback -> primitiveMode", new NLC(nameof(TransformFeedbackPrimitiveMode), TransformFeedbackPrimitiveMode) },

        // VOID glBindBuffer(ENUM target, UINT32 buffer)
        { "glBindBuffer -> target", new NLC(nameof(BufferTarget), BufferTarget) },

        // VOID glBindBufferBase(ENUM target, UINT32 index, UINT32 buffer)
        { "glBindBufferBase -> target", new NLC(nameof(BufferBaseTarget), BufferBaseTarget) },

        // VOID glBindBufferRange(ENUM target, UINT32 index, UINT32 buffer, INTPTR offset, UINTPTR size)
        { "glBindBufferRange -> target", new NLC(nameof(BufferBaseTarget), BufferBaseTarget) },

        // VOID glBindBuffersBase(ENUM target, UINT32 first, UINT32 count, const UINT32* buffers)
        { "glBindBuffersBase -> target", new NLC(nameof(BufferBaseTarget), BufferBaseTarget) },

        // VOID glBindBuffersRange(ENUM target, UINT32 first, UINT32 count, const UINT32* buffers, const INTPTR* offsets, const UINTPTR* sizes)
        { "glBindBuffersRange -> target", new NLC(nameof(BufferBaseTarget), BufferBaseTarget) },

        // VOID glBindFramebuffer(ENUM target, UINT32 framebuffer)
        { "glBindFramebuffer -> target", new NLC(nameof(FrameBufferTarget), FrameBufferTarget) },

...and it goes on for 800 lines
I couldn't find a better way to do this

Yeah, I'm planning some more 3D projects in the future and I'll probably use OpenGL (a lot of the information I used to make my engine was gleaned from OpenGL tutorials, so I know a bit about it), but I only really made my rasterizer because I was interested and wanted the challenge. I'll definitely look into OpenGL more though, thanks :-)

Unnecessarily monomorphic. Generalize for all Eq.

I want to make a thing that translates public keys into something human-readable. I'm thinking I should create a list like the one at en.wikipedia.org/wiki/Diceware, but longer, and including sequences with letters.

inputs = Input(shape=(inputChannels, height, width))

classify = inputs
pooling_layer_sizes = [8, 8, 16, 16, 32]
pooling_layers = []
multiplier = 1
for depth in pooling_layer_sizes:
    classify = Convolution2D(depth, (5, 5), padding='same', activation=act)(classify)
    classify = MaxPooling2D(pool_size=(2, 2), padding='same')(classify)
    classify = BatchNormalization()(classify)

    multiplier = multiplier * 2

upscaled = UpSampling2D(size=(multiplier, multiplier), interpolation='bilinear')(classify)
shape = upscaled.get_shape()
upscaled = Cropping2D(cropping=((0, int(shape[2]) - height), (0, int(shape[3]) - width)))(upscaled)

classify = GlobalMaxPooling2D()(classify)
classify = Dense(64, activation=act)(classify)
classify = BatchNormalization()(classify)
classify = Dense(64, activation=act)(classify)
classify = BatchNormalization()(classify)
classify = Dense(8, activation='tanh')(classify)
classify = Reshape((8, 1, 1))(classify)
classify = UpSampling2D(size=(height, width))(classify)

layer = Concatenate(axis=1)([classify, upscaled, inputs])

layer = Convolution2D(5, (3, 3), padding='same', activation=act)(layer)
layer = BatchNormalization()(layer)
layer = Convolution2D(8, (3, 3), padding='same', activation=act)(layer)
layer = BatchNormalization()(layer)
layer = Convolution2D(12, (3, 3), padding='same', activation=act)(layer)
layer = BatchNormalization()(layer)
layer = Convolution2D(16, (3, 3), padding='same', activation=act)(layer)
layer = BatchNormalization()(layer)
layer = Convolution2D(24, (3, 3), padding='same', activation=act)(layer)
layer = BatchNormalization()(layer)
layer = Convolution2D(24, (3, 3), padding='same', activation=act)(layer)
layer = BatchNormalization()(layer)
layer = Convolution2D(32, (3, 3), padding='same', activation=act)(layer)
layer = BatchNormalization()(layer)

layer = Concatenate(axis=1)([layer, inputs])
output = Convolution2D(3, (3, 3), padding='same', activation=act)(layer)

What are "generics" in Java? I learn them tomorrow.
What am I in for?

>how about usable mouse
you can have that too, if you wish
don't talk without knowing, it makes you look like a legitimate retard, friendo

How the everloving fuck does one even read this garbage.

desu the people who write int* ptr; should really typedef pointers like in win32

>Output isn't a DenseLayer
The fuck is this predicting?

but Rabin–Karp is older than KMP and KMP is older than Boyer–Moore

Well...

>it's basically guaranteed to be faster and better looking than anything you can do just on the CPU.
Go read the OpenGL Spec. Now. Go read what exactly it "guarantees". Don't use that LINE_* for anything.

just get a huge ass list of english words, possibly collapse together homonyms and homophones, then map blocks of the public key to it
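Something along these lines, maybe (a sketch assuming a 2048-word list so each word encodes 11 bits, BIP-39 style; the filename is hypothetical):

# Map a public key's bits to words, 11 bits at a time (2**11 == 2048).
words = open("wordlist.txt").read().split()   # hypothetical 2048-word file
assert len(words) == 2048

def key_to_words(key):
    n = int.from_bytes(key, "big")
    nwords = (len(key) * 8 + 10) // 11        # ceil(bits / 11)
    return [words[(n >> (11 * i)) & 0x7FF] for i in range(nwords)]

print(" ".join(key_to_words(b"\x01\x02\x03\x04")))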

template<typename T>
using Id = T;

Id<int*> ptr;

it's pretty easy once you get the hang of it desu, what don't you get?

Generalized types and methods that can work for a selection of types, rather than a single concrete type.
If you know ArrayList and HashMap then you already have a passing familiarity with them.
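Not Java, but the same idea exists in Python's typing module, if a rough analogue helps:

from typing import Generic, TypeVar

T = TypeVar("T")

class Box(Generic[T]):               # one class, works for any element type T
    def __init__(self, item: T) -> None:
        self.item = item
    def get(self) -> T:
        return self.item

b: Box[int] = Box(42)                # like Java's Box<Integer>
print(b.get() + 1)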

isn't vim a console application? either way i'm downloading kdevelop because the wiki says they have python + qt support out of the box, not sure what's gonna happen tho, wish me luck

Attached: 3423464500009067274.jpg (440x550, 229K)

Have you considered training it on wikimedia commons images?
Also, this has been done before. The results are shit.

I've got you bb
split :: (Eq a) => a -> [a] -> [[a]]
split _ [] = []
split c (x:xs)
    | x == c = split c xs
    | otherwise = takeWhile (/= c) (x:xs) : (split c $ dropWhile (/= c) (x:xs))

How fucking long does it take to fit a PCA to a dataset with scikit?
It's been running for over 20 minutes, and it's a small one (3000 samples, 2000 features per sample)
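For scale, a quick sanity check on synthetic data of that shape (scikit-learn's PCA; on typical hardware this should finish in seconds, so 20+ minutes suggests something else is going on, like swapping):

import time
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(3000, 2000)                # same shape as described
t0 = time.time()
PCA(n_components=50).fit(X)                   # n_components chosen arbitrarily
print("fit took %.1fs" % (time.time() - t0))  # normally seconds, not minutes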

In GVim it seems to work out of the box (just ran it, I think for the first time since installing the system, and the mouse works just like in any other editor).

As for the terminal: depends on the terminal, in mine the mouse works, though I barely ever use it. If I want to run something and copy its output, I run it in neovim's :term and yank the content.

Most importantly:

>editing text

>using a fucking mouse

Well, the thing is, there are a fair number of mods for this game, and most of them use really bad-looking portraits. I'm thinking that if I make it possible to automatically convert a black-and-white picture from the web into a drawing-like image, styled to look like the others in the game, I would be able to substantially improve the quality of existing mods. Plus it's good practice.

>template<typename T>
>using Id = T;
>Id<int*> ptr;
surely, you mean
template<typename T>
using Id = std::decay_t<T>*;
template<typename T, typename = std::enable_if_t<std::is_pointer_v<T>>>
using Id = T;

I used to use PyCharm. Now I use neovim.

Trying to decide which language I should go for.
I'm also trying to get over a hurdle of procrastination.

Yeah, but then you need to convert the bw portrait to the bw drawing first.

No, Id is the identity function over types. So Id<int*> p, p2; declares two pointers, as you'd expect (unlike int* p, p2;, where only p is a pointer).

VSCode is the patrician choice

I can't explain why, but I like lisp

Wouldn't anti-aliasing change the graphical behavior of the system? After all it's still a filter

Based and redpilled.

I find intellisense REALLY slow. Like... I finish typing even my long variable names before it shows up. This goes for every language I've tried.

Well, I looked at it previously, and for most portraits in the game I checked, I was able to find the photo it was based on. I compared a few of them to the game portraits, and to the eye, most of the differences were color, noise and a bit more lightness in the photos. Of those, color looked the most difficult, so I decided to have a go at it first. I know there is a lot of previous work related to this, but it all uses well-known huge networks as a base, and all I have is one GPU, so I decided to try and see if I can tackle this with a smaller network.

oof just made a better solution
split :: (Eq a) => a -> [a] -> [[a]]
split _ [] = []
split c (x:xs) = takeWhile (/= c) (x:xs) : (split c $ drop 1 $ dropWhile (/= c) (x:xs))

It needs configuration to bring that stuff into vim, though. Normies hate configuration. Fags are also scared of the terminal, and lethally allergic to modal editing.

not that the default vim helps much in that way; feedback on default/insert mode in vim is terrible, neovim does it much better.

Are Racket and Lisp the same? What are the differences?

Is python better than Lisp?

Oh, and by drawing, I actually mean the picture I posted. It's kind of like a photo-realistic drawing.

In the picture, the drawing is on the left and the actual photo is on the right.

This guy, it turns out, didn't take part in WW2. I don't know why Paradox included him.

Attached: Untitled-2.png (721x275, 120K)

Lisp is a family of languages, Python is a single language. Python is widely used in industry; Lisp languages are mostly a meme.

Why

What do you mean?

You're right, Python is better than all Lisps.

Lisp is a family of languages. The most iconic thing about Lisps is the syntax, which almost exclusively consists of nested parentheses that form s-expressions.
The thing about Lisps is that it's pretty easy to make one because of the minimal syntax. So even implementations of standardized Lisps such as Scheme or Common Lisp tend to add their own unique constructs that make them distinct dialects.
Racket is a dialect of Scheme. Actually, it's a couple of different dialects in a single package, with optional distinctions that you can turn on or off with compile flags, e.g. static typing.

I didn't say that.

dunno. It looks nice I guess

Python is just so damn comfy. I'm glad I learned C++ first though, as I can appreciate the niceties provided by Python. Currently waiting for 3.7.1 to finish building.

Racket is a Lisp.
Lisp is a family of languages, including Scheme, Common Lisp, Racket and Emacs Lisp.
(Although "Lisp" on its own is also another name for Common Lisp.)
Racket is a superset of Scheme. Racket is also not just a single language but a bunch of languages together, bound by some common way to use each other easily.

Working out how to combine a type theory and a programming language in such a way that you can assert that a type/function in the latter implements a type/function in the former and use that for formal verification.

For example, say you have this code in the type theory:
data List : Type -> Type where
    nil : List A
    cons : A -> List A -> List A

concat : List A -> List A -> List A
concat nil ys = ys
concat (cons x xs) ys = cons x (concat xs ys)

And this code in the programming language (I'll just steal Rust's syntax here):
struct Node<T> {
    element: T,
    next: *mut Node<T>
}

struct List<T> {
    head: *mut Node<T>,
    tail: *mut Node<T>
}

fn concat<T>(xs: List<T>, ys: List<T>) -> List<T> {
    if xs.head == std::ptr::null_mut() {
        ys
    } else {
        unsafe { (*xs.tail).next = ys.head; }
        List {
            head: xs.head,
            tail: ys.tail
        }
    }
}

There should be some way of performing induction on a type theoretic list to define what a valid linked list that implements it looks like (i.e. that T implements A and that the pointers are set so that the elements match up), as well as proving that both versions of "concat" do the same thing.

Currently I'm thinking that giving the programming language refinement types over terms of the type theory makes sense. But that's not the whole story, since there needs to be a way to define how List<T> implements List A, parameterized over an implementation T for A. Maybe I could work that into traits/type classes?

Windows API question, what's the correct way of intercepting mouse movement and modifying it?
I want to basically smooth mouse movement and do more stuff with it, but I can't modify the position from the mouse hook, and setting it using another function makes clicking go crazy

How long does it take to fit a PCA? 2048 dimensions, 3000 samples. It's been running for half an hour so far.

Attached: 1530211434801.png (544x35, 2K)

But Lisp is not usable, therefore it is bad.
The parentheses? Yes, I also like that aesthetic.
I also like that symbols are a thing.
No need for enums.

Unrelated, but why did Scheme choose to make nil and the empty list different?
There's nothing to lose and more to gain by having nil be the empty list, and also its own car and cdr.

>I also like that symbols are a thing.
>No need for enums.
Could you explain this?

>Unrelated, but why did Scheme choose to make nil and the empty list be different.
uhh, when?
(null? (list))
=> #t