/dpt/ - Daily Programming Thread

What are you working on, Jow Forums?

Previous thread:

Attached: 1511879209867.jpg (1280x720, 166K)

Other urls found in this thread:

youtu.be/YQs6IC-vgmo
github.com/KevinParnell/OneeChan
pharo.org/
nim-lang.org/
nim-lang.org/docs/tut2.html#object-oriented-programming
nim-lang.org/docs/tut2.html#templates
twitter.com/AnonBabble

nothing because i'm worthless neet

Attached: karen haskell.png (1280x719, 818K)

this

bost sum code guise

Attached: scrot.png (878x721, 122K)

A CS teacher once claimed that downloading two images from the same server simultaneously is faster than doing it sequentially. How does that make any sense?

Are those boys?

Reminder that roughly half of the anti-FP posters here think cons-ing a list takes O(N) time in FP.

Nvm while you were all busy niggerposting I figured it out.

that book sucks desu

haven't read it

Attached: 1537985908896.png (1075x1518, 1.76M)

stop posting dudes with boobs

How can I make a game? Should I try using Lisp or Haskell? I've heard those are good for this type of thing

Here you go user, going through some SICP

Attached: horner.png (665x345, 36K)

You need to set up a game development environment first, e.g. Gentoo Linux

What do you think I'm posting from, this is a fresh Gentoo install

>tfw mixing variadic templates with inline asm
Jesus Christ why did I ever think this was a good idea.

Post what it looks like lmoa

In all seriousness, Gentoo is probably the worst distro to develop games on because of how custom your configuration can get. A stock Arch or Debian image is probably better.

I'm trying to plot a hyperbola in Matlab. I'm using the parametric equations:
>x = a*cosh(t)
>y = b*sinh(t)
>t goes from 0 to 2*pi

But it looks really angular and jagged. How can I make it look more smooth and curvy, like an actual hyperbola?

Lisp

doing things in parallel is generally faster than doing them sequentially (this is why gpus exist)

even assuming the sum of all your download speeds is hard capped at some number, the amount of time it takes to negotiate connections makes an impact. This means if you're downloading a very large number of small files, doing it in parallel will speed things up by a lot, while if it's a handful of very large files the difference will be small

Look up the linspace function.

How is it not? What are you retarded?

Can I write low level (freestanding) code in Lisp?

Turns out I was just zoomed out too far.

Attached: 46a.png (645x729, 97K)

Lisp is (still) the most powerful programming language.

I mean, I have a working solution. It's just suboptimal.

#include <iostream>

int main()
{
std::cout

>not using namespace std
>2nd attempt

Functional programming is inherently impractical. You have to copy a data structure every time you change it. Adding an element to a list? O(N) time. Updating a value in a hash table? O(N) time.

Short answer: no.

Linked lists are embarrassingly slow even for their O(1) operations
>youtu.be/YQs6IC-vgmo

you mean pure functional programming.

That's what unrolled linked lists (and previously cdr-coding) are for.

>DEFINING ALL NAMESPACES AS STD
HAHA YOU ACTUALLY BELIEVE IN DOING THIS.
SUCH A BAD PRACTICE, GOOD GOD

>only printing to stdout
>doubles down on retardation
k

Still slow af. It's literally Cache Misses: The Data Structure

The hit rate for an unrolled linked list on a modern processor will be about 80% that of an array.

neat
Oneechan is back.
github.com/KevinParnell/OneeChan

JavaScript rocks!

Attached: js-rocks.png (1000x494, 369K)

That's of course completely wrong. Just the pointer overhead means you're going to fit 50% less in your cache for pointer-sized data.

...

>I don't know what an unrolled linked list is

are breaks actually helpful?

which app is this

Attached: xRl7X2[1].png (1280x800, 488K)

there are other namespaces you could use; pulling them all in with using namespace std is bad practice.
what if you wanna use another one in the same program?

sometimes

Attached: mtgp.jpg (1600x1064, 417K)

So you divided your 100x slowdown by 4 in return for wasting half your space on empty nodes and pointers instead of just pointers. Bravo, much better.

Is Florida a shithole?

Yes, it is. Don't ever move there.

>determine the length of a zero-terminated C string in O(1) time

Attached: 1458872142395.png (645x1260, 197K)

size_t bounded_strlen(const char *ptr)
{
    int i;
    for (i = 0; i != 100000; i++) {
        if (!ptr[i])
            break;
    }
    return i; /* length is now i */
}

This will at most do 100000 comparisons. That is a fixed number and thus O(1)
Obviously doesn't work if longer than 100000 chars but w/e

string[0] = 0;
return 0;

Should I learn Pascal or Python for making fun, useful personal projects and the occasional game?

t. guy who learned a bit of Pascal 16 years ago

>That is a fixed number and thus O(1)
It's O(100000), go back to school.

I was thinking about how we might write an algorithm for decentralized posting on a message board, as opposed to a federated single writer with distributed CDN.

To consider the algorithm functional, we need to be able to validate the authenticity of the contents of a post, and to validate the authenticity of the timestamp of a post.

With blockchain-based methods we run into the 51% problem in multiple senses.
First in that it is very simple and easy for a small number of posters to achieve this if we limit the scope of a blockchain to a single thread.
Second because a plurality of users may not actually want to devote resources to threads they're not interested in (i.e. mobile users) if we widen our scope to a whole board.

Python
at least it's still alive

Python. Pascal is ded.

Constant factors are ignored in Big-O notation
>go back to school.
go back to school.

Try Pharo.
it's pretty cool.
pharo.org/

Learn bash instead. Bash 5.0 is out soon and adds some new features and fixes.

Attached: Screenshot_2017-07-08_00-10-31.png (923x1163, 114K)

Do this but for SIZE_MAX

Bash scripting has its purposes, but it pales in comparison to what python can do. C is better IMO

The length of the string isn't constant, idiot.

Python. I also learned Pascal a long time ago.

You are looking at things from the wrong perspective.
You will never need to determine the length of a C-string, because the length will be given to you by functions like the Linux kernel's strscpy()
If you write your code around the idea that you will *always* know the length of strings, then you will never need to determine the length of a string.

wouldn't know, all I do is take breaks and I don't actually type code, so I can't tell the difference

It seems like you could just bundle those up in one struct at that point, maybe call it std_string or something.

Sometimes working on the same thing continuously will result in tunnel vision. Taking a break will allow you to occasionally look at the bigger picture and reduce the effects of tunnel vision on your work.

Hello I'm here to shill
nim-lang.org/

Goodbye

Attached: .jpg (640x641, 19K)

then you would be forcing every string to have an extra n-word integer, which embedded and concurrent programmers would not be happy about.

What is the best font for coding purposes?

I don't expect a wall of text but if you're going to shill something you should at least tell people what the fuck it's about

>forcing every string to have an extra n-word
I see nothing wrong with this.

Attached: 1532751119565.png (1011x874, 465K)

So your C program would '''know''' the size of a string in more of a theological sense rather than actually having that length stored somewhere?

>TAD

I can think of many cases where this isn't the case. E.g. command line arguments from argv.
No, because you can keep the normal string functions while extending the library with additional functions that operate on extended strings. In fact, take a look at string libraries like sds: you can extend strings while still retaining backwards compatibility with standard library functions.

not him but
>simple python syntax in a compiled to c language
>not-coupled and optional GC that's also highly tunable
>style agnosticism with a choice of brackets or whitespace
>an emphasis on meta-programming with custom and easily defined operators and macros
>A different more minimalist approach to OOP.

finally, the death of phone posters is at hand! may Terry bless your work user.

the length is stored when the string is made, as strings in other languages are, the difference is that the length is not joined at the hip with the string.
You receive the length from strscpy() and you put it wherever you want.
Maybe you have a lot of strings of the same length. Maybe you don't actually care about the length of the string and the rest is zero-padded.

C has always been about giving the programmer the flexibility they need to do whatever they want.

If you're afraid of buffer overflows from an input stream then you can just artificially limit the input length and return an error when the buffer size is exceeded.

what does that last line mean?

Macros and template metaprogramming are widely regarded in C++ as a massive failure. Why should I expect you to do it any better?

Metaprogramming is the only reason C++ is still around.

no class autism, just types.
nim-lang.org/docs/tut2.html#object-oriented-programming
Nim allows Lisp-like control of the AST.
And templates are just AST manipulation.
nim-lang.org/docs/tut2.html#templates

please be trolling

>Maybe you have a lot of strings of the same length
So you have a shared length between a bunch of strings? Then I guess you would pass around a pointer to that shared length in addition to a pointer to your string data? That sounds like a fascinating '''optimization'''

I would argue that C++ is still around more because of the plurality of codebases written in C++ during a time when other languages were either a joke or still a twinkle in someone's eye, and the sheer level of first-class support for C++ from hardware vendors and (corporate) library developers, than anything else.
Incumbency is a hell of a drug.

I suppose it depends on how much value you place on that extra 4 or more bytes for every string that needs to be pulled into cache.

But C++'s gimmick at the time was it having """""""""""proper"""""""""""" meta-programming over C's pre-processor wank.

Good thing pointers don't take up any cache space.

I don't claim to be that old, but I hear it was because object-oriented and exceptions were "the hot shit"

I don't see where this misunderstanding is coming from.
N strings + 1 pointer vs N strings + N lengths for strings of the same length?

This gimmick was discovered by accident and has only received proper attention since C++11.
I don't think it's an exaggeration to say that the "revitalisation" C++ has experienced in recent years is predominantly a consequence of embracing metaprogramming.

Is there even a book on SMTs, or are they too state-of-the-art for there to be proper literature on them? If so, what papers should someone wanting to do research on SMTs read?

Because you can't associate N strings with their lengths using 1 pointer. You actually end up with N strings + 1 length + N pointers to that length vs. N strings + N lengths.

How's your language doing, /dpt/?

Attached: IMG_20181126_085833.jpg (1200x1652, 270K)

>1.1%
Not bad I would say.

pretty good, it just got an update 4 days ago.
>swift still losing to Obj-C
W E W
E
W

>C/C++
Now that's a fucking retarded chart.