/dpt/ - Daily Programming Thread

glenda waifu edition

Previous: What are you working on, /g/?

Attached: glenda_2.jpg (1910x2208, 653K)

>Posted before the bump limit
Delete this invalid thread and kill yourself.

JavaScript rocks!

Attached: javascript rocks.png (1000x494, 134K)

Fuck off, there's literally nothing wrong with posting a new thread before the bump limit.

You posted a new thread before the bump limit, good job retard. Kys.

First for C

Attached: milkies.jpg (3840x2160, 819K)

>windows
>garbage post
Like pottery

Plan 9 deserves a thread of its own. Don't post glenda unless you're making a Plan 9 general, please.

Second for C! C is the best language, seriously. Three cheers for the best language by far, C! Java is trash compared to it.

Attached: i told you dog.png (240x210, 57K)

nth for nim and these sperglords being complete queers

>that atrocious C style
>windows 10
>integer overflow UB
incredible

Why do C shills suck at their own language?

>the "Windows 10 is bad" argument again
Fuck off, there's literally nothing wrong with Windows. It's plainly superior to Linux, especially for development, and the telemetry boogeyman is massively overblown. The screenshot I posted earlier speaks for itself.

Pic related you fucking troll

Attached: 7daa466a816f5bd0f93b11cb5195eeeddbd7.jpg (3840x2160, 838K)

Use an anime image next time.

Attached: squirt.png (1280x960, 1.32M)

most of us don't, it's the winshits that fuck everything up for the rest of us.

Attached: 2019-01-22-182941_3120x1920_scrot.png (776x927, 95K)

Employed Haskell programmer reporting in

Attached: 1524969530506.png (750x750, 76K)

you must be new here, glenda is best girl

Attached: glenda_3.jpg (1910x2643, 303K)

macro expansions are a hell of a drug
im = (((i) - (((((J_sub)->data_space))->ix))) /
(((((J_sub)->data_space))->sx)) +
(((j) - (((((J_sub)->data_space))->iy))) /
(((((J_sub)->data_space))->sy)) +
(((k) - (((((J_sub)->data_space))->iz))) /
(((((J_sub)->data_space))->sz))) *
(((((J_sub)->data_space))->ny))) *
(((((J_sub)->data_space))->nx)));

Attached: average nim programmer.png (341x520, 343K)

>Single line if statements with the body on the same line as the conditional
>Mixing next-line single line body if statements with same-line body else branches
>single-line body on same line as the for iterator
>inside a single line if statement
Your styling is inconsistent and objectively wrong.

>winshits
Fuck you. Literally stop making fun of Windows, it's a great operating system. It's shit, don't use it.

anon, that's a rabbit.

Rabbits can be girls, too! Don't be rabbitist.

But what if you fuck the rabbit?

Attached: 1478132750134.png (512x499, 342K)

yeah, fucked up that one else statement near the bottom of the pic. Otherwise, there is literally nothing wrong with this code.
>prove me wrong.

anon, for what language are you writing an interpreter?

The if->for->body is still terrible.
Also bracketless if/else expressions are bad. If you HAVE to use them, yes, the K&R guideline says do that. But you shouldn't. If it branches, use brackets.
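To illustrate, same logic both ways (throwaway example):

int f(int err)
{
    /* bracketless: what K&R shows, and what I'm telling you to avoid */
    if (err)
        return -1;
    else
        return 0;
}

int g(int err)
{
    /* braced: if it branches, it gets brackets */
    if (err) {
        return -1;
    } else {
        return 0;
    }
}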

>ligatures
>no .clang-format

JavaScript rocks!

Attached: js-rocks.png (1000x494, 286K)

Why can't I find a programming language that clicks

What have you tried?

And they say Lisp is bad...

it's a lisp dialect. I've shilled it a few times here before; it's designed for the specific purpose of generating HTML and CSS. It supports CGI scripting as well. The interpreter supports lazy evaluation, though the only thing that uses it right now is if statements. I wrote it in a way that's pretty generalized, so I should be able to take this core code and use it to make a lisp DSL for anything I want in the future.
I hate writing CSS, and I hate writing HTML even more, so I've made something that means I never have to look at that garbage again. That's the general idea.

It’s better to just make your own compiler if you want a language that doesn’t exist. If you want a concise and fast language make it. If you want some kind of close to Assembly esoteric language make that. If you’re not concerned about performance you could even make a shitty interpreter instead of a compiler. More people need to take the Terry route and take matters into their own hands if they want a language with certain features.

Should a lexer return a string or a (for example) parsed integer? Why do Go, Rust, and Swift all wait until the parser or even later to call stoi?

why did i read this as "why can't i find a programming language that likes dicks"

>setting yourself up like this
>she wants (You)'s
congrats, the hrt is working

How do I make and use an int of infinite size in Assembly? Pls no bully. I need a way to use precise numbers that are larger than longs.

>larger than longs
bbc jamal;

>Assembly
Why?

what are you talking about, i didn't hrt anyone
i would never hrt a fly

Everything should be a string for as long as possible. I quit passing references all over the place in compilers and now just use strings with reference tables.
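Rough sketch of what I mean by a reference table, with made-up names (strdup is POSIX, not ISO C):

#include <string.h>

#define MAX_SYMS 1024

static const char *syms[MAX_SYMS];
static int nsyms;

/* intern a string once, then pass the small int id around
   instead of pointers into five different structures
   (no bounds check, it's a sketch) */
int intern(const char *s)
{
    for (int i = 0; i < nsyms; i++)
        if (strcmp(syms[i], s) == 0)
            return i; /* already in the table, reuse the id */
    syms[nsyms] = strdup(s); /* copy, so the caller's buffer can die */
    return nsyms++;
}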

>Also bracketless if/else expressions are bad
completely subjective. I find that brackets around one-liners are just visual clutter, and the code is easier to read when they aren't there. I don't hold my contributors to any particular standard on this front, though there are rules about indentation, placing open brackets on the same line as control flow statements, etc.

you'll want the original text of the token in order to spit out error messages, so you're going to hold onto it either way. I guess that's an argument that could be made. My lexers always do conversion of literals right off the bat, then store the original text alongside the conversion. Maybe that's just me, not completely sure.
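e.g. a token shape like this; just a sketch, field names made up:

typedef struct {
    int kind;         /* e.g. TOK_INT, TOK_IDENT, ... */
    const char *text; /* original spelling, kept for error messages */
    int line;         /* where it came from, also for error messages */
    long value;       /* eager conversion, only valid when kind == TOK_INT */
} Token;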

that would be Go

Speed

Attached: E70E0C14-3BC7-4787-ABAD-ABB53A428955.jpg (750x436, 108K)

Any compiler on the planet will generate faster asm than you could ever hope to write.

Runtime speed dipshit

That's blatantly untrue. Don't assume your own incompetence is universal.

Why should it be a string as long as possible? Is there a reason other than emitting error messages?

Generally speaking if you want to use data larger than your largest register then you need to use memory. Let N be the number of bits in a machine word on your architecture. What you'll need to do is allocate an array of machine words and use them as base-(2^N) digits. So, for example, on an 8-bit system, your digits would be base-256. And there's your unlimited-size integer data type. You have to implement all the math yourself in terms of operations on the base-(2^N) digits. So for example if you had to add two unlimited-size integers as defined in this way, you'd have to manually implement addition by adding each pair of digits and carrying where necessary.
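Addition with carry in C terms, if it helps (sketch, names made up; on x86 the loop body collapses to add followed by adc):

#include <stdint.h>
#include <stddef.h>

/* numbers are arrays of 32-bit digits, least significant first */
void bignum_add(uint32_t *dst, const uint32_t *a, const uint32_t *b, size_t n)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t sum = (uint64_t)a[i] + b[i] + carry;
        dst[i] = (uint32_t)sum; /* low 32 bits become the digit */
        carry = sum >> 32;      /* anything above carries into the next digit */
    }
    /* a real implementation grows dst if carry is still set here */
}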

Anything of performance importance should be precompiled, unless you're diving into inspector/executor JIT recompilation research, which you aren't, so don't bother pretending.

The few examples of hand-written optimized ASM do not constitute evidence that writing it yourself yields more performance. Moreover, the few examples of that which DO exist are tailored for specific architectures, most of which are massively outdated, and make no effective use of modern many-core or parallel processing.

Anything you think you can write in asm effectively, any modern compiler will outdo you a thousandfold over.

pretty sure he meant faster executing, not faster to write

The compiler will generate sufficiently-optimized assembly code for nearly all purposes. It very rarely produces perfect code that couldn't be optimized further by hand. However, it's rare that the difference is meaningful.
The only time I've ever had to hand-write a significant amount of asm was when I had strict (sub-150ns) timing requirements for some register writes on a microcontroller. There was a decent amount of calculation that had to be done before the writes, so I front-loaded all of it, stored the results, then wrote them to the registers in back-to-back instructions. There was no way to express this exact behavior in C without gcc moving shit around on me, so it had to be done in assembly.
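For the curious, the C version looked roughly like this (registers and math are made up here); volatile pins the order of the two stores, but C makes no promise about the instructions gcc schedules between them:

#include <stdint.h>

/* hypothetical memory-mapped registers */
#define REG_A (*(volatile uint32_t *)0x40001000u)
#define REG_B (*(volatile uint32_t *)0x40001004u)

void timed_writes(uint32_t x)
{
    /* front-load all the calculation... */
    uint32_t a = (x << 4) | 3u;   /* stand-in math */
    uint32_t b = x ^ 0xA5A5A5A5u;
    /* ...then fire the writes back to back */
    REG_A = a;
    REG_B = b;
}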

Super performance-critical code is written in Assembly by human hands, dipshit, including infinite-size integer libraries.

I should have read what I was replying to before I posted
Still, not every compiler unrolls for loops for you

He probably doesn’t even know what a register is.

Timing related work on a microcontroller is probably the only place hand-writing asm makes sense. But that's not necessarily total execution speed as much as it is balancing critical timing sections of work.

Repeating your unsubstantiated claims doesn't make you more right.

-funroll-loops
Granted it's not a guarantee, it's more of a suggestion, but if you can hand-write the iteration space of a loop in unrolled asm, the compiler very definitely can.
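i.e. if a human can write the unrolled version by hand, the compiler can trivially derive it from the rolled one (toy C example rather than asm):

void scale4(float *v, float k)
{
    /* rolled */
    for (int i = 0; i < 4; i++)
        v[i] *= k;
}

void scale4_unrolled(float *v, float k)
{
    /* unrolled by hand: no counter, no branch */
    v[0] *= k;
    v[1] *= k;
    v[2] *= k;
    v[3] *= k;
}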

#include <stdio.h>

struct mommy {
    int milkies;
};

int
main(int argc, char *argv[])
{
    struct mommy mommy = {0};
    for (;;)
        printf("I have %d milkies from've'th mommy\n", mommy.milkies++);
}

fixed code formatting

>it's not a guarantee
Then human hands can do it faster.

>the compiler is faster except when it’s not

who is bored and wants to review my code?

>Write a program entab that replaces strings of blanks by the minimum number of tabs and blanks to achieve the same spacing

#include <stdio.h>

#define TABLENGTH 8
#define BLANK '-'

int main(void)
{
    int c, pos;

    pos = 0;
    while ((c = getchar()) != EOF) {
        ++pos;

        // count consecutive blanks if we encounter one
        if (c == BLANK) {
            int blanks = 1;
            while ((c = getchar()) == BLANK) {
                ++blanks;
            }
            // how many characters away are we from the next tab stop (nts)?
            // can we use a tab without exceeding the number of required blanks?
            int nts;
            while ((nts = TABLENGTH - ((pos - 1) % TABLENGTH)) <= blanks) {
                putchar('\t');
                pos += nts;
                blanks -= nts;
            }
            while (blanks-- > 0) {
                putchar(BLANK);
                ++pos;
            }
            if (c == EOF)
                break;
            putchar(c); // the non-blank that ended the run
        } else {
            putchar(c);
        }
        if (c == '\n')
            pos = 0;
    }
}

Attached: cpro.png (2000x2810, 150K)

>++pos
Why do people do this? Why not put it on the right like a decent human being?

Let me repeat myself:
If you can hand-write the iteration space of a loop in unrolled asm, the compiler very definitely can.
I say "It's not a guarantee" because there are iteration spaces that cannot be determined ahead of time, human hand or not. If I didn't say "It's not a guarantee" your pedantic ass would find an example of non-deterministic iteration space and say SEE THE COMPILER COULDNT DO IT

If you are on a modern machine, and do not need to fuck around with nanosecond-order scheduling, whatever asm you write will not compete with anything from a modern compiler.

Changing scheduling != Going faster

>But that's not necessarily total execution speed as much as it is balancing critical timing sections of work.
yeah, that's my point. asm programming isn't just about optimization, it's about granularity of control. C is about as granular as you can get with a high-level language, but sometimes it isn't enough.

too many different loops inside that if statement. You're probably overcomplicating the task, or you should divvy it up into separate functions. Make them static inline if you're really that anal.

>asm programming isn't just about optimization, it's about granularity of control.
I don't disagree with that at all. Claiming you can hand-write faster asm than a compiler (not you, the other guy) is a bold fucking statement and pretty much guaranteed wrong outside of tremendously specific circumstances.

They shill C because they suck at it.

Then how come I have asm files that when executed run 20% faster than C versions? If your absurd claims were true performance critical libraries wouldn’t be written in Assembly.

And before you ask yes I was using optimization flags to compile the C.

Functioning exercise / 10

eh, it's certainly possible. GCC is far from perfect, depending on the target platform. I think they finally fixed this bug, but for a long time in GCC 7, the piece of shit literally spat out useless instructions for Thumb 2 (shit like multiplying two numbers together then never using the result or comparison flags for anything), enough that there's a performance impact.

I genuinely don't believe you. Post an example.

Nth for J
+/(*(-:~.)@:q:)"0~.>,&.>/(,2&(+/\)@:(0:,],0:)&.>@{:)^:50

Attached: Brain_Expand_Meme_FizzBuzz.jpg (680x1152, 452K)

; least common multiple of numbers 1-20
%define accum ebx
%define divisor ecx
%define max 20

section .text
global _main
_main:
        mov     accum, 0
retry:
        add     accum, max
        mov     divisor, 3
check:
        xor     edx, edx
        mov     eax, accum
        idiv    divisor
        cmp     edx, 0
        jne     retry
        add     divisor, 1
        cmp     divisor, max
        jne     check
        fs      outln, rbx
        call    _exit

A lot of poorly spat-out asm (*especially* from gcc, the thing is a monolithic elder-god that nobody fully understands) comes from poor operation choices on the part of the programmer.
e.g.: at the source level, dividing a float and multiplying by the reciprocal (say "x / 3.0" vs "x * 0.333") look interchangeable, but the generated instructions are massively different. The multiply is enormously faster (on Intel CPUs), but the compiler has absolutely no idea what precision you want (can it turn x / 3 into x * 0.33? x * 0.3333? x * 0.3333333?), so it has to stick with the slower division instruction unless you hand it -ffast-math.
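Minimal demonstration (sketch; -ffast-math is the real gcc/clang flag that licenses the rewrite):

/* same math on paper, massively different instructions */
double slow(double x) { return x / 3.0; }         /* compiles to a divide */
double fast(double x) { return x * (1.0 / 3.0); } /* constant folds, compiles to a multiply */
/* gcc/clang will only turn slow() into fast() themselves under
   -ffast-math, because the two differ in the last bits of the result */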

It's not a magical infinite beast of knowledge and there's constant research into how to do ever-more-clever bullshit with compilers, but it's a maybe one in a billion shot to write better asm than modern compilers (short of very specific hardware limitations or scheduling tricks)

>the compiler has absolutely no idea what precision you want (can it turn x / 3 into x * 0.33? x * 0.3333? x * 0.3333333?), so it has to stick with the slower division instruction
That's one of the problems. The programmer knows more specifically what he wants to do so if he's proficient he can write Assembly better tailored to his use. Compilers never had the edge on speed compared to skilled Assembly programmers. The one thing compilers have going for them is they're (sometimes) more portable.

It only matters for errors

Gunna post the C code you're comparing it to? Compilers are smart but they have to work off what you provide them.

Well sure. And again, under *very* specific circumstances, human-made asm snippets have plenty of potential to be much faster than the compiler. If you're working on microcontrollers or other, by modern standards, horrifyingly constrained platforms, hand-writing asm is probably a smart move. Like you said compilers are intended to be a way to quickly and efficiently generate portable code, and if you're dealing with some 8bit processor with a whopping 1kb of RAM then yeah, GCC and Clang aren't gunna be your friend. Which is also why they support inline asm, for those oh-so-critical bits that nobody except you working on this incredibly specific platform with incredibly specific constraints would know of. Compilers are abstract generalization machines, so they don't know about hardware limitations or bottlenecks.


Here's all I'm trying to say: You, the general programmer, writing general programs, of any significant verbosity, and *especially* with any significant data structuring, are not realistically going to compete with a compiler writing asm unless that is your sole, dedicated, full time job on very small performance critical chunks of code.
Is it possible? Yes. Is it likely? Absolutely not. Are you, the random stranger on the internet, working on a modern x86 platform, going to yield better asm? Extremely improbable.

>A lot of poorly spat-out asm (*especially* from gcc, the thing is a monolithic elder-god that nobody fully understands) is from poor operation choices on behalf of the programmer.
It's not related to programmer decisions in this case, it is (I say "is" because apparently it still isn't resolved) just a bug in the compiler: bugs.launchpad.net/gcc-arm-embedded/+bug/1502611
That's the point I think the other anon is trying to make. The crazy super-advanced optimizations that compilers can make are mostly offset by bugs like the one I linked above. It ends up being kind of a wash, imo.

This is the C code I'm using. It gets shittier performance than my Assembly.
#include <stdio.h>

#define true 1
#define false 0
#define bool int

int main() {
    int accum, divisor, max;
    accum = 0;
    max = 20;
    while (true) {
        bool check = true;
        accum += max;
        for (divisor = 3; divisor < max; divisor++) {
            if (accum % divisor != 0) {
                check = false;
                break;
            }
        }
        if (check) break;
    }
    printf("%d\n", accum);
    return 0;
}

your code is definitely too complicated
i did your homework for you, so post your feet
#include <stdio.h>
#include <stdlib.h>

#define TABLENGTH 8
#define BLANK '-'

int main(void) {

    int c;
    int blanks = 0;

    // replace TABLENGTH consecutive BLANKs with a tab
    while ( (c = getchar()) != EOF ) {

        if ( c == BLANK ) {
            ++blanks;
            if ( blanks == TABLENGTH ) {
                putchar('\t');
                blanks = 0;
            }
        }
        else { // c != BLANK
            if ( blanks ) {
                do {
                    putchar(BLANK);
                }
                while ( --blanks );
            }
            else {
                putchar(c);
            }
        }
    }

    exit(EXIT_SUCCESS);
}

should i get programmer lipstick to go with my socks

you had better not be the anon that was harping on me for my C code style earlier, this is the ugliest shit I have ever seen in my fucking life

your code produces wrong outputs

dddd---------dd-----------ddd-------d
>dddd -d ---dd-------

I'm writing functions based on the compound interest formula in mathematica

on wolfram-alpha I can just type in a (mathematical) function and it'll output a graph for me.
How can I set a variable in mathematica such that I don't supply it; rather, mathematica generates a range of values based on changing that variable?

in other words, how do I make a programming variable behave as a mathematical variable in mathematica, I guess

the variable in question is shares_

Attached: bbbbbbbbbbb.png (708x526, 31K)

Neat, I also hate writing pure html and css. Are you using anything as a guide?

Because it's not a lexer's job to transform anything, just to read and mark token boundaries.
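i.e. the lexer can hand back nothing but slices (sketch, made-up names):

#include <ctype.h>
#include <stddef.h>

/* a token is just a boundary-marked slice of the source text;
   conversion is somebody else's job */
typedef struct {
    const char *start;
    size_t len;
} Slice;

Slice lex_int(const char **cursor)
{
    const char *p = *cursor;
    while (isspace((unsigned char)*p))
        p++;
    const char *start = p;
    while (isdigit((unsigned char)*p))
        p++;
    *cursor = p;
    return (Slice){ start, (size_t)(p - start) };
}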

oh, shit!
i think it works now
#include <stdio.h>
#include <stdlib.h>

#define TABLENGTH 8
#define BLANK '-'

int main(void) {

    int c;
    int blanks = 0;

    // replace TABLENGTH consecutive BLANKs with a tab
    while ( (c = getchar()) != EOF ) {

        if ( c == BLANK ) {
            ++blanks;
            if ( blanks == TABLENGTH ) {
                putchar('\t');
                blanks = 0;
            }
        }
        else { // c != BLANK
            if ( blanks ) {
                do {
                    putchar(BLANK);
                }
                while ( --blanks );
            }
            putchar(c);
        }
    }

    exit(EXIT_SUCCESS);
}

I accept that argument if you're working on newer (by compiler standards) architectures like ARM. Compilers are software and software has bugs. That's kind of a fun one.
What I said before does still hold true though, simply using a non-optimal operator isn't necessarily resolvable by a compiler (but then again, if you could feasibly write competing asm, you wouldn't make such a mistake)

Pretty much identical to -O2 on gcc 8.2

Attached: asm.png (437x449, 32K)

nope, kinda just wrote it from scratch with what I already know. Never read a book on compilers or whatnot, just pulling from past experience. The tokenizer is a huge bottleneck at the moment, but it's super simple and very easily extensible, so it's gonna stay as-is until the project hits 1.0 and I'm completely satisfied with the built-in functions.

>newer (by compiler standards) architectures like ARM
ARM has been around in one way or another since the 80s, and Thumb 2 has been around since 2003. I agree with your point, though. No matter what you do, a lot is going to be left to the programmer, and there's very little to gain either way.

why did you make the code formatting worse

It's similar but not exact. My Assembly version still runs faster.

Reminder that if you've been writing code for more than three months and you still don't understand monads, then it's your own fault and you're simply a bad programmer who doesn't try hard enough.

This is more trouble than it's worth to do in assembly. I would implement bignum in C or something else that can target your arch, and just link the compiled code so you can call it from your hand-written asm. If you don't *actually* need infinite precision, but just better than long, then you can try using SSE instructions. Looks like it's been done before from my minute of searching, but it may also be more trouble than it's worth.

>implying good programmers use moan ads
Isn't it the other way around? If you've been writing code for more than three months, and you're still lost in academic shitpost-land, then it's your own fault and you're simply a bad programmer who doesn't try hard enough.

>ARM has been around in one way or another since the 80s, and Thumb 2 has been around since 2003.
well sure but ARM only got seriously popular (relatively) recently. Gotta give some time for all those kinks to get worked out.

I genuinely don't believe you've tested this correctly.
For one, the only differences are your lack of cdq and swapping test with cmp, but since you cmp to 0 those calls are equal cost.
For two, even if you counted clock cycles, both of these sets would be (excluding the print statement) completely in the same number (again, excluding the cdq). In terms of raw numbers, because of that cdq, yours may in fact be faster. By a number of CPU cycles countable on one hand.
I'd very much like to know how you verified yours as being faster, because nothing I see would even imply that.

>Compilers never had the edge on speed compared to skilled Assembly programmers
Arguably, a human physically cannot hold every vector of optimization in their head for every session of programming, but compilers can.

you'd probably have better luck on

in Haskell you literally *need* monads to get it to generate code that will interact with anything outside of its own runtime.

Even sometimes just copy-pasting code can make the program faster, so I doubt compilers hold all the vectors of optimization inside them.

You don't. There have been other ways, but Monads have been established as the standard.

Humans can optimize code based on their expected use cases which computers can only do if they perform some runtime analysis based on typical inputs
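e.g. the kind of up-front hint a human can give, which a compiler only gets from profile-guided optimization (__builtin_expect is real gcc/clang; the scenario is made up):

#include <stddef.h>

int count_bad_bytes(const unsigned char *buf, size_t n)
{
    int bad = 0;
    for (size_t i = 0; i < n; i++) {
        /* the human knows corrupt bytes are rare in the expected
           inputs, so tell the compiler to lay out the loop for
           the common case */
        if (__builtin_expect(buf[i] == 0xFF, 0))
            bad++;
    }
    return bad;
}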

you're describing the purpose of the inline keyword

Which doesn't do much in modern compilers.