/dpt/ - Daily Programming Thread

What are you working on, Jow Forums?

Previous thread:

Attached: dpt-rms.png (855x919, 450K)

Other urls found in this thread:

en.wikipedia.org/wiki/Magic_number_(programming)
gcc.godbolt.org/z/7Xvjnj
gcc.godbolt.org/z/FTiexY

That's not a cute anime girl

float Q_rsqrt( float number )
{
long i;
float x2, y;
const float threehalfs = 1.5F;

x2 = number * 0.5F;
y = number;
i = * ( long * ) &y; // evil floating point bit level hacking
i = 0x5f3759df - ( i >> 1 ); // what the fuck?
y = * ( float * ) &i;
y = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration
// y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed

return y;
}

Attached: OpenArena-Rocket.jpg (1280x1024, 343K)
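
A quick sanity check for anyone who wants to see how close it actually gets. This is a sketch assuming the Q_rsqrt above is in scope; with the single Newton iteration the relative error should stay within a fraction of a percent:

#include <stdio.h>
#include <math.h>

float Q_rsqrt( float number ); /* defined above */

int main(void)
{
    float xs[] = { 0.25f, 1.0f, 2.0f, 100.0f };
    for (int i = 0; i < 4; i++) {
        float approx = Q_rsqrt(xs[i]);
        float exact = 1.0f / sqrtf(xs[i]);
        printf("x = %g: approx %f vs exact %f (rel err %.4f%%)\n",
               xs[i], approx, exact,
               100.0f * fabsf(approx - exact) / exact);
    }
    return 0;
}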

install J

Go to sleep, John

this
very ugly anime girl

Sorry, have this instead.

Attached: comfy_sleeps.png (425x585, 158K)

wonder if johnny is one of our local racket posters

Why is college such a scam?

That's some pretty cute text, accepted.

muh free market

muh rite of passage

muh actually getting a job that doesn't suck

Let it be known that on this day, C and Haskell became friends

Attached: 1493178253912.jpg (852x973, 475K)

CHadskell

Attached: karen haskell.png (1280x719, 818K)

What about Swift? We have sum types and malloc.

decent language locked to a shitty ecosystem and OS.

Nah, it's available on your shitty OS too.

>Binding everything from C

oh, had no idea XCode was on GNUL.
pretty neat.

Is it available on TempleOS?

thoughts, Haskell?
on one hand:
>malloc
>sum types
>option types
>meaningful difference between class and struct keywords
>first-class functions
on the other hand:
>interesting but overengineered access control
>reference counting scheme requires explicit weak references
>assignment does not return a value
>no increment operators
>used almost exclusively for appleshit

Attached: 1492832377473.png (1052x1342, 769K)

i have a Paper about my Compiler but i don't have a Computer to make it. i am trying to find a big Prime Number but i don't think the Solution uses Numbers. 11 seems IMPORTANT. everything is 10^x Rings

(10^(2x) - 1) % 11 = 0
(10^(2x + 1) + 1) % 11 = 0


for any X that is a Whole Number. test it YOURSELF
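
Took you up on the "test it YOURSELF" part. A minimal sketch in C (64-bit unsigned arithmetic keeps the powers exact up to 10^19):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t p = 1; /* 10^0 */
    for (int e = 0; e <= 17; e++) {
        if (e % 2 == 0) /* even exponent: (10^(2x) - 1) % 11 */
            printf("(10^%d - 1) %% 11 = %llu\n", e,
                   (unsigned long long)((p - 1) % 11));
        else /* odd exponent: (10^(2x+1) + 1) % 11 */
            printf("(10^%d + 1) %% 11 = %llu\n", e,
                   (unsigned long long)((p + 1) % 11));
        p *= 10;
    }
    return 0;
}

Every line prints 0, as claimed.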

it's not like Haskell doesn't have malloc, but Haskell is definitely weak on memory management in pure code
i knew a pajeet who liked swift

it feels like Swift just doesn't have a real community outside of apple stuff. Even if it's a good language, it's basically what C# is for microsoft.

Yeah, it follows from the binomial theorem. Works for any two consecutive whole numbers, not just 10 and 11: (2^2 - 1) % 3 = 0, (2^3 + 1) % 3 = 0, (11^2 - 1) % 12 = 0, (11^3 + 1) % 12 = 0, etc

You may be having a stroke. Seek medical attention.

>what are the over 5,000,000 games built with Unity

>game development
>real programming
lol

game engine programmers are often the most talented programmers around.

lol

There's not even anything worth using the binomial theorem for.
10^0 = 1 = 1 mod 11
10^1 = 10 = -1 mod 11
So 10^(2x) = (-1)^(2x) = 1 mod 11 and 10^(2x+1) = -1 mod 11, and then it just repeats.

who are "real programmers" to you?
And if you're going to give me the generic "kernel devs", at least explain why.

t. person who has never had the misfortune of having to work with a game engine

what does game engine development have to do with game development? unity may not even be primarily built in c# for all I know

Real programmers are people who write real software, not play things for children.

>what does game engine development have to do with game development?
hmm

It really doesn't though. Most people that make games don't look to see what is happening under the hood.

what is "real software"?
Webshit?
maintaining enterprise legacy shit?

yeah i'd rather waste my time programming something boring nobody cares about than some ridiculous leisure activity, what kind of fag wants to have fun

LMAO these people don't even realize they're making a virtual machine

For an example, look into Speccy, one of the most powerful pieces of software ever written.

is that a joke

You are a joke.

Children, that's who.

something that reports your computer specs is a basic project for cs students

and the only reason anyone cares about their system specs is so they can play video games anyway; it's not like anything else you do on a PC is using your hardware

Game engines are written in C++.

>t. person who has never had the misfortune of having to work with a game engine
>Game engines are written in C++.
Both statements are true. I wonder if there's any connection...

I wrote mine in C

hm, was not aware of this. I don't play games, nor do I write games, so I guess it kinda makes sense that I didn't know.

>i = * ( long * ) &y;
Why not just (long) y;?

Doesn't the latter convert it while the former directly reinterprets the memory?

Yeah, but the former is UB.

>the former is UB.
Technically, but in practice, compilers specifically take care to not break code like that.

So what if it's undefined? What matters is whether the compiler generates the code you want it to generate.

>So what if it's undefined?
So it can break silently on the next version of the compiler. And in this case it won't, because they specifically accommodate such abuse. But knowing that this is UB is imperative to understanding what you are doing and weighing the pros and cons.

>i = 0x5f3759df - ( i >> 1 ); // what the fuck?
This is my desktop

Attached: Q_rsqrt.png (1920x1080, 36K)

Personally I'd compile this function in a separate translation unit with things like strict aliasing disabled. It's true that some retarded compiler faggot might suddenly decide to break perfectly good code because "lmao it's undefined, so it can't happen, therefore it's dead code, therefore I can delete it, and therefore I can delete the rest of the code too, and therefore I can delete the entire function" or something equally retarded.

>abuse

lol

How did they come up with this magic number? Trial and error?
If there's a calculation method then it should be included as a macro instead of being a magic number. In modern C++ you could even do it normally just by adding constexpr.

what even makes a piece of code "undefined"? why can't you define it?

What's magic about this?

It's hexadecimal, it's a numbering system just like decimal. It's making some random computation.

recursion is shit

*not

>implying it's not abuse
Using a language feature in a way that is explicitly unsupported is abuse. No one said it can't be a valid choice from a pragmatic viewpoint, though.

Lisp is the most powerful programming language.

en.wikipedia.org/wiki/Magic_number_(programming)
it's a constant and it's not specified where it came from

>How did they come with this magic number? Trial and error?
Somebody wrote a program, after the algorithm became well known, to find the optimal magic number, and it turns out the original is not in fact optimal, although it's pretty close.
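
A rough sketch of how such a search works (the alternative constants here are the ones usually cited from Chris Lomont's paper; treat the exact values as assumptions). It scores each candidate by worst-case relative error over [1, 4), since the method's error pattern should repeat across powers of four:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

static float rsqrt_with(uint32_t magic, float x)
{
    float y = x;
    uint32_t i;
    memcpy(&i, &y, sizeof i); /* type pun via memcpy, no UB */
    i = magic - (i >> 1);
    memcpy(&y, &i, sizeof y);
    return y * (1.5f - (0.5f * x * y * y)); /* one Newton iteration */
}

int main(void)
{
    uint32_t candidates[] = { 0x5f3759df, 0x5f375a86, 0x5f37642f };
    for (int c = 0; c < 3; c++) {
        double worst = 0.0;
        for (float x = 1.0f; x < 4.0f; x += 1e-4f) {
            /* exact value is 1/sqrt(x), so |approx*sqrt(x) - 1| is the
               relative error */
            double rel = fabs(rsqrt_with(candidates[c], x) * sqrt(x) - 1.0);
            if (rel > worst) worst = rel;
        }
        printf("0x%08x: worst rel err = %.6f%%\n",
               candidates[c], 100.0 * worst);
    }
    return 0;
}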

Or you can just use memcpy and not be a retard.

>what even makes a piece of code "undefined"?
The fact that the language standard doesn't define its semantics, and doesn't require the compiler vendor to define them. Consequently, compiler vendors will usually keep such details to themselves, because they usually don't want to provide extra guarantees and constrain themselves in future implementations. In practice, the behavior of such code simply arises from the implementation details, instead of the implementation being tailored to produce a specific result for this case.

>why can't you define it?
Either because there's no consistent and sensible way to define the result but it's hard or impossible to detect the case at compile time, or because you don't want to define it ("who wants to do this, anyway?"), since leaving it undefined gives the implementation more freedom to optimize.
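
To make the "behavior simply arises from the implementation details" part concrete, a contrived sketch of my own (not something from the standard):

#include <stdio.h>

int main(void)
{
    int a[4] = { 1, 2, 3, 4 };

    /* a[4] is one past the last element: the standard assigns no meaning
       to this read, so whatever gets printed (a neighboring stack slot,
       garbage, or a crash) is purely a product of how the compiler laid
       out memory -- nobody defined the result, it merely happened */
    printf("%d\n", a[4]);
    return 0;
}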

>use memcpy
And possibly incur unnecessary overhead in a function that you're specifically trying to micro-optimize to death. Yeah, that makes sense.

This. Comments exist for a reason, your code doesn't have to spell everything out like it's trying to teach children.

>Using a language feature in a way that is explicitly unsupported is abuse
some UB is defined in compiler-specific docs.

let's go through that code again, nice and slow. Recall that C, like most languages, uses copy semantics:
>i = * ( long * ) &y;
>copy into i the value stored at the address where y is stored, where the address of y is said to be a pointer to a long
here, I'll make it even more clear to you
>copy the data in y into variable i
now, what does memcpy do? copies data from one place to another? hmmm... it's almost as if that's the exact same thing as an assignment when dealing with scalar values.

Attached: rtx-off-the-smug-face-rtx-on-is-this-meme-36184370.png (500x775, 198K)
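
For reference, a memcpy version of the same pun. This is a sketch, not the original Quake code; it uses int32_t instead of long (on LP64 targets long is 8 bytes, so the original cast would read past the 4-byte float), and any recent GCC/Clang at -O2 should compile the memcpy calls down to plain register moves:

#include <stdint.h>
#include <string.h>

float Q_rsqrt_memcpy( float number )
{
    const float threehalfs = 1.5F;
    float x2 = number * 0.5F;
    float y = number;
    int32_t i;

    memcpy(&i, &y, sizeof i); /* float bits -> integer, defined behavior */
    i = 0x5f3759df - ( i >> 1 ); /* same magic */
    memcpy(&y, &i, sizeof y); /* integer bits -> float */

    y = y * ( threehalfs - ( x2 * y * y ) ); /* 1st iteration */
    return y;
}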

Nobody but compiler autists actually gives a shit about some C standard. It doesn't run any code and isn't worth the paper it's printed on. The language is always whatever the compiler accepts.

t. retard

>inb4 the optimizer will take care of this
Wasn't necessarily true back in 1999, and it's every bit as unreliable and compiler-specific as relying on the optimizer to not mess with your UB code. At least in the latter case, you don't dance around this issue.

memcpy can be optimized to a single instruction, dumbass.

>can be
Why take the chance? If you want it to be explicit, write it explicitly.

>some UB is defined in compiler-specific docs
Yep, and it's usually exactly the kind of UB that people abuse so often that compiler vendors have to support it to not break lots of legacy code. It's still abuse for a reason that I've explained.

>write UB
If you don't want to take a chance, write some inline assembly.

>memcpy can be optimized to a single instruction
Yes, fucktard, and that's why I said "possibly", and I even realized what a monumental retard you are and elaborated on it here preemptively: And still, you made that worthless post. Well done.

>it's almost as if that's the exact same thing
It's not, because memcpy/casting to char * doesn't violate strict aliasing.

t. compiler autist who thinks it's reasonable to delete a NULL check because null dereferences are undefined, and since the variable is used, the variable can never be NULL

>If you don't want to take a chance, write some inline assembly.
For what benefit? If it's a closed-source project redistributed through executables produced by a compiler which you know generates the correct code, and you almost certainly know it will continue to work in the future, and you know that you will see the problem on the first test run if it ever breaks, why bother?

You should be using compiler flags to get rid of strict aliasing anyway. Who the fuck does systems programming with strict aliasing enabled? Holy fucking shit. That stuff is just some Fortran shit the standards autists kept because then it'd be competitive in scientific code.

It's probably disabled on the entire Linux kernel.

>compiler autist
Was that supposed to be an insult?
>reasonable to delete a NULL check because null dereferences are undefined and since the variable is used the variable can never be NULL
Lmao retard.
Null pointers are allowed, so checks for null pointers don't get deleted.
However dereferencing a null pointer is undefined so it's valid for the compiler to assume it's never null AT THE POINT OF DEREFERENCE. It's that simple.

>which you know generates the correct code
this is difficult to know
>and you almost certainly know it will continue to work in the future
this is impossible to know

The proper way to do it then is assembly. No UB, because you know exactly what each instruction does, and a guarantee of optimization.
Because it's the correct thing to do and doesn't rely on UB. It's also trivial, especially given you already know your target platform as you write.
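
If you do drop to assembly for this, a minimal x86-only sketch (GCC/Clang extended asm; note that rsqrtss is the hardware approximation, good to roughly 12 bits, so you'd still want a Newton step to match Q_rsqrt's accuracy):

static inline float rsqrt_asm(float x)
{
    float y;
    /* SSE reciprocal square root estimate: no UB, one instruction */
    __asm__ ("rsqrtss %1, %0" : "=x"(y) : "x"(x));
    return y;
}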

>autistic tard #1: you should always follow the standard regardless of any pragmatic considerations
>autistic tard #2: lol who even cares about the standard, the language is defined by compiler quirks
This truly is /dpt/ in a nutshell. This pattern repeats itself for every given subject.

strict aliasing only deals with aliasing.
it's unrelated.

Linux isn't written in standard C, nor does it historically target compilers other than GCC. What's your point?
>brings up systems programming
>when topic is code from a video game that out of pure laziness and incompetency leverages UB

>The proper way to do it then is assembly
That's far better than your initial suggestion, but like I said... >b-but it's the correct thing to do
I agree, but switching to inline assembly just for some type-punning? Maybe if it was an open-source project, or if that kind of hack wasn't so common that it's de-facto non-UB.

>strict aliasing only deals with aliasing.
>it's unrelated.
Actually, aliasing-related optimizations are exactly the reason type punning like that is UB.

>Was that supposed to be an insult?

Sorry, let me try this again.

t. anal retentive standards lawyer

>so checks for null pointers don't get deleted.

Literally happened to me some years ago. Also suffered through deleted overflow checks and completely invalid reordering of code for insane reasons.

Except I'm right and he's wrong. You're retarded if you think it's acceptable for compiler autists to break perfectly good code and recite you some bible passage as justification, all in the name of winning little benchmarks at the cost of your program's correctness.

Explain?
Does disabling strict aliasing make casting pointers not UB? Why?

It's not unrelated. You literally created an alias to the memory where the float is stored, and the compiler might decide to be completely insane and do stupid shit because the C standard says you can. This is despite the fact you can statically analyze and prove that the pointer literally points to the float, any idiot can understand this, even the compiler.

quality discussion here guys glad we could have this argument about undefined behaviour

>Does disabling strict aliasing make casting pointers not UB?
Not officially, but the reason it's UB in the first place is to allow optimizers to assume that two pointers with incompatible types can't possibly point to the same thing, because a number of useful optimizations can be performed when there's no aliasing. If you disable this, the UB becomes technically moot. It won't make the code more compliant, but it will basically make sure it won't break.
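
If you go that route, it's just a per-file flag (-fno-strict-aliasing on GCC and Clang; for what it's worth, the Linux kernel does pass it globally). The other classic escape hatch is a union, which C, unlike C++, treats as reinterpreting the object representation when you read a member other than the one last stored. A sketch:

#include <stdint.h>

static int32_t float_bits(float f)
{
    /* store through .f, read through .i: sanctioned type punning in C */
    union { float f; int32_t i; } u;
    u.f = f;
    return u.i;
}

static float bits_float(int32_t i)
{
    union { float f; int32_t i; } u;
    u.i = i;
    return u.f;
}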

>Literally happened to me some years ago
No it didn't, or it was a different situation.
if (!ptr) {...}

The compiler is not allowed to delete that check. It's completely valid and defined IF it doesn't come _after_ a dereference.
I bet your situation was similar to this:
int foo(int *p) {
    *p = 5;

    if (!p) {
        return 1;
    }

    return 0;
}

We dereference p, and then we check for null. But if the null check succeeds, then that means we dereferenced a null pointer, and that is UB. The compiler assumes the pointer wasn't null at the dereference, and since the variable didn't change, it can deduce that !p evaluates to false and optimize out the check.
And indeed it does:
gcc.godbolt.org/z/7Xvjnj
This is completely valid, because the only way for !p to evaluate to true is if we invoked UB earlier. It flat out does not make sense to emit the check, because there's no defined path that leads to that code being executed, so it's effectively dead code hence DCE kicks in.

Now if we put the null check before the dereference:
int foo(int *p) {
    if (!p) {
        return 1;
    }

    *p = 5;

    return 0;
}

The compiler can't assume that p is not null, and it's valid for p to be null at that point, so it's not dead code and hence it emits the check.
It can assume p is not null at and after the dereference, though.
And we indeed see that the compiler has not optimized out the check, because it can't: gcc.godbolt.org/z/FTiexY

You need to know this shit senpai.

>Also suffered through deleted overflow checks
This is valid, as signed integers can be assumed to never overflow, hence your "overflow check" evaluates to a compile-time constant of false.
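
A sketch of the kind of check that disappears (my example, not necessarily the one the other anon hit):

#include <limits.h>

int might_overflow(int x)
{
    /* UB if x + 100 overflows a signed int, so the compiler may assume it
       never does; the condition folds to false and the branch is deleted */
    return x + 100 < x;
}

int might_overflow_safe(int x)
{
    /* the defined way: compare against the limit before doing the math */
    return x > INT_MAX - 100;
}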

If it's still UB then it's still UB and can break in any way it wants.

Huh. I dunno what code caused it. I just know the check was gone. I don't think the pointer was used prior to the check. Maybe the compiler used other information to prove shit? I dunno. The problem went away at -O0.

>there's no defined path that leads to that code being executed

Well, I just have a fundamental problem with that kind of garbage thinking. I know for a fact that kind of stuff has made it delete overflow checks from my code, and I had to force the compiler to assume a specific numeric representation.