Legacy CS decisions that fuck ppl over today

I'm not trying to start a language war here; C's string handling was just the first thing that came to mind.
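
Rough sketch of the kind of thing I mean (made-up buffer, nothing special): the language never enforces that the terminator is actually there, so any string walk is one missing byte away from undefined behavior.

#include <stdio.h>
#include <string.h>

int main(void) {
    /* Four bytes of storage, but "oops" needs five: four characters plus
     * the terminating '\0'.  strncpy silently drops the terminator here. */
    char buf[4];
    strncpy(buf, "oops", sizeof buf);

    /* buf is now NOT null-terminated, so strlen keeps reading past the
     * end of the array into whatever happens to sit next to it.          */
    printf("%zu\n", strlen(buf));   /* undefined behavior */
    return 0;
}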

Anyway, you got any more?

Attached: fuck.jpg (1280x720, 125K)

terminate your strings, asshole.

PAUSE/BREAK

>mainframe
>proglanguages-as-approximation-to-natural-languages meme
>proglanguages-as-approximation-to-math-notation meme
>separation of declaration and implementation
>standardizing C in a half-assed way instead of balkanizing and effectively killing it
>basically everything retarded due to memory limits
>SQL and basically most embedded DSLs, but it could have gotten much worse
>most W3C decisions

Fortran:

needing an implicit none statement in every file; otherwise data types are inferred from the first letter of the variable name (I through N default to INTEGER, everything else to REAL)

C and Unix.
The entire CS research field has pretty much been focused on fixing 50-year-old mistakes ever since.

The null reference. Hoare himself admitted it was a dumb idea and it cost billions of dollars in damages.
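
In C terms it looks something like this (made-up function): nothing in the pointer's type tells the caller the value might be absent, so nothing forces the check.

#include <stdio.h>
#include <string.h>

/* Returns a pointer to the value, or NULL if the key is missing.  Nothing
 * in the signature forces the caller to handle the NULL case.            */
static const char *lookup(const char *key) {
    return strcmp(key, "user") == 0 ? "alice" : NULL;
}

int main(void) {
    /* Forgetting the NULL check compiles cleanly and blows up at runtime. */
    printf("%zu\n", strlen(lookup("missing-key")));
    return 0;
}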

>c is evilzzzz!!

systemd

This is the opposite of a legacy decision: industry favoring mutable state has fucked us over for decades.

Windows directory structure.

Based FPlord

this

Care to explain why?

c and unix are perfect and
>The entire CS research field has pretty much been focused on
making it pajeet safe

CISC architecture

Dynamic systems (Smalltalk, Lisp) falling out of favour

unironically, great pic and title op

Yeah, no shit Sherlock.
Know why ARM needed NEON and all kinds of CISC additions? Because RISC is useless shit for everyday scenarios.

What's wrong with null references? What would you replace it with?

undefined references

So your issue is semantic? Or would undefined references have behavior different from null references?

current method of web authentication
TCP
the whole DNS ecosystem
the idea that a CA should guarantee anything (legal entity and so on) beyond the private key being on a server for that domain
most of block cipher modes
fork() model (rough sketch after this list)
separation of threads and processes
DEFLATE
epoll, inotify
make
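
Rough sketch of why the fork() model gets flak: spawning a child conceptually duplicates the entire parent, only for the child to throw that copy away one line later with exec. That mismatch is a big part of why posix_spawn and vfork exist at all.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* fork() conceptually duplicates the whole parent process (copy-on-
     * write in practice), only for the child to discard that copy one
     * line later by exec'ing a different program.                        */
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                     /* child */
        execlp("echo", "echo", "hello from the child", (char *)NULL);
        perror("execlp");               /* only reached if exec failed */
        _exit(127);
    }
    waitpid(pid, NULL, 0);              /* parent waits for the child */
    return 0;
}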

I personally dislike the idea of variable-length character encodings, but at least UTF-8 works, and I don't have any empirical measurements to prove whether the expanded length matters in memory.
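
For what it's worth, the practical gotcha is just that byte count and character count stop agreeing; a quick sketch:

#include <stdio.h>
#include <string.h>

int main(void) {
    /* "é" in UTF-8 is the two bytes 0xC3 0xA9, so "cafés" is five
     * characters on screen but six bytes in memory.                     */
    const char *word = "caf\xC3\xA9s";
    printf("bytes: %zu\n", strlen(word));          /* 6 */

    /* Code points = bytes that are not UTF-8 continuation bytes
     * (continuation bytes have the bit pattern 10xxxxxx).               */
    size_t codepoints = 0;
    for (const char *p = word; *p; p++)
        if (((unsigned char)*p & 0xC0) != 0x80)
            codepoints++;
    printf("code points: %zu\n", codepoints);      /* 5 */
    return 0;
}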

If C and Unix were perfect, we wouldn't be wasting countless resources on patching up CVEs directly caused by them.
>inb4 lol just hire better programmers bro!
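
To put a face on it, the classic CVE recipe is usually something this mundane (made-up function, attacker-controlled input):

#include <stdio.h>
#include <string.h>

/* The classic recipe: a fixed-size stack buffer plus a copy routine that
 * trusts the input about length.  Anything longer than 15 bytes overwrites
 * whatever lives next to name on the stack.                               */
static void greet(const char *input) {
    char name[16];
    strcpy(name, input);                /* no bounds check */
    printf("hello, %s\n", name);
}

int main(int argc, char **argv) {
    if (argc > 1)
        greet(argv[1]);                 /* attacker-controlled string */
    return 0;
}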

Maybe/Either types show how embedding the possibility of absence in the type system can prevent many such bugs.
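
C obviously can't enforce the check, but even a hand-rolled tagged struct shows the idea: the "might be absent" case lives in the return type instead of an in-band NULL. A sketch with made-up names:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* A poor man's Maybe: absence is part of the return type, not an in-band
 * NULL the caller can silently ignore.                                    */
typedef struct {
    bool present;
    const char *value;
} maybe_str;

static maybe_str lookup(const char *key) {
    if (strcmp(key, "user") == 0)
        return (maybe_str){ .present = true, .value = "alice" };
    return (maybe_str){ .present = false, .value = NULL };
}

int main(void) {
    maybe_str m = lookup("missing-key");
    /* The caller has to go through .present to reach .value; languages
     * with real Maybe/Either types make skipping that check a compile
     * error instead of a convention.                                      */
    if (m.present)
        printf("found: %s\n", m.value);
    else
        printf("not found\n");
    return 0;
}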

Worrying about muh variable-length encoding is useless at this point, when there are already multi-character sequences that correspond to a single character on screen. I'm pretty sure these retarded motherfuckers will add a Turing-complete language to Unicode some time in the next decade.
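
Case in point: "é" can be one code point (U+00E9) or a base letter plus a combining accent (U+0065 then U+0301), and both render identically. Quick byte-level sketch:

#include <stdio.h>
#include <string.h>

int main(void) {
    /* Both of these render as a single "é" on screen.                     */
    const char *precomposed = "\xC3\xA9";       /* U+00E9, 2 bytes          */
    const char *combining   = "e\xCC\x81";      /* U+0065 + U+0301, 3 bytes */

    printf("%s is %zu bytes\n", precomposed, strlen(precomposed));  /* 2 */
    printf("%s is %zu bytes\n", combining,   strlen(combining));    /* 3 */

    /* Same glyph, different byte sequences: even plain equality needs
     * Unicode normalization, never mind slicing or length.                */
    printf("equal? %d\n", strcmp(precomposed, combining) == 0);     /* 0 */
    return 0;
}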

Attached: file.png (2000x1000, 156K)

To be honest, I don't know enough about scripts outside Latin to see all the edge cases. An input method could easily turn Turing-complete, but I don't see even recursively nested structures in the encoding.

Attached: vomiting_emoji_2x.png (1480x646, 106K)

>but I don't see even recursively nested structures in the encoding.
Yeah user, I was just joking about Unicode becoming Turing-complete. Like that's the only thing more retarded than what they're already doing.