What are the UNIX design decisions that proved to be wrong or short sighted after all these years?

The pipe operator and grep

ioctl()

You are legitimately retarded

>pipe
You must be joking

>ioctl
Why didn't they have something like netlink sockets from the very beginning? Was it really so hard to see that ioctl is retarded?
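
For reference, this is roughly what a typical ioctl() call looks like (TIOCGWINSZ, asking the terminal driver for the window size, is just one real example). The request code plus an effectively untyped third argument is the part people complain about:

#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    struct winsize ws;

    /* The third argument is an untyped pointer as far as the compiler is
     * concerned: pass the wrong struct for a given request and nothing
     * complains at compile time. */
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
        perror("ioctl");
        return 1;
    }
    printf("%d rows, %d cols\n", ws.ws_row, ws.ws_col);
    return 0;
}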

It’s an outdated tool for data processing

None of them. There were issues with the implementation of the design decisions, but they were all spot on, and still the best we have to date. They chose misspelled shorthand in some cases, to save on memory: that was a mistake, but it was in the implementation, not the design.

No, you're an outdated tool for data processing

the permissions model

I mean, there were a few bad design decisions with Unix, but pipe was not one of them.

Not EVERYTHING needs to be a file.

Everything that is wrong with UNIX are the things that don't have a filesystem representation.

Plan 9 was fucking amazing. It was peak UNIX as it should've been.

It could have been a URL though (as in Redox-OS).

what's the diff?
>/path/to/file
vs
>path/to/file

The filesystem is stupid.
'Everything is a file' is stupid.
Device mountpoints are stupid.
Terminal emulation is legacy garbage now.
7-bit signed ASCII is retarded as Terry said.
Virtual memory is retarded as Terry said.
Having different 'users' for programs to run as is stupid.
UNIX permissions are stupid.
Having lots of daemons and 'servers' is stupid.

>The filesystem is stupid.
Why
>'Everything is a file' is stupid.
It's better than crappy binary interfaces that are borderline impossible to debug without another set of crappy binary debugging interfaces
>Terminal emulation is legacy garbage now.
Sure, but what do you suggest instead?
>Virtual memory is retarded as Terry said.
lmao, enjoy your garbage stillborn OS where any poorly written application completely wrecks your OS
>Have different 'users' for programs to run as is stupid.
Nope. It is a security model.
>UNIX permissions are stupid.
Nope, they are pretty clever for most use-cases of permissions and you can always have ACLs for more advanced uses
>Having lots of daemons and 'servers' is stupid.
Why?

>string in
>string out
>everything is a fucking string

>'Everything is a file' is stupid.
brainlet, I dare you to explain why it is stupid

>Device mountpoints are stupid.
Forgot about this one but holy fuck are you wrong here. Device mount points are so infinitely much better than the retarded garbage you have on Windows where each partition has its own root directory.

Applications, scripts etc. can be completely agnostic to whether or not they're writing to some particular device on Linux. They can't on Windows, and it fucking sucks.

You wrote it yourself: http. Could be anything. Could also add parameters using ? or anchors using # and more.

As long as serial exists, terminal emulation is not obsolete.

>Why
Because it's a clusterfuck of unrelated shit.

>lmao, enjoy your garbage stillborn OS where any poorly written application completely wrecks your OS
I should have been more specific: I don't mean paging is stupid, I mean swap is stupid.

>brainlet, I dare you can explain why it is stupid
Because not everything is a file.

>the retarded garbage you have on Windows where each partition has its own root directory
I think this is the right way.

The terminal should not work in a stupid way for everyone because of that specific use case.

>I think this is the right way.
>He thinks he's thinking
No, you clearly don't.

If there's one thing they got right, then it's this:

Disagree. Anyway I am writing an OS so we will see what happens, maybe I will realize I was wrong or maybe I will be back to prove Jow Forums wrong :^)

>I am writing an OS
>:^)

errno was a mistake

>Because not everything is a file.
but everything can be read from and written to, just like a file; whether it's a device, file, socket, or pipe, they can all be dealt with the same way
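
A rough sketch of what that buys you in practice: the same copy loop works whether the descriptors refer to regular files, pipes, terminals, or connected sockets, because they all honour read() and write():

#include <unistd.h>

/* Copy everything from in_fd to out_fd; works on any readable/writable
 * file descriptor, regardless of what it actually refers to. */
ssize_t copy_fd(int in_fd, int out_fd) {
    char buf[4096];
    ssize_t total = 0, n;

    while ((n = read(in_fd, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {                        /* handle short writes */
            ssize_t w = write(out_fd, buf + off, n - off);
            if (w < 0)
                return -1;
            off += w;
        }
        total += n;
    }
    return n < 0 ? -1 : total;
}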

>everything is a file
>except sockets, fuck sockets

I feel that that is too simplistic a model for 99% of modern day devices. It was okay in the old days when a printer took a string and just printed it out and did nothing else but things are much more complicated now.

At the end of the day you're ultimately just reading from and writing to parts of a giant buffer. If this is too primitive for daily use then you can always add a layer of abstraction on top (e.g. HTTP).

No, it's not simplistic: the kernel can abstract any device as a file like /dev/whatever, you can mmap that file, and depending on your device driver you can do any kind of device-specific command in userspace. It's that simple.

Of course, in the old days boomer engineers would use read and write syscalls to connect with devices, but with memory mapped device files you unlock an unlimited world and talk directly to your device no matter how complex it is (like GPUs), and at the same time the kernel doesn't blow up its syscall surface with ad-hoc solutions; it's just typical memory operations after mmapping.
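
Something like this is the shape of it. Just a sketch: "/dev/mydev" and the 4096-byte register window are made-up placeholders, and what a mapping actually exposes and allows is up to the real driver:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/mydev", O_RDWR);       /* hypothetical device node */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Map the device's register window into our address space. */
    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* From here on it's ordinary memory operations, no extra syscalls. */
    regs[0] = 0x1;
    printf("status: %u\n", (unsigned)regs[1]);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}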

The /usr directory is there because the early Unix machine had one HDD mounted as / and when it ran out of space they added another one as /usr. There's no actual need to keep it separated.

>Because its a clusterfuck of unrelated shit.
But it isn't. The filesystem is pretty clearly defined. It is not by any means a clusterfuck.

en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard

Everything has its place.

>Because not everything is a file.
Only because it hasn't been implemented to be.

>I think this is the right way.
I explained exactly why it isn't in the post you replied to, but it goes beyond that. Can you put your user directory on another drive or partition on Windows? No, you can't. Can you put update caches etc. on another partition on Windows? No, not really.

It's less flexible, leaves less choice to the system administrator, is extremely vulnerable to drive failures and it forces programs and scripts to *explicitly* deal with different devices while on UNIX it is literally completely opaque to programs and they don't have to care about devices at all.

I am also writing a kernel, and I see a lot of things that UNIX and POSIX just got outright wrong like ioctl(), non-file sockets etc. but the filesystem and mount points are not one of them.

It really isn't. All computing depends on the primitive operations of read and write, and you can simply memory map device registers into the file. This is in fact the correct solution to the problem that made ioctl() happen in the first place.

Reading and writing is the underlying primitive that all device communication relies upon. It only makes sense to expose it as a virtual file, rather than forcing a square peg into a round hole like ioctl().

Unironically these. Let me add that signals were a massive blunder of implementation too, even though the idea itself isn't bad.

>>everything is a file
>>except sockets, fuck sockets

but sockets are treated as files too, retard, just with a couple of additional syscalls, but at the VFS level it's just a file

He means that many sockets exist in another namespace and are unnamed. They don't have a filesystem representation at all.

if (error) perror(NULL);
what's wrong with errno again?
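
The usual complaint is that errno is an out-of-band side channel: it's only meaningful immediately after a call that actually failed, and anything you do in between (including other libc calls) can clobber it. A small sketch of the dance you end up doing:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *f = fopen("/no/such/file", "r");
    if (!f) {
        /* Save errno before doing anything else; calling another libc
         * function first and checking errno afterwards is a classic bug. */
        int saved = errno;
        fprintf(stderr, "open failed: %s\n", strerror(saved));
        return 1;
    }
    fclose(f);
    return 0;
}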

unix sockets are represented virtually in the filesystem

But we're talking about network sockets.

because you're pretentious hacks; there's no need to expose INET sockets to the filesystem despite it being easy to do so

>But it isn't. The filesystem is pretty clearly defined. It is not by any means a clusterfuck.
I meant that when you look at your / directory half of the stuff is 'real' things on disk and half is 'virtual' things that could be devices or processes or whatever.

>Can you put your user directory on another drive or partition on Windows? No, you can't. Can you put update caches etc. on another partition on Windows? No, not really.
I don't use Windows so I'm not sure but surely you can map it using the drive letter somehow?
My solution would be to have device aliases: names mapped to either ports or device UUIDs instead of mountpoints. These aliases would make up the start of the path, like PORN_USB:/traps/, and you could make a shortcut to that path and save it somewhere. By default no program should auto-traverse this, meaning if I do an 'rm -rf ./' on a USB device and happen to have a shortcut to my primary partition saved on there, it should not wipe out my primary partition like it would with a mountpoint.

>It really isn't. All computing depends on the primitive operations of read and write, and you can simply memory map device registers into the file.
Yes, but that's a poor abstraction. Dealing with device registers should be the job of the driver and not something you worry about this high in the OS. If you need device knowledge just to write the file, then what is even the point of exposing it? It's not a nice abstraction. My approach would be that the driver provides a nice abstraction with simple calls that programs can make, and they need not be concerned with things like device registers.

Not all of them, no. You can create unnamed sockets with no filesystem representation very easily between a parent and child for example, and no network sockets have a filesystem representation.

It's a much, much neater way to do it and it integrates nicely with UNIX's model of the computer. It'd make packet sniffers and debuggers much easier to implement too, rather than having to invent 6 different tracing interfaces (ptrace() etc.).

>Yes, but that's a poor abstraction. Dealing with device registers should be the job of the driver and not something you worry about this high in the OS. If you need device knowledge just to write the file, then what is even the point of exposing it? It's not a nice abstraction. My approach would be that the driver provides a nice abstraction with simple calls that programs can make, and they need not be concerned with things like device registers.


goddamn you're a literal retard

Pray tell why.

Who told you that you necessarily read and write to the device registers themselves? You read and write according to what's exposed to you by the device driver.


And how do devices provide an abstraction API in userspace in the first place, you pretentious retard? Something in userspace has to connect with the kernel through mmap or ioctl to provide such APIs, so somebody must do it anyway. Ever heard of libdrm, for example?

>I meant that when you look at your / directory half of the stuff is 'real' things on disk and half is 'virtual' things that could be devices or processes or whatever.
Memory mapped files and block devices exist in /proc, /sys and /dev. There might be some under /var, and some UNIX sockets under /tmp, but that's the extent of it. I don't really see the problem with memory mapped files or block devices.

>I don't use Windows so I'm not sure but surely you can map it using the drive letter somehow?
You are explicitly forced to deal with storage devices. Relative paths don't work at all across them because they are literally different directory roots. You are forced to use absolute paths in their entirety across devices, which seriously sucks and is easy for the programmer to make mistakes with.

>By default no program should auto traverse this, meaning if I do an 'rm -rf ./' in a USB device and happen to have a shortcut to my primary partition saved on there it should not wipe out my primary partition like it would with a mountpoint.
You are basically describing a symlink. rm -rf ./ doesn't follow symlinks because symlinks require explicit dereference.
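
That's also trivial for any recursive tool to honour, because lstat() reports on the link itself rather than its target. A minimal sketch of the check (the path is whatever directory entry the traversal is looking at):

#include <sys/stat.h>

/* Decide whether a recursive traversal should descend into 'path'. */
int should_descend(const char *path) {
    struct stat st;

    if (lstat(path, &st) < 0)
        return 0;                   /* can't stat it, don't recurse */
    if (S_ISLNK(st.st_mode))
        return 0;                   /* a symlink: never follow it */
    return S_ISDIR(st.st_mode);     /* only recurse into real directories */
}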

>Yes but that's a poor abstraction, dealing with device registers should be the job of the driver and not worried about when you reach this high in the OS.
The point is that it really isn't a poor abstraction, because read/write is what the device driver *does* anyway. The device registers and control interfaces should be exposed as memory mapped files, and the driver should use I/O to these files to control the device.

The memory mapped files themselves can easily be created by init code for the device.

Oh, and one other thing that exposing device interfaces as files does, and the main point of it all: It eliminates insecure, buggy hacks like ioctl() entirely.

ioctl() is one of the worst mistakes of the century.

just want to say your opinions are shit and you should kys

Files

thx ily 2 bby

>Have different 'users' for programs to run as is stupid.
>UNIX permissions are stupid.
I agree with this (Terry says this too btw). It's from a time where mainframes were relevant. Why would I have to give myself permission to do something? I'm the only one using the system, it's my fucking system.

Yes, now enjoy all your daemons and system services running as the same user as you, so when ONE of them gets compromised you are literally completely fucked.

Users and permissions are outright necessary for a networked operating system. Terry said what he said because his operating system runs in ring 0.
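
This is exactly why well-behaved daemons drop privileges as soon as they can. A rough sketch of the usual pattern, using "nobody" as an example account:

#include <pwd.h>
#include <stdio.h>
#include <unistd.h>

/* Switch from root to an unprivileged account so a compromise of this
 * process is contained by the normal user/permission model. */
int drop_privileges(const char *user) {
    struct passwd *pw = getpwnam(user);
    if (!pw)
        return -1;

    /* Drop the group first: after setuid() we no longer have the
     * privilege to change it. */
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0)
        return -1;
    return 0;
}

int main(void) {
    if (geteuid() == 0 && drop_privileges("nobody") != 0) {
        fprintf(stderr, "failed to drop privileges\n");
        return 1;
    }
    /* ... serve requests as an unprivileged user ... */
    return 0;
}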

*sips monster*

Everyone has a personal computer nowadays gramps. People sharing computers for extended periods just doesn't happen anymore in real life.

No-one's talking about multiple human users on the same machine you dumb moron. Users and permissions are an outright necessity for operating system security, especially one that is connected to a computer network.

It's not useless at all. It is literally the fundamental security primitive in all modern operating systems. It enforces a separation of concerns, so that if one running service gets compromised it's either difficult or impossible for the attacker to gain control of the machine.

Executables should have their own permissions that are not 'users', more like on Android and shit. I should be able to run executables under my own user account but say this exe has no right to read any directory but its own and cannot access any devices etc.

>No-one's talking about multiple human users on the same machine you dumb moron.
If you truly believe that then I guess that makes you the "dumb moron", because that's exactly what it was designed for.

Designed for, sure, but it has serious security benefits which is why we keep it around.

That is sandboxing. Take a look at Flatpak

>pipe was not one of them
It’s outdated and bloated as fuck

lmao
adamdrake.com/command-line-tools-can-be-235x-faster-than-your-hadoop-cluster.html

>being this deluded

By their own admission the file permissions model is janky.

Lazy brainlet here, what's ioctl() do?

The alternative is PowerShell cmdlets, which nobody remembers

You can move your Windows user directory by using NTFS junction points, which are mountpoints for Windows. You need to boot the computer from Windows installation media and use the recovery console, so it's not very popular with most users.
I considered it when I had Windows installed on a 128MB SSD, but eventually decided not to bother.

>cmdlets
cmdchads when?

unironically this, this led to the neglect of properly developing C

Everything should have been a stream, not a file.

Oh, and I think Thompson said something about `creat() should've been create()` or something.

>permission model
>async IO model (cf. pyparallel)
>what said
>the idea that there is such a thing as just text or that anything like that could be reliable
>C

But we use a cluster at work

O_DIRECT

fpbp

I have programmed for Linux and Windows. All I can say is that I had a much easier time working with Linux than with Windows.

Monolithic kernels
I mean, if you're gonna design an OS entirely around a bunch of tiny little tools that each do just one thing and combine to be able to do all kinds of things, why in the fuck would you have a giant, several million line binary running as the kernel? Be consistent with your philosophy and split that shit up.

The biggest thing that personally bothers me about Unix (and was copied by Windows and Mac) is allowing spaces in file names. If you read a shell script you will see it has double the amount of syntax it should have with quoting and bracketing just to protect against spaces in file names.
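
To be fair, the quoting tax lives in the shell, not in the filesystem: at the exec level argument boundaries are explicit, so a name with spaces is unambiguously one argument. Only when a command goes through the shell as a single string does quoting become your problem. Quick sketch ("my file.txt" is just an example name):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* Safe: "my file.txt" is one argv element, spaces and all. */
        char *argv[] = { "cat", "my file.txt", NULL };
        execvp("cat", argv);
        perror("execvp");
        _exit(127);
    }
    wait(NULL);

    /* Fragile: the shell re-splits this string on whitespace, so the same
     * name becomes two arguments ("my" and "file.txt") unless you quote it. */
    system("cat my file.txt");
    return 0;
}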

bash/grep/awk/sed are obsolete (bash has no string processing built in and needs to use sed/grep/awk to be useful). bash should have been replaced by zsh, I can't believe people spend time learning sed/grep/awk when zsh has all the modern globbing and fuzzy search features. No one should ever write a script in bash when we have modern scripting languages like Perl, Python, Ruby, etc with regexps built in.

C is a bad language, and since libc is the main interface which all languages go through to communicate with the OS (not just on Unix but on all major operating systems), it makes all software inherently unsafe. Null-terminated strings were a good idea when memory was measured in single-digit kilobytes but should have been replaced a long time ago.
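
For what it's worth, the alternative people usually mean is carrying the length alongside the bytes instead of scanning for a terminator. This isn't any particular library's API, just the general shape of a length-prefixed string:

#include <stddef.h>
#include <string.h>

struct str {
    const char *data;   /* not necessarily NUL-terminated */
    size_t      len;    /* length is known up front, no strlen() scan */
};

/* Wrap a C string; strlen() is paid once, at the boundary. */
static struct str str_from_cstr(const char *s) {
    struct str out = { s, strlen(s) };
    return out;
}

/* Comparisons can't run off the end of a buffer that lost its NUL. */
static int str_eq(struct str a, struct str b) {
    return a.len == b.len && memcmp(a.data, b.data, a.len) == 0;
}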

It ctls the io.

They tried doing that in the '80s. It's still alpha-quality software.

They tried and couldn't do it, and then Torvalds came along with his handy working kernel.

You should be able to indent makefiles with spaces.
Signals re-implement the exact breed of non-determinism that OSes are supposed to get rid of.
Other than that, UNIX is great.

Cucked and redmondpilled

Neat, me too. What platform?

x86-32 to start, then x86-64, and maybe some day ARM and RISC-V, but I probably won't.

Not everything is a file. Sockets aren't.

There's two kinds of people in CS: those that don't reinvent the wheel and are too busy changing the world, and the others... we'll just say they're real special.

>Unironically these. Let me add that signals were a massive blunder of implementation too, even though the idea itself isn't bad.
Can you go into more detail about this?

I want a my own wheel for my own purposes I don't give a fuck about 'changing the world', fuck the world.

Lel I'm doing literally the exact opposite. I'm starting on ARMv7M, then planning on RV32 and ARMv7/ARMv8-A.
It's specifically for embedded stuff and it kinda does what UNIX does with files, but I use sockets instead. All IPC, timers, serial ports, etc use the socket interface.
I'm thinking I might make a custom POSIX libc for it so that you can compile and link any Linux application against it pretty much transparently even though the system calls aren't POSIX. The idea is that I could take something like Ulfius and shove a proper embedded kernel under it, instead of trying to shoehorn an entire Linux kernel onto a microcontroller.

Stringly typed I/O

Good read. It helps that these tools are also written in C.

>7-bit signed ASCII is retarded as Terry said.
There is a reason why he was diagnosed schizophrenic.
Spewing shit like this got him certified as mentally unstable.

Vi over Emacs

Signals are extremely shitty in their current implementation in all Unices. When you install a signal handler you pass a function pointer to your handler, which is called whenever the appropriate signal is caught by your program.

The problem here is that there's absolutely no way to return information from the signal handler or pass information into it, not even a void pointer. The function definition must be:

void sig_handler(int signo);

This means you are forced to rely on global state in your program for every single signal handler. This makes it hard to make programs with signal handlers thread safe, but even worse is that signals actually aren't thread safe at all:

>The effects of signal() in a multithreaded process are unspecified.

Signals fucking SUCK in UNIX. They are really, really awful and poorly thought out and they cause numerous problems.
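
The usual workaround people reach for is the self-pipe trick: the handler does nothing except write one byte to a pipe, and all the real work happens in ordinary code that can own whatever state it likes. A rough sketch:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int sig_pipe[2];     /* the one unavoidable piece of global state */

static void handler(int signo) {
    (void)signo;
    char byte = 1;
    write(sig_pipe[1], &byte, 1);   /* write() is async-signal-safe */
}

int main(void) {
    if (pipe(sig_pipe) < 0)
        return 1;

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sigemptyset(&sa.sa_mask);
    sa.sa_handler = handler;
    sa.sa_flags = SA_RESTART;       /* restart the blocked read() below */
    sigaction(SIGINT, &sa, NULL);

    /* The "handling" happens here, in normal code with normal state,
     * not inside the async handler. */
    char byte;
    if (read(sig_pipe[0], &byte, 1) == 1)
        printf("got SIGINT\n");
    return 0;
}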

So your problem is that they're bare function pointers instead of some kind of closure. I get it.

Implementing your own kernel is a great learning exercise as you get a good dose of everything from low-level hardware to high-level algorithms and data structures.

You're a fucking moron if you genuinely believe what you say. ANY programmer has toy projects. An OS is simply a very big one.

I'm and I also started with i386.

My problem specifically is that it is impossible to pass data into a signal handler, or get a return code back from it. It is impractical to work with and hard to make thread safe.

Worse is that POSIX leaves behavior completely unspecified in a multithreaded program. That is a bad, bad design flaw.

why you do this?
i miss my bunny

everything is a file

stream 'o bytes

directory tree organizations (jesus)

ummm no. pipes and redirections are awesome
so are the text processing capabilities. they're very powerful


the daemon model could use some revisiting

also the way everything works in sockets (good for networking, weird for everything else)

I can't help but notice that all the things people have complained about so far were grafted onto Unix by Berkeley or some bumfuck unrelated team at AT&T. The same things the Unix authors dislike.

like this cunt:
it makes webfags feel safe and secure

Actually, originally /usr was what is known as /home today.

>a stream, not a file
there is no difference

>everything is a file
you'd prefer a special library function for reading from every sort of peripheral or whatever

this this this this

Unix a good boy he dindu nuffin. He need mo money fo dem programs.

Cmdlets. When will they learn?

>he doesn't know about the clusterfuck that was Mach
They tried it and it was a slow piece of shit. Though there are modern full microkernels like the one in QNX and seL4 which solve 99.99% of the performance issues in previous Mach based kernels. GNU Hurd is a shitty Mach style microkernel which is why it's going nowhere fast.

Non-descriptive naming

Is nobody gonna mention this?

web.mit.edu/~simsong/www/ugh.pdf

Symlinks
Having root have unrestricted access to the file system.
Having multiple programs do filesystem recursive stepping (cp -r, find, etc.)
Scope creep in tools that were added by UCB, and others including AT&T themselves after Research Unix.
harmful.cat-v.org/cat-v/unix_prog_design.pdf
Pretty much all of these shortcomings were fixed in Plan 9.

Unix wasn't meant to be easy or just werk™ for the average user. It was meant to be a modular and stable environment, tools that have as few lines of code as possible and singular uses facilitated this. The idea of adding features or incrementally changing the tools is against the design principles of unix. Take for instance roff, which was replaced by troff and then groff instead of having functionality added to it.

$rape = OpenVagene | Get-Items