Why is Linux monolithic?

Attached: 1531860771483.png (1280x945, 1.33M)

Because that swedecuck Linus didn't know better.

I know there's a famous Usenet flame war between Linus and Tanenbaum, the creator of MINIX, but I'm too lazy to dig through it. What were Linus's arguments?

en.wikipedia.org/wiki/Tanenbaum–Torvalds_debate
>Since the criticism was posted in a public newsgroup, Torvalds was able to respond to it directly. He did so a day later [...] acknowledging that he finds the microkernel design to be superior "from a theoretical and aesthetical" point of view.[3] He also claimed that since he was developing the Linux kernel in his spare time and giving it away for free (Tanenbaum's MINIX was not free at that time), Tanenbaum should not object to his efforts.

Linus was young and dumb and it's too late to change now.

systemd added two million lines (and growing) of unaudited alphabet-agency code.

Because it works.

On paper microkernels are cute. They allow a strict separation of modules and they are secure and they are very cute. Many a computer scientist would date one - many a computer scientist dreams of that perfect task separation and would snuggle one every night.

But microkernels don't work.

Oh, they can, sometimes, in some small capacity. But microkernels are slow, and when they aren't slow they are very hard to code for, and when they aren't very hard to code for they are labyrinthine and buggy. Somewhere along the way, someone will ask themselves: what if we swapped that pesky memory mapping layer, which causes stupid bugs, for a far simpler one? And do we really, really need to go through a service API just to allocate memory? Wouldn't it be simpler, and less likely to cause bugs, if we did this and that directly?

Good job, you've invented the monolithic modular kernel a la Linux.
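
To make that concrete, here is a toy user-space C sketch of the two styles. All names are made up for illustration; this is not any real kernel's API, and the IPC hop is only simulated by a plain function call:

#include <stdio.h>
#include <stdlib.h>

/* Monolithic style: allocation is just a function call inside the kernel. */
static void *kmalloc_like(size_t n) {
    return malloc(n);                      /* stand-in for the in-kernel allocator */
}

/* Microkernel style: build a request, "send" it to a memory server, wait for
 * the reply, unpack the result. In a real system every hop means a trap, a
 * context switch and a copy; here the hop is only a plain function call. */
struct msg { int op; size_t size; void *result; };

static void memory_server(struct msg *m) { /* would live in its own address space */
    if (m->op == 1)
        m->result = malloc(m->size);
}

static void *alloc_via_server(size_t n) {
    struct msg m = { .op = 1, .size = n, .result = NULL };
    memory_server(&m);                     /* marshal, trap, switch, copy, reply */
    return m.result;
}

int main(void) {
    char *a = kmalloc_like(64);
    char *b = alloc_via_server(64);
    printf("direct: %p  via server: %p\n", (void *)a, (void *)b);
    free(a);
    free(b);
    return 0;
}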

For now, microkernels are a pipe dream, far removed from production use. Sure, some of them are used in embedded systems: QNX isn't half bad. Some of them are used as fun learning tools, like MINIX. But they are always kept small, to the point, and rarely implement advanced features.

While most of your post is correct, what do you think of seL4? It's formally verified, it's fast, and you can run a POSIX layer on top of it, even if it's only really practical for embedded systems at the moment.

microkernels were a 70s meme that never lived up to the hype

>microkernels are slow

That's why the hybrid kernel concept exists: if something is too slow to run in userspace, pull it into the kernel.
NT and XNU both do this.

Because functional programming is hard and thus nobody does kernels with it yet.

Likewise for microkernels that aren't even purely functional. Still hard.

nobody does kernels with FP because it's slow and full of abstraction

>microkernels are slow
Did I just take a time machine back to the 70s/80s?
Microkernels haven't been slow for a while, gramps. As suggested above, we live in a post-L4 world, where microkernels are extremely secure and much faster than they were in the Mach days. Unfortunately nobody recognizes this, as the FUD against the microkernel design has become so overwhelmingly strong that everyone just assumes that they're in the same state they always were.
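
If you want a feel for what is actually being argued about, here is a throwaway user-space C benchmark. It is not L4 and not a real kernel; it just compares a plain function call with a round trip through a pipe to another process, which stands in for a kernel-mediated message. The numbers are illustrative only:

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

static volatile int sink;                      /* keep the compiler honest */

static int add_one(int x) { return x + 1; }    /* the "service" being called */

static double now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e9 + ts.tv_nsec;
}

int main(void) {
    enum { N = 100000 };
    int to_srv[2], from_srv[2];
    if (pipe(to_srv) < 0 || pipe(from_srv) < 0) return 1;

    pid_t pid = fork();
    if (pid == 0) {                            /* "server" process */
        close(to_srv[1]);
        close(from_srv[0]);
        int v;
        while (read(to_srv[0], &v, sizeof v) == (ssize_t)sizeof v) {
            v = add_one(v);
            write(from_srv[1], &v, sizeof v);
        }
        _exit(0);
    }
    close(to_srv[0]);
    close(from_srv[1]);

    double t0 = now_ns();
    for (int i = 0; i < N; i++)                /* direct call, monolithic style */
        sink = add_one(i);
    double t1 = now_ns();

    for (int i = 0; i < N; i++) {              /* message round trip, server style */
        int v = i;
        write(to_srv[1], &v, sizeof v);
        read(from_srv[0], &v, sizeof v);
        sink = v;
    }
    double t2 = now_ns();

    close(to_srv[1]);                          /* server sees EOF and exits */
    wait(NULL);
    printf("direct call: %.1f ns/op, pipe round trip: %.1f ns/op\n",
           (t1 - t0) / N, (t2 - t1) / N);
    return 0;
}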

Google is currently developing a new OS, and guess what? It uses a microkernel. Pic related

Attached: google-fuchsia8-1270x714.png (1270x714, 388K)

That is not a requirement of FP.

And with a sufficiently fancy FP compiler, the compiled binary could be exactly the same as hand-optimized asm.

Just as much of a fantasy as a micro/FP kernel in reality, though: it's too difficult and not interesting enough to finance.

More info:
>Fuchsia is a capability-based operating system currently being developed by Google. It first became known to the public when the project appeared on GitHub in August 2016 without any official announcement. In contrast to prior Google-developed operating systems such as Chrome OS and Android, which are based on Linux kernels, Fuchsia is based on a new microkernel called "Zircon".

>Upon inspection, media outlets noted that the code post on GitHub suggested Fuchsia's capability to run on universal devices, from embedded systems to smartphones, tablets and personal computers. In May 2017, Fuchsia was updated with a user interface, along with a developer writing that the project was not a "dumping ground of a dead thing", prompting media speculation about Google's intentions with the operating system, including the possibility of it replacing Android.

>Fuchsia's user interface and apps are written with Flutter, a software development kit allowing cross-platform development abilities for Fuchsia, Android and iOS. Flutter produces apps based on Dart, offering apps with high performance that run at 120 frames per second. Flutter also offers a Vulkan-based graphics rendering engine called "Escher", with specific support for "Volumetric soft shadows", an element that Ars Technica wrote "seems custom-built to run Google's shadow-heavy 'Material Design' interface guidelines".
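
On the "capability-based" part above: the gist is that a process can only operate on objects it holds an explicit handle to, with whatever rights that handle carries, instead of naming global resources and being permission-checked after the fact. A toy C sketch of the idea follows; the names are made up and this is not Zircon's actual API:

#include <stdio.h>
#include <string.h>

struct object { char data[32]; };

struct handle { struct object *obj; unsigned rights; };  /* the capability */
#define RIGHT_READ  1u
#define RIGHT_WRITE 2u

static int obj_write(struct handle h, const char *s) {
    if (!(h.rights & RIGHT_WRITE))
        return -1;                        /* no capability, no access */
    strncpy(h.obj->data, s, sizeof h.obj->data - 1);
    return 0;
}

static struct handle dup_readonly(struct handle h) {
    struct handle d = h;
    d.rights &= RIGHT_READ;               /* hand a weaker capability to someone else */
    return d;
}

int main(void) {
    struct object o = { "" };
    struct handle mine = { &o, RIGHT_READ | RIGHT_WRITE };
    struct handle theirs = dup_readonly(mine);

    printf("owner write: %d\n", obj_write(mine, "hello"));   /* prints 0  */
    printf("reader write: %d\n", obj_write(theirs, "nope")); /* prints -1 */
    return 0;
}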

Ever heard of Minix? Redox? BlackBerry OS?

Microkernels aren't some pipe dream. They work.

Ask your teacher, not Jow Forums, kid.

Isn't Windows NT a microkernel?

no buly, gramps

Attached: 1420126576083.jpg (720x720, 61K)

microkernels do work, they just never lived up to the hype

See above. Read the post you are replying to before answering it like a Pavlovian dog.
>But microkernels are slow, and when they aren't slow they are very hard to code for, and when they aren't very hard to code for they are labyrinthine and buggy.
>Sure, some of them are used in embedded systems: QNX isn't half bad. Some of them are used as fun learning tools, like MINIX. But they are always kept small, to the point, and rarely implement advanced features.

I'd say that L4 falls firmly into the small, kept-to-the-point, relatively-hard-to-code-for, embedded-system category.

The tenacious following microkernels have is as astonishing as it is absurd, something out of a Monty Python sketch. Microkernels were "solved" in the 70's. They were "solved" in the 80's. They were definitely "solved" in the 90's, believe us, MINIX is the future. They were completely definitely "solved" in the 2000's, believe us, they are fast now, L4 is here. They were completely solved, definitely, 100% solved in the 2010's.

Now believe us, it is 2018, and they are completely, absolutely, 100%, don't believe the 70's articles, completely, fast, free and super fast and super cool, solved.

You're begging for it. You just started CS and you're asking questions about a lecture you just had. Ask your teacher.

>You just started CS
Are you referring to CS as a career path? You can't possibly think OS is an intro course. I tend to ask questions about the theory and the engineering, not the history.

Undergrad, check

based shiro cat

Your processor is running one right now (MINIX, inside the Intel Management Engine) and your regular OS kernel doesn't even know it.

cute is such an apt way of putting it, nice

It is a Lovecraftian interdimensional hybrid kernel.

what that mouth do tho?