Kernels

Let's talk kernels, Jow Forums!
So let's start this off with a common argument I've seen from BSD purists and some others: "The Linux kernel is bloated!" But honestly, I don't see how that can be possible. I'll give ya the coreutils. Those should definitely be reworked, or better yet, fully replaced by BusyBox. But the kernel HAS to be as big as it is in order to have driver support for all the hardware it can run on. If it were smaller, it wouldn't have the amazing level of compatibility that it does, far superior to *BSD, and only beaten by Windows because manufacturers can't be bothered to write drivers for anything else.
I mean, if we're talking about microkernels, that's a bit different because you'd want the kernel to be as small as possible and all the drivers to be in userland, but for any monolithic kernel, it makes no sense to start crying bloat when there's a perfectly acceptable reason for it.

Attached: os2.jpg (711x387, 33K)

Other urls found in this thread:

wiki.minix3.org/doku.php?id=www:documentation:reliability
genode.org/about/index
genode.org/documentation/components

The Linux kernel is bloated. It's too fucking huge and is a monolith.

Will agree on the monolith part. We really need a good microkernel OS. Hopefully Genode keeps developing. It looks promising.

Checked out Redox-OS? Pretty fast dev.

Yeah I checked it out. I'm surprised they actually have a working GUI, albeit a simple one.
Some of the shit they mention in the book didn't work though. Like they talked about having a vim-like text editor, but I couldn't get it to run. Also, the default shell is quite heavy, even bigger than zsh. Not the biggest problem, but it triggers my autism.

You could be putting drivers in userspace (microkernels), which would significantly reduce kernel size. But the Linux kernel also contains shitloads of legacy code, as well as code that only works on antique devices. That is the bloat part of Linux. Not to mention the broken/buggy/dead code and bloated versions of algorithms that plenty of trash-tier contributors got merged, of course.

Hybrid kernel master race reporting in!

Attached: 2000px-OS-structure2.svg.png (2000x511, 76K)

Just seems like half-assing it

linux kernel is quite small if you ignore them drivers

They should separate everything.

Even a kernel with an extremely tight FP design and the scope of Linux is possible in theory.

It doesn't get more "micro" than everything being tightly functional and only calling the functions it needs. It would probably even be faster.

I don't see anyone who could finance or make such a thing, though. Getting everything right currently seems to contradict getting anything done, as far as software is concerned. The standard libraries, compilers, and people don't support producing something anywhere near some kind of provable minimalist exactness.

Why?

>In monolithic operating systems, device drivers reside in the kernel. This means that when a new peripheral is installed, unknown, untrusted code is inserted in the kernel. A single bad line of code in a driver can bring down the system. This design is fundamentally flawed. In microkernel operating systems, each device driver is a separate user-mode process. Drivers cannot execute privileged instructions, change the page tables, perform I/O, or write to absolute memory. They have to make kernel calls for these services and the kernel checks each call for authority.
>In monolithic operating systems, a driver can write to any word of memory and thus accidentally trash user programs. In microkernel operating systems, when a user expects data from, for example, the file system, it builds a descriptor telling who has access and at what addresses. It then passes an index to this descriptor to the file system, which may pass it to a driver. The file system or driver then asks the kernel to write via the descriptor, making it impossible for them to write to addresses outside the buffer.
adapted from wiki.minix3.org/doku.php?id=www:documentation:reliability
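
To make that concrete, here's a rough sketch (in Rust, since half this thread is about Redox anyway) of what a kernel-checked write through a descriptor could look like. Every name and type here is invented for illustration; this is not real MINIX code, just the shape of the idea:

```rust
// Rough sketch of the descriptor idea from the MINIX quote above.
// All names/types are invented; `mem` stands in for physical memory.

struct WriteDescriptor {
    owner: u32,  // task allowed to use this descriptor
    base: usize, // start of the permitted buffer
    len: usize,  // size of the permitted buffer
}

struct Kernel {
    descriptors: Vec<WriteDescriptor>,
}

impl Kernel {
    /// A driver asks: "write `data` at `offset` via descriptor `idx`".
    /// The kernel, not the driver, decides whether the write is legal.
    fn write_via(
        &self,
        caller: u32,
        idx: usize,
        offset: usize,
        data: &[u8],
        mem: &mut [u8],
    ) -> Result<(), &'static str> {
        let d = self.descriptors.get(idx).ok_or("bad descriptor index")?;
        if d.owner != caller {
            return Err("descriptor not owned by caller");
        }
        let end = offset.checked_add(data.len()).ok_or("overflow")?;
        if end > d.len || d.base + end > mem.len() {
            return Err("write outside described buffer");
        }
        mem[d.base + offset..d.base + end].copy_from_slice(data);
        Ok(())
    }
}

fn main() {
    let mut mem = vec![0u8; 64];
    let k = Kernel {
        descriptors: vec![WriteDescriptor { owner: 7, base: 16, len: 8 }],
    };
    // in-bounds write by the owning task: accepted
    assert!(k.write_via(7, 0, 0, b"hi", &mut mem).is_ok());
    // write past the described buffer: rejected, nothing gets trashed
    assert!(k.write_via(7, 0, 6, b"overflow", &mut mem).is_err());
}
```

The point is that the bounds check lives on the kernel side of the call, so a buggy driver can fail but can't scribble over someone else's memory.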

How do I make a kernel that will work with Linux (or should I say GNU) userland stuff, but isn't Linux?

i dont get kernels: the post

If you just look at the pure theory then microkernels are totally superior and the obvious choice. In reality you very quickly run into a whole lot of issues with communication between the various parts. There's a reason why microkernels are stuck on the drawing board.

The Linux kernel works and it works very well. In practice monolithic kernels do work and they work great. There is a reason the Linux kernel is the most widely deployed kernel in the world. I know it's not popular on the PC desktop but it has conquered and dominates every other area of computing.

>There's a reason why microkernels are stuck on the drawing board.
the only one stuck there is GNU Hurd, and that's because GNU can't into anything complex. Also, one of the most widely deployed kernels is MINIX, because it runs inside the Intel ME, and probably some other stuff given it has a BSD license.
The problem with microkernels on consumer hardware is that every driver process has to make a system call every time it needs to touch I/O, the usual driver architecture and usage assumes transparent access to devices, and every system call results in an expensive context switch. You don't want a graphics driver making a system call for each command it wants to send to the GPU. Or just imagine the latency for an audio driver.
While I am not an expert, I think part of the problem lies in UNIX system calls, which are not made to handle this problem.
Another solution would be to actually use Rings 1 and 2 on the x86 platform, which were intended by Intel for driver code. That way, one could theoretically keep the kernel separated from the drivers, so a driver going haywire would still be caught by page faults, but the driver would be able to do I/O without assistance from the kernel, while maintaining whatever API it needs for best performance with stuff running in Ring 0. Alas, due to compatibility concerns it never caught on.
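
For what it's worth, the standard mitigation for that per-command cost is batching: queue commands in user space and cross into the driver once per batch, which is roughly what modern GPU command buffers do anyway. A toy sketch, with every name invented and fake_syscall_submit standing in for the real, expensive crossing:

```rust
// Toy sketch of command batching. Nothing here is a real driver API.

enum GpuCmd {
    Draw { first: u32, count: u32 },
}

struct CommandRing {
    pending: Vec<GpuCmd>,
}

impl CommandRing {
    // cheap: just an append in user space, no kernel crossing
    fn push(&mut self, cmd: GpuCmd) {
        self.pending.push(cmd);
    }

    // one crossing amortized over the whole batch,
    // instead of one context switch per command
    fn submit(&mut self) {
        fake_syscall_submit(&self.pending);
        self.pending.clear();
    }
}

fn fake_syscall_submit(batch: &[GpuCmd]) {
    println!("submitting {} commands in one crossing", batch.len());
}

fn main() {
    let mut ring = CommandRing { pending: Vec::new() };
    for i in 0..1000 {
        ring.push(GpuCmd::Draw { first: i * 3, count: 3 });
    }
    ring.submit(); // 1 crossing instead of 1000
}
```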

Attached: 1321864156.png (700x700, 21K)

The Unix principle of design is that a system is made up of small programs that do one thing and do it well.

Alan Kay invented Smalltalk, one of the first object-oriented languages. He envisioned a system where one could create many small computers that passed messages to each other.

These things are the same. Today we call it a service oriented architecture or microservices. It is simpler to update and change an independent component versus changing the code in one place and then tracking down every place that's affected by the change.

Code needs to be optimized for change. Things change. Needs change. With a monolithic program this is orders of magnitude more difficult.

Any decent programmer can read 2k SLOC and understand what the program does and how it works. With 20k or more no one can do this.

Monolithic kernels will be left behind as the world moves to microservices. Microkernels are a step in the right direction but they'll eventually be taken over by systems similar to The Hurd. The Hurd languishes because there is no money behind it. There's no money behind it because of freetard licensing.

The market isn't ready for this change and people will not invest in this change.

So the businesses that don't adapt will be like the banks that never moved off COBOL. They now have to pay a handful of programmers well in excess of 150k a year to port the code over portion by portion.

Windows and Apple will adapt and have their distributed systems ready. Ever heard of Azure? Microsoft is investing in research like crazy. Check out Barrelfish for a monster of an operating system that would leave Linux in the dust if it ever went into production.

All the flavors of Unix and Linux are obsolete but we use them because they're good enough. They work, but they're not ideal for a world with massively heterogeneous hardware.

The future is concurrent. The future is multi-user, multi-architecture, and multi-device. Linux can't deliver this future. It was a noble cause.

You have to be deluded not to think it's fucking massive. I can run the same install with a BSD kernel that takes up 1/5th the space in RAM and provides the same functionality.

Continued

There are concurrency models that eliminate race conditions and resource starvation. The actor model comes to mind. So does flow-based programming.

For communication between components, capability-based security à la Smalltalk-style message passing also comes to mind.
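
Since I mentioned the actor model, here's about the smallest possible sketch of it, using nothing fancier than std threads and channels. Toy names, obviously, and a real system would use a proper runtime, but it shows the core trick: each actor is the sole owner of its state and everything else goes through messages, so the usual shared-memory races can't even be expressed.

```rust
// Bare-bones actor: one thread, one mailbox, state owned outright.

use std::sync::mpsc;
use std::thread;

enum Msg {
    Add(i64),
    Get(mpsc::Sender<i64>), // reply channel: a crude capability to respond
}

fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut total: i64 = 0; // state owned by exactly one actor
        for msg in rx {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Get(reply) => {
                    let _ = reply.send(total);
                }
            }
        }
    });
    tx
}

fn main() {
    let counter = spawn_counter();
    counter.send(Msg::Add(2)).unwrap();
    counter.send(Msg::Add(40)).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    counter.send(Msg::Get(reply_tx)).unwrap();
    println!("total = {}", reply_rx.recv().unwrap()); // total = 42
}
```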

The hardware will need to change to a certain degree but not by much.

All we need are the people willing to build it and port it to x86, amd64, arm32, arm64, RISC-V, and maybe even MIPS (for free).

Did I mention that we would need a better systems programming language? Something like Pony would've been perfect but the brilliant minds behind it are now gone and the project put up some kind of sjw code of conduct.

A lot of work needs to be done and there is no money behind it but if the open source community could build a system like this one it would bring the attention of the market back to open source.

Tanenbaum is off his rocker though. Microkernels work best in certain situations but are complete shit in others. In embedded RTOS applications, microkernels can shine a lot, but not when optimizing for throughput across a large number of devices and a massive number of threads. I think a lot of core Linux work is pretty outdated (especially in security and jailing) but microkernels alone cannot solve this. Fuchsia is a step in the right direction though.


Hurd is not going to make it; in almost every regard it takes the worst parts of Linux and puts them on a very high-latency multiserver system.

Yes, with gaping vulns, low graphics acceleration and multithreading performance, and no good kernel profiling.

>multi-user
Are we talking logical users or physical users here? The trend for the latter looks to me more single-user.

desu the first step is probably to start by writing a new "portable assembler" in the style of C, but without every single part of its horrendously garbage design. Something extremely simple to implement, so probably based on Scheme, but with low-level access and operations and no particularly high-level constructs.

>I can run the same install with a BSD kernel that takes up 1/5th the space in RAM that provides the same functionality.
and about 1/5th the hardware support. the fuck is your point?

While I don't particularly appreciate the blatant Microsoft shilling, I am with you that monolithic designs should be abandoned. I'll look into Barrelfish some more, but I'm glad they at least had the decency to MIT-license it.

To you, I would suggest checking out the Genode framework. This is an OS framework that appears to provide some common components and drivers that can be used across multiple kernels, and it's all very modular. That's right, this project would make even the kernel just another interchangeable component. Here's some info from their site:

Features
CPU architectures: x86 (32 and 64 bit), ARM, RISC-V
Kernels: most members of the L4 family (NOVA, seL4, Fiasco.OC, OKL4 v2.1, L4ka::Pistachio, L4/Fiasco), Linux, the Muen separation kernel, and a custom kernel.
Virtualization: VirtualBox (on NOVA and Muen), L4Linux (on Fiasco.OC), and a custom runtime for Unix software
Over 100 ready-to-use components

genode.org/about/index
genode.org/documentation/components

This to me seems like the height of this idea for the future of operating systems, and builds off a lot of the ideas you describe, particularly the stuff about optimizing for change, lowering SLOC, and being able to adapt.
Is the kernel based on a lot of outdated tech? Just replace it with a newer one! Same could very likely go for the rest of the components.

Linux is fine for now, and will likely continue to be successful for some time, but in the longer term, there needs to be change, a move towards this concurrent future.

Rust?

Attached: rust-logo-512x512.png (512x512, 84K)

Rust is a good try but has several critical issues, both meta and technical. On the meta side, it changes too much and too often to be usable for anything for the time being. On the technical side, the borrow checker gets in the way to the point that, in non-trivial programs, you will invariably have to choose between rewriting large chunks of your program from scratch so the next feature can compile at all, or falling back to unsafe {}.
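
If you haven't hit it yourself, here's a toy example of the kind of restructuring I mean. Not from any real codebase, just the classic pattern: iterate one field mutably, try to call a &mut self method, and the checker throws E0499 until you rearrange the code.

```rust
// Toy illustration of borrow-checker friction. All names invented.

struct Editor {
    lines: Vec<String>,
    log: Vec<String>,
}

impl Editor {
    fn log_edit(&mut self, msg: String) {
        self.log.push(msg);
    }

    // REJECTED (E0499): `self.lines.iter_mut()` mutably borrows
    // `self.lines` for the whole loop, and `self.log_edit(..)` needs
    // `&mut self` — a second overlapping mutable borrow.
    //
    // fn upcase_all(&mut self) {
    //     for line in self.lines.iter_mut() {
    //         self.log_edit(format!("editing {line}"));
    //         *line = line.to_uppercase();
    //     }
    // }

    // Accepted: borrow the two fields separately instead of going
    // through `&mut self` again. Same behavior, restructured code.
    fn upcase_all(&mut self) {
        let log = &mut self.log; // disjoint field borrows are fine
        for line in self.lines.iter_mut() {
            log.push(format!("editing {line}"));
            *line = line.to_uppercase();
        }
    }
}

fn main() {
    let mut ed = Editor { lines: vec!["hi".into()], log: vec![] };
    ed.upcase_all();
    println!("{:?} {:?}", ed.lines, ed.log);
}
```

In a toy like this the fix is one line. In a big program that same conflict can sit at the bottom of a deep call chain, which is exactly where the rewrite-or-unsafe choice comes from.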

Oh I know Hurd will fail. I just used it as an example.

I'm talking personal cloud. Multiple users with SSO on federated deployments. You lock your computer and someone else can use their profile without interrupting you. I won't go too deeply into details as there are multiple ways to achieve this.

I wasn't shilling for Microsoft. I'm a technical consultant who supports Microsoft products as well as providing my skills as a developer.

I love open source. I use Xubuntu at home. I love that my software doesn't spy on me.

But when you see all that Microsoft (and other businesses) offers its customers, you begin to wonder why the open source world hasn't been able to compete.

It's a matter of money, organization, and technical ability.

The future won't be built as a "labor of love". It will be built by those wanting to make money. That's just basic economics.

Think about Active Directory. AD is crucial to enterprise. Where is the open source equivalent?

If the FLOSS community could build it, they still wouldn't have the manpower to support it.

Business drives innovation. Hopes and dreams are beautiful, but if you're not making someone money, saving someone money, or providing a good or service, then no one will give a fuck about you.

Businesses have only a few questions, and most programmers can't answer them:

Why should my organization spend money on it?

What benefits will it bring me?

How much will it cost to support versus the current contracts we have?

How long will it take to retrain employees?

And if the numbers don't add up, then the businessman will say: fuck your shit, not interested.

>actual conversation because /v/irgins and wincucks have nothing to say
Welcome back Jow Forums, I missed you

Attached: insomnia.gif (875x700, 45K)

>reddit spacing
>M$ shill
>doesn't even know the first thing about technology
$0.02 has been deposited in your account, shill.

Sorry, didn't mean to come off too accusatory. Your third-to-last paragraph just came off a lot like the Bill Gates meme poster that was in /mg/ a while ago.
I can agree with a lot of what you say here, which is why, even though I'm not the biggest fan of everything they do, I appreciate companies like Red Hat and SUSE that provide a corporate, business-centric side to Free Software.

I do hope you check out Genode though. Its support for seL4 is also a very good thing, as I feel that project's formal verification stuff needs to be more widely used.
Looking into Barrelfish, it seems like a pretty radical design. Completely foreign to anything else out there. Perhaps something like that is what's needed though, but only time will tell. From a few articles, it does appear that this is solely a research project, and is not the true new direction and future for MS. Regardless, it's quite interesting.

Agreed. This is a comfy thread

kekd

>it makes no sense to start crying bloat when there's a perfectly acceptable reason for it.
I take it you've never actually tried to read any of this "amazing" driver code yourself. Calling that code "bloat" is putting it /very/ generously. If you think the GNU coreutils are bad, just wait till you start diving into Bluetooth drivers and shit. "It compiles and werks 4 me!" is a lazy-ass excuse for that nonsense.

>with gaping vulns
orly? go ahead and find a "gaping" vulnerability in the openbsd kernel and submit it to the dev mailing list, let's see how well that goes for ya.