Did you ever consider using a microkernel?

I've read a lot about L4 microkernels recently and found that seL4 is the only kernel with a formal verification, which sounds pretty impressive. Somebody ported OpenBSD to L4 many years ago, so it is not impossible to do with BSD systems. As far as I understand, from a security and stability perspective it would be a big improvement to use a microkernel, and there are already frameworks like L4Re and Genode that can help with changing the kernel. I was wondering if you ever considered using a microkernel, and if so, why isn't it worth the effort compared to sticking with a monolithic kernel?
 
 
Interesting topic, but that old thread has nothing technical in it.

It seems that a monolithic kernel is faster but harder to develop, while a microkernel takes more resources but is easier to develop.
 
It should be noted that Google's much-touted new Fuchsia operating system uses a monolithic kernel.
Fuchsia is not a microkernel
Although Fuchsia applies many of the concepts popularized by microkernels, Fuchsia does not strive for minimality. For example, Fuchsia has over 170 syscalls, which is vastly more than a typical microkernel. Instead of minimality, the system architecture is guided by practical concerns about security, privacy, and performance. As a result, Fuchsia has a pragmatic, message-passing kernel.
Which are some of the same reasons FreeBSD is not a microkernel.

This question gets asked every so often here on this forum and always winds up with the same responses and links.
 
Interesting. Good to know about Fuchsia.
FreeBSD is not monolithic; at least that's what the Handbook says:

It seems like these are quite fluid terms.
 
It seems that a monolithic kernel is faster but harder to develop, while a microkernel takes more resources but is easier to develop.
Taken at face value, each of those four statements can be proven or falsified by example.

For example: If microkernels take more resources, why are they used in deeply embedded environments, where every CPU cycle counts?
Monolithic kernels are faster, because 100% of the top500 supercomputers use Linux, a monolithic kernel.
If they are easier to develop, why aren't the companies that invest hundreds or thousands of engineers into kernel work using them?
If monolithic kernels are harder to develop, why are there thousands of contributors to Linux (the kernel, not the OS as a whole)?
Each of those four statements is partially wrong, partially right. It really depends.
 
Exactly. Being able to dynamically load modules doesn't make for a microkernel. The distinction is whether everything beyond the bare core functionality (memory management, context switching, IPC) is implemented in independent "kernel services" that typically run unprivileged. There are strange variants like the NT kernel, which does have such services, but they all run in ring 0 nevertheless :)
 
It should be noted that Google's much-touted new Fuchsia operating system uses a monolithic kernel.

Which are some of the same reasons FreeBSD is not a microkernel.

This question gets asked every so often here on this forum and always winds up with the same responses and links.

It's true that Fuchsia is not a microkernel, as Fuchsia is the whole operating system. The kernel of Fuchsia is called Zircon, which actually is a microkernel and is derived from Little Kernel.

Zircon is the core platform that powers the Fuchsia OS. Zircon is composed of a microkernel as well as a small set of userspace services, drivers, and libraries in /zircon/system necessary for the system to boot, talk to hardware, load userspace processes and run them, etc. Fuchsia builds a much larger OS on top of this foundation.
 
It's true that Fuchsia is not a microkernel, as Fuchsia is the whole operating system. The kernel of Fuchsia is called Zircon, which actually is a microkernel and is derived from Little Kernel.
They explicitly state it's NOT a microkernel. They've used the core concepts of a microkernel but added a lot of non-microkernel stuff to it, so it probably falls in the hybrid category now. The Windows NT and macOS kernels are hybrids too.

 
They explicitly state it's NOT a microkernel. They've used the core concepts of a microkernel but added a lot of non-microkernel stuff to it, so it probably falls in the hybrid category now. The Windows NT and macOS kernels are hybrids too.


Thanks for correcting me, SirDice, but I'm glad we can agree that Zircon is not a monolithic kernel either :)

If anyone would like to play with a microkernel *NIX, I'd suggest Minix or Redox. I've only tried the former myself.
 
Taken at face value, each of those four statements can be proven or falsified by example.

For example: If microkernels take more resources, why are they used in deeply embedded environments, where every CPU cycle counts?
Monolithic kernels are faster, because 100% of the top500 supercomputers use Linux, a monolithic kernel.
If they are easier to develop, why aren't the companies that invest hundreds or thousands of engineers into kernel work using them?
If monolithic kernels are harder to develop, why are there thousands of contributors to Linux (the kernel, not the OS as a whole)?
Each of those four statements is partially wrong, partially right. It really depends.

Afaik they require different mindsets from application or service developers. On a microkernel you try to minimize the number of IPCs if you want fast code, so code is optimized differently for it. I have not seen any benchmark comparing L4 with other microkernels or with monolithic kernels, and I don't even know whether that is reliably possible; maybe with small applications that have ports optimized both ways. So it is hard to say anything about speed without measurements. All I found were old benchmarks between Minix and Linux, but Minix has a custom microkernel, and I had the impression that it is much slower than L4 and that the Minix developers don't really care about speed. So I don't think we can draw any conclusions from the Minix benchmarks. Security and reliability are certainly better with L4 than with any monolithic kernel.
 
Well, I think I got what I was afraid of: we just talk about it without really considering or trying it, and reach conclusions about why L4 is superior or inferior to the current Unix kernel.
 
Huh? What do you want to try? If it is using L4, go ahead :) If you're talking about refactoring the FreeBSD kernel into a microkernel, that won't happen. The simple reason: it's not worth the effort. Yes, a microkernel is clearly the "better" design in several aspects (stability, maintainability, etc.). But changing the design of an existing OS would basically mean rewriting the kernel from scratch, and that will never "pay off". Microsoft would have better chances here, as the NT kernel already is a microkernel by design, although it's a "hybrid" kernel in practice, since all the services run in ring 0 as well -- and with Microsoft's strategy of never breaking backwards compatibility, that is very unlikely to ever change.

At least L4 exists and seems to work, so for an actually useful OS with a "true" microkernel, it could be the best starting point...
 
Well, I think I got what I was afraid of: we just talk about it without really considering or trying it
Trying what? L4 alone?
The question in this thread sounds (to me) like "why is FreeBSD FreeBSD?".
FreeBSD does not have a microkernel; that's a fact. So trying a microkernel means trying another OS, no?
And on an official FreeBSD user forum, what do you expect?

I want to draw your attention to Windows NT and Mac OS X: two systems that tried the microkernel architecture and ended up hybrid. I have no doubt about their developers' competence. So to me, a pure microkernel in an OS seems to come with some pains, maybe invisible on paper. No?
 
Well, I think I got what I was afraid of: we just talk about it without really considering or trying it.

Upon reflection, I can see my post (referring to yeast) was fairly cryptic for anyone who is unaware. Basically we are all probably running a micro kernel alongside FreeBSD. Does that constitute trying it?

https://www.zdnet.com/article/minix-intels-hidden-in-chip-operating-system/

As far as its benefits.. yes, it appears to work. Gives Intel lots of control over my digital life ;).
 
Upon reflection, I can see my post (referring to yeast) was fairly cryptic for anyone who is unaware. Basically we are all probably running a micro kernel alongside FreeBSD. Does that constitute trying it?

https://www.zdnet.com/article/minix-intels-hidden-in-chip-operating-system/

As far as its benefits.. yes, it appears to work. Gives Intel lots of control over my digital life ;).
Well, most embedded devices contain microkernels too, e.g. mobile phones for GSM communication, printers, etc. I am not sure what the Minix kernel looks like, but afaik the first version 30 years ago had very serious performance issues, and last time I talked to them I had the impression that those issues were never fixed and that they work mostly on software support, not on the kernel. I always thought their approach was wrong: software and drivers need to be optimized differently for a microkernel than for a monolithic one, so simply adding a Linux compatibility layer and running Linux applications would make everything slow. But I just read that ESXi has a microkernel too (according to them: https://web.archive.org/web/20090702000340/http://www.vmware.com/company/news/releases/64bit.html ), and I doubt they have performance issues, so probably Minix just has a slow kernel. I am not entirely certain that ESXi's VMkernel is a microkernel, though, because it is closed source and every diagram I see of it is different; most of them look monolithic. Probably the people who draw these have no idea about the actual architecture.
 