Are microkernels still relevant?

zirias@

Developer
This question popped into my mind when I was once again looking into the current state of GNU Hurd ;) I learned there was an effort to base Hurd on L4, but that was abandoned. No final release in sight…

I kind of lost touch with operating systems research after leaving university. Back then, the prevailing opinion was that a microkernel is the only suitable design for a reliable kernel. But Microsoft already (kind of) failed at that: although they organized the NT kernel like a microkernel, they ended up running all the services in "ring 0" anyway, for performance reasons, thereby abandoning the (theoretical?) advantage. And GNU failed and continues to fail by just never releasing anything "finished". At the same time, classic "monolithic" kernels were pretty successful (BSD, Linux, …) – and of course the not-so-real microkernel (Windows NT) was successful as well.

I am somewhat in touch with (userspace) services. Back at university, SOA was the big thing (typically using the ridiculously convoluted SOAP protocol for communication), and people thought it would solve all the problems. It created new problems, especially with dependencies: in your classic SOA with an orchestration layer, the whole system was b0rked as soon as a single service was down. The next big thing was microservices. They are still hyped. The good thing about them: no runtime dependencies, every service works on its own (a lesson learned from SOA). But their extremely fine-grained nature means you have hundreds of them for any system doing something useful – a situation you can't really handle on the infrastructure and deployment side without tools like Kubernetes. It's all developing into an over-complicated, over-engineered mess. Maybe this "micro-whatever" is somehow doomed?

At work, we currently follow a "self-contained systems" architecture. It's similar to the microservices idea, but the self-contained systems are much larger. You could say: monolithic. A self-contained system covers a whole "bounded context" (look it up in DDD if you're interested). So far, I think it's a reasonable, pragmatic architecture.

So, back to my question: microkernels. I wonder whether there's still research going on, and whether the (academic) idea that they are the superior design is still around. Or did the "monolithic" approach finally "win"? ;)
 
QNX is the only microkernel I know. It has been in active development since 1980. BlackBerry (formerly RIM) owns it now.

QNX has send-receive-reply synchronous messaging, in which the sending task is blocked until it gets a reply. I am not sure whether device drivers are in the kernel.
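
For illustration, here's roughly what that send-receive-reply pattern looks like in code. This is a from-memory sketch against the QNX Neutrino message-passing calls (ChannelCreate/ConnectAttach, MsgSend/MsgReceive/MsgReply); treat the exact signatures and flags as approximate and check the QNX docs. One thread plays the server (think: a driver), the main thread plays the client, and MsgSend() blocks until the server replies.

/* From-memory sketch of QNX Neutrino style send-receive-reply messaging.
 * One thread plays the "server", the main thread plays the client.
 * MsgSend() blocks the client until the server calls MsgReply().
 * Signatures/flags are approximate -- check <sys/neutrino.h> and the
 * QNX documentation before relying on them. */
#include <sys/neutrino.h>
#include <pthread.h>
#include <stdio.h>

static int chid;                           /* channel the server receives on */

static void *server(void *arg)
{
    char buf[64];
    for (;;) {
        /* Block until a client sends a message on our channel. */
        int rcvid = MsgReceive(chid, buf, sizeof buf, NULL);
        if (rcvid == -1)
            break;
        if (rcvid == 0)
            continue;                      /* a pulse, not a message */
        printf("server got: %s\n", buf);
        MsgReply(rcvid, 0, "pong", 5);     /* replying unblocks the sender */
    }
    return arg;
}

int main(void)
{
    chid = ChannelCreate(0);

    pthread_t tid;
    pthread_create(&tid, NULL, server, NULL);

    /* Connect to the channel (node 0 = local, pid 0 = this process). */
    int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);

    char reply[64];
    MsgSend(coid, "ping", 5, reply, sizeof reply);   /* blocks until replied */
    printf("client got: %s\n", reply);

    ConnectDetach(coid);
    return 0;
}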

Since microkernel systems are rare, and often not open source, there is some risk in using one.

DragonFlyBSD is doing some interesting kernel work.
 
Google's Fuchsia is a potential replacement for the Android system.
It's based on a microkernel architecture (the Zircon kernel), and much of its application layer is written in Dart (Flutter).
I am not sure, but I think Huawei's HarmonyOS is also said to be based on a microkernel architecture, although the recent announcement of the 2.0 beta release says it's rather based on the Android Open Source Project for the moment.
 
In the consumer space? Probably not. Here we have multiple contenders: Windows (still >80% market share on desktops and laptops), Android (which doesn't care what the kernel is, and has a similarly high market share among handhelds), and the two Apple OSes (macOS and iOS), both with high single-digit market shares. The only other serious contender is ChromeOS, which demonstrates that kernels have stopped mattering.

In the server space (which includes supercomputers)? From a market-share standpoint, only Linux exists. From a revenue standpoint, z/OS (=VM+MVS) and Windows are still there, as are minor single-digit-percent players (AS/400, HP-UX, and various other flavors). There is absolutely no reason to expect Linux to be displaced in the foreseeable future.

In embedded, it's interesting. QNX exists. It turns out various L4 derivatives are heavily used (often within cellphones, for example to run the modem), in particular seL4 for aerospace and security uses, where it has a very high market share. If you count the embedded things (which are often embedded within a general-purpose computing device), there are probably more microkernel instances than Linux instances in the world.
 
If we had bi-directional, message-passing IPC, we could probably move DRM and OSS into userspace ourselves. This would allow for better fault tolerance and vendor driver updates between releases. Sort of like... you guessed it.
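
To make that concrete with something portable (hypothetical names, plain POSIX, nothing FreeBSD-specific): the pattern is a "driver" running as its own process, serving synchronous request/reply messages over an IPC channel, so a crash in the driver doesn't take the client down and the driver binary can be updated independently.

/* Hypothetical sketch (plain POSIX): a "driver" in its own process,
 * serving synchronous request/reply messages over a socketpair. If the
 * driver crashes, the client gets an I/O error instead of the whole
 * system going down. A real driver would still need a way to reach the
 * hardware, which is the part the kernel has to provide. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

struct msg { char op[16]; int value; };

static void driver_loop(int fd)            /* the "driver" process */
{
    struct msg m;
    while (read(fd, &m, sizeof m) == (ssize_t)sizeof m) {
        if (strcmp(m.op, "ping") == 0)
            m.value++;                     /* pretend we touched hardware */
        write(fd, &m, sizeof m);           /* reply */
    }
}

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {                        /* child: the userspace "driver" */
        close(sv[0]);
        driver_loop(sv[1]);
        return 0;
    }

    close(sv[1]);                          /* parent: the client */
    struct msg m = { "ping", 41 };
    write(sv[0], &m, sizeof m);            /* send ...                    */
    read(sv[0], &m, sizeof m);             /* ... and block for the reply */
    printf("driver replied: %d\n", m.value);

    close(sv[0]);
    waitpid(pid, NULL, 0);
    return 0;
}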
 
There is a performance hit when passing messages.
It surprises me that the Hurd kernel still doesn't really improve much on bare-metal hardware. Nobody wants to write hardware drivers for that kernel.
 
Well, the VAX famously had 4 rings, and used all four when running VMS. That's because it was actually designed, by people who understood computers ... not hacked together (like some OSes), nor put together by gate pushers (like some CPUs). Anyway, I'm just being mean. The GE-645 machine (on which Multics ran, and on which Dennis and Ken developed B and C, but I digress again) actually had 16 protection rings. Honestly, I can't remember what they were used for; I never used Multics (way too young) and only studied it for one or two lectures in my OS class. But that architecture is still in use today, and you can still buy GCOS machines from Groupe Bull that run on it (I think the CPU is emulated these days).

On your other question: if done right, message passing does NOT kill performance. L4 demonstrates that. It also makes security easy to reason about, which is why seL4 is probably the most secure OS of all the ones in production today. My office neighbor used to do research on an Air Force project, and he sang the praises of seL4. Now, am I surprised that the GNU people can't get Hurd to work, run efficiently, or get software developed for it? Not the slightest bit. Given the set of old, burned-out, crazy theorists (RMS!) running the Hurd project, that's completely unsurprising.
 
Check out MINIX 3 and its restartable drivers.
Periodically killing the network driver results in FTP getting a wee bit slower, but that's it. I would happily use that in a brownout-prone place.
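
The idea behind those restartable drivers (MINIX 3 calls the component doing this the reincarnation server) boils down to a supervisor that respawns a driver process whenever it dies. A hypothetical, MINIX-agnostic sketch in plain POSIX C, leaving out the IPC and state-recovery parts that make the real thing useful:

/* Hypothetical sketch of a "reincarnation server" style supervisor:
 * run a driver as a child process and restart it whenever it exits
 * or crashes. Just the bare restart loop, nothing MINIX-specific. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s driver-binary [args...]\n", argv[0]);
        return 1;
    }

    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {                     /* child: become the driver */
            execv(argv[1], &argv[1]);
            _exit(127);                     /* exec failed */
        }

        int status;
        waitpid(pid, &status, 0);           /* block until the driver dies */
        fprintf(stderr, "driver %s died (status %d), restarting...\n",
                argv[1], status);
        sleep(1);                           /* avoid a tight respawn loop */
    }
}

The real reincarnation server obviously does more (it talks MINIX IPC, keeps driver state in the data store, and rate-limits restarts), but the fault-containment idea is that simple.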
 
Are microkernels still relevant? Sure they are. One out of ten (or pick another number "X" here) researchers in this field of computing still talks and writes about them. In the real world, other than the examples mentioned in this thread, we haven't seen any projects actually use one...
 
I like the microservices approach but get the feeling that too many compromises (and hacks) will need to be made for performance. This will get very complex and not end up achieving its original goal.

Even with our current monolithic approaches, we are moving even further the other way. Things like KMS for video are a good example of this. Perhaps FUSE is the only example of a microservice that is in common use.
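
And the FUSE case is easy to demo: a complete (if useless) filesystem served entirely from a userspace process fits in a few dozen lines. A minimal, hypothetical example against the FUSE 3 API (build with: pkg-config --cflags --libs fuse3), exposing a single read-only file; the file name and contents are of course made up:

/* Minimal hypothetical FUSE 3 filesystem: one read-only file, /hello,
 * served entirely from this userspace process. */
#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <sys/stat.h>
#include <string.h>
#include <errno.h>

static const char *hello_path = "/hello";
static const char *hello_str  = "Hello from userspace!\n";

static int hello_getattr(const char *path, struct stat *st,
                         struct fuse_file_info *fi)
{
    (void) fi;
    memset(st, 0, sizeof *st);
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;      /* the root directory */
        st->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        st->st_mode = S_IFREG | 0444;      /* our single read-only file */
        st->st_nlink = 1;
        st->st_size = strlen(hello_str);
    } else {
        return -ENOENT;
    }
    return 0;
}

static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                         off_t offset, struct fuse_file_info *fi,
                         enum fuse_readdir_flags flags)
{
    (void) offset; (void) fi; (void) flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0, 0);
    filler(buf, "..", NULL, 0, 0);
    filler(buf, hello_path + 1, NULL, 0, 0);
    return 0;
}

static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                      struct fuse_file_info *fi)
{
    (void) fi;
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    size_t len = strlen(hello_str);
    if ((size_t) offset >= len)
        return 0;
    if (offset + size > len)
        size = len - offset;
    memcpy(buf, hello_str + offset, size);
    return (int) size;
}

static const struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &hello_ops, NULL);
}

Mount it on an empty directory, cat the file, then kill the process: the filesystem simply disappears, and nothing in the kernel beyond the FUSE transport was ever involved.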

But back in the day everyone knew that "OOP was far too slow to ever be usable" and I suppose miracles do happen :)
 
Thanks for all the answers so far. I'm a bit surprised to see that microkernels are still a thing, just not among the most successful systems in the server, desktop, and mobile domains. But that's cool, 'cause they're interesting for sure!
 