zirias@
Developer
This question popped into my mind when I was once again looking into the current state of GNU Hurd.
I learned there was an effort to base Hurd on L4, but that was abandoned. No final release in sight…
I kind of lost contact with operating systems research after leaving university. Back then, the prevailing opinion was that a microkernel is the only suitable design for a reliable kernel. But Microsoft had already (kind of) failed at it: although they organized the NT kernel like a microkernel, they ended up running all the services in "ring 0" anyway, for performance reasons, thereby giving up the (theoretical?) advantage. And GNU failed, and continues to fail, by just never releasing anything "finished". At the same time, classic "monolithic" kernels were pretty successful (BSD, Linux, …) – and of course the not-so-real microkernel (Windows NT) was successful as well.
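To make that performance argument concrete, here's a toy C sketch of the two call paths. Everything here is invented for illustration (this is not real NT, Mach, or Hurd code): in a monolithic kernel a driver request is a plain function call, while in a microkernel the driver is a separate server and every request is an IPC round-trip.

```c
#include <stdio.h>

typedef struct { int block; char data[512]; } DiskRequest;

/* Monolithic kernel: the driver lives in the same address space,
 * so a disk read is just a direct function call in ring 0. */
static int disk_read_monolithic(DiskRequest *req) {
    req->data[0] = 'x';  /* stand-in for actual driver work */
    return 0;
}

/* Microkernel: the driver is a userspace server. These stubs stand in
 * for the kernel's IPC primitives (hypothetical names). */
static int ipc_send(int server_port, const void *msg) {
    (void)server_port; (void)msg;  /* would trap into the kernel and
                                      switch to the server's address space */
    return 0;
}
static int ipc_recv(int server_port, void *reply) {
    (void)server_port; (void)reply;  /* would block, then switch back */
    return 0;
}

static int disk_read_microkernel(int driver_port, DiskRequest *req) {
    ipc_send(driver_port, req);        /* 1st protection-domain crossing */
    return ipc_recv(driver_port, req); /* 2nd crossing for the reply */
}

int main(void) {
    DiskRequest req = { .block = 7 };
    disk_read_monolithic(&req);
    disk_read_microkernel(42, &req);
    printf("monolithic read: one call, no extra crossings\n");
    printf("microkernel read: one call, two domain crossings\n");
    return 0;
}
```

Each of those crossings costs traps, scheduling, and TLB/cache effects on every single request – that, as far as I understand it, is exactly the overhead NT avoided by pulling the services back into ring 0.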
I am somewhat more in touch with (userspace) services. Back at university, SOA was the big thing (typically using the ridiculously convoluted SOAP protocol for communication), and people thought it would solve all the problems. It created new problems, especially around dependencies: in your classic SOA with an orchestration layer, the whole system was b0rked when a single service was down. The next big thing was microservices, and they are still hyped. The good thing about them: no runtime dependencies, every service works on its own (a lesson learned from SOA). But their extremely fine-grained nature means you end up with hundreds of them for any system doing something useful – a situation you can't really handle on the infrastructure and deployment side without tools like Kubernetes. It's all developing into an over-complicated, over-engineered mess. Maybe this "micro-whatever" idea is somehow doomed? At work, we currently follow a "self-contained systems" architecture. It's similar to the microservices idea, but the self-contained systems are much larger. You could say: monolithic. A self-contained system covers a whole "bounded context" (look it up in DDD if you're interested). So far, I think it's a reasonable architecture, a pragmatic one.
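Just to illustrate the SOA dependency problem, another toy sketch (invented service names, grossly simplified): the orchestrator composes one business transaction out of several services, so a single outage breaks the whole flow.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef bool (*Service)(void);

/* Hypothetical services behind an orchestration layer. */
static bool catalog(void)  { return true;  }
static bool billing(void)  { return true;  }
static bool shipping(void) { return false; }  /* this one is down */

/* Classic SOA: the orchestrator needs every service up at request time. */
static bool orchestrated_order(void) {
    Service steps[] = { catalog, billing, shipping };
    for (size_t i = 0; i < sizeof steps / sizeof *steps; i++)
        if (!steps[i]())
            return false;  /* one service down => whole transaction fails */
    return true;
}

int main(void) {
    printf("orchestrated order: %s\n",
           orchestrated_order() ? "ok" : "b0rked");
    return 0;
}
```

A self-contained system, by contrast, owns that whole bounded context itself and doesn't depend on the other systems being reachable at request time.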
So, back to my question: microkernels. I wonder whether there's still research going on, and whether the (academic) idea that they are the superior design is still around? Or did the "monolithic" approach finally "win"?
