By now you're collecting replies, and they all tell you exactly the same thing: no, Linux doesn't need any "fixing" (and neither does the FreeBSD kernel). Still you insist otherwise. What's the point of this? If you want to make a valid point, you have to learn the theory first.
The way to make software robust, reliable, maintainable and changeable is to give it a sane structure. That's true for every piece of software, including an OS kernel. There are a lot of design principles out there giving guidance on how to do it; a well-known collection is SOLID. Today's "monolithic" kernels aren't the "big balls of mud" they sometimes were many decades ago. They are "modular monoliths", and work is constantly going on to fix inner design issues (such as eliminating some global locks in both the Linux and FreeBSD kernels).
The microkernel architecture enforces a structure with clear responsibilities and boundaries, and is therefore, in theory, the superior architecture. In practice, what a microkernel architecture could give you today is a reduced impact of bugs: anything not affecting the microkernel itself won't bring down the whole system. That's of course only true if hardware features are used to enforce the boundaries at runtime (which, again, wasn't done in AmigaOS, partly for lack of hardware support) and if the affected kernel service can just "respawn" without further issues. Given that today's monolithic kernels do have inner structure, it makes no sense at all to attempt a full rewrite (which would be necessary) just to achieve this theoretical advantage.
BTW, providing a "unixy" kernel in a microkernel architecture is exactly what GNU Hurd has been attempting for almost 34 years now ...