What about using the L4Re microkernel to boot FreeBSD on a mobile device (or any other kind of device)?

Yes; like a psychologist, I don't have definitive answers, because there aren't any. Instead, there are different interpretations of social phenomena, and all of them are important and should be taken into consideration.
 
I've read the Tanenbaum–Torvalds debate, and I found Peter da Silva's argumentation very interesting:

His words made me think and raised a question in me: if the microkernel architecture of AmigaOS was, and is, so good, why wasn't it as successful as that of Linux? Even today AmigaOS is in some ways technically more advanced than Linux (and than FreeBSD, which also has a macrokernel), yet it has become a niche system while Linux is everywhere. So basically, are we all using a system whose architecture has been technically outdated for decades? Are we masochists?
Well, that's exactly what I wrote above.

You've got to drop the delusion that a microkernel is always better. Also, that statement was about the state Linux was in back in 1992, when it was much less developed than it is nowadays, running supercomputers.

Micro vs. macro kernel is more like petrol vs. diesel engines: both can be used to power a car or something else, and each has its pros and cons.
 

I still think that the microkernel is technically superior to the macrokernel. There can be many reasons why microkernels are relegated to a few specific areas. The first is that developers have always adhered to the KISS principle: keep it simple, stupid (as long as it satisfies the needs of the developers who created it; and Andy wasn't among them). He was a professor, and he wanted to make the best product possible with the technologies available at the time. In the '90s there was the need to save money on Unix licenses and the desire to feel part of the change, all together (the internet was being born in that period). I would like to point out that these motivations have nothing to do with creating the best product possible; to me they can be defined as collateral reasons. Andy didn't care much about these factors.

I believe that on Linus's side there was also a rush to make the system usable as soon as possible, because he understood that the world of OS developers was in turmoil, and if he hadn't created Linux ASAP, someone else would have done it. He therefore preferred to adopt the simplest, and thus fastest, architecture, one that would allow him to release a good-enough product in a reasonable amount of time, rather than drag the project out at the risk of it being useful to no one because it came too late.

I don't think macrokernel OSes will one day be supplanted by microkernel ones, simply because the former are good enough for every use case, even commercial ones. And from a technical point of view, the human brain works more like a microkernel. I think we all agree that the brain represents the architecture to copy when designing an operating system, because... it works, because it is powerful and efficient, and because it represents an incredible source of information that we can draw on.
Furthermore, the example of AmigaOS tells us that it was possible to build a microkernel system in a reasonable amount of time. If that didn't happen elsewhere, it was because at the time there were some specific needs (such as the drive to save money, as well as a strong desire to collaborate) among a relatively small group of developers working on specific hardware, who made the fastest and most convenient choice... for them. If the social motivations had been different, I don't see why all those great and talented developers couldn't have built a microkernel OS. The talent was there, and they had good examples of microkernel OSes and papers to study.
 
AmigaOS is a microkernel message-passing design, [...]

A quote from Peter da Silva, back from the legendary Torvalds–Tanenbaum debate.
Those are interesting statements. It sounds to me like a situation somewhat similar to the later Windows NT kernel: the architecture and design are "microkernel", but it can't really reap the biggest advantage without hardware enforcing the separation... and while in Windows NT this was a deliberate design decision made for performance reasons, in AmigaOS it was at least partially due to hardware limitations of the platform. Trying to verify my assumption, I found there's a nice Wikipedia article: https://en.wikipedia.org/wiki/Exec_(Amiga).

Hehe, indeed:
Other comparable microkernels have had performance problems because of the need to copy messages between address spaces. Since the Amiga has only one address space, Exec message passing is quite efficient.

Seems it didn't even use the one feature the hardware would have provided:
Unlike newer modern operating systems, the exec kernel does not run "privileged".
This might have been for performance reasons.

At least on a classic Amiga, the core (non-GUI) parts of the OS ran from ROM, so it was unlikely that some faulty program could accidentally kill the kernel (it would have to actively disable scheduling or accidentally trigger bank switching). That's probably the reason an Amiga could almost always offer an RS-232 command-line debugger on crashes :cool:

All in all, I see how the architecture here is indeed that of a microkernel, but without the stability you'd (at least nowadays) expect from such a design...
 
I still think that the microkernel is technically superior to macrokernel.
In real life, most of the time it doesn't matter whether something is technically superior; what matters is which system gets more adoption.

For example, during the VCR format wars, Sony's Betamax was technically superior to JVC's VHS, and Philips' Video 2000 even more so; but as we all know, in the end VHS won the race.

And it's also wrong that developers always stick to KISS; otherwise the ghastly abomination called Electron would never have existed. Amongst other things.

And I think we all agree that the brain represents the architecture to copy from when designing an operating system

I cannot agree with that statement. Our brain and its biological hardware dictate the design of computer peripherals, but that's it.

And as already said, there ARE microkernel OSes around, more than enough actually. They are, though, more widespread in embedded systems, because on the desktop their advantages don't matter enough to outweigh their own set of problems. Or, seen from the other angle, their potential benefits are not big enough compared to established kernels to justify working on them.

And again: if you want a microkernel on the desktop, take MINIX 3. It comes with a NetBSD userland and has been around for about 11 years now.
 
All in all, I see how the architecture is indeed that of a microkernel here, but without offering the stability you'd (at least nowadays) expect from such a design....

Please compare these two situations:

---> Exec is the kernel of AmigaOS. It is a 13 KB multitasking microkernel
---> https://www.crn.com/news/applications-os/220100662/torvalds-calls-linux-kernel-huge-and-bloated
---> https://news.ycombinator.com/item?id=10813338

and now tell me whether the huge and bloated Linux kernel makes you think Andy was in some way right when he said that building an OS as a single macro C routine was wrong in the '90s. He didn't talk specifically about the fact that Linux would become bloated, but he knew, as did Torvalds and others, that this would happen. They just didn't talk about it, I think, for opposite reasons. But the point is always the same: the microkernel architecture solves that problem. Maybe Linus should soon port Linux to a microkernel, because I suspect there is no easier way to streamline it. For how long can Linux keep growing before it becomes impossible to debug?
 
All a microkernel does is basic process control (scheduling) and message passing: no subsystems, no device drivers, no filesystems, no I/O scheduling, no abstractions... just NOTHING that would enable an application to do something meaningful. It needs a large zoo of "kernel services" on top to provide all the required functionality.

I certainly don't need any more links to know how and why this is the "better" design from an architectural point of view. Coincidentally, Jochen Liedtke taught at my alma mater.

I can also tell you that you don't "port" a "modular monolith" like Linux to a microkernel-plus-services design. That would be a full rewrite. There is something to gain, theoretically, but the amount of work (we're talking lots of person-years) bears no sensible relation to it.

What can be done (and was done, with L4Linux) is to slightly modify Linux so it runs on top of a microkernel, but that's not exactly a real microkernel architecture.
 
I also find it interesting that the best microkernel architect was also involved in neuroscience research (psychology + biology). As I said before, it's important to study the human brain to gather information that you can use as a solid foundation for developing good OSes. Because what is an OS? Basically, it's a little brain. This is what Carl Sassenrath, the developer of the Amiga microkernel, did in his career:

Carl Sassenrath (born 1957 in California) is an architect of operating systems and computer languages. He brought multitasking to personal computers in 1985 with the creation of the Amiga Computer operating system kernel.

In 1980 Sassenrath graduated from the University of California, Davis with a B.S. in EECS (electrical engineering and computer science). During his studies he became interested in operating systems, parallel processing, programming languages, and neurophysiology. He was a teaching assistant for graduate computer language courses and a research assistant in neuroscience and behavioral biology. His uncle, Dr. Julius Sassenrath, headed the educational psychology department at UC Davis, and his aunt, Dr. Ethel Sassenrath, was one of the original researchers of THC at the California National Primate Research Center.
 
As I told before,its important to study the human brain to gather all the information that you will use as a solid foundations for developing good OSes. Because what's an OS ? Basically its a little brain.
All of this is complete nonsense. But I guess you'll never stop talking about stuff you don't understand.
 
All in all, I see how the architecture is indeed that of a microkernel here, but without offering the stability you'd (at least nowadays) expect from such a design....

Hey, the Amiga microkernel was created in the '80s, 10 years before Linux was born! Can you imagine how advanced it was for its time? My point is: if it was invented 10 years before the birth of Linux and was so advanced, I'm sure Torvalds and colleagues could have created a better product than that. If they had, today we would have a better free operating system, because in the meantime other developers would have worked on it and matured it up to today. Let's say they didn't do it because they wanted to be productive right away, saving money immediately, and "tomorrow we'll see". But that tomorrow is our today. They just moved the problem forward, as Andy predicted.
 
All of this is complete nonsense. But I guess you'll never stop talking about stuff you don't understand.

Stop saying that I don't understand. Put it differently: from a strictly linguistic point of view, I think your kind of communication is not good. It would be much better to say that we have different opinions, or better still, that we see the phenomenon from different angles. Please learn to communicate without making other people feel wrong or stupid. Thank you.
 
In real life most of the time it doesn't matter if something is technically superior, but which type of system gets more adoption.

I'm not at all sure about this. Linux was born as a hobby, and Linus said from the beginning that it would be given away for free. There was no hurry to attract a lot of developers to work on it, because the main goal of the OS was not to maximize earnings. Betting on fewer developers and more time to completion could have been an acceptable choice. We would now have a better OS, without a bloated kernel. It didn't happen because a lot of economic interests converged. And when money drives behavior, wrong choices often get made (I mean that reasons other than making the best product with the most "beautiful" technologies available at the moment can prevail).
 
Can you imagine how much it was advanced for the time ?
I can, because I had an Amiga, several actually, at that time. I know how much more advanced it was compared to anything else on the market. Sure, there were UNIX workstations (SGI, DEC Alpha, etc.) that were much better, but they also cost a factor of 10 more.
My point is : if it was invented 10 years before the birth of Linux and it was so advanced,I'm sure that Torvalds and collegues could have created a better product than that.
Torvalds set out to build a UNIX-like kernel, not an AmigaOS clone.
 
Torvalds set out to build a UNIX-like kernel, not an AmigaOS clone.

If the goal was to create a good product, not an obsolete one, a product that would be technically good enough even today, he could have studied the way the Amiga kernel was made to understand some important concepts and used them to implement a microkernel. Besides the Amiga microkernel, there were different examples of microkernels to study, because they were already relevant in the '90s: MINIX, QNX (Unix-like, created in 1982, again 10 years before Linux), and several other microkernel OSes mentioned in the debate with Andy.
 
If the goal was to create a good product, not an obsolete one, a product that would be technically good enough even today, he could have studied the way the Amiga kernel was made to understand some important concepts and used them to implement a microkernel.
He set out to build a UNIX-like kernel. Whatever else you think he should have done is irrelevant.
 
He set out to build a UNIX-like kernel. Whatever else you think he should have done is irrelevant.

So if you think it's irrelevant, I'll stop writing. It's useless for me to keep expressing my opinions. Thanks for the contribution.
 
I never used an Amiga, but I played with one a bit (I had a real Unix workstation). IIRC, in the Amiga everything lived in a single address space, including the kernel. This simplified things, but it is not safe for general-purpose computing. In Unix, process separation is fundamental, so that one bad process can't take down the system. This is why implementing Unix in user code, as a set of services on top of a modern microkernel, involves many more context switches than a monolithic kernel. Plenty of discussions can be found online if you're interested; rehashing them here won't change anything.
 
I can, because I had an Amiga, several actually, at that time. I know how much more advanced it was compared to anything else on the market. Sure there were UNIX workstations (SGI, DEC Alpha, etc.) that were much better, they also cost a factor of 10 more.

Torvalds set out to build a UNIX-like kernel, not an AmigaOS clone.

Is it relevant to ask how you would fix the problem of the Linux kernel having become too huge and bloated?
 
But IIRC in Amiga everything lived in a single address space including the kernel.
Not exactly. The 68000 has a "supervisor" and a "user" mode. The kernel ran in supervisor mode, applications ran in user mode.
 
Not exactly. The 68000 has a "supervisor" and a "user" mode. The kernel ran in supervisor mode, applications ran in user mode.
Sorry, but CPU privilege modes are something entirely different from virtual memory (which is controlled by an MMU). The original 68000 didn't offer the latter, so there was no memory protection.

BTW, surprisingly enough, the kernel did NOT run in supervisor mode either. I only learned that myself today, while tracking down information about the "microkernel design".
 