What about using the L4Re microkernel to boot FreeBSD on a mobile device (or any other kind of device)?

Sorry, but CPU privileges are something entirely different from virtual memory (controlled by an MMU). The original 86000 didn't offer the latter, so, no memory protection.
Yes, that's true. That's why I said, not exactly. The supervisor and user modes had different stacks, which are also memory: you can't access the supervisor-mode stack from user mode. There were a few other things you couldn't do from user mode, but I forgot the details. Later models (and some accelerator cards) did incorporate an MMU. The A4000/040, for example, had a 68040 with a built-in MMU, and the OS could make use of it. I have an A4000/030, which has the EC030 model, so no MMU (no way to run NetBSD on it :( ).
 
Is this irony?
No.
The address $4 held the pointer, as in
Code:
    move.l  4.w,a6
    jsr     _LVOAllocMem(a6)
Getting the pointer from -4 would fetch outside of any memory; you should end up reading $ffffffff. Jumping through that would land in the misaligned-instruction handler.
The original 86000
Dude, you had too much little endian lately... ;)

SirDice I still have a 3000T, with a 68060/604e. It ran NetBSD quite fine.
 
Yes, that's true. That's why I said, not exactly. The supervisor and user modes had different stacks, which are also memory: you can't access the supervisor-mode stack from user mode. There were a few other things you couldn't do from user mode, but I forgot the details. Later models (and some accelerator cards) did incorporate an MMU. The A4000/040, for example, had a 68040 with a built-in MMU, and the OS could make use of it. I have an A4000/030, which has the EC030 model, so no MMU.
68k supervisor mode used a different stack pointer, but the 68k had very little intrinsic protection. The first Unix system I worked on was based on the 68k. We had to add 4 external "registers" which mediated access to memory. Supervisor mode had access to these memory-mapped registers (you couldn't access them from user mode), but all that logic was *external* to the 68k. For every process context switch, the kernel had to reload these 4 registers!
 
I think it will run for longer than we live. Practically 90% of today's servers seem to run Linux to me...

What happens if Linus can't debug the kernel anymore because it became too bloated? Even today he has some difficulty. So, in 5-10 years that task will become even more complicated.
 
No, just a very sensible correction with a little grain of salt.
Crivens' brainfart was probably because all offsets for library functions were negative; it seems the "library base" pointer always pointed to the end of the library 😉
Sorry to be the highly functional aspi 😇

The base pointer pointed to the library base structure, which held the library's internal data. At negative offsets were the jumps to the library routines. Something like an object, really, with member functions. The "first" four were Open/Close/Expunge and a reserved one, if memory serves me right.
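Something like this, as a minimal C sketch (purely illustrative, names made up; the real jump table consisted of 6-byte JMP instructions, not C function pointers):
Code:
    #include <stdio.h>

    typedef void (*LibEntry)(void);           /* stands in for a 6-byte JMP */

    static void lib_open(void)     { puts("Open");     }
    static void lib_close(void)    { puts("Close");    }
    static void lib_expunge(void)  { puts("Expunge");  }
    static void lib_reserved(void) { puts("Reserved"); }

    /* The jump table sits in front of the base structure, so the
     * entry points are reached at negative offsets from the base. */
    struct LibraryImage {
        LibEntry vectors[4];    /* Reserved, Expunge, Close, Open */
        int      internal_data; /* the "library base" points here */
    };

    int main(void) {
        struct LibraryImage lib = {
            { lib_reserved, lib_expunge, lib_close, lib_open },
            42
        };
        LibEntry *base = &lib.vectors[4]; /* one past the table = base */
        base[-1]();                       /* Open,  like jsr -6(a6)  */
        base[-2]();                       /* Close, like jsr -12(a6) */
        return 0;
    }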

For the virtual memory, I still have the manual for the external MMU in the cupboard. Variable page sizes beginning at 128 bytes... Great stuff for AI at the time.

SirDice I had a sponsor. And I still like the power architecture. It is simply clean. Great times.
 
What happens if Linus can't debug the kernel anymore because it became too bloated? Even today he has some difficulty. So, in 5-10 years that task will become even more complicated.
No, that's bollocks again. A monolith doesn't (necessarily) mean chaotic structure. Today's monolithic kernels are "modular monoliths" (that's the case for Linux and FreeBSD among others). They have very clear structure and separation of concerns inside, even to the extent of using in-kernel threads and the like. What's missing is technically enforced boundaries (using hardware features). Very similar to AmigaOS actually, where this wasn't possible (no MMU in the original models) ... and the somewhat usable bit ("supervisor mode" of the 68000) wasn't used.
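To illustrate the "modular, but not enforced" point: FreeBSD's loadable kernel modules look roughly like this classic "hello world" skeleton (essentially the example from the Architecture Handbook). The module can be loaded and unloaded at runtime, yet once loaded it runs with full kernel privileges:
Code:
    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/module.h>
    #include <sys/systm.h>

    /* Event handler: called when the module is loaded or unloaded. */
    static int
    hello_modevent(module_t mod, int event, void *arg)
    {
        switch (event) {
        case MOD_LOAD:
            uprintf("hello: loaded into the kernel\n");
            return (0);
        case MOD_UNLOAD:
            uprintf("hello: unloaded\n");
            return (0);
        default:
            return (EOPNOTSUPP);
        }
    }

    static moduledata_t hello_mod = {
        "hello",          /* module name */
        hello_modevent,   /* event handler */
        NULL              /* extra data */
    };

    /* Registers the module with the kernel's module system. */
    DECLARE_MODULE(hello, hello_mod, SI_SUB_DRIVERS, SI_ORDER_MIDDLE);
The flip side: a bug in that event handler (or in any driver loaded this way) still panics the whole kernel; the modularity is structural, not hardware-enforced.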
 
Dude, you had too much little endian lately... ;)
Looks more like middle-endian to me, but oh so boring, just a typo 🙈

BTW, little-endian is how our number system works. It becomes immediately obvious once you look at the Arabic writing direction 🤪
 
Oh, there is also PDP-endian. 2143, wasn't it? But claiming little endian is natural, them being brawling words, ye hear me? 😉
Can't stand that mess, when I was a kid those bytes knew how to behave. You could read those hex dumps, you could. Not any more. Man I'm getting old.
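For the record, a little C snippet showing what those dumps look like: the 32-bit value 0x0A0B0C0D laid out under each byte order mentioned here (the PDP-11 order quoted from memory, same grain of salt as above):
Code:
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t v = 0x0A0B0C0D;
        const unsigned char *p = (const unsigned char *)&v;

        /* How the same 32-bit value lands in consecutive bytes: */
        printf("big-endian    : 0A 0B 0C 0D\n");
        printf("little-endian : 0D 0C 0B 0A\n");
        printf("PDP-endian    : 0B 0A 0D 0C   (the \"2143\" order)\n");

        /* And what this machine actually does: */
        printf("this machine  : %02X %02X %02X %02X\n",
               p[0], p[1], p[2], p[3]);
        return 0;
    }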
 
But claiming little endian is natural, them being brawling words, ye hear me? 😉
😂
Well, I guess you are aware our number symbols are Arabic and their writing direction is right-to-left, so the inventors of this system always wrote the least-significant position first, IOW, "little endian" :cool:

You don't have to like it of course, but you can't change it 😈
 
Sanskrit in origin actually :D
Where did they get it from? Anyway, who cares. You see someone reading a book written vertically, turning pages from back to front (or so it looks) and it contains at least 3 different symbol types in as many directions - well, what is normal anyway?
 
Please compare these two situations:

---> Exec is the kernel of AmigaOS. It is a 13 KB multitasking microkernel
---> https://www.crn.com/news/applications-os/220100662/torvalds-calls-linux-kernel-huge-and-bloated
---> https://news.ycombinator.com/item?id=10813338

and now tell me whether the huge and bloated Linux kernel makes you think that Andy was in some way right when he said that using a single monolithic ("macro") C kernel in the '90s was wrong. He didn't talk specifically about the fact that Linux would become bloated, but he knew, as well as Torvalds and others, that this would happen. But they didn't talk about this, I think, for opposite reasons. The point is always the same: the microkernel architecture solves that problem. And maybe Linus should soon port Linux to a microkernel, because I suspect there isn't an easy, different way to streamline it. For how long can Linux continue to grow before it becomes undebuggable?

Microkernels do solve exactly which important problem? The only problem they do solve is that the kernel by itself really is tiny or can be tiny. But since we do have file systems, hardware drivers and so on this stuff still is around, just in userland. And if you want to compare it in a proper way, you must take these into account, because a micro kernel without these drivers is useless.

Meaning: the package of microkernel plus drivers definitely can also be complex enough and bloated.

Aside from that: Linux can be slimmed down enough that it runs well on embedded systems. Microkernels really do shine where real-time guarantees, low memory use and crash resistance are needed. Stuff like flight systems, nuclear reactor controls or maybe satellites.
 
Microkernels do solve exactly which important problem? The only problem they do solve is that the kernel by itself really is tiny or can be tiny. But since we do have file systems, hardware drivers and so on this stuff still is around, just in userland. And if you want to compare it in a proper way, you must take these into account, because a micro kernel without these drivers is useless.

Meaning: the package of microkernel plus drivers definitely can also be complex enough and bloated.

Aside from that: Linux can be slimmed down enough that it runs well on embedded systems. Microkernels really do shine where real-time guarantees, low memory use and crash resistance are needed. Stuff like flight systems, nuclear reactor controls or maybe satellites.

---> Microkernels do solve exactly which important problem? The only problem they do solve is that the kernel by itself really is tiny or can be tiny.

Exactly the problem that Linus should fix, and that would not have existed at all if he had opted for a microkernel? I read that the Linux kernel is growing very fast. Now it is around 20 MB... compared to the few KB of a microkernel, that is a lot... I don't know, man... but I know that if something becomes too complicated to manage, various serious problems will come, and past a certain point they will no longer be fixable (I'm talking in a general way, trying to use common sense; I'm not an OS architect).
 
The criticisms of the Linux kernel are even worse:


but no one has the courage to admit that Andy Tanenbaum, from a strict technical point of view (and that's what really counts), was basically right in the '90s, and still today no one blames Linus for the relatively bad architectural choice he made back then. How much time do you need before you give Andy the right amount of authority and take some of it away from Linus?

I quote the words verbatim:

In an interview with German newspaper Zeit Online in November 2011, Linus Torvalds stated that Linux has become "too complex" and he was concerned that developers would not be able to find their way through the software anymore. He complained that even subsystems have become very complex and he told the publication that he is "afraid of the day" when there will be an error that "cannot be evaluated anymore."

Andrew Morton, one of the Linux kernel's lead developers, explains that many bugs identified in Linux are never fixed:


Q: Is it your opinion that the quality of the kernel is in decline? Most developers seem to be pretty sanguine about the overall quality problem. Assuming there's a difference of opinion here, where do you think it comes from? How can we resolve it?

A: I used to think [code quality] was in decline, and I think that I might think that it still is. I see so many regressions which we never fix.

Theo de Raadt, founder of OpenBSD, compares the OpenBSD development process to Linux's:


"Linux has never been about quality. There are so many parts of the system that are just these cheap little hacks, and it happens to run.” As for Linus Torvalds, who created Linux and oversees development, De Raadt says, “I don’t know what [Linus's] focus is at all anymore, but it isn’t quality.”

Do you really think this situation would have arisen anyway if he had chosen to create a microkernel OS? Just try to be honest, guys.
 
Funny that you think you know better than Linus and the FreeBSD devs…
If everyone you see is a ghost rider, maybe you are the ghost rider?
 
Funny that you think you know better than Linus and the FreeBSD devs…
If everyone you see is a ghost rider, maybe you are the ghost rider?

Wrong. I don't know better than you. I'm only quoting their words. And I'm trying to understand what can be true, what is exaggerated and what isn't. For sure I have my own ideas. But the ideas are there to be refuted.
 
That's what you think. At this time I'm collecting replies. Do you expect me to accept an answer so easily? I need to collect more replies to build a consolidated and realistic theory in my mind.
 
---> Microkernels do solve exactly which important problem? The only problem they do solve is that the kernel by itself really is tiny or can be tiny.

Exactly the problem that Linus should fix, and that would not have existed at all if he had opted for a microkernel? I read that the Linux kernel is growing very fast. Now it is around 20 MB... compared to the few KB of a microkernel, that is a lot... I don't know, man... but I know that if something becomes too complicated to manage, various serious problems will come, and past a certain point they will no longer be fixable (I'm talking in a general way, trying to use common sense; I'm not an OS architect).

And why should Linus Torvalds fix that "problem", in your opinion? Linux is doing fine: it's the top OS nowadays for the top 500 supercomputers with a market share of 98.X%-ish or so there, Android is everywhere, and it rules the server landscape.

You see problems where there are none. And you see major advantages where there are barely any.
 
At this time I'm collecting replies.
And all these replies tell you exactly the same thing: No, Linux doesn't need any "fixing" (neither does the FreeBSD kernel). Still you insist otherwise. What's the point of this? If you want to make a valid point, you have to learn the theory first.

The way to make software robust, reliable, maintainable and changeable is to give it a sane structure. That's true for every piece of software, including an OS kernel. There are a lot of design principles out there giving guidance on how to do it; a well-known collection, for example, is SOLID. Today's "monolithic" kernels aren't the "big balls of mud" they sometimes were many decades ago; they are "modular monoliths", and work is often going on to fix inner design issues (like eliminating some global locks in both the Linux and FreeBSD kernels).

The microkernel architecture enforces a structure with clear responsibilities and boundaries, therefore it is in theory the superior architecture. In practice, what a microkernel architecture could give you today is reducing the impact of bugs: Anything not affecting the microkernel itself won't bring down the whole system. That's of course only true if hardware features are used to enforce boundaries at runtime (which, again, wasn't done in AmigaOS, partially for lacking hardware support) and the affected kernel service can just "respawn" without further issues. Given today's monolithic kernels have inner structure, it just makes no sense at all to attempt a full rewrite (which would be necessary) to achieve just this theoretical advantage.
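To make the "respawn" point concrete, here's a minimal POSIX sketch of what a microkernel's service manager conceptually does for its userland drivers (./fs_server is a made-up example binary; real implementations, like Minix's reincarnation server, are far more involved):
Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* Start a service; whenever it dies, start it again. A crashing
     * driver then doesn't take the whole system down with it. */
    int main(void) {
        for (;;) {
            pid_t pid = fork();
            if (pid < 0) { perror("fork"); exit(1); }
            if (pid == 0) {               /* child: become the service */
                execl("./fs_server", "fs_server", (char *)NULL);
                _exit(127);               /* exec failed */
            }
            int status;
            waitpid(pid, &status, 0);     /* block until it exits/crashes */
            fprintf(stderr, "service died (status %d), respawning\n", status);
            sleep(1);                     /* don't spin on a crash loop */
        }
    }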

BTW, providing a "unixy" kernel in microkernel architecture is exactly what GNU Hurd has been attempting for almost 34 years now...
 