Humble hobby OS project

A cool project. I do like seeing OS research being done because it is very much becoming a lost skill.

After exploring alternatives like Zig and Rust, I concluded that C remains the most practical, predictable, and hardware-transparent language for low-level system development on 32-bit Intel machines.
In contrast to the hard time I give Rust when it is spammed on top of layers of C, I am quite interested in 100% pure Rust operating systems. Interestingly, the name of your R4R is quite unfortunate, because it has nothing to do with Rust as a language!

  • Designed to run on real hardware (i386/i486) as well as emulators like Bochs or VirtualBox.
  • Note: QEMU is currently unsuitable for testing this build due to a known issue with legacy i486 protected-mode task switching. See QEMU Bug 2024806 – “Protected mode LJMP via TSS/LDT fails with pc=nil.”
This is very interesting. I always assumed that QEMU provided more flexible x86 emulation than VirtualBox, but perhaps not. Are there plans to support QEMU in future, or does QEMU itself need to be changed?

R4R attempts to bring all four rings into play in a coordinated and observable way.
This is also interesting. So what is your ultimate goal for this? I.e. what features of R4R do you plan to provide to demonstrate this?
 
In contrast to the hard time I give Rust when it is spammed on top of layers of C, I am quite interested in 100% pure Rust operating systems. Interestingly, the name of your R4R is quite unfortunate, because it has nothing to do with Rust as a language!
The original desire and inspiration was to write this in Rust, so the inspiration for R4R originally came from there. My capabilities are not that great yet, so I decided to go with C, and I'm still not happy with LLVM support for x86_32, because it leaves a lot of gaps in the code... I started the project with the burning desire to get Rust people interested, if they find this interesting.

Even if someone from the Rust community is interested, my code will remain 100% C. If someone else wants to rewrite and improve it in Rust, good luck to them! But I will not mix code between the two. In the end, a separate Rust repo may be opened...
 
This is also interesting. So what is your ultimate goal for this? I.e. what features of R4R do you plan to provide to demonstrate this?
The ultimate goal of R4R is to demonstrate, in a tangible and observable way, how all four Intel x86 protection rings (0–3) can coexist and interact within a single operating system — each with its own stack, TSS, LDT, and clearly defined domain.

While most modern systems only use Ring 0 and Ring 3 for simplicity, R4R aims to revive Intel’s original privilege model and show how segmentation, paging, and call gates can be combined to create real hardware-enforced isolation — not just software-level separation.

The main features that demonstrate this are:

• Independent initialization of all four rings, each running its own “mili-kernel” (Core, Devs, Libs, Users).
• Hardware-managed transitions using call gates and task gates (no software traps).
• Per-ring TSS and LDT, showing that every ring can maintain its own execution context and memory space.
• System calls implemented purely through call gates, demonstrating cross-ring communication without stack argument passing (see the sketch right after this list).
• A static paging layout combined with segment-based isolation, proving that segmentation still works perfectly when properly configured.
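
For reference, here is a minimal hand-packed sketch of a 32-bit call gate descriptor, following the Intel SDM layout. This is illustrative only, not R4R's actual code; the function name and parameters are my own.

C:
#include <stdint.h>

/* Pack a 32-bit call gate descriptor (type 0xC).
 * selector:    target (more privileged) code segment selector
 * offset:      entry point inside that segment
 * dpl:         least-privileged ring allowed to call through the gate
 * param_count: number of 32-bit stack words the CPU copies (0 in R4R's model) */
static uint64_t make_call_gate(uint16_t selector, uint32_t offset,
                               uint8_t dpl, uint8_t param_count)
{
    uint64_t d = 0;
    d |= (uint64_t)(offset & 0xFFFFu);           /* offset 15:0                */
    d |= (uint64_t)selector << 16;               /* target code selector       */
    d |= (uint64_t)(param_count & 0x1Fu) << 32;  /* parameter count            */
    d |= (uint64_t)0xCu << 40;                   /* type = 32-bit call gate    */
    d |= (uint64_t)(dpl & 0x3u) << 45;           /* descriptor privilege level */
    d |= (uint64_t)1u << 47;                     /* present bit                */
    d |= (uint64_t)(offset >> 16) << 48;         /* offset 31:16               */
    return d;
}

A far call through such a gate makes the CPU load a new stack for the target privilege level from the TSS, which is the hardware-enforced transition described above.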

In short, R4R’s goal is educational and architectural:
to prove that the x86 protection model — often dismissed as obsolete — is in fact coherent, elegant, and fully functional when used as intended.

Future versions (0.01+) will extend this with a ring-aware scheduler, interrupt handling, and message-passing between rings to illustrate real multi-domain operation.
 
Nice.
A few years ago I started my own. But then RL (kids, ...) took precedence...

Out of curiosity: why did you put an extra nop in the handler?
When I started my project I hated AT&T syntax. Ended up rewriting everything in AT&T. :) I still tend to disassemble with the Intel one (objdump/gdb/ida...), but less so.
 
Okay, I've read all the documentation, and at the end I see your OS running on the COMPAQ CONTURA; it's way cool. I'm working on a project like yours, and I have to thank GPT for helping me too.

Your code is clean and clear; there are still some magic numbers, though. Don't you use static code analysis (clang-tidy or cppcheck)? I'm disturbed by your typedefs for the regular types; why do you do that?


C:
// Signed types
typedef int8_t   i8;   // 8-bit  signed integer
typedef int16_t  i16;  // 16-bit signed integer
typedef int32_t  i32;  // 32-bit signed integer
typedef int64_t  i64;  // 64-bit signed integer

// Unsigned types
typedef uint8_t  u8;   // 8-bit  unsigned integer
typedef uint16_t u16;  // 16-bit unsigned integer
typedef uint32_t u32;  // 32-bit unsigned integer
typedef uint64_t u64;  // 64-bit unsigned integer

// Pointer-sized unsigned integer (32-bit architecture assumed)
typedef u32                uptr;
 
Nice.
A few years ago I started my own. But then RL (kids, ...) took precedence...

Out of curiosity: why did you put an extra nop in the handler?
When I started my project I hated AT&T syntax. Ended up rewriting everything in AT&T. :) I still tend to disassemble with the Intel one (objdump/gdb/ida...), but less so.
You caught that well: it's a completely unnecessary nop there; I missed it when I was testing something in the BOCHS debugger...
Deleted!
 
Okay, I've read all the documentation, and at the end I see your OS running on the COMPAQ CONTURA; it's way cool. I'm working on a project like yours, and I have to thank GPT for helping me too.

Your code is clean and clear; there are still some magic numbers, though. Don't you use static code analysis (clang-tidy or cppcheck)? I'm disturbed by your typedefs for the regular types; why do you do that?


C:
// Signed types
typedef int8_t   i8;   // 8-bit  signed integer
typedef int16_t  i16;  // 16-bit signed integer
typedef int32_t  i32;  // 32-bit signed integer
typedef int64_t  i64;  // 64-bit signed integer

// Unsigned types
typedef uint8_t  u8;   // 8-bit  unsigned integer
typedef uint16_t u16;  // 16-bit unsigned integer
typedef uint32_t u32;  // 32-bit unsigned integer
typedef uint64_t u64;  // 64-bit unsigned integer

// Pointer-sized unsigned integer (32-bit architecture assumed)
typedef u32                uptr;
I completely understand your frustration... I just like it this way: this is a completely hardware-dependent system, and I want shorter but still precise type names, because I got used to that when I was programming embedded hardware... I've answered you honestly. I haven't used the cppcheck analyzer before.
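
Just to illustrate why I like the short names (this struct is only an example, not taken from R4R): hardware-level structures stay compact and line up nicely:

C:
/* A packed 8-byte GDT/LDT segment descriptor, written with the short aliases. */
typedef struct __attribute__((packed)) {
    u16 limit_low;    /* limit 15:0          */
    u16 base_low;     /* base  15:0          */
    u8  base_mid;     /* base  23:16         */
    u8  access;       /* type, S, DPL, P     */
    u8  flags_limit;  /* limit 19:16 + flags */
    u8  base_high;    /* base  31:24         */
} gdt_desc;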
 
You caught that well: it's a completely unnecessary nop there; I missed it when I was testing something in the BOCHS debugger...
Deleted!
Been there, done that. :)
Seeing your code makes me homesick for my project. Maybe, someday, I'll resume it. Not that I have any (real) ambitions to do anything with it... but it's fun.
 
In src/kernels/devs/devs_task.c, line 133, what is this double cast?

C:
base  = (u32)(u32)ldt_devs;

Doesn't the compiler complain about it?

What are you planning to work on in the coming months?
 
In src/kernels/devs/devs_task.c, line 133, what is this double cast?

C:
base  = (u32)(u32)ldt_devs;

Doesn't the compiler complain about it?
Nope. You can do real work with a series of casts; it is not just a "pretend" construct.
Example:
uint32_t i = get_some_number();
i = (uint32_t) (char) i;

Now i holds the sign-extended 8-bit value it had before, losing the upper bits.
i = (uint32_t) (uint8_t) i; is equivalent to "i &= 0x0ff;"
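
To make the difference concrete, here is a tiny standalone example (the starting value is just something I picked for illustration):

C:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t i = 0x12345680u;

    /* (char) may sign-extend: 0x80 read as a signed char is -128,
     * which widens back out to 0xFFFFFF80. */
    uint32_t a = (uint32_t)(char)i;

    /* (uint8_t) never sign-extends: this is exactly i & 0xFF. */
    uint32_t b = (uint32_t)(uint8_t)i;

    printf("a = 0x%08X, b = 0x%08X\n", (unsigned)a, (unsigned)b);
    return 0;
}

On a typical target where plain char is signed, this prints a = 0xFFFFFF80, b = 0x00000080.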
 
In src/kernels/devs/devs_task.c, line 133, what is this double cast?

C:
base  = (u32)(u32)ldt_devs;

Doesn't the compiler complain about it?

What are you planning to work on in the coming months?
As I mentioned before: firstly, I'm an amateur; secondly, GPT-AI suggested it to me; and thirdly, I understood it as necessary because the address of the 64-bit LDT descriptor array is converted to a 32-bit value (later in the code, a pointer to a 32-bit variable), which is needed to fill the LDT descriptor with an offset or base...?
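
If it helps, here is roughly what that base value ends up being used for. This is only an illustrative sketch, not the actual R4R code; gdt and LDT_SLOT are made-up names, and u32/u64 are the project typedefs quoted earlier in the thread.

C:
extern u64 ldt_devs[];   /* the Devs ring LDT: an array of 8-byte descriptors */

/* Scatter a 32-bit linear base address into the base fields of an
 * 8-byte segment descriptor (bits 16-31, 32-39 and 56-63). */
static void set_descriptor_base(u64 *desc, u32 base)
{
    *desc &= ~0xFF0000FFFFFF0000ULL;                 /* clear the three base fields */
    *desc |= (u64)(base & 0xFFFFu)       << 16;      /* base 15:0  */
    *desc |= (u64)((base >> 16) & 0xFFu) << 32;      /* base 23:16 */
    *desc |= (u64)((base >> 24) & 0xFFu) << 56;      /* base 31:24 */
}

/* The usage is something like:
 *     u32 base = (u32)ldt_devs;              -- take the array's address as a number
 *     set_descriptor_base(&gdt[LDT_SLOT], base);
 * Whether the second (u32) cast adds anything is exactly what I'm unsure about. */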
 
Been there, done that. :)
Seeing your code makes me homesick for my project. Maybe, someday, I'll resume it. Not that I have any (real) ambitions to do anything with it... but it's fun.
Wow, man! I feel so good and touched by your words, especially if I have fueled someone's passion to keep cultivating their own project and ideas. If you find anything interesting in this project, join me; I would be happy to collaborate on it to the best of my abilities.
 
What are you planning to work on in the coming months?
I’ve tried to explain everything I’ve learned so far in the documentation — as clearly as I can. As I go further, I often understand things retroactively, because, as you can probably tell, I’ve taken on something that’s far bigger than me.

If reading those relatively short docs feels tedious (which I understand), I’ll try to summarize my plans again — though they tend to evolve day by day for the reasons above.

My long-term goal is to bring this current version 0.00 concept to the Intel x86_64 architecture, preserving the same 4-ring model.
The short-term goal (for version 0.01) is what I’ve already mentioned in the documentation — something closer to my first attempt, 4RING_OS (linked at the top of the README).

That version will demonstrate keyboard interaction and how input events propagate across rings, each performing its part of the system’s workflow.

I also plan to implement a basic autoconfig stage at startup to enable dynamic memory management — so memory is no longer statically fixed at 8 MB (which was necessary for demonstration on real old hardware).

😅 Please have some mercy — English is not my native language — but I hope the idea still comes across clearly!
 
And now, a bold question for you — my dear friends here on the forum:

Do you think that the idea behind my little “project” deserves any attention on a platform where it could reach people who are genuinely interested in such things?

My greatest passion, first as an electronics enthusiast and then as a self-taught programmer, is simply to show how the hardware protection rings actually work — even if that concept might seem outdated today.
I truly believe it’s important, at least from a historical and educational point of view, that a demonstration like this exists somewhere, as a small proof of what these mechanisms were capable of.

To be honest, I don’t feel ready to handle deep technical questions from professionals — at least not yet. Maybe this project still needs to mature a bit more, to show more stable functionality… and perhaps I need to mature along with it.
 
Don't worry about 'professionals' or 'being mature', none of that matters. Just dive right in. You've already proved you know what you're talking about. :)
There's a lot of people who like electronics on here too (and all other kinds of engineering).

Remember... when you're struggling to figure some horrible problem out, or to debug something you don't understand... that's when you're learning and growing. And that applies to all of us. None of us knows it all (well, maybe Crivens... 😁 ), and everyone here was young once and doing their first projects.
 
Nope. You can do real work with a series of casts; it is not just a "pretend" construct.
Example:
uint32_t i = get_some_number();
i = (uint32_t) (char) i;

Now i holds the sign-extended 8-bit value it had before, losing the upper bits.
i = (uint32_t) (uint8_t) i; is equivalent to "i &= 0x0ff;"
Ok, I understand: with multiple explicit casts you can change data. But I still don't get what a double cast to the same type is intended to do?

In src/kernels/devs/devs_task.c

C:
uint64_t ldt_devs[LDT_ENTRIES] = {0};
uint32_t base;

base  = (uint32_t)(uint32_t)ldt_devs;

With only one (uint32_t) cast, does the compiler complain about the conversion from pointer to integer?
 
Do you think that the idea behind my little “project” deserves any attention on a platform where it could reach people who are genuinely interested in such things?

Which platform? I'm interested in your project, and I'll be happy to read about your journey into x86 protected mode here or on GitHub.
 