In contrast to the hard time I give Rust when it is spammed on top of layers of C, I am quite interested in 100% pure Rust operating systems. Interestingly, the name of your R4R is quite unfortunate, because it has nothing to do with Rust as a language!

> After exploring alternatives like Zig and Rust, I concluded that C remains the most practical, predictable, and hardware-transparent language for low-level system development on 32-bit Intel machines.
This is very interesting. I always assumed that QEMU provided more flexible x86 emulation than VirtualBox, but perhaps not. Are there plans to support QEMU in the future, or does QEMU need to be changed?
- Designed to run on real hardware (i386/i486) as well as emulators like Bochs or VirtualBox.
- Note: QEMU is currently unsuitable for testing this build due to a known issue with legacy i486 protected-mode task switching. See QEMU Bug 2024806 – “Protected mode LJMP via TSS/LDT fails with pc=nil.”
This is also interesting. So what is your ultimate goal for this? I.e., what features of R4R do you plan to provide to demonstrate this?

> R4R attempts to bring all four rings into play in a coordinated and observable way.
Your link isn't one.

> According to the question I asked here:
Updated!

> Your link isn't one.
The original desire and inspiration was to write this in Rust, so the inspiration for R4R originally came from there. My capabilities are not that great right now, so I decided to go with C, and I'm still not happy with LLVM support for x86_32 because it leaves a lot of gaps in the code... I started the project with a burning desire to get Rust people interested, if they find this interesting.

> In contrast to the hard time I give Rust when it is spammed on top of layers of C, I am quite interested in 100% pure Rust operating systems. Interestingly, the name of your R4R is quite unfortunate, because it has nothing to do with Rust as a language!
The ultimate goal of R4R is to demonstrate, in a tangible and observable way, how all four Intel x86 protection rings (0–3) can coexist and interact within a single operating system, each with its own stack, TSS, LDT, and clearly defined domain.

> This is also interesting. So what is your ultimate goal for this? I.e., what features of R4R do you plan to provide to demonstrate this?
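For anyone following along who has not touched segmentation in a while, here is a minimal, illustrative sketch (not taken from R4R) of what actually places a segment in one of those four rings: the two DPL bits in a descriptor's access byte.

#include <stdint.h>

/* Illustrative only, not the project's code: build the access byte of a
 * code-segment descriptor for a given ring (DPL). */
#define SEG_PRESENT    0x80u               /* P: segment present           */
#define SEG_NONSYSTEM  0x10u               /* S: code/data, not a gate/TSS */
#define SEG_CODE_READ  0x0Au               /* type: execute/read code      */
#define SEG_DPL(ring)  (((ring) & 3u) << 5)

static inline uint8_t code_access_byte(unsigned ring)
{
    return (uint8_t)(SEG_PRESENT | SEG_DPL(ring) | SEG_NONSYSTEM | SEG_CODE_READ);
}

code_access_byte(0) gives 0x9A (a kernel code segment) and code_access_byte(3) gives 0xFA (a user code segment); the CPU checks these DPL bits against CPL and RPL on every segment load and far transfer.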
// Signed types
typedef int8_t i8; // 8-bit signed integer
typedef int16_t i16; // 16-bit signed integer
typedef int32_t i32; // 32-bit signed integer
typedef int64_t i64; // 64-bit signed integer
// Unsigned types
typedef uint8_t u8; // 8-bit unsigned integer
typedef uint16_t u16; // 16-bit unsigned integer
typedef uint32_t u32; // 32-bit unsigned integer
typedef uint64_t u64; // 64-bit unsigned integer
// Pointer-sized unsigned integer (32-bit architecture assumed)
typedef u32 uptr;
You caught that well; it's a completely unnecessary nop here. I missed that when I was testing something in the BOCHS debugger...

> Nice.
A few years ago I started my own. But then RL (kids, ...) took precedence...
Out of curiosity: why did you put an extra nop in the handler?
When I started my project I hated AT&T syntax. Ended up rewriting everything with AT&T... I still tend to disassemble with the Intel one (objdump/gdb/ida...) but less so.
I completely understand your frustration... I just like it this way, because this is a completely hardware-dependent system and I want shorter but more precise type descriptions; I liked it when I was programming embedded hardware... I answered you honestly. I haven't used the cppcheck analyzer before.

> Okay, I've read all the documentation; at the end I see your OS on the COMPAQ CONTURA, it's way cool. I'm working on a project like yours and I have to thank GPT for helping me too.
Your code is clean and clear; there are still some magic numbers. Don't you use static code analysis (clang-tidy or cppcheck)? I'm disturbed by your typedefs for regular types; why do you do that?
// Signed types
typedef int8_t i8; // 8-bit signed integer
typedef int16_t i16; // 16-bit signed integer
typedef int32_t i32; // 32-bit signed integer
typedef int64_t i64; // 64-bit signed integer
// Unsigned types
typedef uint8_t u8; // 8-bit unsigned integer
typedef uint16_t u16; // 16-bit unsigned integer
typedef uint32_t u32; // 32-bit unsigned integer
typedef uint64_t u64; // 64-bit unsigned integer
// Pointer-sized unsigned integer (32-bit architecture assumed)
typedef u32 uptr;
Been there, done that.

> You caught that well; it's a completely unnecessary nop here. I missed that when I was testing something in the BOCHS debugger...
Deleted!
Nope. You can do real work with a series of casts; it is not just a "pretend" construct.

> In src/kernels/devs/devs_task.c, line 133, what is this double cast?
> base = (u32)(u32)ldt_devs;
> Doesn't the compiler complain about it?
As I mentioned before: firstly, I'm an amateur; secondly, GPT-AI suggested it to me; thirdly, I understood it as necessary because the address of the 64-bit LDT descriptor array is converted to a 32-bit value (later in the code, a pointer to a 32-bit variable), which is needed to fill the LDT descriptor with an offset or base...?

> In src/kernels/devs/devs_task.c, line 133, what is this double cast?
> base = (u32)(u32)ldt_devs;
> Doesn't the compiler complain about it?
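For context, here is a minimal sketch of what that 32-bit base is ultimately used for (field layout per the Intel SDM; the helper is illustrative and not R4R's actual code): the base value gets split across three fields of the 8-byte descriptor.

#include <stdint.h>

/* Illustrative helper, not the project's code: pack a 32-bit base, a 20-bit
 * limit, an access byte and flags into an 8-byte segment descriptor. */
static uint64_t make_descriptor(uint32_t base, uint32_t limit,
                                uint8_t access, uint8_t flags)
{
    uint64_t d = 0;
    d |= (uint64_t)(limit & 0xFFFFu);              /* limit bits 15:0  */
    d |= (uint64_t)(base & 0xFFFFFFu)      << 16;  /* base  bits 23:0  */
    d |= (uint64_t)access                  << 40;  /* P, DPL, S, type  */
    d |= (uint64_t)((limit >> 16) & 0xFu)  << 48;  /* limit bits 19:16 */
    d |= (uint64_t)(flags & 0xFu)          << 52;  /* G, D/B, L, AVL   */
    d |= (uint64_t)((base >> 24) & 0xFFu)  << 56;  /* base  bits 31:24 */
    return d;
}

That 32-bit base is the value the author describes obtaining from the address of the ldt_devs array.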
What are you planning to work on in the coming months?
Wow man! I feel so good and touched by your words, especially if I have fueled someone's passion to continue to cultivate their project and ideas. If you find anything interesting in this project, join me; I would be happy to collaborate on this to the best of my abilities.

> Been there, done that.
Seeing your code makes me homesick for my project. Maybe, someday, I'll resume it. Not that I really have any ambitions to do anything with it... but it's fun.
I've tried to explain everything I've learned so far in the documentation, as clearly as I can. As I go further, I often understand things retroactively, because, as you can probably tell, I've taken on something that's far bigger than me.

> What are you planning to work on in the coming months?
Ok, I understand: with multiple explicit casts you can change data. But I still don't get what a double cast of the same type is intended to do?

> Nope. You can do real work with a series of casts; it is not just a "pretend" construct.
Example:
uint32_t i = get_some_number();
i = (uint32_t) (char) i;
Now i holds the sign-extended 8-bit value it had before (assuming char is signed), losing the upper bits.
i = (uint32_t) (uint8_t) i; is equivalent to "i &= 0x0ff;"
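For anyone who wants to try the effect above, here is a self-contained version (the constant is made up, and it assumes a platform where plain char is signed):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t i = 0x12345687u;                    /* low byte 0x87 has its top bit set    */
    uint32_t sign_extended = (uint32_t)(char)i;  /* typically 0xFFFFFF87: (char) sign-extends */
    uint32_t masked = (uint32_t)(uint8_t)i;      /* 0x00000087, same as i & 0xFF          */

    printf("%08X %08X\n", (unsigned)sign_extended, (unsigned)masked);
    return 0;
}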
uint64_t ldt_devs[LDT_ENTRIES] = {0};
uint32_t base;
base = (uint32_t)(uint32_t)ldt_devs;
It doesn't complain when it compiles for me. I initially put one cast, but as I said, the AI advised me to have 2...

> With one cast (uint32_t), does the compiler complain about conversion from pointer to integer?
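For what it's worth, here is a minimal sketch of the usual way to write that conversion (names and the LDT_ENTRIES value are illustrative; assumes GCC or Clang targeting i386): the second identical cast changes nothing, and casting the pointer to uintptr_t first avoids the "cast from pointer to integer of different size" warning on targets where pointers are wider than 32 bits.

#include <stdint.h>

#define LDT_ENTRIES 8                  /* illustrative value, not R4R's */

static uint64_t ldt_devs[LDT_ENTRIES];

static uint32_t ldt_base(void)
{
    /* (uint32_t)(uint32_t)ldt_devs and (uint32_t)ldt_devs yield the same
     * value; the outer cast is a no-op. Going through uintptr_t makes the
     * pointer-to-integer step explicit. */
    return (uint32_t)(uintptr_t)ldt_devs;
}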
Do you think that the idea behind my little “project” deserves any attention on a platform where it could reach people who are genuinely interested in such things?
I was thinking of these platforms: Hacker News, Reddit, OSDev, and the like.

> Which platform? I'm interested in your project and I'll be happy to read about your journey in x86 protected mode here or on GitHub.