The Case for Rust (in the base system)

It is almost certainly a culture thing. A decent developer will be responsible with dependencies.

Non-C languages will generally need more dependencies for bindings where e.g. SWIG/bindgen is ineffective. Much of crates.io is bindings.

But if you look around at almost any Rust project, it pulls in a lot of needless shite. Look at gstat-rs (from your first post), for example.

It is a little ridiculous.

I also note that a lot of those crates have version numbers smaller than 1.0. I'm not sure how stable a system you can build on such a foundation.
 
I also note that a lot of those crates have version numbers smaller than 1.0. I'm not sure how stable a system you can build on such a foundation.
Exactly. If it were me (and for gstat, a simple tool), I would have only one dependency, on the mature platform library libgeom(3) (just like gstat), and avoid the rest of the immature mess.

But Rust has the NPM culture of "dependencies first". I think that is its number one problem. It isn't avoiding "reinventing the wheel". It is "smearing mud on the car".

If bindgen cannot directly integrate libgeom with Rust, I would use C or C++ which can.
 
Version numbers don't mean anything. API breakage is directly correlated to the update rate, which is a very project-dependent thing.
The libusb Rust binding is also much more fragile than a typical C program using libusb.

This is because the nature of bindings is that they need to wrap the entire library and provide full coverage of the API, whereas typical usage of a library only leverages a small proportion of its functionality. So if a minor part of the API changes, Rust bindings (which are generally rotten anyway) break.

Bindgen *needs* to eliminate the need for these crates.io bindings, or Rust is a non-starter in any domain other than gluing together a bunch of crap like Python.
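
To make that concrete: if a tool only needs a couple of libgeom(3) calls, a hand-written binding is a few lines and zero crates.io dependencies. A minimal sketch (function signatures taken from the libgeom(3) man page; the zero-on-success check is my assumption, verify against the man page):

    // Hand-rolled FFI against FreeBSD's libgeom(3): bind only the two
    // calls this program actually uses, instead of pulling a full
    // wrapper crate from crates.io. The #[link] attribute links libgeom.
    use std::ffi::c_int;

    #[link(name = "geom")]
    extern "C" {
        // int geom_stats_open(void); -- assumed to return 0 on success
        fn geom_stats_open() -> c_int;
        // void geom_stats_close(void);
        fn geom_stats_close();
    }

    fn main() {
        // unsafe: the compiler cannot verify C signatures or invariants
        unsafe {
            if geom_stats_open() != 0 {
                eprintln!("geom_stats_open() failed");
                std::process::exit(1);
            }
            println!("GEOM statistics opened");
            geom_stats_close();
        }
    }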

But since we are on page 23 of more "discussions on Rust", it has all been said before. It's a known problem, and it can't be solved by talking. The Rust developers have a lot of work to do.
 
Library dependencies, especially versioned dependencies like those in Rust/Python/Ruby, are a big reason why upgrading software is such a pain. In the case of ports, www/qt5-webengine used to rely on Python 2.7 for a pretty long time (way past Python's 2.x EOL date), because rewriting the original project to accommodate up-to-date Python (3.11 these days) was just a LOT of work.

I'd say that's an argument against using such languages for FreeBSD's base/kernel. If you have problems with C's memory leaks and race conditions, just use more C to check boundaries reported by the hardware driver and/or the C compiler in use. LLVM's C frontend is actually stricter than GCC's.
 
There is nothing interesting about Zig. Might as well rewrite things in Pascal.

Dunno about Zig, but there are worse things than (modern, i.e. Free) Pascal.

My last three serious programs are self-contained Pascal; all libraries incorporated. Big, yeah, but really fast.

Not expecting a serious response; nowadays folks have 'ancient yet formally correct teaching language' imprinted upon their brains in programming kindergarten ;-}
 
Library dependencies, especially versioned dependencies like those in Rust/Python/Ruby, are a big reason why upgrading software is such a pain. In the case of ports, www/qt5-webengine used to rely on Python 2.7 for a pretty long time (way past Python's 2.x EOL date), because rewriting the original project to accommodate up-to-date Python (3.11 these days) was just a LOT of work.

Such dealings with outdated dependencies are an even bigger pain in dynamically typed languages like Python and Ruby. You basically cannot know whether you have adjusted all the places that need adjusting.
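
A tiny Rust sketch of the contrast (my example, not from the thread): when a dependency changes a type, a statically typed compiler enumerates every call site you missed, whereas in Python/Ruby you only find out at run time.

    // Hypothetical: a library's version() used to return a String and now
    // returns a structured type. Every un-adjusted call site becomes a
    // compile-time error instead of a latent runtime surprise.
    struct Version {
        major: u32,
        minor: u32,
    }

    fn version() -> Version {
        Version { major: 3, minor: 11 }
    }

    fn main() {
        // let s: String = version(); // old code: now a hard error (E0308)
        let v = version();
        println!("{}.{}", v.major, v.minor);
    }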
 
[...] used to rely on Python 2.7 for a pretty long time (way past Python's 2.x EOL date), because rewriting the original project to accommodate up-to-date Python (3.11 these days) was just a LOT of work
I'd think and expect that language designers, ISO committees and the like, have learned from this "major-breaking-update" history. Either plan for a gradual evolution, or be better prepared to support a "revolutionary" upgrade with (a lot) better transitioning tools for programs and in particular libraries.
 
I'd think and expect that language designers, ISO committees and the like, have learned from this "major-breaking-update" history. Either plan for a gradual evolution, or be better prepared to support a "revolutionary" upgrade with (a lot) better transitioning tools for programs and in particular libraries.

How? Take the major Python 3 change: the semantics of string literals. How do you make such a change easier to deal with when you make it?
 
Not expecting a serious response; nowadays folks have 'ancient yet formally correct teaching language' imprinted upon their brains in programming kindergarten ;-}
Oh I don't know... I also use FPC for projects which have a GUI and need to be portable. The last one builds without changes on Windows, Linux, FreeBSD and Haiku. It's a good tool for the job. As for the kindergarten - there are two kinds of idiots: those who say "this is old and therefore good" and those who say "this is new and therefore better".

For FPC you could say it follows the Unix way - do one thing and do it well.
 
How? Take the major Python 3 change: the semantics of string literals. How do you make such a change easier to deal with when you make it?
You can't, not always. Sometimes you have to deal with the awkwardness, inconsistencies and ambiguities of the past (language) that have to be cleaned up in the new (version of the) language. (Semantic) ambiguities are one aspect of why the transition from Python 2 to Python 3 proved to be much more difficult than anticipated. The emergence of Unicode certainly made its impact felt everywhere in the IT sector: string handling in programming languages has most likely been one of the most severely affected areas, even in this day and age.

IMHO the only thing you can do at that point is communicate such issues clearly and know thy customers. The hard work in the prevalent code space remains. Give relief on other issues whenever and wherever possible. Better language design would surely have helped; that is easier said than done when it has to happen at the language's inception. Language design is hard, and so is changing it on the run.

As to Unicode: I was there when XML was born, at the forefront of the transition from SGML to XML. XML brought many other things to its novel domain, not all of them equally successful. SGML and XML aren't programming languages, although there are programming languages associated with both. In the end, IMHO, the XML spec, as a markup language standard, hit all the right spots where SGML had proved unable to keep up with new demands as technology advanced. Note that there wouldn't be any HTML without SGML ... and, of course, without Tim Berners-Lee.

I have great respect for Guido van Rossum. I fall short of him in every relevant aspect of language design, so do take my remarks in that context. Great to watch: Guido van Rossum: BDFL Python 3 retrospective. One transition aspect that Guido van Rossum mentions is the upfront decision not to offer a transition path for the co-existence of Python 2 and Python 3 code.
 
Demo of Rust in the base system:

"
From: Alan Somers <asomers_at_freebsd.org>
Date: Sun, 04 Aug 2024 17:55:26 UTC
Due to all of the recent discussion of using Rust for code in the
FreeBSD base, I've put together a demo of what it might look like. It
demonstrates:

* Interspersing Rust crates through the tree (usr.bin/nfs-exporter,
cddl/usr.bin/ztop, etc) rather than in some special directory.
* Build integration for all Rust crates. You can build them all with
a single "cargo build" command from the top level, and test them all
with a single "cargo test".
* Wholly new programs written from scratch in Rust (ztop plus three
Prometheus exporters)
* Old programs rewritten in Rust with substantial new features (gstat and fsx)
* Libs (freebsd-libgeom and freebsd-libgeom-sys)
* Commits that reconcile the dependencies of multiple crates, so as to
minimize duplicate dependency versions (5764fb383d4 and 1edf2e19e50)
* Vendoring all dependencies, direct and transitive, to ensure
internet-independent and reproducible builds (37ef9ffb6a6). This
process is automated and requires almost no manual effort. Note:
don't panic if you look in the "vendor" directory and see a bunch of
crates with "windows" in the name. They're all just empty stubs.
* All Rust object files get stored in the "target" directory rather
than /usr/obj. Today, if you want them to be stored in /usr/obj the
best way is to use a symlink, though there's WIP to add
MAKEOBJDIRPREFIX-like functionality to Cargo.

It does NOT demonstrate:

* Integrating the Rust build system with Make. Warner has some ideas
about how to do that.
* Pulling rustc into contrib. This tree requires an external Rust toolchain.
* Building any cdylib libraries with Rust. That's useful if you want
a C program to call a Rust library, but I don't have any good examples
for it.
* kernel modules. As already discussed, those are hard.
* Any Rust crates that involve private APIs, like CTL stuff. Those
are among the most tantalizing programs to move from ports to base,
but nobody's written any yet, because Rust-in-base doesn't exist yet.

Also, I want to address a question that's popped up a few times:
backwards-compatibility. There is a fear that Rust code needs to be
updated for each new toolchain release. But that's not true. It
hasn't been true for most crates since Rust 1.0 was released about a
decade ago. A few exotic crates required "nightly" features after
that, but they are very few in number these days, and none of them are
included in this branch's vendored sources. What Rust _does_ do is it
releases a new toolchain about every six weeks. Each new release
typically includes a few new features in the standard library and they
often add more compiler warnings, too. Sometimes they include wholly
new compiler features, but they are _always_ backwards compatible with
existing syntax. Roughly every three years, Rust publishes a new
"Edition". Rust Editions are very similar to C++ versions. i.e. Rust
2018 is to Rust 2021 as C++14 is to C++17. New editions can include
backwards-incompatible syntax changes, but each crate always knows
which Edition it uses. Crates of different Editions can be linked
together in the same build. This branch, for example, contains crates
using Editions 2015, 2018, and 2021.

If you have any questions about what Rust in Base would look like,
please examine this branch. And if you've never used Rust before, I
highly encourage you to try it. It really is the best new
systems-programming language in decades. IMHO, it's the only one that's
a compelling replacement for C++ in all new applications, and C in
most.

"
 
I don't think it is a secret that I am skeptical of Rust. However, *some* of the timeline looks pretty responsible. This part in particular, though, is misleading.
Rust Editions are very similar to C++ versions. i.e. Rust 2018 is to Rust 2021 as C++14 is to C++17.
No, they aren't. Everyone knows that Rust isn't standardized in any way, shape or form.
New editions can include backwards-incompatible syntax changes
Exactly. This is damaging.
but each crate always knows which Edition it uses.
Just like NPM, CPAN and PIP. They are a mess. This will be a mess.


Basically I would like to see the following happen for at least a few years before Rust is pulled into FreeBSD.
* Wholly new programs written from scratch in Rust (ztop plus three Prometheus exporters)
I am fairly sure that this will give a good enough indication of whether Rust is going to be feasible. It will also mean that the substantial binding layers needed can be explored / developed / maintained (and possibly discarded) without introducing mess into FreeBSD. Once a new language is added to an OS, it is very difficult to remove it again. They should take their time.

* Old programs rewritten in Rust with substantial new features (gstat and fsx)
* Libs (freebsd-libgeom and freebsd-libgeom-sys)
These should be done in C first.
 
I don't think it is a secret that I am skeptical of Rust. [...]
I've posted an (at least partially) overlapping suspicion to the freebsd-hackers ML.
 
Just like NPM, CPAN and PIP. They are a mess. This will be a mess.
Exactly. It already is a mess.

Note: don't panic if you look in the "vendor" directory and see a bunch of crates with "windows" in the name. They're all just empty stubs.
...
* Integrating the Rust build system with Make. Warner has some ideas about how to do that.
So you wind up with two incompatible build systems. The hard part of adopting either in favor of the other is TBD, of course. And the thing about "windows stubs" makes me think Cargo has been kludged into base with lots of loose ends.

Edit: It turns out PHK has already raised these points in the hackers ML. He explains the problem much better than I do.
 
It's a good approach. They realize that Rust is never going to see the uptake required to replace C or C++. The only way for it to remain relevant is basically as a glorified binding-generation system.

Whether the AI can make the code "Rust-centric" rather than just dumping it all in an unsafe{} block will be interesting. If it can, that refactoring effort could just go towards cleaning up the C and C++ code and making it safe, rendering Rust completely redundant in the first place.

It will basically be a static analyzer on steroids for C and C++. A great idea.
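
To illustrate the distinction with a toy example (mine, not from any actual translator): a mechanical C-to-Rust conversion preserves the C hazards inside an unsafe block, while a "Rust-centric" refactor moves the invariant into the type system.

    // Mechanical translation: C semantics preserved, the compiler checks nothing.
    unsafe fn sum_raw(p: *const u32, len: usize) -> u32 {
        let mut s: u32 = 0;
        for i in 0..len {
            s = s.wrapping_add(*p.add(i)); // no bounds check: same hazards as the C
        }
        s
    }

    // "Rust-centric" refactor: the length travels with the slice, so the
    // whole hazard class disappears and no unsafe is needed.
    fn sum_safe(xs: &[u32]) -> u32 {
        xs.iter().fold(0u32, |s, &x| s.wrapping_add(x))
    }

    fn main() {
        let xs = [1u32, 2, 3];
        let a = unsafe { sum_raw(xs.as_ptr(), xs.len()) }; // caller upholds the contract
        let b = sum_safe(&xs);
        assert_eq!(a, b);
    }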
 
Whether the AI can make the code "Rust-centric" rather than just dumping it all in an unsafe{} block will be interesting. If it can, that refactoring effort could just go towards cleaning up the C and C++ code and making it safe, rendering Rust completely redundant in the first place.
I don't want any "AI"-generated crap (i.e. nothing "intelligent", only a bunch of text-bashing algorithms...) running on my servers...

OpenBSD already strictly forbids any use of AI-generated code, and IMHO that's the correct and only way to deal with the hype around some glorified chatbots...
 
If only it involved some intelligence in the first place... But I concur with cracauer@: fixing that code will be a nightmare.
 