The Case for Rust (in the base system)

It is almost certainly a culture thing. A decent developer will be responsible with dependencies.

Non-C languages will almost always need more dependencies for bindings where tools like SWIG or bindgen are ineffective. Much of crates.io is bindings.

But if you look around at almost any Rust project, it pulls in a lot of needless shite. Look at gstat-rs (from your first post) for example.


It is a little ridiculous.

I also note that a lot of those crates have version numbers smaller than 1.0. I'm not sure how stable a system you can build on such a foundation.
 
I also note that a lot of those crates have version numbers smaller than 1.0. I'm not sure how stable a system you can build on such a foundation.
Exactly. If it were me (and gstat is a simple tool), I would have only one dependency: the mature platform libgeom(3) (just like gstat), and avoid the rest of the immature mess.

But Rust has the NPM culture of "dependencies first". I think that is its number one problem. It isn't avoiding "reinventing the wheel". It is "smearing mud on the car".

If bindgen cannot directly integrate libgeom with Rust, I would use C or C++ which can.
 
Version numbers don't mean anything. API breakage is directly correlated to the update rate, which is a very project-dependent thing.
The libusb Rust binding is also much more fragile than a typical C program using libusb.

This is because the nature of bindings is that they need to wrap the entire library and provide full coverage of the API, whereas typical usage of a library only leverages a small proportion of its functionality. So if a minor part of the API changes, Rust bindings (which are generally rotten anyway) break.
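A minimal sketch of the alternative: declare only the functions your program actually calls, instead of depending on a package that wraps the whole library. Python's ctypes makes the point concisely (libc's strlen stands in here for a platform library such as libgeom; the function choice is purely illustrative, not the actual gstat code):

```python
# Hand-rolled binding sketch: expose only the API surface this program uses.
# libc's strlen is a stand-in for a real platform library; illustrative only.
import ctypes
import ctypes.util

_libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the one function we need, with its exact C signature.
_libc.strlen.argtypes = [ctypes.c_char_p]
_libc.strlen.restype = ctypes.c_size_t

def c_strlen(s: bytes) -> int:
    """size_t strlen(const char *s) -- the only function we bind."""
    return _libc.strlen(s)

print(c_strlen(b"libgeom"))  # 7
```

If a minor, unrelated part of the library's API changes, a narrow binding like this is untouched, whereas a full-coverage wrapper crate breaks.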

Bindgen *needs* to eliminate the need for these crates.io bindings, or Rust is a non-starter in any domain other than gluing together a bunch of crap like Python.

But since we are on page 23 of more "discussions on Rust", it has all been said before. It's a known problem, and it can't be solved by talking. The Rust developers have a lot of work to do.
 
Library dependencies, esp. versioned dependencies like those in Rust/Python/Ruby, are a big reason why upgrading software is such a pain. I mean, in the case of ports, www/qt5-webengine used to rely on Python 2.7 for a pretty long time (way past Python 2.x's EOL date), because rewriting the original project to accommodate up-to-date Python (3.11 these days) was just a LOT of work.

I'd say that's an argument against using such languages for FreeBSD's base/kernel. If you have problems with C's memory leaks and race conditions, just use more C to check boundaries reported by the hardware driver and/or the C compiler in use. LLVM's C frontend is actually stricter than GCC's.
 
There is nothing interesting about Zig. Might as well rewrite things in Pascal.

Dunno about Zig, but there are worse things than (modern, i.e. Free) Pascal.

My last three serious programs are self-contained Pascal; all libraries incorporated. Big, yeah, but really fast.

Not expecting a serious response; nowadays folks have 'ancient yet formally correct teaching language' imprinted upon their brains in programming kindergarten ;-}
 
Library dependencies, esp. versioned dependencies like those in Rust/Python/Ruby, are a big reason why upgrading software is such a pain. I mean, in the case of ports, www/qt5-webengine used to rely on Python 2.7 for a pretty long time (way past Python 2.x's EOL date), because rewriting the original project to accommodate up-to-date Python (3.11 these days) was just a LOT of work.

Such dealings with outdated dependencies are an even bigger pain in dynamically typed languages like Python and Ruby. You basically cannot know whether you have adjusted all the places you need to adjust.
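A toy illustration of why: in a dynamically typed language, a call site that no longer matches an upgraded function's signature fails only when it actually runs, not when the module loads. The function names here are made up for the sketch:

```python
# Hypothetical 'upgrade': a keyword argument was renamed from timeout to
# deadline. Nothing warns at import time; only the executed call breaks.

def fetch_v2(url, deadline=10):
    """New API: the old 'timeout' keyword no longer exists."""
    return f"GET {url} (deadline={deadline})"

def stale_caller():
    # Written against the old API; checked nowhere before runtime.
    return fetch_v2("http://example.org", timeout=5)

try:
    stale_caller()
    outcome = "ok"
except TypeError as e:
    outcome = f"broke only at runtime: {e}"

print(outcome)
```

A stale call site in a rarely exercised branch can survive the upgrade, the test suite, and deployment, and still blow up in production.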
 
[...] used to rely on Python 2.7 for a pretty long time (way past Python's 2.x EOL date), because rewriting the original project to accommodate up-to-date Python (3.11 these days) was just a LOT of work
I'd think and expect that language designers, ISO committees and the like have learned from this "major-breaking-update" history. Either plan for a gradual evolution, or be better prepared to support a "revolutionary" upgrade with (a lot) better transitioning tools for programs and in particular libraries.
 
I'd think and expect that language designers, ISO committees and the like have learned from this "major-breaking-update" history. Either plan for a gradual evolution, or be better prepared to support a "revolutionary" upgrade with (a lot) better transitioning tools for programs and in particular libraries.

How? Take the major Python3 change: the semantics of string literals. How do you make such a change easier to deal with when you make it?
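For concreteness, the change in question: in Python 2 the literal "abc" was a byte string, while in Python 3 it is Unicode text, and the two types no longer mix implicitly. A minimal Python 3 demonstration:

```python
# str vs. bytes after the Python 3 change to string-literal semantics.
text = "héllo"         # str: a sequence of Unicode code points (5 of them)
data = text.encode()   # bytes: the UTF-8 encoding (6 bytes; é takes two)

print(type(text).__name__, len(text))   # str 5
print(type(data).__name__, len(data))   # bytes 6

try:
    text + data   # Python 2 would coerce silently; Python 3 refuses
except TypeError:
    print("str and bytes no longer concatenate")
```

Every piece of code that quietly relied on the old coercion had to be audited by hand, which is a large part of why the 2-to-3 migration dragged on for years.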
 
Not expecting a serious response; nowadays folks have 'ancient yet formally correct teaching language' imprinted upon their brains in programming kindergarten ;-}
Oh I don't know... I also use FPC for projects which have a GUI and need to be portable. The last one builds without changes on Windows, Linux, FreeBSD and Haiku. It's a good tool for the job. As for the kindergarten: there are two kinds of idiots - those who say "this is old and therefore good" and those who say "this is new and therefore better".

For FPC you could say it follows the Unix way - do one thing and do it well.
 
How? Take the major Python3 change: the semantics of string literals. How do you make such a change easier to deal with when you make it?
You can't, not always. Sometimes you have to deal with the awkwardness, inconsistencies and ambiguities of the past language that have to be cleaned up in the new (version of the) language. Semantic ambiguities are one reason why the transition from Python 2 to Python 3 proved much more difficult than anticipated. The emergence of Unicode certainly made its impact felt everywhere in the IT sector; string handling in programming languages has likely been one of the most severely affected areas, even in this day and age. IMHO the only thing you can do at that point is communicate such issues clearly and know thy customers. The hard work in the prevalent code base remains; give relief on other issues whenever and wherever possible. Better language design would surely have helped, but that is easier said than done when it would have had to happen at the language's inception. Language design is hard, and so is changing it on the run.

As to Unicode, I was there at the time when XML was born, at the forefront of the transition from SGML to XML. XML brought many other things to its novel domain, not all of them equally successful. SGML and XML aren't programming languages, although there are programming languages associated with both. In the end, IMHO, XML as a markup language standard hit all the right spots where SGML had proved unable to keep up with new demands given the advancement of technology. Note that there wouldn't be any HTML without SGML ... and, of course, without Tim Berners-Lee.

I have great respect for Guido van Rossum. I fall short in every relevant aspect of language design, so do take my remarks in that context. Great to watch: Guido van Rossum: BDFL Python 3 retrospective. One transition aspect that Guido van Rossum mentions is the upfront decision not to offer a transition path for the co-existence of Python 2 and Python 3 code.
 