Solved: Is Rust now killing C++ and eventually C with it?

FWIW, in a big system at $OLDJOB, where we used a language that lets you toggle array bounds checks per function, I permanently turned them on for everything after a couple of years of debugging. Omitting them just isn't worth it.

The tests were nice, but real customers came up with crazy data that still exposed edge cases. Mind you, the data in that case was Turing-complete.
 
In cases where constexpr can't be evaluated at compile-time

Which is why the _unchecked functions exist. You could easily have some debug-only checking code surrounding your unsafe access to make sure the expensive checks only happen in debug builds. Then when you build release code, there are no expensive guards to worry about.

Code:
fn get_unchecked_prod(some_slice: &[u8], idx: usize) -> u8 {
    #[cfg(debug_assertions)]
    {
        println!("manual bounds check");
        // this bounds check only happens in debug
        if idx >= some_slice.len() { panic!("OOB"); }
        println!("Survived the bounds check");
    }
    unsafe { *some_slice.get_unchecked(idx) }
}

fn no_compiler_checks(some_slice: &[u8], idx: usize) -> u8 {
    // manual bounds check, so the compiler doesn't insert one
    // returns a u8 because it's Copy, whereas
    // get{,_unchecked} explicitly return refs
    if idx < some_slice.len() { return some_slice[idx]; }
    panic!("OOB");
}
 
Which is why the _unchecked functions exist. You could easily have some debug-only checking code surrounding your unsafe access to make sure the expensive checks only happen in debug builds. Then when you build release code, there are no expensive guards to worry about.
Though it didn't look like it in the example, I am assuming this can be a global toggle so all the many dependencies use the same unchecked array accesses?

(Kind of like the MSVC debug STL)
 
As a decades-long ASM coder, I took amusement from the hysteria over GOTO type statements.
At the ASM level conditional jumps are a fact of life.
I just want to get work done, and not have to fight with the arcane sections of the language.
I am comfortable in C, but have zero reason to use it.
YOU WILL BE VINDICATED!!!!
Spaghetti code does not matter if no human has to read it au naturel. And it won't with the onslaught of A"I".
I would argue that it is much easier for LLMs to translate for the user, or even refactor, GOTOs than the abstractions of structured programming.
Conditional jumps are much more definitive; loops within loops can easily mislead an LLM.
In a few years nobody will be willing to pay Mentats to pore over code.
There might be a few in the dungeons of the NSA trying to salvage the back-doors in the Rust compiler - yeah, I am that suspicious of the sudden Rust hype, led mostly (sorry if you're not) by the nose rings, by the usual highly politicized suspects.
 
Though it didn't look like it in the example, I am assuming this can be a global toggle so all the many dependencies use the same unchecked array accesses?

(Kind of like the MSVC debug STL)
No, that isn't possible. The language is set up in a way that not getting the checks requires you to opt-out of the safety features. You could, however, use what's known as the newtype pattern to make this the default form of access in your code. Newtypes are explained in the docs if you want more in-depth info, but the gist of it is that a newtype is a minimal wrapper that allows you to add functionality on top of existing types without changing the behavior of the base type. And using traits like Deref (and the additional power of deref coercion), From, and Into, you can make newtypes work with outside code without issues.

It's worth noting that Rust spends a large amount of effort in its optimization passes on eliminating unnecessary cost where possible. For example, this Medium article had to go through a fair bit of effort just to get bounds checks to show up at all in a Fibonacci sequence program. That willingness to optimize is also part of why you get to sidestep the safety features with unsafe: if you're certain that bounds checking hurts performance for no gain, a few extra bytes of source code tell the compiler not to perform it there.

It's also worth repeating that I'm not an evangelist. I know it won't replace other languages altogether and I don't want it to. But I do think it's a much more approachable language to get working on low-level compiled code than C and C++ with all the footguns (feetgun?) they have. Not to mention it's very nice that the compiler screeches at me for things like use-after-free. Some of the resulting bugs are hard to track!
 
Well guys, after reading your posts, C and C++ will disappear and FreeBSD will be rewritten in Rust... time for me to grow a long beard and head to a cabin in the woods, without cell phones or computers.
Waiting for the normalization of consumer computers that only accept Rust kernels, with security as the excuse.
 
But I do think it's a much more approachable language to get working on low-level compiled code than C and C++ with all the footguns (feetgun?) they have.
The ability to shoot yourself in the foot when using C is a feature, not a fault. C doesn't hold you back or prevent you from doing what you want to do. A language that holds you back is a fault, not a feature.
 
The ability to shoot yourself in the foot when using C is a feature, not a fault. C doesn't hold you back or prevent you from doing what you want to do. A language that holds you back is a fault, not a feature.
Furthermore, C was invented out of pragmatism, not because its inventors thought they were smarter than other programmers. Maybe that is another aspect of its success.
 
Rust merely prevents you from relying on UB and things that only work by accident.
Again, undefined behavior is a side effect of C allowing you to do anything you want. C won't restrict you in any way. The same goes for assembly language, but you don't hear people complaining about it there. It's why some people use other languages: they can't handle the truth! (Jack Nicholson said this about C in "A Few Good Men", I think.)
 
there's been like half a century of work in programming language theory since then
And the entirety of it relies on all the fancy new developments eventually speaking C. As I said in The Case for Rust in Base, it's more than a language at this point; a following logically comes with the seat of power it holds. I suppose these people are stuck on "if everything needs to speak C, why not just stick to C" thanks to that half century.

undefined behavior is a side effect of C allowing you to do anything you want.
No, undefined behavior is a side effect of letting the compiler do whatever it wants because what you're doing is not something the language actually allows you to do, or that the design does not take into account. The HTML render of the comp.lang.c FAQ has a good description on Q11.3: the distinction between implementation-defined, unspecified, and undefined behaviors:
undefined: Anything at all can happen; the Standard imposes no requirements. The program may fail to compile, or it may execute incorrectly (either crashing or silently generating incorrect results), or it may fortuitously do exactly what the programmer intended.
If you're being serious about the line of thought that UB is just a feature making C a better language, then you may just as well put blind faith in LLM code; neither of them is going to do what you expect.
 
If you're being serious about the line of thought that UB is just a feature making C a better language, then you may just as well put blind faith in LLM code; neither of them is going to do what you expect.
What he's saying is that C lets you do whatever you like, including things that cannot be specified because they're not portable (hence left "undefined" in the standard). If you want to squeeze the most performance out using processor-specific features, you can often do so. It doesn't make C a better language, but a more powerful language in a way.

I'll give you an example: tagged pointers, which can be very useful if you are writing an interpreter for some programming language. Since all pointers will be aligned on an 8-byte boundary on a 64-bit machine, you can use the low-order 3 bits to indicate ints, chars, symbols, strings, lists, etc., instead of allocating an extra word to hold the tag. That extra word is what Rust will likely force you to use, since otherwise it will have conniptions if you try to extract the low 3 bits or AND with ~7 before using it as a pointer to some object. A Rustacean might say: why not just waste 8 bytes per object? But then you may have to GC twice as frequently.
 
C came from a time where compilers needed maybe 7 passes to do anything (and then assemble), keeping this well defined was not easy then. Also, it was not required. They treated C as a portable assembler, statements pretty much had a 1:1 pattern in the code generator. Today, this is completely different. A lot of UB today in C was not UB then, because the compiler would never reach the places. You had to declare a variable as "register" by hand, and know what happens when you setjmp() or longjmp(). The LLVM project has a nice posting about how the order of optimization passes may alter the program in UB ways - the first C compiler had no optimizers.
Sure, a lot of research has been done on programming languages, but much of that was a Charlie Juliett. A lot of UB comes in from the runtime system and ecosystem. Using a well-designed library is easy; writing a well-designed library is hard. And piling crates upon crates will turn any well-defined language into something unusable, because the sand is shifting everywhere you want to build something. C++ is heading for the same cliff: the runtime is getting more complex by the minute, and when it becomes a full-time job to track your tools, you don't get anything done.
 
things that can not be specified because they’re not portable
This is what implementation-defined and unspecified are for. There are plenty of definitions left up to the implementation precisely because their behavior isn't portable and the usual fix is "let the implementor deal with this"; sometimes the standard even requires the implementation to document its choice.
using processor specific features
That's what tools like inline assembly or feature detection and intrinsics are for.

They treated C as a portable assembler [...] A lot of UB today in C was not UB then
Yeah, those were simpler times for sure. That came with its own problems though, as not every compiler/target/host combination would agree on what the output should be. As more variables entered that equation the standard had to grow and adapt.
A lot of the named (and unnamed) UB now also just wasn't considered. There's a reason the spec included the catch-all that any behavior not described by the spec is implicitly undefined.
The LLVM project has a nice posting about how the order of optimization passes may alter the program in UB ways
This definitely makes for a great read. A lot of time and energy has been spent on making sure behavior is being upheld when optimizing, and I think the use of a more flexible Intermediate Representation is a very good way to reason about behavior before changing what's happening. On the other hand, that's also why LLVM will sometimes do very funky things when it encounters UB.

Strict aliasing comes to mind. Violating it allows for
Was this ever clearly defined behavior though? As far as I can tell with a cursory search, this was more "not explicitly mentioned" than "absolutely fine and behaves as expected."
 
Was this ever clearly defined behavior though?
Not really; then again, the early C specs didn't define much as formally defined behavior. It was defined by a few vendor-specific compilers (talking DOS era here), possibly as extensions (arguably similar to gcc's -fno-strict-aliasing). Interestingly, some embedded allocators still use it as part of their implementation, since they don't need to claim to be portable.

The last large project I know of to violate it was Python 2 (PEP 3123).
 
idk why people venerate c, there's been like half a century of work in programming language theory since then
OK. Lemme answer your question from my POV.

At the end of the day a programming language must produce machine code that works and is optimized for a particular CPU. C very elegantly allows programmers to write human-readable code that is easily translated into machine code, while letting them visualize and infer what's going on under the hood. That is ultra-important to programmers producing things like OSes, embedded code, applied-science apps, etc. (i.e., my fields of interest).

When heavy algorithmic or complex ideas need to be modeled, C++ is still a good option (regardless of all the fluff added to it) because it is still easy to visualize the translation to low-level concepts, given its roots in C.
 
bgavin, do you use guards for checking which CPU is installed? I once had a tool where some cool haxor used inline assembly, for x86, without guards. Because all the world is a VAX, you know? And it was only one instruction. Running that on a big-endian machine created havoc. It was a file system tool, by the way. Since then I have strongly opposed inline assembler and prefer to teach the compiler some tricks so I don't need it.
 