C++ Virtues of a virtual override

I've fallen into the habit of attaching the override modifier to virtual methods in derived classes.

There's a very good reason for this.

Years ago, I added a parameter to a virtual method in a base class. I edited dozens of header and source files to match. Unknown to me, I had missed one. It compiled. It ran. But now there was code calling the base class method instead of the derived method. It took a team of eight a full week to find my mistake.

With the override keyword, that would have been a compile error. Something instantly identified and fixed.
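
A minimal sketch of that failure mode, with made-up class and method names: the base signature gains a parameter, one derived declaration misses the update, and only the override keyword makes the compiler complain.

Code:
struct Base {
    // The signature that changed: a 'verbose' parameter was added.
    virtual void process(int value, bool verbose);
};

struct Derived : Base {
    // A stale declaration that missed the update. Without 'override' this
    // compiles and silently becomes a separate function, so calls through
    // a Base* keep hitting Base::process.
    // With 'override' it fails immediately:
    //   error: 'process' marked 'override' but does not override
    void process(int value) override;
};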
 
And auto-conversion of parameters turns figuring out which function is called into a wonderful intellectual puzzle. But I don't get paid to solve intellectual puzzles.

For a while, we had a coding rule that any single-argument constructor had to have a second parameter called "dummy" to prevent auto-conversion, and that conversion operators were forbidden. Then the "explicit" keyword came out, and I spent a week removing all those second parameters.
 
Maybe the real problem is that C++ allows distinct functions that have the same name but differ only in their parameter lists.
Reading this thread as a C++ noob, isn't it supposed to be that way because of OOP polymorphism support? That was long ago for me. I never used it because it mostly targets group projects; for a single person it's too much irrelevant code and structure.
 
MG, if you saw the COBOL jokes earlier, function overloading is very much the opposite.

Consider the overloaded operator + that works on integers of different sizes, both signed and unsigned, as well as on floats and doubles. If you've ever added two numbers in C or C++, you've used a small bit of polymorphism.
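
As a rough illustration (describe is a made-up function for this sketch), the same name-by-type dispatch that the built-in + does is available to user code through overloading:

Code:
#include <cstdio>

// One name, resolved at compile time by argument type -- the user-level
// analogue of the built-in '+' picking integer vs. floating-point addition.
void describe(int x)    { std::printf("int: %d\n", x); }
void describe(double x) { std::printf("double: %f\n", x); }

int main() {
    describe(2 + 3);      // integer '+', then describe(int)
    describe(2.5 + 3.0);  // floating-point '+', then describe(double)
}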
 
Maybe the underlying problem in comp sci is that the focus has changed in newer generations from creating languages that encourage creativity to restrictive languages that chase the latest "best programming paradigm", whatever the paradigm of the day happens to be. Stroustrup himself was very noncommittal about the "correct use" of many of the C++ features, instead presenting them as generic features to use as you see fit. An example of this was the introduction of exceptions, of which he said "they are just another method of program control", not forcing them to fit the modern definition of exceptions.
 
Don't we also have functions in a base class defined with varargs (I think "..." is the syntax) and then overridden in a child class? Kind of fun for compilers to warn on that usage.
But the "warning on auto-conversion" is a good start
 
One thing I appreciate about override is that it makes the programmer’s intention explicit. Without it, a small change in a function signature can silently turn an intended override into an overload. With override, the compiler helps catch that immediately. It doesn’t limit flexibility — it just reduces ambiguity, especially in larger code bases.
 
Reading this thread as a C++ noob, isn't it supposed to be that way because of OOP polymorphism support? That was long ago for me. I never used it because it mostly targets group projects; for a single person it's too much irrelevant code and structure.
No, function overloading by parameter type is not polymorphism. Polymorphism is when the same function (with the same parameters) is implemented differently in different derived classes. For example: the base class Animal has a function move_to(location) that is not implemented. The derived class Fish implements it by internally using the swim(location) function; the derived class Cow implements it by walking, and the derived class Bird implements it by calling fly().
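
A minimal sketch of that, with the Location type and the helper functions invented for illustration:

Code:
#include <cstdio>

struct Location { double x, y, z; };

struct Animal {
    virtual void move_to(const Location& where) = 0;  // no implementation in the base
    virtual ~Animal() = default;
};

struct Fish : Animal {
    void swim(const Location& where) { std::printf("swimming\n"); }
    void move_to(const Location& where) override { swim(where); }
};

struct Cow : Animal {
    void move_to(const Location& where) override { std::printf("walking\n"); }
};

struct Bird : Animal {
    void fly(const Location& where) { std::printf("flying\n"); }
    void move_to(const Location& where) override { fly(where); }
};

// One call site, dispatched at run time to whichever implementation
// matches the dynamic type:
void send_home(Animal& a) { a.move_to(Location{0, 0, 0}); }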

Overloading would be if the class Elephant has 4 different functions all called eat(): one that takes a Salad object, another that takes a Hay object, another that takes an int and a float (the int is internally treated as an enum, the float as a quantity in kg), and the last one that takes a Weight object (which internally can contain a weight in kg or lbs) and eats whatever is available. The Weight class in turn has a conversion constructor that takes a float argument and defaults to kg. Now tell me, what does eat(3.14159) do? If the compiler can figure it out, a human can too, but it turns into a puzzle.
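
A sketch of that overload set (every type here is invented for the example), with the answer to the puzzle left in a comment:

Code:
struct Salad {};
struct Hay {};

struct Weight {
    float kg;
    Weight(float amount) : kg(amount) {}  // converting constructor, defaults to kg
};

struct Elephant {
    void eat(const Salad&) {}
    void eat(const Hay&) {}
    void eat(int food_kind, float kg) {}  // the int is really an enum
    void eat(const Weight&) {}            // eats whatever is available
};

void feed(Elephant& e) {
    // Only eat(const Weight&) is viable: 3.14159 (a double) is converted
    // to float and then to Weight, so the elephant quietly eats 3.14159 kg
    // of whatever happens to be available.
    e.eat(3.14159);
}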
 
One thing I appreciate about override is that it makes the programmer’s intention explicit. Without it, a small change in a function signature can silently turn an intended override into an overload. With override, the compiler helps catch that immediately. It doesn’t limit flexibility — it just reduces ambiguity, especially in larger code bases.
Explicit is good, especially when it forces the tools to do the work. But one could argue "this is a coding standard", while others say "why won't the tool figure out what I want?"

Reducing ambiguity is good.
 
Function parameter overloading turns the following API

Code:
struct user * find_by_name(const char *name)
struct user * find_by_id(unsigned id)

into

Code:
user * find(const char *name)
user * find(unsigned id)

I think the way GCC handled it long ago was to generate a distinct internal name per signature, something like find_1 and find_2, and just retarget each find() call in the code to the appropriate one by signature. It also keeps debugging straightforward.
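
For what it's worth, what GCC and Clang actually emit today is a mangled symbol per signature rather than a numeric suffix; assuming the Itanium C++ ABI, the two overloads above come out roughly as follows (nm plus c++filt will show them):

Code:
struct user;

user* find(const char* name);  // symbol roughly _Z4findPKc under the Itanium ABI
user* find(unsigned id);       // symbol roughly _Z4findj

void lookup_examples() {
    user* by_name = find("alice");  // overload resolution picks find(const char*)
    user* by_id   = find(42u);      // picks find(unsigned)
    (void)by_name; (void)by_id;
}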
 
There is a slight quirk here with inheritance.

Code:
struct Employee {
    virtual void jump();
    void jump(float height);
};

// Manager inherits Employee
struct Manager : Employee {
    void jump() override;
};

You can't now do:
Code:
Manager m;
m.jump(9.0f);

Weird huh? When a derived class declares a function with the same name as a base class function, it hides all base class overloads with that name, not just the one with the same signature.
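
The usual way out is a using-declaration in the derived class that pulls the hidden base overloads back into scope:

Code:
struct Manager : Employee {
    using Employee::jump;   // un-hides Employee::jump(float)
    void jump() override;
};

// Now both forms compile:
//   Manager m;
//   m.jump();
//   m.jump(9.0f);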
 
Function parameter overloading turns the following API

Code:
struct user * find_by_name(const char *name)
struct user * find_by_id(unsigned id)

into

Code:
user * find(const char *name)
user * find(unsigned id)

I think the way GCC handled it long ago was to generate a distinct internal name per signature, something like find_1 and find_2, and just retarget each find() call in the code to the appropriate one by signature. It also keeps debugging straightforward.
This is certainly convenient for the programmer, but the price that comes with this “feature” is high, IMHO, because now it is the compiler that has to create the unique function names. And it does so in a vendor-specific way. The result is an ABI that is unusable for interfacing with other languages, which is why you have to declare these interfaces as extern "C".

I know of no bugs which are avoided by this “feature”, so it is basically programmer convenience that results in a broken ABI.

This is why I termed it as “problematic” further above.
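
A minimal sketch of what that looks like in practice (the wrapper names are made up): the overloaded C++ find() functions get vendor-mangled symbols, so a C-consumable interface has to be exported as separate extern "C" functions with distinct names.

Code:
struct user;

// C++ side: overloaded, symbols carry mangled, vendor-specific names.
user* find(const char* /*name*/) { return nullptr; }  // stub body for the sketch
user* find(unsigned /*id*/)      { return nullptr; }  // stub body for the sketch

// C-facing side: no overloading, names exported unmangled.
extern "C" user* user_find_by_name(const char* name) { return find(name); }
extern "C" user* user_find_by_id(unsigned id)        { return find(id); }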
 
Function parameter overloading turns the following API

Code:
struct user * find_by_name(const char *name)
struct user * find_by_id(unsigned id)

into

Code:
user * find(const char *name)
user * find(unsigned id)

I think the way GCC handled it long ago was to generate a distinct internal name per signature, something like find_1 and find_2, and just retarget each find() call in the code to the appropriate one by signature. It also keeps debugging straightforward.
And that opens up all kinds of problems. For example, say that I use "find(IdName)" versus "find(IdNumber)", where IdName and IdNumber are custom classes (probably thin wrappers around a string and an integer). Obviously, for convenience I have a constructor IdNumber(int). And now I create, somewhere in the code, a conversion constructor IdNumber(IdName) (which for example does a database lookup on the name "Alice Bob" to get the integer), and another one, IdNumber(const char*), which converts strings like "123" into ID numbers. Suddenly, the compiler has to decide which of the conversions to use, and if and how it can turn the call find("123") into something useful. This leads to "butterfly flapped its wings" bugs. It may even cause a compile error, if the (insanely complex!) rules for which conversion wins end in a tie.

For this reason, I'm in favor of ABSOLUTELY NO AUTOCONVERSION. In my opinion, if someone writes "int pi_rounded = 3.14159", that should give a compile error. If you want to round or truncate a float to an int, then bloody say so explicitly! While this is a nice theory, in existing C/C++ code bases it would lead to murder and mayhem, since conversions between unsigned and signed would all have to be done explicitly.
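
A sketch of that tie, with all the types invented for the example; with converting constructors on both sides, find("123") really is ambiguous and the compiler gives up:

Code:
#include <string>

struct IdName {
    std::string name;
    IdName(const char* n) : name(n) {}       // thin wrapper around a string
};

struct IdNumber {
    unsigned value;
    IdNumber(unsigned v) : value(v) {}       // convenience constructor
    IdNumber(const char* s) : value(std::stoul(s)) {}  // "123" -> 123
    IdNumber(const IdName& n);               // e.g. a database lookup by name
};

struct user;
user* find(const IdName& name);
user* find(const IdNumber& id);

void lookup() {
    // error: call to 'find' is ambiguous -- "123" converts equally well
    // to IdName and to IdNumber, so neither overload wins.
    // find("123");
}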
 
There is a slight quirk here with inheritance.
What's that called? Contravariant arguments and covariant return types, or something like that?

Going back to the example I gave above, with an Animal base class having a move_to(location) function. Great. Except that for the Cow, the argument is of type "array of two numbers", because cows can't fly or dive, so they only need X and Y coordinates. For Fish, the argument has to be "array of three numbers", and Z has to be <= 0. For Bird, the argument also has to be "array of three numbers", and Z has to be >= 0, unless the bird is a puffin (where all Z values are legal) or a penguin (more like a fish). Last I checked, C++ did not support contravariant arguments, so if you declare a new function move_to(3-vector) in the derived class it is overloading, not overriding, and it becomes super difficult to figure out which function is called.
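
A sketch of that trap (Vec2 and Vec3 are invented for the example): the 3-vector version in the derived class is a brand-new overload that also hides the base function, and asking for override is what exposes the mistake.

Code:
struct Vec2 { double x, y; };
struct Vec3 { double x, y, z; };

struct Animal {
    virtual void move_to(const Vec2& where) {}
    virtual ~Animal() = default;
};

struct Fish : Animal {
    // Compiles without complaint, but this does NOT override the base
    // function: it is a new overload that also hides Animal::move_to(Vec2).
    void move_to(const Vec3& where) {}

    // Writing 'void move_to(const Vec3&) override' instead would fail:
    //   error: 'move_to' marked 'override' but does not override
};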
 