Is Object-Oriented Programming really an Advantage?

I think one thing is the OO programming style as you described it, concentrating on objects; another is the OOP languages. The languages are the tools. Perhaps this "subtype polymorphism" could also be achieved in C, although its type strictness may be an obstacle.
It absolutely can be done in C. Qt, GTK and a few other "frameworks" do just that. The generic "GObject" is a structure; "my new object" extends GObject, so both GObject* and MyObject* work.
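Roughly like this - a minimal sketch of the technique, with made-up names (not the actual GLib API):

#include <stdio.h>

struct GObject {              /* the generic base "class" */
    const char *name;
};

struct MyObject {             /* the "derived class" */
    struct GObject base;      /* the base must be the FIRST member */
    int extra;
};

/* Code written against the base type... */
void print_name(struct GObject *obj) {
    printf("%s\n", obj->name);
}

int main(void) {
    struct MyObject m;
    m.base.name = "my object";
    m.extra = 42;
    /* ...also accepts the derived type: a pointer to the struct,
       suitably cast, points to its first member. */
    print_name((struct GObject *)&m);
    return 0;
}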

bgavin "maybe, sometimes". Often winds up at exception processing, but nominally, Objects have a "vtable" pointers to functions. Base class provides functions (interfaces) A(), B() and C(). Child classes (inheritance) can override these, typically in function, not in arguments. Arguments create a different interface. Vtable is a function pointer table so one level of indirection.

Going by memory - it's been a while since I explicitly tested this:

Where it becomes interesting:
Base class A provides interfaces X() and Y().

Child class B inherits from A.
If B does not override X() and Y() (with the same parameters as in class A), the vtable for B has its function pointer for X() pointing at A.X().
Now if B redefines (overrides) X(), the vtable for B gets its own entry for X(), pointing at B.X(). Both A.X() and B.X() still exist in the program, and which one a call reaches can differ based on how it's called - in particular, on whether X() was declared virtual.

B* Q = new B(); // create Q pointing at an object of class B
A* R = Q; // R is a pointer to class A; since B inherits from A, you can assign Q to R.

If X() is not virtual, calling R->X() winds up calling X() as defined in class A, because the call is resolved at compile time from the static type of the pointer (A*). If X() is virtual, the call is dispatched at runtime through the vtable and reaches B.X(). Toss in multiple inheritance and it gets convoluted, especially if different parents provide the same interfaces.
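A small compilable example of both cases (hypothetical names):

#include <iostream>

struct A {
    void x() { std::cout << "A::x\n"; }           // non-virtual: resolved at compile time
    virtual void y() { std::cout << "A::y\n"; }   // virtual: resolved via the vtable
    virtual ~A() = default;
};

struct B : A {
    void x() { std::cout << "B::x\n"; }           // hides A::x, does not override it
    void y() override { std::cout << "B::y\n"; }  // overrides A::y in B's vtable
};

int main() {
    B* q = new B();
    A* r = q;    // fine: a B is-a A
    r->x();      // prints "A::x" - the static type of r decides
    r->y();      // prints "B::y" - the dynamic type of the object decides
    delete r;    // safe because ~A is virtual
    return 0;
}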

Inheritance gets interesting, especially with multiple parents.
There is a big difference between "is a" and "has a". My experience is: have a single "is a" and multiple "has a" relationships to actually make it work.
 
Is there a performance penalty inherent in OOP complexity?

Not necessarily. OOP in C++ is covered by its zero-cost abstraction promise. Virtual functions with runtime dispatch are a different matter, of course. But the alternative to those would be chains of if-statements, so it is hard to say whether they represent a slowdown due to OO. Multiple inheritance is zero-cost.
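For comparison, such an if/switch chain might look like this - a hand-rolled type tag with branching at every call site, doing manually what the vtable does once (illustrative names):

#include <iostream>

enum Kind { CIRCLE, SQUARE };   // hand-rolled type tag

struct Shape {
    Kind kind;
    double size;
};

double area(const Shape& s) {
    // every polymorphic operation repeats this branching
    switch (s.kind) {
        case CIRCLE: return 3.14159265 * s.size * s.size;
        case SQUARE: return s.size * s.size;
    }
    return 0.0;
}

int main() {
    Shape c{CIRCLE, 2.0};
    std::cout << area(c) << "\n";   // the switch plays the vtable's role
    return 0;
}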

In Common Lisp you could have similarly zero-cost OOP, but in practice the popular OO system CLOS rarely reaches that. That's a reason why I rarely used it.
 
While OOP is (in theory) zero-cost at the implementation level, it tends to promote designs which are not efficient concerning, e.g., cache usage.

For example, OOP designs tend to put state into each individual object and allocate heap storage for that object, while it might be beneficial to put all the state into one big array and just perform the respective functions on it (leveraging cache locality).

This is an example where a paradigm (OOP) leads to inefficient code because of the way the programmer is led to think.
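A standard illustration of that point is "array of structs" (the natural OOP layout) versus "struct of arrays" (the cache-friendly one) - a sketch:

#include <vector>

// Typical OOP layout: each particle is one object. A pass that only
// updates positions still drags mass/charge/lifetime through the cache.
struct Particle {
    float x, y, z;
    float mass, charge, lifetime;
};

void update_aos(std::vector<Particle>& ps, float dt) {
    for (auto& p : ps) p.x += dt;     // strides over whole objects
}

// Data-oriented layout: one big array per field. The same pass now
// streams through tightly packed memory (cache locality).
struct Particles {
    std::vector<float> x, y, z;
    std::vector<float> mass, charge, lifetime;
};

void update_soa(Particles& ps, float dt) {
    for (auto& x : ps.x) x += dt;     // touches only the data it needs
}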
 
The only paragraph interesting to me:

Abstraction

Abstraction is the whole point, the original core motivation for every HOL.
One may say the basic concepts (paradigms) are different ways to abstract.

If you say something like
"Perhaps OO languages offer ways to structure data in memory?"
to me it shows you are trying to grasp certain concepts by freeing them from the abstraction.
That's neither wrong nor right. That's the hard way :cool:

The whole point of abstraction, of HOLs, is: not to think about, not to care about how something actually exists in memory, nor how it's being processed by the CPU, but to solve a job (writing a program for a certain task) on a higher level of abstraction: to care about fewer details of certain things, in order to think more about the higher-level task to be realized.
So as different paradigms were developed, to have more and different ways to engage different problems in computers, different languages were also developed, realizing certain choices of paradigms, aiming to be better usable for certain jobs or programmers' demands.
That's exactly the problem many have with assembler. Assembler is completely free of any abstraction whatsoever. When you first learned programming with a HOL you are used to having at least some abstraction, and that makes it hard to break everything down to the most basic machine code, and hex values for data, and to be confronted with the need to first think of and design some basic 'abstraction' you can build on.
On the other hand, if you are versed in assembler, you may always think of how things look in the machine.
While this may help with certain debugging issues, it counteracts the idea of abstraction.

Several issues that come with that are other topics: whether one grasps how certain things actually work in the machine - or not.
Or how certain paradigms are realized in certain languages. Or whether a language was under- or over-cared-for, became 'obsolete', or bloated, or got on the wrong track.
Plus the ever-present matters of fashion and hype, which sometimes may even become religious. Like in the 1990s, when C++ became popular and everything had to be programmed in C++, and in OO exclusively. And anybody objecting to this was a stupid enemy of progress from the stone age, not knowing shit about computers at all.
Instead of simply seeing:
different paradigms and different languages are just different tools for different problems, to be chosen right for each job individually.
Nobody would get the idea to open a bottle of wine with a hammer just because hammers were the hype. 😁

So, bottom line:
try to grasp the concepts of different paradigms as you learn different languages;
add them to your personal toolbox (or not), and use them if they suit (or not). 🥸
 
Sure, the theory looks nice. I do understand that it may have advantages - encapsulating code, creating some order -
but I have the feeling it is an exaggeration. If I have to program, I have no idea how to begin using it.

Are pointers to structures in C not enough, in the end?
In 95% of cases (even 98%), "pointers to structures in C" are enough.
 
On the other hand, if you are versed in assembler, you may always think of how things look in the machine.
While this may help with certain debugging issues, it counteracts the idea of abstraction.
Well, my first experience programming was with the pseudo machine language of old programmable calculators, then FORTRAN IV. In spite of never programming in it again, it remained something like my mother tongue; I program C like FORTRAN. The other low-level source was just one course on computer architecture, with Hill and Peterson's book, perhaps from the end of the 1970s - no microprocessor, and a variant of APL used for circuit design. I do understand and like abstraction, it is definitely necessary, but for me it loses its sense when one loses the connection to the ground.
 
I do understand and like abstraction, it is definitely necessary, but for me it loses its sense when one loses the connection to the ground.
Without any abstraction there was programming in machine language only. One cannot imagine where we would be today if we had nothing but assembler languages.
However, a downside of abstraction is that one has to grasp the ideas of how others understand concepts. And besides the fact that not everybody can put themselves equally easily into others' ways of thinking, the higher the abstraction level, the higher the hurdle that needs to be taken.
And besides, not all paradigms are useful - better said: the best choice - for every job. I believe not all paradigms need to be fully grasped by every programmer; it depends on what one is going to do. But of course, if you want a job on some C++ project you cannot say: "That's all garbage! Let's do it in Fortran!" 😂
(Edit: Stupid of me, since there already is an OO dialect of Fortran.)

To me this book was good for grasping the idea of OO:
Object-Oriented Programming with ANSI C
But of course there are many others (some have already been listed here).
 
I like the way Golang took a pragmatic rather than dogmatic approach to it. It's classless, so it favours composition over inheritance.

Go's interfaces are not as nice as Haskell's type classes. At one point I had the idea of proposing that axioms be specifiable as part of a Go interface, but I refrained, as doing it right would require some thought. The axioms would put constraints on the behavior of the interface functions, but that may require more compile/link-time machinery. An example for a Stack interface:

- pop(stk, push(stk, x)) == x
- if empty(stk) pop(stk) => error

etc. IMHO this is better than writing unit tests, but one would need to experiment a fair bit to see if it is worth the effort.
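To make the idea concrete, such axioms could be approximated as generic property checks - sketched here in C++ templates purely as an illustration (the Stack type is made up):

#include <cassert>
#include <stdexcept>
#include <vector>

// A made-up stack implementing the interface the axioms constrain.
template <typename T>
struct Stack {
    std::vector<T> items;
    void push(T x) { items.push_back(x); }
    bool empty() const { return items.empty(); }
    T pop() {
        if (items.empty()) throw std::runtime_error("pop on empty stack");
        T x = items.back();
        items.pop_back();
        return x;
    }
};

// Axiom 1: pop(push(stk, x)) == x - checked for sample inputs.
template <typename S, typename T>
void check_push_pop(S stk, T x) {
    stk.push(x);
    assert(stk.pop() == x);
}

int main() {
    check_push_pop(Stack<int>{}, 42);   // axiom 1 holds for this input

    // Axiom 2: pop on an empty stack must signal an error.
    Stack<int> s;
    bool signalled = false;
    try { s.pop(); } catch (const std::runtime_error&) { signalled = true; }
    assert(signalled);
    return 0;
}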

I used C++ for years. Initially I liked it, but it is a terrible OO language IMHO. In my view inheritance should capture the idea of related behavior, not implementation.
 
I do not code anymore, but I used OOP back in the day.

I started using OOP 35 years ago with DOS and Turbo Pascal 5.5. Object-Oriented Programming (OOP) greatly helped me in developing monolithic applications by enabling abstraction, generalization, and specialization. When inheritance and polymorphism are applied correctly, they allow you to effectively model real-world objects and manipulate them in a structured way.

For example, you can have a general object - a vehicle.
And more specialized objects like a car, a bus, or a motorcycle inherit from the vehicle and then override some methods to do things a little differently than their ancestor.

Btw, the basic OOP constructs are:
  • Objects are like data structures, but with behavior (functions/methods) attached.
  • Classes are the "blueprints" for those objects.
  • Inheritance and polymorphism let you reuse and adapt behavior.
Let's do an example (written in C++ here, though the idea is the same in any OO language) ...

#include <iostream>

class Vehicle {
public:
    int wheels = 0;

    virtual void move() {
        std::cout << "The vehicle moves forward.\n";
    }
    virtual ~Vehicle() = default;  // needed for safe deletion via base pointers
};

Now, we can create specialized versions:

class Car : public Vehicle {
public:
    void move() override {
        std::cout << "The car drives on the road.\n";
    }
};

class Bus : public Vehicle {
public:
    void move() override {
        std::cout << "The bus transports many passengers.\n";
    }
};

class Motorcycle : public Vehicle {
public:
    void move() override {
        std::cout << "The motorcycle zooms quickly!\n";
    }
};

Notice:
  • Each child class inherits from Vehicle.
  • Each one overrides the move() method to describe its own behavior.
  • This is polymorphism: the same move() call behaves differently depending on the actual type of object.
The key takeaway is:
  • With OOP, you don’t need to write separate code for each case.
  • You can write code that works with the general type (Vehicle), and it will automatically adapt to the specific case (Car, Bus, Motorcycle).
For example:

void startJourney(Vehicle& vehicle) {
    vehicle.move();
}

int main() {
    Car car;
    Bus bus;
    Motorcycle motorcycle;
    startJourney(car);         // prints "The car drives on the road."
    startJourney(bus);         // prints "The bus transports many passengers."
    startJourney(motorcycle);  // prints "The motorcycle zooms quickly!"
    return 0;
}

You can also have an array of vehicles, where the members can be any of the inherited objects - a car, a bus, and a motorcycle - as sketched below.
This, again, is polymorphism.
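A minimal sketch of that heterogeneous container, reusing the classes above:

#include <memory>
#include <vector>

int main() {
    // One container holding different Vehicle subtypes,
    // each responding to move() in its own way.
    std::vector<std::unique_ptr<Vehicle>> fleet;
    fleet.push_back(std::make_unique<Car>());
    fleet.push_back(std::make_unique<Bus>());
    fleet.push_back(std::make_unique<Motorcycle>());

    for (auto& v : fleet)
        v->move();
    return 0;
}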

To recap in plain words:
  1. A class is like a recipe or blueprint.
  2. An object is the actual cake baked from that recipe.
  3. Inheritance means a new recipe can reuse an old one, but add or change some steps.
  4. Polymorphism means different objects can respond to the same message in their own way.
Our Vehicle ---> Car/Bus/Motorcycle example illustrates these basic OOP principles.

IMHO, OOP is good for application software, especially monolithic applications, and it is useless for systems programming like kernel development, drivers, or other system services in userland.

The modern microservices (FaaS) approach leverages REST APIs to call external functions. This is also a different programming concept, and it is closer to functional programming with small functions. Even if OOP can be used for FaaS, IMHO it does not align with this concept.

As others have already mentioned, OOP is one of many tools/methodologies, and if you grasp the OOP concept properly, you can leverage it in the development of application software and simplify complex problems. It is most useful for large monolithic application software leveraging OOP frameworks, which are object-oriented libraries.

Hope this helps.
 
Car & Plane -> Batmobile
Even multiple-inheritance diamond graphs have their use; I once managed to get a team of really good computer scientists (not programmers - everyone in the group has a CS PhD) to accept using one in a large, production-ready C++ code base.

But multiple inheritance is definitely rare, and can be somewhat dangerous. And the diamond graph in C++ is so dangerous that it should only be used with great care, lots of comments in the source code, and everyone in the group being "read into" it.
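For reference, the diamond itself, using the Car & Plane -> Batmobile quip above (illustrative, not the poster's actual code) - in C++, virtual inheritance is what keeps the shared base from being duplicated:

#include <iostream>

struct Vehicle {
    int id = 0;
};

// "virtual" makes Car and Plane share ONE Vehicle subobject
// instead of each carrying its own copy.
struct Car : virtual Vehicle {
    void drive() { std::cout << "drive\n"; }
};
struct Plane : virtual Vehicle {
    void fly() { std::cout << "fly\n"; }
};

// The diamond: a Batmobile is-a Car and is-a Plane.
struct Batmobile : Car, Plane {};

int main() {
    Batmobile b;
    b.id = 42;   // unambiguous only because the Vehicle base is virtual
    b.drive();
    b.fly();
    return 0;
}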
 
I studied Java and object-oriented programming in college over 20 years ago, and already back then I was made aware of the downsides of Java: bloated (you need a JVM and tons of RAM to run it), often unnecessarily complicated, and slow. Yeah, it was supposed to be the original 'memory safe' language.

But trying to keep proper track of all the objects that were instantiated was about as complicated as getting a handle on the dependencies in the Ports Collection.
About 878 pages, only for starting? No, I will not read that!
Lazy much? Most of the pages are source code that demonstrates the concepts. It wouldn't hurt to do a few exercises, run the stuff, and solve the problems. I spent the first two years of my college career learning Java and how to use it to problem-solve. I had to buy four textbooks, in aggregate worth 2000 pages. And I still have those books. C++ scared me away, though.
 
I have just learned a little JavaScript - some principles, what objects are, prototypes, inheritance, constructors.
I played a little with node.js

I also once read about the Tcl extension TclOO for object-oriented programming, but I have forgotten everything.

Sure, the theory looks nice. I do understand that it may have advantages - encapsulating code, creating some order -
but I have the feeling it is an exaggeration. If I have to program, I have no idea how to begin using it.

Are pointers to structures in C not enough, in the end?

What is your opinion / experience?
I strive to write code in OO fashion whenever I can, for two reasons:
1) it encourages re-use and better pre-coding analysis, at least the way I do it
2) it better matches how my neuro-divergent mind works

Granted, writing an interrupt driver in OO doesn't make sense, but then ISRs are generally very short and very optimized.
I program in Python for rapid application prototyping, C++ for professional projects (when possible), C for embedded RTOS stuff, and occasionally some Java when I need a platform-neutral GUI to access a networked database... and I've forgotten more languages than most programmers can comprehend.
 