Using Python for conceptual learning or jump straight into C?

I don't think the theoretical approach is very helpful.

Theory helps a lot. When you want to go somewhere, you need a map. Theory is that map.

The problem with academic stuff is that the people who provide it don't have to use it in production.

The best way to train developers is to use real-world languages and tools, provide theory so they have the map, and expose them to real-world problems so they understand theory through practice and grow their problem solving skills and self-confidence. And at the end of the training, trainees will leave with a real-world application they have built themselves.

My customers return, so I guess they liked that way of doing things. ;)
 
I will study theory when I need it. I'm currently writing a piece of software in C and I need to sort a doubly linked list. First I will try to come up with my own algorithm. If it works, great; I can optimize it later if I have the time or the interest, or if I discover my program is too slow.
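
If I had to sketch such an attempt (purely illustrative; the node layout and names are made up, not taken from my actual program), it might be a simple insertion sort over the list:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical node layout -- adapt to whatever the real program uses. */
struct node {
    int value;
    struct node *prev;
    struct node *next;
};

/* Insertion sort: take nodes off the unsorted list one by one and splice
 * each into its place in an already-sorted list. O(n^2), but simple, and
 * good enough until profiling says otherwise. */
static struct node *sort_list(struct node *head)
{
    struct node *sorted = NULL;

    while (head != NULL) {
        struct node *cur = head;
        head = head->next;

        /* find the first sorted node with a larger value */
        struct node *prev = NULL;
        struct node *pos = sorted;
        while (pos != NULL && pos->value <= cur->value) {
            prev = pos;
            pos = pos->next;
        }

        /* splice cur between prev and pos */
        cur->prev = prev;
        cur->next = pos;
        if (prev != NULL)
            prev->next = cur;
        else
            sorted = cur;
        if (pos != NULL)
            pos->prev = cur;
    }
    return sorted;
}

int main(void)
{
    int data[] = { 4, 1, 3, 2 };
    struct node *list = NULL;

    /* build a small list by pushing at the head */
    for (size_t i = 0; i < sizeof(data) / sizeof(data[0]); i++) {
        struct node *n = malloc(sizeof(*n));
        n->value = data[i];
        n->prev = NULL;
        n->next = list;
        if (list != NULL)
            list->prev = n;
        list = n;
    }

    list = sort_list(list);
    for (struct node *n = list; n != NULL; n = n->next)
        printf("%d ", n->value);    /* prints: 1 2 3 4 */
    printf("\n");
    return 0;
}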

Spending days or weeks studying a dozen different sorting algorithms only succeeds in scaring everyone away from programming. Start with practical challenges; the theory will come later.
 
Developers in 2020 have far more to learn than in 1988, when I started. This means trainers have to divide the learning path into small sections that are as independent of each other as possible.

Because the OP is his own trainer, I'd recommend he:

- Learns C, because what motivates him is to write device drivers.
- Starts with simple programs that have nothing to do with drivers, just to practice and understand.
- Has a look at the source code for existing, simple device drivers: /dev/null and /dev/zero, for instance.
- Reads Kong's book
- Chooses one of his own needs as a first project.
- Looks at what others have already done in this field and finds out why it doesn't work (or apply) in his case.
- Imagines solutions and experiments with them.

Be aware that many device drivers need a good understanding of the digital circuitry they interact with.
Depending on your prior background, this may or may not be a problem.
However, problems are also opportunities for further learning and, depending on your personality, this may or may not be exciting. ;)

Even if you don't end up with a viable device driver, you'll have learned quite a lot of interesting things and, even more importantly, you'll have learned about yourself: what you like and don't like in this new field, and what kind of exciting uses you'll be able to find for your new knowledge and skills. :)
 
Spending days or weeks studying a dozen different sorting algorithms only succeeds in scaring everyone away from programming.

But you don't need to do that!

You only need a coarse-grained map of the Land of Software Development so you always know where you are, are able to decide appropriately where you want or need to go next, and are able to get there quickly and safely.

Finer-grained theory is only needed in very specific cases.
 
I couldn't agree more, 20-100-2fe. Your recommendation is exactly how I would start too.

Reads Kong's book
Do you mean K.N. King's book C Programming: A Modern Approach? I started reading this book but there are two big downsides:
  1. It's not "modern" anymore, e.g. it does not cover the intN_t and uintN_t fixed-width types (see the short example below), though they are (I believe) widely used nowadays.
  2. Even second-hand the book costs a fortune ($100 USD) on Amazon. On the German Amazon the book is about $50 USD, so depending on where you live it may simply be too expensive. And I prefer studying from a real book rather than a PDF (which I could not find either).
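
(For readers who haven't met them: intN_t and uintN_t are the exact-width integer types that C99 added in <stdint.h>, with matching printf macros in <inttypes.h>. A tiny illustration:)

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint8_t  flags   = 0xA5;              /* exactly 8 bits, unsigned  */
    int32_t  offset  = -123456;           /* exactly 32 bits, signed   */
    uint64_t counter = UINT64_C(1) << 40; /* exactly 64 bits, unsigned */

    /* the PRI* macros expand to the right printf conversion specifiers */
    printf("flags=%" PRIu8 " offset=%" PRId32 " counter=%" PRIu64 "\n",
           flags, offset, counter);
    return 0;
}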
 
You only need a coarse-grained map of the Land of Software Development so you always know where you are, are able to decide appropriately where you want or need to go next, and are able to get there quickly and safely.

Every developer instinctively builds his own map over time.
In fact, everyone instinctively does this in every field of life; it is a natural way of making sense out of complexity.
A good idea for a newcomer to the development field is to deliberately build his own to support his learning.
You can use mind mapping for this, at least at the beginning.
You place the first thing you learn in a bubble in the middle of a sheet of paper, then you draw lines to other bubbles as the need to dig other related subjects arises.
You can also weigh each subject's relative importance - in your context - so as to organize/prioritize your learning.
 
Do you also have a recommendation for learning the basics of C? I started with K. N. King's book and I'm now reading Head First C but other good resources are always welcome.
 
Do you also have a recommendation for learning the basics of C? I started with K. N. King's book and I'm now reading Head First C but other good resources are always welcome.

There are several conceptually distinct areas in learning C:

- The language in itself (I mean: reserved words, syntax, data types, expressions, control structures) is simple
- The division between interface (.h files) and implementation (.c files), and C pre-processor macros
- The standard C library
- How to build a program, starting with a single hello.c (immediate, working result), then extending to a 2 .c + 1 .h project, which is enough to learn about compiling and link editing (see the small sketch after this list)
- How to structure larger projects (make and makefiles)
- What static libraries and shared objects are, and how to build them
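
As an aside, here is what a minimal version of that 2 .c + 1 .h project might look like (file and function names are just an example), with the build commands as a comment at the end:

/* greet.h -- the interface: what other .c files are allowed to see */
#ifndef GREET_H
#define GREET_H

void greet(const char *name);

#endif

/* greet.c -- the implementation behind that interface */
#include <stdio.h>
#include "greet.h"

void greet(const char *name)
{
    printf("Hello, %s!\n", name);
}

/* main.c -- uses the interface, knows nothing about the implementation */
#include "greet.h"

int main(void)
{
    greet("world");
    return 0;
}

/*
 * Build in two visible steps, to separate compiling from linking:
 *   cc -c greet.c              # -> greet.o
 *   cc -c main.c               # -> main.o
 *   cc -o hello greet.o main.o
 * A three-line makefile can then automate exactly these commands.
 */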

This list is the beginning of your own map of the Land of Software Development. It constitutes the 'core learning'; you'll add much more to it depending on:

- The functional area in which you want to use C (e.g. systems programming, GUI development),
- The design of your application (e.g. software architecture, technical architecture),
- The interactions of your application with the rest of the information system (e.g. connectivity and interfacing, deployment, operations)
- And the context of your work (e.g. open source project, agile team).

I'd say any book will do as long as it suits your taste, and no single book has to cover all aspects.
Besides books, you can also learn from open source projects, both as a source of inspiration and as a starting point for experimentation.

The standard C library is documented in man pages and you have the header files on your system, so all you need on your desk is a 'cheat sheet' with all available functions grouped by theme.

For the rest, you're already aware of the importance of practice, so it shouldn't be a problem. What is really difficult when you start learning something is to figure out what you need to learn: you don't know what you don't know before you learn it... Hence the list at the beginning of this post.

You may also need to learn later about the GNU tools collection, used in a great many open source projects, and about CMake and meson, also widely used.
 
Theory helps a lot. When you want to go somewhere, you need a map. Theory is that map.

This is horrible. It brings up pictures of zombies helplessly trying to follow the paths they used to tread in their lives.

In this instance: if I want to go somewhere, I need the coordinates of my position and of the target; then I can use my own brain to work out how to overcome obstacles. Much more fun that way. (But I noticed that the cherry trees in our vicinity are no longer picked from. That is because they are not recorded in Google Maps, so people do not notice them anymore.)

In software: the language is fully defined; there is nothing unknown that would require a map or anything like it. (If you need a good algorithm for something, well, just look for one, understand it, and use it. You don't need "theory" for that.)
When I go into the FreeBSD code, I never find that a lack of theory keeps me from understanding it. But what I almost always find is that there is a huge number of conventions (like how a device driver or a kernel module is hooked into the whole), and one needs to understand these to get anything done. One would indeed need a kind of map to understand them, but these are project-specific (= FreeBSD-specific) things, so "theory" will not help much.

The problem with academic stuff is that the people who provide it don't have to use it in production.

Well, then we might just have a naming discrepancy. The stuff that you can use in production I don't call "theory"; I call it best practice. There I agree: the research, study, and understanding of best practices is most important. (And in contrast to theory, which is considered abstract and mostly static, best practices can be continuously improved.)

Overall, we have far too many people strolling around and only trying to follow maps (of doubtful quality): when I go to the doctor with some ailment, he doesn't treat me anymore. Instead, he looks up my symptoms in some database to find the diagnosis, then he looks up the diagnosis in some other database to find the "allowed" treatments for it. I could do that myself (or let a robot do it); I don't need a doctor for that. (Obviously, those databases are created in the interest of the healthcare business in order to maximise revenue.) Damn, we have a real problem. :(

Stop believing in maps. Stop believing at all. Think for Yourself.

Addendum: as for the practical things You describe afterwards (mind maps, structured learning, etc.), I completely agree with those.
 
Well, then we might just have a naming discrepancy.

Likely. From what you explain in your post, I think we're talking about different aspects of coding and learning to code, and with different words. This happens all the time in online conversations, and just a little bit less in real-life conversations - including discussions occurring in the course of any IT project. ;)
 
I think we're talking about different aspects of coding and learning to code, and with different words.

I think coding and learning to code are one thing; finding the right algorithm that one will then code is another.

People can be very eloquent and speak the best English, yet be unable to say anything other than nonsense.

Even when programming trivialities like a program for parsing something, one notices how much theory helps.
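
A small sketch of what I mean (toy arithmetic expressions, single digits only, no error handling): the grammar is the theory, and the code follows it almost rule by rule.

#include <stdio.h>

/*
 * Grammar:
 *   expr   = term   { '+' term }
 *   term   = factor { '*' factor }
 *   factor = digit | '(' expr ')'
 * One function per rule -- the theory dictates the structure.
 */
static const char *p;   /* current position in the input */

static int expr(void);

static int factor(void)
{
    if (*p == '(') {
        p++;                    /* skip '(' */
        int v = expr();
        p++;                    /* skip ')' -- no error handling in this sketch */
        return v;
    }
    return *p++ - '0';          /* a single digit */
}

static int term(void)
{
    int v = factor();
    while (*p == '*') {
        p++;
        v *= factor();
    }
    return v;
}

static int expr(void)
{
    int v = term();
    while (*p == '+') {
        p++;
        v += term();
    }
    return v;
}

int main(void)
{
    p = "2+3*(4+1)";
    printf("2+3*(4+1) = %d\n", expr());   /* prints 17 */
    return 0;
}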
 
I also found out that people with no prior development experience have less trouble learning an object-oriented language than former mainframe developers.

I had big problems understanding what object-oriented programming is and what it is good for; I found it abstract inflation that makes programming more foreign to the machine and less efficient.

In the end, reading about TclOO and JavaScript, I understood that it makes sense for encapsulation, for avoiding having to deal with pointers, and for making big programs more readable and easier for many developers to work on together; but in spite of that I have never written an object-oriented program and perhaps I never will.

I learned programming with the pseudo machine language of an old pocket calculator, and I still think in terms of "goto" even when it is all hidden in "while" statements. I think it is a mistake to begin with anything other than the algorithmic paradigm, that is, to begin with the object-oriented, functional, or declarative paradigm. One must first get a feeling for how the CPU works. Any other way you are not teaching programming, but bloating.
 
I learned that the second edition of the K&R book uses the first ANSI standard, so it isn't all that bad. Original K&R syntax would be completely unacceptable nowadays.
Still, C89 is not the current standard; it was replaced by C99, C11, and C18. I would consider C11 sufficiently up to date, as C18 contains only error corrections.
 
I'm not spreading confusion. C has evolved since the K&R book so there are better resources to learn C from.

You are definitely spreading confusion. The minimal changes after C89, which could introduce a few insignificant and easily avoidable incompatibilities, and the few new features that were added do not make K&R worse for learning, not even a little worse; they are insignificant for judging whether the book is good or bad.
 
I learned programming with the pseudo machine language of an old pocket calculator, and I still think in terms of "goto" even when it is all hidden in "while" statements.

Dijkstra would like to have a word with you.
"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."
 
I learned programming with the pseudo machine language of an old pocket calculator, and I still think in terms of "goto" even when it is all hidden in "while" statements.

For C, I am not entirely against gotos for memory management (kind of like a poor man's RAII). The following code gives good examples of gotos in practice.

https://github.com/openbsd/src/blob/master/sys/dev/ata/wd.c

What I really don't like is a bunch of checks that each repeat the same cleanup code and exit early. It is so easy to miss one when maintaining or expanding the code a few years later.
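
To illustrate the pattern (a made-up example, nothing to do with wd.c): the cleanup is written once, in reverse order of acquisition, and every error path funnels through it.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical example: acquire a couple of resources, then release them
 * through a single exit path. Each failure jumps to the label that undoes
 * only what has already been acquired. */
static int process_file(const char *path)
{
    int ret = -1;
    FILE *f;
    char *buf;

    f = fopen(path, "rb");
    if (f == NULL)
        goto out;

    buf = malloc(4096);
    if (buf == NULL)
        goto out_close;

    if (fread(buf, 1, 4096, f) == 0)
        goto out_free;

    /* ... the actual work on buf would go here ... */
    ret = 0;

out_free:
    free(buf);
out_close:
    fclose(f);
out:
    return ret;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/dev/zero";
    return process_file(path) == 0 ? 0 : 1;
}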

Gotos are not nearly as confusing as the success/fail callbacks that JavaScript developers seem to think constitute "asynchronous code".
 
I had big problems understanding what object-oriented programming is and what it is good for

Too bad you haven't found a good explanation.

Basically, software development is a translation activity: you have to understand what your users want to do and translate it into a language a computer can understand, using concepts it can understand too.

This is absolutely not trivial, and the purpose of object-oriented languages is to offer developers a computer language that eases this process by raising the conceptual level available for the translation.

Object-oriented programming is intended for complex domains (i.e. in which many objects interact) and is particularly appropriate for software used by humans.

I have never written an object-oriented program and perhaps I never will.

Depending on your functional domain, you may never need to, for it wouldn't bring you significant benefits.
If you write device drivers, for instance, you don't need OOP.

One must first get a feeling for how the CPU works. Any other way you are not teaching programming, but bloating.

If you write device drivers or embedded software, you need that knowledge. You also need a very strong background in electronics. In such situations, using anything other than assembly language, C, or Ada is not helpful because it creates too much distance between the concepts of the programming language and those of the problem domain.

Technology is like a medicine: when using one, you must weigh its benefits and risks, just like with a chemical drug. Depending on the problem domain (= the disease + patient combination), you will choose different technologies (= medicines). There is no "one size fits all".
 
Dijkstra would like to have a word with you.

That was about the structured programming vs. goto debate, a product of the emerging structured paradigm at that time. Today it is a pointless discussion.

My point is something different: everything you program ends up as a goto machine program. Some languages translate more directly than others. I think C is a good compromise between a higher-level language and machine language; you can more easily imagine how the compiler treats your code.
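
A trivial illustration: these two functions are the same program; the second just makes the jumps explicit, roughly the way the compiler sees the first.

#include <stdio.h>

/* the structured version */
static int sum_while(int n)
{
    int total = 0;
    int i = 1;
    while (i <= n) {
        total += i;
        i++;
    }
    return total;
}

/* the same loop with the gotos made visible -- roughly the
 * test-and-jump structure the compiler generates */
static int sum_goto(int n)
{
    int total = 0;
    int i = 1;
top:
    if (i > n)
        goto done;
    total += i;
    i++;
    goto top;
done:
    return total;
}

int main(void)
{
    printf("%d %d\n", sum_while(10), sum_goto(10));   /* prints: 55 55 */
    return 0;
}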
 
Dijkstra would like to have a word with you.
"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."
Dijkstra is a dick.
I work with many excellent programmers who grew up on some form of BASIC; it's generally what piqued their interest in programming.
 