Where to find 'Real' programmers online?

In many cases, if a function is fully successful, you don't want cleanup. You return the newly allocated memory / resource as well as the other allocations underpinning it, whereas a defer would incorrectly clean it up even on success, leading to a dangling pointer.

So defer could be nice... but for it to be really useful, it would need to be a "cancellable defer". And that is rare in programming languages.
Defer is useful for dealing with temporary stuff, e.g. "sharedVar.Lock(); defer sharedVar.Unlock()". It is not for when you want an effect to persist past a function return. If, for example, you must return 3 open files from a function, or none in case of error, don't use defer (in case the first 2 opens succeeded and the 3rd one failed).
 
Several decades ago, I was working on a language translator for a DSL that described electrical wiring diagrams of vehicles. It was based on the XPL compiler from “A Compiler Generator”, by McKeeman, Horning, and Wortman. The DSL was described by a bunch of BNF rules, which were used to generate parsing tables. When the compiler encountered a syntax error, it didn’t have a good way of dealing with it; all it knew was that none of the rules applied. The default behavior was to assume that there was an erroneous token and delete it. It usually ended up deleting the rest of the program. This was particularly bad, because you could batch up several compiles, and it would delete the subsequent programs.

There was a divider line that separated the different programs in the batch. If the compiler encountered a divider line while it was deleting erroneous tokens, I had it do a GOTO to a global label, to code that would restart the compiler, because it was too difficult to unwind the state of things when it was several subroutines deep. (It could have been done, but the architecture I was dealing with did not make it easy.)

It felt really slimy, not only doing a GOTO, but a GOTO to a global label, two sins at once. (Glenford J. Myers would have been so disappointed in me.)

But it fixed the problem, and I didn’t have to completely rewrite the compiler framework.
 
It's funny, we keep talking about GOTO and ignoring the C++ elephant in the room: exceptions. I personally LOVE exceptions, but again, only when used correctly.
I'm a big fan of exceptions. And it's often annoying that a lot of "safety" subsets tend to assume the language is magically safer without them (sometimes it is, but not always).

That said, I have seen some crappy parsers throw exceptions as part of their normal success flow or to escape nested loops. That stuff is pretty gross!

But why does every minor feature in C++ have to have a 33 page explanation?
C++ is the favored language of pedantic language lawyers.

Agreed. The C++ community tends to be extremely noisy, with ideas pulling in so many different directions. The language is so (overly) complex these days that maybe it *does* need every feature to be painstakingly considered? I have a little talk coming up, and I am quite terrified of the questions. I have spoken at software engineering conferences, but never a C++-focused one, and I am sure the experts there will pick apart good chunks of the proposed tool / library :).
 
The key is predictability. When GOTOs are used without discipline, the code path can go anywhere. For example, I have seen code that GOTOs into the middle of a DO loop, because they want to use some of the code in there, then has an IF statement that jumps out if they are not doing iterations. (That's why they invented subroutines, BTW.) Dijkstra preferred languages that helped you not shoot yourself in the foot.

I spent about 35 years coding in mainframe assembly language. I liked to pretend I was a compiler: my loops and IF statements were always coded the same way, so I always knew where the code was going to exit. (That is a problem with, for example, doing GOTOs to jump out of loops. If I am testing, I want to be able to put a breakpoint at the end of a loop and know that it will always hit that breakpoint however the loop exits. [Sometimes that means I have to set a flag to indicate how the loop was exited, and then use an IF statement on the flag to figure out what to do next. But this is better than jumping out of the loop, because I know it will always hit my breakpoint.])

I should probably add that I seem to be one of a very few people who do not like the various "structured programming" macros that are available. I use the standard constructs, but I generate them by hand. Why? The structured programming macros abstract you from the details, but you end up coding something like IF (A, GT, B, CH); The details have been hidden, but then you have to tell it to use a Compare Halfword instruction, because the fields are halfwords. Now you are down in the details again. To me, it's like when pilots are flying a plane with the autopilot, and suddenly it has a problem and turns control back to the pilot. The pilots have not been paying attention, and now they have to fly the plane, and they don't have a good feel for where they are, or what has been going on. (BTW, I always felt that the macros ought to be able to figure out which compare instruction to use, based on how the fields were designed. )

As always, YMMV.
 
I'm a big fan of exceptions.
So am I. They allow writing code much more clearly, because correct error raising and handling becomes either very concise (just say throw Exception('foo bar went wrong')), or even invisible (most functions don't need try/catch blocks, because most exceptions filter way up the stack).

EXCEPT: (ha ha, pun)

In languages where memory management and move semantics are explicit (the C family, Rust), exceptions create the havoc of having to get out of lots of blocks, having to run destructors, and making sure everything is destroyed/released/unallocated/closed/... correctly. In the standard C++ RAII idiom, where we rely on destructors, that seems to work if you don't look carefully. The problem arises when the act of destroying/closing/releasing can itself fail and raise more exceptions. That's why one of the coding rules is "never allow a destructor to throw". But what if the destructor has to perform an action that can actually fail, such as closing a file, which in turn requires writing the last buffered content first? C++ cannot handle that in a reasonable and human-readable fashion.

Second, this only works well if exceptions are used solely for things that are truly exceptional (ha ha, another pun) and don't need to be handled except at very few places (ideally one, in a single-threaded program). Actions that commonly fail and need good diagnostics or error recovery (like retries) should not use exceptions, which mostly excludes using them for file or disk I/O (file systems can be full, disks will have errors) or network communication (networks are famously fickle). If you use them for these common cases, the code will be littered with try/catch blocks. Honestly, that's not a big deal, since using an error return and an if/else block is just as bad.

But where I really get upset is when exceptions are used for common and expected situations. My favorite (anti-favorite?) example is that Python uses an exception (StopIteration) for iterators to signal that they have finished. This means that my preconception (if I see try/catch or try/except blocks, there must be error handling in there) is wrong, and I can no longer visually distinguish normal flow from bad situations.

Still, on balance exceptions are better than the alternatives, albeit not perfect.
 
It's funny, we keep talking about GOTO and ignoring the C++ elephant in the room: exceptions. I personally LOVE exception, but again, only when used correctly.

Just don't use exceptions for regular flow control. Apart from being bad style, it is also horribly slow to actually throw exceptions (as opposed to just setting them up). The flip side is also fairly unique to C++: unthrown exceptions are very fast.

Code:
       1.2 nsec/call        1.2 user        0.0 sys: atoi
      75.1 nsec/call       75.1 user        0.0 sys: snprintf
     153.4 nsec/call      153.4 user        0.0 sys: snprintf_float
     134.8 nsec/call      134.8 user        0.0 sys: fnmatch
      20.1 nsec/call       20.1 user        0.0 sys: condvar_signal
      15.7 nsec/call       15.7 user        0.0 sys: mutex_lock_unlock
       6.0 nsec/call        6.0 user        0.0 sys: pthread_mutex_trylock
      26.8 nsec/call       26.8 user        0.0 sys: gettimeofday
     195.9 nsec/call      195.9 user        0.0 sys: strncpy
       5.3 nsec/call        5.3 user        0.0 sys: strchr
     216.1 nsec/call       15.5 user      201.8 sys: getrusage
      90.7 nsec/call       23.1 user       67.1 sys: read
     142.4 nsec/call       38.9 user      103.5 sys: read1bdevzero
     241.8 nsec/call       32.9 user      208.9 sys: read8kdevzero
      56.9 usec/call      298.4 user    56554.5 sys: read2mdevzero
       4.1 nsec/call        4.1 user        0.0 sys: rand
       2.7 nsec/call        2.7 user        0.0 sys: random
       5.1 nsec/call        5.1 user        0.0 sys: floatrand
    5729.5 nsec/call     5720.0 user        0.0 sys: cpp_testhrow_throw_48
    5362.4 nsec/call     5373.1 user        0.0 sys: cpp_testhrow_throw_24
    5244.3 nsec/call     5236.3 user        0.0 sys: cpp_testhrow_throw_12
    5100.3 nsec/call     5094.5 user        0.0 sys: cpp_testhrow_throw_4
    5055.9 nsec/call     5063.4 user        0.0 sys: cpp_testhrow_throw
       6.0 nsec/call        6.0 user        0.0 sys: cpp_testhrow_no_throw
       4.8 nsec/call        4.8 user        0.0 sys: cpp_testhrow_no_possible_throw
       2.2 nsec/call        2.2 user        0.0 sys: cpp_testhrow_no_cleanup_no_throw
       1.2 nsec/call        1.2 user        0.0 sys: cpp_testhrow_no_cleanup_no_possible_throw
From https://github.com/cracauer/ulmbenchmarks

So, actually throwing an exception in C++ is 1000x slower than not throwing it.

This is clang on FreeBSD, gcc is not significantly different.
 
That said, I have seen some crappy parsers throw exceptions as part of their normal success flow or to escape nested loops. That stuff is pretty gross!
I'm not sure how common knowledge it is, but Stroustrup himself defined exceptions as "just another generic mechanism of program execution", not for the explicit and exclusive use of catching errors/exceptions. It seems that the restrictions on their more generic use have grown out of several generations of "best practices".
 
Just don't use exceptions for regular flow control. Apart from being bad style, it is also horribly slow to actually throw exceptions (as opposed to just setting them up). The flip side is also fairly unique to C++: unthrown exceptions are very fast.
Well, I mean, that does make sense and is expected if you are thinking like a full-stack computer scientist. Of course exceptions are going to be "expensive".
 
Agreed. The C++ community tends to be extremely noisy, with ideas pulling in so many different directions. The language is so (overly) complex these days that maybe it *does* need every feature to be painstakingly considered? I have a little talk coming up, and I am quite terrified of the questions. I have spoken at software engineering conferences, but never a C++-focused one.

My beef with C++ is the amount of writing you have to consume just to use many of its features. Obviously for this you don't read the lawyerisms. The point being that even the purely practical side is large.
 
I'm not sure how common knowledge it is, but Stroustrup himself defined exceptions as "just another generic mechanism of program execution", not for the explicit and exclusive use of catching errors/exceptions. It seems that the restrictions on their more generic use have grown out of several generations of "best practices".
Interesting. I recall in one of his books(?) he mentioned that using an exception to escape a deeply nested loop was "cute", but then went on to discuss other approaches to doing so. I have no idea which book though... It was certainly an earlier one.

Makes sense. "catch" is quite generic, rather than try/fail or try/fault, which I recall seeing in the C++/clr generated IL much later on, once exceptions for error handling only became a more established idea.
 
Thus the shame. It goes to how they teach computer science these days.
There are universities that begin / began with a functional language.

I am not a computer scientist. I first learned to program on a pocket calculator, later FORTRAN, and
took a lecture on computer architecture (a real computer, without a microprocessor). It was a different curriculum.
 
The reason Dijkstra didn't like goto is that it makes formal verification very difficult, not that goto is inherently evil.
 
Programming in Fortran all day sounds like a kind of punishment.
Well, no. Quite the opposite.
Fortran is actually way easier to learn and use than most languages, including C or C++. And that without losing any functionality, other than the ability to shoot yourself in the foot - that you can't do in modern Fortran (no crazy suicidal things with pointers, no header hell, no crap). And the executable is as fast as it gets.

You are probably mixing up Fortran with FORTRAN - a common mistake many people make. FORTRAN (the old versions of the language, up to 77) was fine for its day, and the only sane way to do scientific computing at the time. Today, you do that with the modern version of the language, Fortran. You get excellent array features built in, native complex arithmetic, functional or object-oriented programming, the best module implementation ever, fast compile times, and much more - you name it, it has it. Not to mention the huge library of mathematical solvers/tools available; your SciPy, R, Matlab, Scilab, Octave programs actually use it in the background.
Kind of hard to call all that a punishment. I would actually call programming in C or, even worse, C++, C#, Java, etc. a punishment in general, and a torture especially in scientific computing.

For me, it's pretty simple: if the program is a low-level library, a driver, or an operating system, C it is. If not, Fortran is the way to go - well, unless you like being tortured for some weird reason. I like Assembly too, but let's be honest, it's not really needed nowadays; still fun though, for old CPUs, such as the 6502 or Z80.
 
I once had the chance to work with Fortran on a Cray, and the debugger was absolute gold. Old f77 and earlier were really constrained when it came to flexibility, as there was no dynamic memory to be had. You just made the arrays big enough. How is it today? I would need to check.
 
Might programmers hang out on VRChat? It seems like there'd be an overlap with VR, custom worlds/code with VRChat, and being able to freely express yourself with avatars
 
Well.......back in my day..........
I didn't hang out anywhere. I had my head buried in the docs, online articles about new stuff, and blogs. The FreeBSD mailing list and this forum. I often talked to the W3C and WHATWG people directly because I did web stuff. One could tell I was busy when I didn't show up here often, though I come here any time I need a break.

When I'm programming, I don't want distractions among personalities with arguments and conjecture. I just want cold, hard facts. Of course, opinions get mixed into all that, so one has to put up with it to get to the end, but you have to be careful about where you go.

I think it's only been four years since I shut my web dev company down and I'd have to think hard to remember any forums I went to.
 
I once had the chance to work with Fortran on a Cray, and the debugger was absolute gold. Old f77 and earlier were really constrained when it came to flexibility, as there was no dynamic memory to be had. You just made the arrays big enough. How is it today? I would need to check.
Gone are the times you had to make the arrays "big enough". Starting with Fortran 90, arrays of any rank can be allocatable. Not only that, but they are also automatically deallocated when they go out of scope (same for pointer arrays). There is no need to manually deallocate dynamically allocated arrays, unless they are declared in the main program.
This is not C; a memory leak is not an easy thing to create.
 
Gone are the times you had to make the arrays "big enough".
In the '70s, there was a civil engineering package called COGO ("COördinate GeOmetry"). In order to do large, dynamically allocatable storage, it took advantage of a quirk of the IBM FORTRAN E compiler. In the compiled code, there was one register that always pointed to the COMMON area. By defining a small COMMON area, then having an assembler subroutine that allocated a huge area of storage and put its address in the register, they could have as much storage as they wanted.

Unfortunately, the FORTRAN E compiler was the only one that had that quirk, and tying themselves to the FORTRAN E compiler was a limitation. (IBM mainframe compilers used to have a letter that indicated how much main memory the compiler needed to run, which was important in the pre-virtual-storage days. With larger size came more features. The FORTRAN G compiler was pretty much the standard. FORTRAN H was an optimizing compiler, and had some mods that had been contributed by the Stanford Linear Accelerator Center [SLAC].)

While I was refreshing my memory about the FORTRAN H compiler, I ran into this interesting statement in a document from SLAC:

"...the H compiler itself is compiled in Fortran H..."

I did not know that.
 
Not often that we see someone properly using the diaeresis!
When I was growing up in the sixties, it was common to see a diaeresis in words like coördinate, coöperate, and reëntrant. These days The New Yorker is about the only mainstream publication that still uses them. I think they perform a useful service, so I continue to use them. (I am pretty sure that were it not for the diaeresis, I would have grown up pronouncing reëntrant as reen-trant.)
 