Favorite programming language(s)?

C++ exceptions and C setjmp/longjmp would be hard to implement, and would cause a huge performance penalty.
Yep, though given the similarities between C# and Java and their architectures, this complexity does seem to have been overcome via Microsoft's C++/CLI with /clr:safe.

I think Emscripten takes a massive hit with exception handling and has exceptions off by default.

(To clarify, I meant C++ -> JVM bytecode compilers, not a Java transpiler.)
 
Having the smarter algorithm wins.
Except when it doesn't.

I took some computer science classes about 40 years ago. We were taught to always look for the best algorithm, and the classic didactic example was sorting: obviously O(n log n) algorithms such as quicksort or merge sort are the best, and you provably cannot do better (at least for comparison-based sorts). And quicksort is better than merge sort, because it has less memory overhead. So 40 years ago, I learned: quicksort wins, because of the genius of the algorithm.

Well, that turns out to be wrong. For small data sets (n = 5 or 10), simple O(n^2) algorithms are actually faster than quicksort, because quicksort is pretty complicated, so the constant out front is several times larger for quicksort. Also, some of those simple algorithms (in particular insertion sort) have much better memory access behavior, and since today, in rough numbers, L1 cache is 10x faster than L2, which is 10x faster than L3, which is 10x faster than main memory, any sorting algorithm that can stay mostly in cache works better, even if its ultimate performance for large data sets sucks. So today, a good sorting algorithm may be implemented as follows: If length(list) == 0 or 1, return, because it is already sorted. Else if length(list) < 10, run insertion sort. Else run quicksort or merge sort.
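
A minimal sketch of that dispatch in C (the cutoff of 10 and the function names are just illustrative, and the C library's qsort() stands in for the general-purpose O(n log n) sort):
Code:
#include <stdlib.h>
#include <stddef.h>

static int cmp_int(const void *p, const void *q)
{
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

static void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {  /* shift larger elements up one slot */
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}

/* Dispatch on size: trivial, small, or general case. */
void hybrid_sort(int *a, size_t n)
{
    if (n < 2)
        return;                             /* 0 or 1 elements: already sorted */
    else if (n < 10)
        insertion_sort(a, n);               /* small: low constant factor, stays in cache */
    else
        qsort(a, n, sizeof a[0], cmp_int);  /* general-purpose O(n log n) sort */
}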

The next thing that surprises one is this: In the real world, many input data sets to sort are already completely sorted. Or they are very close to sorted, with perhaps just two elements switched somewhere, or perhaps they are actually the concatenation of multiple sorted data sets, so they have very few places where they are mis-sorted. So before running the actual sort, run a single linear O(n) pass that counts how many out-of-order entries there are, and whether they could be fixed by just swapping two elements. If that count is very low, then don't run a generic sort algorithm, but run either a bubble sort or a linear merge pass that starts at the break points.
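
A rough sketch of that pre-pass (what counts as "very low" is left to the caller):
Code:
#include <stddef.h>

/* One linear pass: count adjacent out-of-order pairs.  0 means the input
 * is already sorted; a count of 1 or 2 usually means "nearly sorted"
 * (e.g. two elements swapped, or two sorted runs concatenated). */
size_t count_breaks(const int *a, size_t n)
{
    size_t breaks = 0;
    for (size_t i = 1; i < n; i++)
        if (a[i - 1] > a[i])
            breaks++;
    return breaks;
}

A caller would return immediately at zero breaks, run the cheap fix-up when the count is tiny, and only fall back to the general sort otherwise.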

Finally, whether for example quicksort is faster than merge sort on modern real-world machines depends crucially on how the sort is implemented, and whether the swaps and comparisons cause cache misses or non-sequential memory accesses. And this is where it changes from science to black magic and benchmarking.
 
I use Shell sort (Knuth) for most of what I need sorted, both for simple types as well as record types.
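
For reference, a bare-bones textbook sketch of it for plain ints, using Knuth's 3h+1 gap sequence (for record types the comparison would be on the key field instead):
Code:
#include <stddef.h>

/* Shell sort with Knuth's gap sequence (1, 4, 13, 40, ...). */
void shell_sort(int *a, size_t n)
{
    size_t gap = 1;
    while (gap < n / 3)
        gap = 3 * gap + 1;             /* largest Knuth gap below n/3 */

    for (; gap > 0; gap /= 3) {
        /* gapped insertion sort */
        for (size_t i = gap; i < n; i++) {
            int key = a[i];
            size_t j = i;
            while (j >= gap && a[j - gap] > key) {
                a[j] = a[j - gap];
                j -= gap;
            }
            a[j] = key;
        }
    }
}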

Some time back, I wrote a routine to generate the N! permutations of a string of characters.
The intent was decoding the Jumble32 puzzle in the newspaper.
"palindromes" has 11 characters, so 11! = 39,916,800 possible permutations.

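I don't have the original routine at hand, but Heap's algorithm is one way to generate those permutations in place; a sketch (in practice each permutation would be fed to the dictionary lookup described below rather than printed):
Code:
#include <stdio.h>
#include <string.h>

/* Heap's algorithm: permutes the first k characters of s in place and
 * calls visit() on the whole string for each arrangement.  For an
 * 11-letter word that is 11! = 39,916,800 visits, duplicates included
 * (repeated letters are not culled here). */
static void heap_permute(char *s, size_t k, void (*visit)(const char *))
{
    if (k <= 1) {
        visit(s);
        return;
    }
    for (size_t i = 0; i + 1 < k; i++) {
        heap_permute(s, k - 1, visit);
        size_t j = (k % 2 == 0) ? i : 0;    /* Heap's swap rule */
        char t = s[j]; s[j] = s[k - 1]; s[k - 1] = t;
    }
    heap_permute(s, k - 1, visit);
}

static void show(const char *s) { puts(s); }

int main(void)
{
    char word[] = "palindromes";
    heap_permute(word, strlen(word), show);
    return 0;
}
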
At first, I wanted to sort the array of permutations to cull duplicates.
None of my sorts provided the fast turnaround time I required.

To avoid the lengthy sort time, I then tested applying many grammar rules to each of the 39M entries in the array.
Such as eliminating all known consonant pairs that don't exist in English.
This proved to be an even bigger time sink than sorting.

In the end, I left the array unsorted.
The authoritative comparison is the Scrabble dictionary, which is loaded into memory as a sorted array.
I then created 26 indexes into the dictionary with Start and End index for ranges "Axxx", "Bxxx" and so forth.

I wrote a binary search of each permutation of "palindromes" into the indexed range of the Scrabble dictionary.
This required 279,496 comparisons to find a match.
Total execution time is 0.51 seconds, far, far faster than sorting the permutation array just to cull duplicates.
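
Roughly like this, as a sketch (the names are mine, and it assumes the dictionary is already in memory as a sorted array of lowercase words):
Code:
#include <stddef.h>
#include <string.h>

/* Sketch of the first-letter index plus binary search described above. */
struct letter_range { size_t lo, hi; };     /* half-open range [lo, hi) */

static struct letter_range range_for[26];

void build_index(const char *const *dict, size_t ndict)
{
    size_t i = 0;
    for (int c = 0; c < 26; c++) {
        range_for[c].lo = i;
        while (i < ndict && dict[i][0] == 'a' + c)
            i++;
        range_for[c].hi = i;
    }
}

/* Binary search for word, but only within the range of its first letter. */
int in_dictionary(const char *const *dict, const char *word)
{
    size_t lo = range_for[word[0] - 'a'].lo;
    size_t hi = range_for[word[0] - 'a'].hi;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        int cmp = strcmp(word, dict[mid]);
        if (cmp == 0)
            return 1;                        /* found a real word */
        if (cmp < 0)
            hi = mid;
        else
            lo = mid + 1;
    }
    return 0;
}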
 
For the opposite end of crazy sort performance, look on the web for TeraSort. It is typically the sorting of a terabyte of data (10 billion records, each 100 bytes long), and typically done on a cluster computer. My (unfortunately now departed) colleague Jim held the world record in this several times, getting it done in about 10 minutes. It took him several weeks each year to update and polish his benchmark program.
 
Back in the day, there were discussions about whether "real" programmers coded in FORTRAN, or assembler, or whatever. There was a humorous article, "The Story of Mel" (https://www.pbm.com/~lindahl/mel.html), about a guy who programmed a drum-memory-based computer in machine code. Because the drum is moving during the execution of the instruction, the instructions can't be sequential; they must be spaced out (with built-in jumps) so that when the computer is ready to execute the next instruction, it is just coming up to the read head. There are special assemblers that take care of this. (For example, the IBM 650 Magnetic-Drum Data Processing Machine had SOAP, the Symbolic Optimal Assembly Program.) But this guy didn't trust the assembler to do its job, so he preferred to write machine code and put the instructions where he wanted. To implement delays, he would place the instruction so that when it was time to execute it, it had just passed the read head, so the drum would have to make another revolution. He called this the "most pessimum" placement of the instruction, as opposed to the optimum placement.

This got me thinking about one of my early programs, when I was learning FORTRAN in high school. (Our math teacher let a few of us run our programs on the school district's IBM 1401.) It could very well be considered a most pessimum algorithm. It calculated square roots by starting with one and squaring it (resulting in 1, of course), and comparing it with the number whose square root was desired. If it was larger, it would subtract 0.1 and try again; if it was smaller, it would add 0.1 and try again. After that, it would repeat the procedure with 0.01, then 0.001, etc.
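
Reconstructed from memory in C rather than FORTRAN, it was roughly this (the sketch assumes the input is at least 1, so the walk terminates):
Code:
#include <stdio.h>

/* The "most pessimum" square root: nudge the guess up and down by 0.1,
 * then 0.01, then 0.001, ... until the square brackets x. */
double slow_sqrt(double x)
{
    double guess = 1.0;
    for (double step = 0.1; step > 1e-6; step /= 10.0) {
        while (guess * guess < x)
            guess += step;      /* walk up until we overshoot */
        while (guess * guess > x)
            guess -= step;      /* walk back down until we undershoot */
    }
    return guess;
}

int main(void)
{
    printf("%f\n", slow_sqrt(2.0));   /* roughly 1.414 */
    return 0;
}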

Now, IBM 1401s were never considered speed demons, but my program took about eight seconds per square root it calculated. I think one would be hard-pressed to find a slower algorithm. (At the time, I was more interested in programming than in math. Otherwise, I would have looked up the algorithms people use when calculating square roots by hand, and would have had a much faster program.)
 
Better algorithms are nice. But to argue that a slower language might be a good choice because it is easier to implement better algorithms has pitfalls. A language like Python is about 80 times slower than an overhead-free language. It takes quite a bit of better algorithms to make up for that.

Programs with smart algorithms can also be more vulnerable to tomorrow's input data not meeting today's predictions. Brute-force programs are easier to performance-diagnose.
 
The tl;dr is C, C++, and assembly (if only it were more portable, but I still study it some). Haven't looked into Rust much, but I feel its portability and stability are likely a bit limited in its current state. Got Mom to let me get our old Atari 130XE, which we used before I could even read, out of the attic. Started teaching myself programming in the 4th grade (now I sound smart) to be able to cheat/control video games (welp, credibility gone now). Mostly just did Atari BASIC with a little exposure to PEEK/POKE to read/write memory, but didn't understand the assembly calls behind it. We already had a number of Atari books that covered programming in Atari BASIC.

I remember (probably in 4th grade) speaking to an advanced math student who tried to start explaining algebra basics to me, but I got lost with words like "variable", then went home and continued using variables in code I was reading/writing. In junior high school (or was it elementary?) I found a couple of programming books in the library. Looking at both, I quickly found mistakes in their code and, like a bad student does, I wrote in a library book: corrections, in case anyone was ever trying to learn from the flawed examples.

In junior high school I had bought my own TI-83 graphing calculator with money from savings; my parents wouldn't get it for me as they thought I just wanted it to play games, but I wanted it as a nice calculator and a programmable portable pocket computer. I had already messed with GW-BASIC on Dad's 8088 and probably touched some QBasic. In high school it got stolen my first year and I replaced it with a TI-86. With the calculator it was on to learning TI-BASIC. TI-BASIC required working with normally frowned-upon programming techniques just to write code properly: gotos were required, and I had to have some lines of code at the destinations to properly close off open if tests, loops, etc. after I had used a goto to jump away from them. I also left off closing quotes, parentheses, etc. whenever possible, as the interpreter auto-closes them when interpreting, and all those bytes add up quickly in the small amount of memory the calculator uses to both store and run programs. If memory fills up too much it also runs slower and eventually crashes; a common issue I saw from other coders jumping out of flow control without ever ending it.

My goal was writing more efficient programs and gaining more control of the machine. By my 2nd year of high school I was allowed to take programming. They had 3 classes, each 1 year long, Pascal > C/C++ > Java, each more advanced than the last. My main interest was C/C++, as a martial arts classmate from elementary school days had recommended years before that I get a compiler and learn that language for my goals of speed and control. Unfortunately, at least at the start, they couldn't focus on my goals but had to start somewhere. After I completed Pascal they changed the curriculum, making C/C++ the beginning class. I then took their beginning class a 2nd time, but in C/C++; kids couldn't keep up with some of those basics, so there was no chance the instructor could teach me about pointers, linked lists, etc. Don't remember why, but I didn't end up doing programming my 4th year; maybe I got back into band, or probably took another class that conflicted with it. If I recall, I spent money on the class so my high school programming would transfer to a nearby college as credits toward that study. I had picked up a C/C++ book and an x86 assembly book and did some self-study, but I don't feel I really learned a lot, unlike teaching myself BASIC from books on the Atari. I got a TI-89 at some point (3rd year?) but only did a little programming with it.

On to college: the school I went to was a community college that didn't accept my programming credits and made me start over with the basic 1st class. This time it was Java. I had interest in learning it too, but didn't know a reason to want it more than C/C++. Not only was it a basic programming class (again), the instructor went so far as to respond to the idea of writing more efficient code by saying, "just throw more hardware at it to run it faster". The class ended up using a proprietary library, which came with the book, providing many basics like console input/output; I would have to throw away a bit of knowledge or carry that library with me to continue Java after the class. I also learned that both compiling and running the language were slow (this was around 2002-ish), which seemed counterproductive to the idea of writing more efficient programs. I think it was my 2nd year of college that I got exposed to HP graphing calculators and got some of them.

I looked at the rest of the school's curriculum (a little dabble in C/C++ and assembly would come later, but not for x86). Things weren't looking better for when I'd leave community college and go on to the nearby main college it was preparing me for; most of it remained Java focused. After that first college class I ended up leaving computer science, or whatever the degree was they had me headed toward.

I've done only a little to further study C/C++ and assembly, with a more recent goal being to learn C without the ++ as personal study. I seem very slow and bad at learning it now; maybe it's me, maybe the books, and maybe it's just that much harder, but I'd guess it's a mix of the first two, as I haven't done much with it beyond what I did with BASIC so many years before.

After all that, C, C++, and assembly are about all I really have any motivation to read and learn. I'll try to finish K&R C, then look more at Modern C (it seems to have good knowledge but poorly chosen suggested tasks for self-study).

As a side note, I wish C compilers would be able to return a more optimized version of the code instead of only putting the optimization into the executable so that I could learn from it, tweak it, and minimize compiler effort to do it on repeat runs.
 
Better algorithms are nice. But to argue that a slower language might be a good choice because it is easier to implement better algorithms has pitfalls.
I would also argue that C's direct access to blocks of memory makes it not just easier but actually possible to implement better algorithms.
 
I would also argue that C's direct access to blocks of memory makes it not just easier but actually possible to implement better algorithms.

Yeah, that is another can of worms. If you have to translate "C data" (raw data) then your performance is going to tank.

One reason why SBCL-compiled Common Lisp can compete with C (mostly) is that it can read raw C structs directly without converting or tagging that data.

Either way, if you have to translate that data you have brain overhead, as you say, and that can get in the way of the supposed advantages of higher-level languages. Non-trivial amounts of data to be fed into slow languages are a problem. That is also why TensorFlow has a C++ interface in addition to the main Python one.
 
As a side note, I wish C compilers would be able to return a more optimized version of the code instead of only putting the optimization into the executable so that I could learn from it, tweak it, and minimize compiler effort to do it on repeat runs.
No you don't. You won't be able to make heads or tails of it most of the time, and in extreme cases it will be akin to reading from the strange tomes featured in the works of H.P. Lovecraft. You won't be able to change anything.
 
No you don't. You won't be able to make heads or tails of it most of the time, and in extreme cases it will be akin to reading from the strange tomes featured in the works of H.P. Lovecraft. You won't be able to change anything.

It would be terrible. Imagine something that resembles typical JavaScript!

Either that or it would just be a big load of duplicated (hard-coded) values from loop unrolling. Again, like most JavaScript in the wild ;)
 
No you don't. You won't be able to make heads or tails of it most of the time, and in extreme cases it will be akin to reading from the strange tomes featured in the works of H.P. Lovecraft. You won't be able to change anything.
Me making heads or tails of it is a separate issue from having it as an option to learn from. It would be more about getting feedback like "this function was inlined: it takes more memory but now runs faster" or "this mathematical expression with constants and their operators has been replaced with the result of those operations instead of calculating it each run", with the difference of actually seeing it instead of just getting a comment about it. Compilers do things that I know of and other things that I do not know of. Sometimes optimizations take quite a bit of time, so it would be nice to know which ones did or did not benefit the output instead of blindly applying all of them. Probably just a workflow issue of my own, but it seems debugging compiler optimizations would be simpler when at least some are expressible in the original language. There are readability, maintainability, and expandability reasons to have certain optimizations applied to code instead of writing the code with the optimizations in it, but there is plenty of code where such optimization can be easier to manage and read.
 
Mirror176, you have two options here. Compile with debug info and run objdump -dS, which shows you the source the instructions came from; then increase the optimization and see what changes. Or go for a research compiler framework; these often have a "dump as C" pass.
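
For example (the file name and exact flags are just illustrative; both gcc and clang accept them):
Code:
/* sum.c -- a tiny function to compare across optimization levels, e.g.:
 *
 *     cc -g -O0 -c sum.c && objdump -dS sum.o > sum-O0.lst
 *     cc -g -O2 -c sum.c && objdump -dS sum.o > sum-O2.lst
 *
 * objdump -dS interleaves the original source lines with the generated
 * instructions, so diffing the two listings shows what the optimizer did
 * (at higher levels this loop often comes out unrolled or vectorized). */
long sum(const int *a, long n)
{
    long total = 0;
    for (long i = 0; i < n; i++)
        total += a[i];
    return total;
}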
 
Agree with Crivens here. If you want to debug at that level, looking at the generated assembly code is pretty much a must. And sometimes not even sufficient: you also have to look at how those instructions are scheduled, and how many clock cycles they take. I think I posted the anecdote above: I'm a pretty experienced assembly programmer (or at least I was, 30 years ago), I replaced a piece of code with assembly, and it got slower, because the compiler was better able to keep intermediate values in registers, use simpler (faster) instructions, and reason about data flows. There is a reason low-level programmers tend to have the "architecture manual" for the CPUs they're using on a bookshelf right at their desk.
 
GW-BASIC was my first programming language, available to me in the 1980s on my Amstrad PC1512 with MS-DOS 3.2.
I tried to program a 'Defender of the Crown' clone in it. Several thousand lines of "IF...GOTO..." in a single source file made me give up one day; I lost both overview and grip completely. Worst spaghetti code ever. (I was 14 and had no education in programming whatsoever.)

There still seem to be interpreters (compilers?), but no port on FreeBSD as far as I checked (isn't GW-BASIC proprietary to Microsoft?), and users still using it. And from what I read, it was developed further.
Anyhow, unless one has lots of self-written code, I don't really see what one would do with it.

But okay, last week I learned that some actually write 'OSs' in Brainfuck. Well... if they want to.
 