Favorite programming language(s)?

If I had to limit my choice, I'd pick from the University of Cambridge's Eureka magazine, which on page 21 of Volume 29 describes the SPUDSAC I computer. There it states: "Most programming is done in Spudsac Tomatocode, which is similar to ordinary language."
That ranks up there with BEDSOCS (a 1970s simulation language that ran on an HP 2100).


It appears that there was a later version written in C that ran on the PDP-11:

 
First, OO and functional are almost orthogonal concepts; you can combine them perfectly well. Nobody ever said OO means mutable objects, for example.

Second, the "classic" OO (which combines object-orientation with a procedural paradigm) already allows to structure your code quite nicely, while not having to abstract from the real machine "too much". You can do OOP in machine code / assembly, if you like. You won't ever find a good way to do functional programming in machine code.

Disclaimer: No judgement whatsoever involved here.
 
I thought OO was the whole raison d'être of C++; remember "C with Classes"? So now it's a functional programming language?

Early C++, where you had to use OO even to do collections, was absolutely terrible. Generic programming with templates is much better.

Templates are a purely functional programming language (at compile time), one where there is just one type: types. Still, they are the best you've got in C++ to get work done safely.
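
Here's a minimal sketch of what "purely functional at compile time" means in practice: computation happens by template instantiation, with a specialization playing the role of the base case (example mine, not from the thread):

    #include <cstdio>

    // Compile-time factorial: "evaluation" is template instantiation,
    // and a base-case specialization stands in for pattern matching.
    template <unsigned N>
    struct Factorial {
        static constexpr unsigned value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {
        static constexpr unsigned value = 1;
    };

    int main() {
        // Computed entirely by the compiler; the binary only prints a constant.
        static_assert(Factorial<5>::value == 120, "evaluated at compile time");
        std::printf("%u\n", Factorial<5>::value);
    }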

That is why Lisp is so much better - you have a real, complete language at compile time - Lisp.
 
Would be interesting to see examples where casting to/from `void *` is required in C++.

Type erasure. You don't want your template widget to get instantiated into a different type for everything it handles, so you store a `void*`. Then when you need to use it, you cast it back. This should be safe, as it's all done by the compiler, which has all of the type information.
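
A minimal sketch of the pattern (all names are made up for illustration): the `void*` is cast back inside a function that was instantiated with the real type, so the round trip stays type-correct:

    #include <cstdio>

    // Minimal type erasure: one non-template Callback type stores any payload
    // as a void*, plus a function pointer that remembers how to use it.
    struct Callback {
        void* data;                // erased payload
        void (*invoke)(void*);     // casts back to the real type inside

        template <typename T>
        static Callback make(T& obj) {
            return { &obj, [](void* p) {
                // Safe: this lambda was instantiated for exactly this T.
                static_cast<T*>(p)->run();
            } };
        }
    };

    struct Printer {
        void run() { std::puts("printing"); }
    };

    int main() {
        Printer p;
        Callback cb = Callback::make(p);  // same Callback type for any payload
        cb.invoke(cb.data);
    }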
 
From type erasure to machine code. I did a very small amount of 6502 on my VIC-20. At university it was Z80 and 6809. As an intern at IBM, 80286 and the IBM proprietary GPU in their ImageAdapter/2 (that was nice). Then not that much asm for quite a long time. In the past few years working on Valgrind I've needed to write and above all read a lot of amd64, x86 and arm64 machine code and a bit of PPC. Haven't bothered much with MIPS. Looking on the horizon is RISC-V.
 
I get the most satisfaction from writing in C. I'm improving my C++ for graphics programming, and for web development I like the direction JS is going in, especially the addition of TypeScript.
Other languages I've looked into haven't felt right for me in their syntax. I need some amount of written code memorized. I wanted to look into Lua at one point but haven't taken the time for it yet. I'll focus on getting fully comfortable with C++ and JS before looking into anything new.
 
I still write a whole lot of tools for Win32 with Delphi, and even more to interact with Excel using VBA.
Coding is a perishable skill, but I still have my ASM and other libraries.
Even though it was heavily commented and structured, I reread one app I wrote in ASM, scratched my head, and wondered WTF I was thinking.
 
The Motorola CPU designs, especially the 68k, were some of the best available at that time. I studied microprocessor engineering as part of my degree, and all the teaching was Motorola-based. They started with 6800-family chips and then progressed to the 68k, and they used VMEbus a lot. It was a very good engineering platform. It's a long time since I have looked at 68k systems, though, which is a shame as I liked working on them. Of course the PC boom started soon after that, and my area of work headed that way.

The secret - to anyone who wrote (guided) IBM [D]OS/3{6,7}0 systems code for sysgen and some drivers for 'odd' 6-bit-plus-parity devices, '69 to '71 - is that not only the system architecture but the opcode language is very familiar. I'd not be surprised if Motorola had licensed the codes, or even the designs.

After which I went bush; the next computer, apart from a TI SR-52 programmable calculator in '74, was a home-made kit Signetics 2650 @ 2MHz in '78 (2K EEPROM, 4K static RAM, later plus 2x 8K RAM).

68k assembler... yum :) I did an embedded project using an EXORmacs dev system once, VME card stuff. So much nicer to work with than the Intel junk, which was segmented at the time, all that near and far crap.

In '84, a 3-month $job: genuine Motorola VMEbus box, $10K I heard. The owners were idiots (or spooks!) but I had fun.

I'd still like a 68060 or the like built in a modern process. What would the clock rate be? Like, on GaAs? 68k assembler was one of the first things I learned, and I still like to read it. When messing with a compiler, I check whether it can generate 68k code and use that to get familiar with the language. You can actually read it, unlike x86.

Yep.

It seems that the 68k family morphed into ColdFire https://microapl.com/Porting/ColdFire/cf_68k_diffs.html . There was also the Freescale DragonBall family https://en.wikipedia.org/wiki/Freescale_DragonBall that was based on the 68k. I think I remember the Palm handhelds used DragonBall CPUs. AFAIK the 68k line pretty much diverged and ended with the demise of Motorola's semiconductor business.

And I had a Dragon 32 (6809), Oric Atmos, ZX81 (Z80), QL (68008), Tosh MSX (Z80), Jupiter Ace (Z80). The ZX81 was the first computer I actually owned.

I designed a whole medical computer using the 68K on the VME bus. The engineering director had put together what was essentially an IBM PC clone to do the same work but mine was modular in that you could upgrade the processor, memory, IO, attached devices, etc. by just pulling one small half-size card out. It wound up being (now) Bausch & Lomb's top selling product and the architecture is still in use today (in a modern form).

I loved the 68K. I was enamored of National Semiconductor's NS32000, too, but that went nowhere.

Very interesting, it sounds like an impressive piece of work :) Presumably it had pluggable specialised medical instrumentation cards as well. I wonder if you used OS-9? I remember that used to be quite popular (in the UK, anyway) on 68k VMEbus systems. If they're still selling it now, it would be interesting to know which 68k-descendant CPU they have used in the current product; presumably it still uses some variant of the 68k core.

I remember briefly reading about the NS32000; I didn't look at it in any detail though.

Might see you mob one more time, might not. Let's hear it for nostalgia ...
 
Currently I use or play with: C, Go, k, V, Scheme, python, sh
Used or played with: C++, perl, CL, PL/I, tcl, smalltalk, postscript, Fortran, Pascal, Logo, APL, rc
Wished I had used or played with more: Algol68, Haskell, Beta, Prolog, Mesa, Icon, Simula67, Concurrent Pascal, Occam
Never had a strong desire to play with: Java, Cobol, Ruby, D, Scala, nim, j, forth, dart, ocaml, c#
Might use them in future: lua, swift, rust

Of course, my not wanting to play with some languages simply means *I* don't care enough to spend time on them, regardless of their other merits (or demerits). There are far too many languages, but most play in the same field with just minor variations.
 
I've been learning about Common Lisp for a month or so. People often talk about Lisp macros, so I knew about that feature. There are some other novel things about it as well. Here are a few examples:
  • Variables, or "bindings" as they're called, are useful in themselves apart from whatever data they may store.
  • Error handling (the condition system) allows you to define, at the top of the call stack, how conditions should be handled at the bottom where they occur. I haven't explored this yet.
  • Even though it's a high-level language, the programmer can be very particular about the low-level details such as the bit widths of data and whether functions destructively modify their arguments.
  • It's a compiled language, and yet you can add or update the source code while the executable runs.
This is the video that got me interested in Common Lisp:

View: https://www.youtube.com/watch?v=mbdXeRBbgDM&t=1150s
 
I've been learning about Common Lisp for a month or so.

I worked with Chris on his Clasp implementation for a couple years.
 
I just finished a project in ASM: extracting the EXIF data from a JPG header.
Parts of the header are written big-endian, making ASM an ideal platform for the little-endian conversion.
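
For the curious, here's roughly what that byte-order fix-up looks like, sketched in C++ rather than ASM (buffer contents are illustrative):

    #include <cstdint>
    #include <cstdio>

    // Read a 16-bit big-endian value (EXIF's "Motorola" byte order) from a
    // byte buffer, independent of the host's endianness.
    uint16_t read_be16(const uint8_t* p) {
        return static_cast<uint16_t>((p[0] << 8) | p[1]);
    }

    int main() {
        // 0xFFE1 is the JPEG APP1 marker that carries the EXIF payload.
        const uint8_t header[] = { 0xFF, 0xE1 };
        std::printf("marker = 0x%04X\n", read_be16(header));
    }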
 
Currently I use or play with: C, Go, k, V, Scheme, python, sh
Nim & F# & Sbcl are really interesting. Try it out "hello world".
 
The fastest code is the code using the best algorithm for the purpose. Using a simple language that compiles more or less directly to machine code won't help you with your O(2^n) algorithm.
When I worked for General Motors, I had a GM Research Labs report about experiments in optimizing some heavily-used graphics code written in PL/I. They rewrote it in assembler and it got faster. Then they rewrote the assembler code in PL/I and it got even faster.

They repeated this process a few times, and each time the code got faster, because they understood it better and optimized the algorithm.

Something else to consider: the IBM mainframe frequently introduces new machine instructions with new models. Their compilers make note of the model they are running on and, when possible, use new instructions that let them generate more efficient code. Your programs get faster just by being recompiled. You can’t do that with hand-optimized assembly language code. Needless to say, you need the vast resources of someone like IBM in order to support compilers like that.
 
In the ’70s, when report-generating languages like EASYTRIEVE, DYL260, and NOMAD were all the rage, a guy I worked with would tell people, “Oh, I use BISMO10, and it works much better than those languages.”

BISMO10 was just a name he made up, but it let him one-up people in flame wars (which happened in person at conferences, since we didn’t have the Internet yet).
 
When I worked for General Motors, I had a GM Research Labs report about experiments in optimizing some heavily-used graphics code written in PL/I. They rewrote it in assembler and it got faster. Then they rewrote the assembler code in PL/I and it got even faster.
BTDT. The inner loop of the code that did the final simulation of my PhD thesis was about 200 lines of Fortran, running on a VAX. It ran slowly, so I decided to optimize it. Since it did a lot of calculations on 3-vectors (sums, scalar products, angles with trig functions), I inlined the 3-vector calculations manually, fundamentally taking all the function calls and turning them into 3 lines of code each. The routine ran MUCH faster, since the compiler could now use registers much more effectively, in particular keeping intermediate results there. The cost was that the code was much longer, about 3x.

Then I noticed that the inner core had some complicated memory accesses, updating big 2-dimensional arrays (fundamentally multiple spatial maps). So I recoded that array-access part in assembly. It got much slower. How can that be?

So I read my assembly code and compared it to the assembly generated by the Fortran compiler. And I found that the compiler didn't have to do any calculations to figure out the pointer arithmetic for array indices: the pointers were always conveniently available in some register. In particular, the compiler never used multiply or shift instructions to construct pointers. What strong magic did it have? Very simple: it used the large register set to hold combined loop indices, array indices, and pointer addresses, so whatever it needed next was already waiting in a register, without any extra calculations. Remember, there were no RISC machines yet, so instructions ran slower.
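
A sketch of that trick in C++ (a made-up loop, not the original Fortran): keep a moving pointer as the induction variable so the loop never recomputes i*COLS + j:

    #include <cstdio>

    enum { ROWS = 4, COLS = 5 };

    int main() {
        static double a[ROWS][COLS];
        double* p = &a[0][0];            // induction variable, lives in a register
        for (int i = 0; i < ROWS; ++i)
            for (int j = 0; j < COLS; ++j)
                *p++ = i + 0.1 * j;      // address = one increment, no multiply
        std::printf("%f\n", a[3][4]);
    }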

So far, so frustrating. Then I had to take my code and run it on NeXT and Sun-3 workstations. That was a disaster, because they had floating point that was somewhere between slow and laughable. I benchmarked my code and found that the big problem was calculating arc sine, which on those machines was extremely slow. So I replaced that with a pre-calculated lookup table of a few tens of thousands of values and linear interpolation, and things got massively faster. And when I backported that lookup table to the VAX, it made the code slower.
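
A rough sketch of such a table in C++ (sizes and names are made up; the original was Fortran):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Approximate asin(x) on [-1, 1] with a precomputed table plus linear
    // interpolation, trading memory for expensive libm calls.
    struct AsinTable {
        std::vector<double> table;
        int n;

        explicit AsinTable(int samples) : table(samples + 1), n(samples) {
            for (int i = 0; i <= samples; ++i)
                table[i] = std::asin(-1.0 + 2.0 * i / samples);
        }

        double operator()(double x) const {
            double pos = (x + 1.0) * 0.5 * n;    // map [-1, 1] onto [0, n]
            int i = static_cast<int>(pos);
            if (i >= n) return table[n];         // clamp the right edge
            double frac = pos - i;
            return table[i] + frac * (table[i + 1] - table[i]);
        }
    };

    int main() {
        AsinTable fast_asin(50000);              // "a few tens of thousands"
        std::printf("%f vs %f\n", fast_asin(0.5), std::asin(0.5));
    }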

After all that, I got a degree. Then I interviewed at DEC, and they were thinking of offering me a job in the Fortran compiler group, but I declined: due to a 2-body problem, I could not work in the Boston area.
 
My rewrite story: I wrote a player for 3D tic-tac-toe (played on a 4x4x4 cube). You take turns with the program, and the first one to place 4 markers in a straight line wins. My program, like most two-player game programs, used alpha-beta pruning to select where to place its next marker. I placed a time limit on it so that it had to pick the best move it found in that time. The first version was a naive implementation and used a 3D array (in Pascal) to represent the cube's 64 positions. It was very slow, so the move it picked in the given time limit was not very good and I always won. Then I realized that the cube locations can be mapped to a pair of sets of 0..63 locations! In Pascal such sets can be implemented very efficiently (in C too). After the representation change the program was hundreds of times faster and I never won! [This was on a PDP-10 model KA10, IIRC]
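
The same bitset idea sketched in C++ (the original used Pascal sets, presumably one per player; the names here are made up):

    #include <cstdint>
    #include <cstdio>

    // Each player's markers on the 4x4x4 cube fit in one 64-bit word,
    // one bit per cell: bit index = x + 4*y + 16*z.
    struct Board {
        uint64_t player[2] = {0, 0};

        void place(int who, int x, int y, int z) {
            player[who] |= 1ULL << (x + 4 * y + 16 * z);
        }

        // A line is won when all four of its bits are set: one AND plus a
        // compare, instead of walking a 3D array.
        static bool hasLine(uint64_t stones, uint64_t line) {
            return (stones & line) == line;
        }
    };

    int main() {
        Board b;
        for (int x = 0; x < 4; ++x) b.place(0, x, 0, 0);  // fill one row
        uint64_t row0 = 0xFULL;                            // bits 0..3
        std::printf("won: %d\n", Board::hasLine(b.player[0], row0) ? 1 : 0);
    }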
 
We once had something like coding olympics: you would get a problem and a deadline, fastest wins. The platform was the C64.
The problem was "8 queens", and the assembler team did not get why they had lost.
To the BASIC team.
When they came over, they explained that their collision test was bound to be the fastest: they had used something like raycasting from one square in all 8 directions, checking whether a touched square was occupied. Then they were introduced to the fact that, if Q1 can hit Q2, then dx = 0, dy = 0, dx = dy, or dx = -dy.
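
The cheap test, sketched in C++ for concreteness:

    #include <cstdio>

    // Queens at (x1, y1) and (x2, y2) attack each other iff they share a
    // column, a row, or a diagonal.
    bool attacks(int x1, int y1, int x2, int y2) {
        int dx = x2 - x1, dy = y2 - y1;
        return dx == 0 || dy == 0 || dx == dy || dx == -dy;
    }

    int main() {
        std::printf("%d %d\n", attacks(0, 0, 3, 3), attacks(0, 0, 1, 2));  // 1 0
    }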

Having the smarter algorithm wins.

In my first job for money, I changed a sorting algorithm from O(n log n) to another that ran in O(n), where n is the number of polygons in topological geo data for a country. The project lead came in: "You must have messed this up to no end, it can't be running so fast."
Split-sort-merge to avoid hitting virtual memory was also a good idea.
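
The post doesn't say which linear-time sort was used; counting sort is the classic example when the keys are small bounded integers, so here is a sketch of that (not the original code):

    #include <cstdio>
    #include <vector>

    // Counting sort: O(n + k) for n values drawn from keys 0..max_key.
    std::vector<int> counting_sort(const std::vector<int>& in, int max_key) {
        std::vector<int> count(max_key + 1, 0);
        for (int v : in) ++count[v];            // O(n): tally each key
        std::vector<int> out;
        out.reserve(in.size());
        for (int k = 0; k <= max_key; ++k)      // O(k): emit keys in order
            out.insert(out.end(), count[k], k);
        return out;
    }

    int main() {
        for (int v : counting_sort({3, 1, 3, 0, 2}, 3)) std::printf("%d ", v);
        std::printf("\n");
    }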
 
I am still disgusted that in almost 30 years no one ever made proper strides toward getting a C or C++ compiler to emit bytecode targeting the JVM.

We had it for .NET IL, WebAssembly/asm.js/JS, even Flash bytecode.

Actually several such C/C++ -> JVM compilers were developed.

Here's an example:

They aren't great: as the C/C++ memory model is more general than Java's, they have to emulate all memory by one huge int[] array. A pointer is just an integer whose value is an index into that array. While security exploits like buffer overflows cannot inject code, they can corrupt memory within this array. Performance is pretty poor: if I remember correctly, about 50% of the speed of the same code rewritten as Java source.
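
A sketch of that memory model, in C++ for concreteness (the real compilers emit the same idea as JVM bytecode; the bump allocator here is made up):

    #include <cstdio>
    #include <vector>

    // The whole emulated address space is one flat int[] "heap";
    // a "pointer" is just an integer index into it.
    std::vector<int> heap(1 << 20);
    int heap_top = 1;                 // index 0 is reserved as the null pointer

    int alloc(int words) {            // toy bump allocator over the heap
        int p = heap_top;
        heap_top += words;
        return p;
    }

    int main() {
        int p = alloc(2);             // "malloc" two ints
        heap[p] = 42;                 // *p = 42
        heap[p + 1] = 7;              // *(p + 1) = 7
        // A buffer overflow can scribble elsewhere in `heap`, corrupting the
        // emulated memory, but it can never inject code into the host VM.
        std::printf("%d %d\n", heap[p], heap[p + 1]);
    }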

There was a patch to GCC back in 1999 or so, that did something similar.
 
We once had something like coding olympics, you would get a problem and a deadline, fastest wins
Bleh, this gave me some flashback heebie-jeebies. We had this competition in high school; I was never a fan of it. Even worse, similar tasks had to be solved at uni, in the first year/semester during the "programming course". I hated it. I know, just my very subjective take on it; many people enjoyed it.

My biggest regret is that it took me too long to discover hacking competitions/CTFs. Those I loved, and I wasted way too many sleepless nights on them. They taught me many things, including patience and persistence.
 
They aren't great: as the C/C++ memory model is more general than Java's, they have to emulate all memory by one huge int[] array.
Ironically, Emscripten (JavaScript + asm.js) isn't much different: just one big (albeit typed) array.
I think WebAssembly does something different though.

It still looks to me like the C++-to-JVM compilers never made it past the experimental stage.
 