From Lisp to Python.

Chicken Scheme & Typed Racket are not bad choices, in my personal opinion.
[Currently my thing is Coalton: Lisp with an OCaml flavor.]
 
I dunno. I want full performance without language overhead. Full compilation to machine code, declarations for optimizations through the roof, and real threads for multi-core. Only Common Lisp offers that in the Lisp world.
 
Talk about typing delay. What's the round-trip time for a keypress to a target 150 million miles away?
"Let me just connect gdb to the target for real-time debugging..."

I agree with cracauer@ about wanting compilation to machine code. Interpreted languages have their place, but performance isn't it. Yes, a lot depends on how good the compiler is, but with an interpreted language you depend on the runtime environment it runs in. Java has a JIT to get the performance.
 
One can't even write a decent native GUI in Java (nor in Ada).
In Ada I can promise you memory leaks, and in Java it's very basic.
Java is for the web. The JIT is fast, but there is also an external world.
 
Well, I don't think the JIT thing has quite worked out yet. Java still seems to have a reputation for being bloated and slow.
 
In a previous job I worked with Tomcat. The application server was dying. Why? Under high load, the Java garbage collector wasn't getting any CPU time. Moral of the story: make sure the garbage collector receives CPU time, especially under high load.
 
In a previous job I worked with Tomcat. The application server was dying. Why? Under high load, the Java garbage collector wasn't getting any CPU time. Moral of the story: make sure the garbage collector receives CPU time, especially under high load.

Well, the SBCL GC runs synchronously on allocation, so this won't happen unless you actually use more heap than you gave it.

The Java GC can run in that mode, too.
 
Well, I don't think the JIT thing has quite worked out yet. Java still seems to have a reputation for being bloated and slow.
Anyone remember FX!32, the x86-to-Alpha translator? As a start, they tried AXP-to-AXP translation, and the result was faster than the original binaries. A JIT trace optimizer can optimize along calls and across library boundaries and still improve over the C compiler. Also, a byte-code machine that fits into the code cache can reduce memory bandwidth for code fetches and leave more for data; see Transmeta. It is not byte code or a JIT that makes things slow. Throwing in a camper van and a kitchen sink does.
 
Little anecdote about how "common preconceptions about the speed of code" tend to be wrong. Many years ago, around 1995, I worked on a project to port a large code base from C++ to Java. We had lots of CPU-intensive image processing in our code, but also lots of script-driven stuff and multithreaded machine control. We were running on Pentiums, under Windows. The image processing group was worried that Java would be too slow. So we prepared a bakeoff between four different environments running the same code: a 3-way image correlation algorithm, quite literally hand-written in both idiomatic C++ and idiomatic Java, doing exactly the same operations.
  • Microsoft Visual C++ compiler, because that's sort of the universal default tool when doing development on Windows. Everyone knew it was crappy, but it was common.
  • Waterloo C/C++ compiler, because "everyone knows" it creates the tightest code on x86.
  • Symantec JIT Java runtime, again because it was sort of the industry standard; JITs were a new thing, rumored to help somewhat against the common wisdom that Java is dead slow.
  • A Java to native code compiler that was being developed by a startup, and that we were alpha testing. They claimed that we could get the speed of C++ with the safety of Java.
The results were: Fastest was the Symantec JIT, followed by Visual C++. Waterloo C++ and the startup Java compiler were much slower.

What does this prove? A lot of things. For example: Performance benchmarking is hard; performance engineering is even harder. Preconceptions and common wisdom are often wrong, but sometimes right. Don't believe anything until you have measured it. And don't extrapolate from one measurement to a general rule.
 
What I notice is that "the way something is written" can matter more for speed than the language it is written in.
 
What I notice is that "the way something is written" can matter more for speed than the language it is written in.

Kinda, mostly. Writing fast code is not trivial in any language.
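A minimal Python sketch of that point (the task and sizes are made up for illustration): the same membership question, written two ways in the same language, differs by orders of magnitude because of the data structure behind it.

```python
import timeit

# Same question -- "is `needle` in the collection?" -- written two ways.
items = list(range(100_000))
needle = 99_999

as_list = items        # `in` does a linear scan, O(n)
as_set = set(items)    # `in` does a hash lookup, O(1) on average

assert (needle in as_list) == (needle in as_set)  # identical answers

list_t = timeit.timeit(lambda: needle in as_list, number=1_000)
set_t = timeit.timeit(lambda: needle in as_set, number=1_000)
print(f"list scan: {list_t:.4f}s   set lookup: {set_t:.6f}s")
```

The language is identical in both cases; only the way the code is written changed.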

I will draw the line at threads that can use multiple cores/CPUs and those that cannot. The typical scripting languages, Python and Ruby, cannot use multiple CPUs from their threads (their reference implementations serialize execution behind a global interpreter lock). C, C++, Common Lisp, Java, Go, and Rust can. That is a hard barrier if you have many cores available, and a real limitation of the scripting languages.
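A small CPython sketch of that barrier (the prime-counting workload is an arbitrary stand-in for CPU-bound work):

```python
import threading

def count_primes(limit):
    """CPU-bound work: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

results = {}

def worker(name, limit):
    results[name] = count_primes(limit)

# Four threads, but under CPython's global interpreter lock only one of
# them executes Python bytecode at any instant, so this saturates a
# single core no matter how many are available.
threads = [threading.Thread(target=worker, args=(i, 5_000)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # correct answers, just no parallel speedup
```

The results are correct, and threads remain useful for I/O-bound concurrency; the usual escape hatch for CPU-bound Python is `multiprocessing`, which sidesteps the lock by running separate processes.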
 
I guess this kind of migration is happening everywhere now. Yes, there are a lot more libraries, and probably the crucial reason, as always, is the sheer number of new grads who have been taught using Python. They want a plentiful supply of fresh meat.

Python seems to be the new VB. I still hate the whitespace thing, though; it's hard for me to take Python seriously. Apparently people working in AI use it.
 