Zig programming language.

Let's just say that the first development environments for quantum computers will very likely be the equivalent of assemblers.
 
Oh yes, skimming through this thread again, I'm kind of amazed it evolved into a "complete" discussion of languages and interoperability o_O Still, I want to add a few remarks...

Let's start randomly with "C# is written in C". C#, in my interpretation, is the language, so when talking about it as a software product, one can only mean the compiler, and then: no, it isn't. See https://github.com/dotnet/roslyn ... no C in there. In fact, for a compiled language, the compiler should always be written in the language itself (self-hosting), otherwise you can't take it too seriously. Of course, the .NET runtime (which supports any language targeting .NET) contains quite a lot of C code ... because it must interface with the OS, and ultimately, that means using C APIs. :oops:

Which kind of leads to my next remark: what matters most for interoperability is ABIs. C doesn't specify an ABI, because that's ultimately a platform-dependent thing ... but all C ABIs have one thing in common: they completely lack any metadata. A function in a library is found by a symbol that's a simple "name" (typically the same as the C function name, sometimes with an underscore prepended). You must know exactly which arguments the function expects and what it returns; this isn't encoded anywhere in the ABI. If custom types are involved (like actual structs, not just pointers to them), you must know their layout, too. And that's where the problems start.

When your language exclusively uses a different ABI (which nowadays often means some meta-info, e.g. types, is included), you need quite fat "bindings". When your language can directly use the simple C ABI of the target platform, you still need some simpler glue (probably like the Zig example above in this thread). The only way to get along without any glue at all is indeed if your language also understands C, so it can directly use the information from a C header file. For C++, this is the case.

C++ comes with different problems, though. For example, there can be many functions with the same name (overloading), so to get a unique "symbol name", some "mangling" is needed, and this isn't standardized. Furthermore, C++ classes are meant to be public (fields can be private, but the structure is publicly known), so when using them directly (without hacks like the so-called "pimpl" idiom), you break the ABI even by just adding a private field (because, see above, library and consumer must share knowledge of the used types in advance) ... and so on. I won't list all the reasons why I dislike C++ here, that would be even more off-topic :cool:
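To make the "no metadata" point concrete, here's a minimal sketch (Linux assumed; libm and cos are real, everything else is just illustration). dlsym() gives you nothing but a bare address for a symbol name; the signature exists only in the caller's cast, and nothing checks it:

```c
/* Build with: cc demo.c -ldl  (-ldl may be unneeded on modern glibc) */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *lib = dlopen("libm.so.6", RTLD_NOW);  /* the C math library */
    if (!lib) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* The ABI records only the name "cos". This cast is the caller
     * asserting "takes a double, returns a double" -- if that were
     * wrong, the call would still go through and silently misbehave. */
    double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(lib);
    return 0;
}
```

That's the whole "contract": a name, plus out-of-band knowledge shared between library and consumer. Every language that wants to talk to a C library has to reproduce that knowledge somehow, whether via a header file or via hand-written bindings.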

Anyways, this leads me to my final remark. Someone complained that far more languages used to be designed and implemented. Well, do you like a world with no interoperability, where everything needs rewriting every other year in some different language? Because that was basically the case back then. (Anyone remember, e.g., Modula-2?) I don't think we see a lack of innovation here. What we see, after the great success of C, is a language being formally standardized, which enabled a lot more interoperability and stability. And I would expect any language that could ever replace C to be standardized as well, and to have several competing implementations. Sure, there's a downside, see above: the ABIs used with C are simplistic. And as long as OS interfaces are, at their lowest level, C (yes, that's even the case on Windows, although you need some masochistic tendencies to use the "Win32" API directly), we're talking about "bindings" (or full C compatibility).
 
Indeed. In some ways, when compiled, that isn't a program any more than a picture of a cat is. What's needed to run it is the .NET VM, which certainly has a lot of C in it.

I.e., a C compiler written in Python isn't really that interesting. It's the "thing" that gets generated and runs on the hardware/OS that is the interesting part. What C compilers generate is much more powerful than bytecode.

So C is more important than "just a language". The fact that it single-handedly underpins the entire computing industry is the cool part. Really, though, it's the generated machine code that does this, and almost all other languages and compilers fail to emit it, which I think is weird. Why does Microsoft not carry forward their AOT compiler and make C# a real language rather than a glorified text-file emitter?
 
That isn't a program any more than a picture of a cat is. What's needed to run it is the .NET VM, which certainly has a lot of C in it.
Well, again ... it's the compiler, and that's the only thing that implements the language, and of course it's entirely written in C# (there's VB.NET in this repo as well, because it also contains the VB.NET compiler).

A runtime is a whole different story. Our libc can only be written (almost) completely in C because the OS API/ABI it needs to use happens to be C (and even that's not entirely true; I'm pretty sure that at the syscall level, you need some assembly glue!). If you needed a libc for an OS written in C# (and, seriously, such a thing indeed existed as a lab project at Microsoft), you'd need to include C# code in it.
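For the curious, here's roughly what that assembly glue looks like; a minimal sketch assuming Linux on x86-64 and GCC/Clang inline assembly (a real libc wrapper would also set errno, handle restarts, and so on):

```c
#include <stddef.h>

/* write(2) by hand: the kernel's calling convention (number in rax,
 * args in rdi/rsi/rdx, rcx and r11 clobbered by `syscall`) cannot be
 * expressed in plain C -- this is why every libc bottoms out in asm. */
static long raw_write(int fd, const void *buf, size_t len)
{
    long ret;
    __asm__ volatile (
        "syscall"
        : "=a"(ret)              /* result comes back in rax */
        : "a"(1L),               /* 1 == __NR_write on x86-64 */
          "D"((long)fd),         /* arg 1 in rdi */
          "S"(buf),              /* arg 2 in rsi */
          "d"(len)               /* arg 3 in rdx */
        : "rcx", "r11", "memory"
    );
    return ret;
}

int main(void)
{
    raw_write(1, "hello from a raw syscall\n", 25);
    return 0;
}
```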
 
Well, again ... it's the compiler, and that's the only thing that implements the language, and of course it's entirely written in C#
With a suitable VM, my picture of a cat could also implement the C# language in almost exactly the same way as Roslyn.
(there's VB.NET in this repo as well, because it also contains the VB.NET compiler).
VB.NET actually started out life from the early C# compiler.
 
Wait, your concern here is that the .NET runtime comes with a virtual machine? I'd say this is completely irrelevant in this context (except that it makes using a C ABI directly impossible, because that ABI is always defined in terms of the real machine the code runs on). I mean, you could certainly build a .NET machine if you really wanted to (there was once this thing called the Lisp machine ...), it just makes no sense nowadays; virtualizing is much simpler and cheaper.

Edit: and then, if you really see an issue with that, just take the example of Rust instead, which DOES compile to "native" machine code, and still, the Rust compiler is (of course) entirely written in Rust.
 
make C# a real language rather than a glorified text-file emitter
Honestly, this sounds like you never seriously used C#. As a language, I like it very much; it seems to me like "C++ done right". I'm not aware of anything it can't do. At work, we use it for self-hosted services that perform remarkably well and have a robust design (with a provable "domain layer", because it's implemented in a functional style, using pure functions and immutability).

I don't use it for private projects; I use C for those. But the reason is certainly not a problem with the language. It's the lack of portability (in practice, that is; in theory, C# is perfectly portable because it targets a virtual machine, but then you NEED an implementation of that virtual machine on your target platform, surprise?) and the fact that the runtime is really fat (it offers lots of useful features, of course, but I would prefer it slim, with a pool of additional libraries to pull in just when needed ... which the new .NET actually tries to achieve, but still, the "core" runtime is quite a heavy dependency).

Neither of these is an issue for my company, and they have nothing to do with the language itself.
 
Honestly, this sounds like you never seriously used C#. As a language, I like it very much; it seems to me like "C++ done right".
The language is good enough (Java with "extras"). I was purely referring to what is generated (bytecode requiring a VM).

Wait, your concern here is that the .NET runtime comes with a virtual machine? I'd say this is completely irrelevant in this context (except that it makes using a C ABI directly impossible, because that ABI is always defined in terms of the real machine the code runs on).
It is pretty much this. It strongly increases the need for bindings.
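For a taste of what such bindings look like, here's a minimal sketch using Java's JNI as the example (since it's C on the native side; .NET's P/Invoke plays the same role). The class name Demo and the method are hypothetical; compile against the JDK headers and link with -lm:

```c
/* The VM can't call cos() through the C ABI directly, so a C stub
 * translates between VM types and native ones. This implements the
 * Java declaration:  static native double cosine(double x);
 * in a (hypothetical) class named Demo. */
#include <jni.h>
#include <math.h>

JNIEXPORT jdouble JNICALL
Java_Demo_cosine(JNIEnv *env, jclass cls, jdouble x)
{
    (void)env; (void)cls;   /* unused for a pure-math call */
    return cos(x);          /* finally, the actual C ABI call */
}
```

Multiply that stub by every function, struct, and callback a library exposes, and you get the "fat bindings" mentioned earlier in the thread.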

Relating to C#, I do think Microsoft's C++/CLI (or Managed C++, or C++.NET, or whatever they call it these days) is a very good approach. It's probably the best evidence that a VM isn't a *complete* deal-breaker for the issue of bindings. I would still like to see better proof of portability with it, though. If they could move over to Clang rather than cl and get it working on Mono, that would be cool.

Edit: and then, if you really see an issue with that, just take the example of Rust instead, which DOES compile to "native" machine code, and still, the Rust compiler is (of course) entirely written in Rust.
Indeed, though this is closer. As mentioned above, they could bolt on a tiny C frontend (slightly similar to Golang's approach) and continue from there.

Currently, Obj-C, D, and C++ have the edge here. So many people develop "C-like" languages, but so few actually go the extra mile and make a C-compatible language. That's a big difference. But I suppose they would rather develop package/binding app stores instead.
 
Until now, I have never seen even a vi editor written for a quantum computer. So there must still be a long way to go.
Ctrl-x
Even better: running on a quantum computer, it will only save the best solution to whatever you typed in ;)
 
...Let's start randomly with "C# is written in C". C#, in my interpretation, is the language, so when talking about it as a software product, one can only mean the compiler, and then: no, it isn't...
That would not be my interpretation. I realized when I took NAND to Tetris that a compiler is just a text-processing program: it takes program text and emits a binary suitable for executing under some runtime.
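To illustrate that view, here's a toy "compiler" in C (entirely made up for illustration): it reads two integers and emits x86-64 assembly text, and a stock assembler turns that text into a real executable:

```c
#include <stdio.h>

int main(void)
{
    int a, b;
    /* "Parsing": the source language is two integers on stdin. */
    if (scanf("%d %d", &a, &b) != 2) {
        fprintf(stderr, "usage: echo \"2 3\" | toycc\n");
        return 1;
    }

    /* "Code generation": x86-64 assembly (AT&T syntax) that returns
     * a+b as the process exit status. */
    puts(".globl main");
    puts("main:");
    printf("    movl $%d, %%eax\n", a);
    printf("    addl $%d, %%eax\n", b);
    puts("    ret");
    return 0;
}
```

Then `echo "2 3" | ./toycc > sum.s && cc sum.s -o sum && ./sum; echo $?` prints 5. Text in, text out; the assembler and linker do the rest.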

The overwhelming majority of runtimes (including the Java VM that the course uses) are written at least partially in C.

As an aside, this realization is why I'm disappointed that it's so hard to cross-compile on FreeBSD. Clang will happily emit ARM binaries even when run on an AMD64 machine; what's lacking is the surrounding tooling.

Lastly, I wonder why more compilers are not written in Perl? It is quite good at text processing.
 
Lastly, I wonder why more compilers are not written in Perl? It is quite good at text processing.
Perl is also quite good at being slow. Modern compilers are among the most complex things you may ever write. Perl may be of use for a proof of concept, but imagine building Chromium with a compiler written in Perl. Maybe that would be a good idea; developers would be forced to write shorter code again.
I am impatiently waiting for quantum computers so that I can run AI algorithms on them using blockchain.
*Ding ding ding* Bingo!
 
What we see, after the great success of C, is a language being formally standardized, which enabled a lot more interoperability and stability. And I would expect any language that could ever replace C to be standardized as well, and to have several competing implementations.
Standardization of programming languages existed long before C was even thought of. The first COBOL specification was produced by CODASYL around 1960. The first FORTRAN standard from an actual standards body came in 1966 (hence the name FORTRAN-66). Both languages were widely successful; at some point around 1990 or 2000, 80% of all software running on computers was still written in COBOL. So standardization does not guarantee long-term success (although FORTRAN is still heavily used for scientific calculations on supercomputers).

Another remark: we are all talking about replacements for C in this thread. That's interesting, because C is the most common systems implementation language: the language used to implement operating systems. But that's a relatively small fraction of all the software being written. I don't actually know what fraction of the world's overall software production is in C and C++, but my educated guess is: relatively small, perhaps 20% (give or take 10%) of it all.
 
We are all talking about replacements for C in this thread.
Why is a replacement for C necessary? Only for the sake of "change"? Many new languages are created to replace C along the lines of "it is too complex, let's invent a new language which can do the same but is ... easier".
 
I don't actually know what fraction of the world's overall software production is in C and C++, but my educated guess is: relatively small, perhaps 20% (give or take 10%) of it all.
My guess is probably similar, especially for projects from the last decade. At the risk of dredging up my obvious hang-up: if it weren't for the issue of bindings, I imagine the absolute percentage of C would be a lot lower still.

I think that is why it is such a shame that other language platforms can't *quite* finish the job and have to plug into C right at the end.
 
Why is a replacement for C necessary? Only for the sake of "change"? Many new languages are created to replace C along the lines of "it is too complex, let's invent a new language which can do the same but is ... easier".

Because to write C programs without memory errors, programmers have to remember and apply a bunch of rules. If there's one thing that computers do better than humans, it's remembering and applying a bunch of rules.
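A minimal example of such a rule, for anyone who hasn't been bitten: "never touch a pointer after freeing it". The compiler happily accepts code that breaks it:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *name = malloc(16);
    if (!name)
        return 1;
    strcpy(name, "forum");
    printf("hello, %s\n", name);

    free(name);
    /* Use-after-free: uncommenting this compiles without a peep,
     * but is undefined behavior. Only the programmer's discipline
     * (or an external tool) catches it. */
    /* printf("hello again, %s\n", name); */
    return 0;
}
```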
 
Because to write C programs without memory errors, programmers have to remember and apply a bunch of rules. If there's one thing that computers do better than humans, it's remembering and applying a bunch of rules.
This doesn't convince me.

If the question had been "why do we need languages other than C?", then it would be fully convincing. C can be used for application development (and actually, I do just that, because after lots of experience with many languages, I still like C best ...), but in that area you really don't need all the flexibility and "low-level" mapping to the machine that C offers, so a language that automatically "applies a bunch of rules", as you say, can be helpful.

But the question was: why is it necessary to replace C? Nowadays, the predominant use of C is as a "systems" language, for when assembler is too cumbersome and not portable enough. In C, programming some MMIO comes pretty naturally; a replacement must at least make things like that possible. Sure, you can still come up with concepts that, e.g., make errors easier to find by static analysis than is possible in C. But for systems programming, you can never release the programmer from having to understand what exactly is going on on the machine ...
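For illustration, here's roughly how natural MMIO is in C; a sketch with hypothetical addresses and bit positions (the kind of thing a UART datasheet would give you), not any particular chip:

```c
#include <stdint.h>

/* Hypothetical register map for a memory-mapped UART. */
#define UART0_BASE   0x4000C000u
#define UART0_DR     (*(volatile uint32_t *)(UART0_BASE + 0x00)) /* data */
#define UART0_FR     (*(volatile uint32_t *)(UART0_BASE + 0x18)) /* flags */
#define UART_FR_TXFF (1u << 5)  /* "transmit FIFO full" bit */

static void uart_putc(char c)
{
    while (UART0_FR & UART_FR_TXFF)
        ;                        /* spin until the FIFO has room */
    UART0_DR = (uint32_t)c;      /* a plain store that hits real hardware */
}

static void uart_puts(const char *s)
{
    while (*s)
        uart_putc(*s++);
}
```

A volatile pointer to a fixed address, and you're talking to hardware. That's the bar any C replacement has to clear for systems work.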
 
If I have to rely on something else to remember the rules, that makes me feel insecure. What if it applies a rule I don't want, at a time I don't want it? It's like the machine taking over the programming.

I use C because I want total control. The only reason I don't use assembly is because C makes assembly easier. Even then, the C compiler sometimes gets it wrong and I have to fix it.
 