Opinion: Why do supercomputers run Linux?

I'm not going to claim I know the answer to that question, but I'd say that incompetence and laziness could be a factor. Technology is increasingly being used to prevent people from having to do anything.

Research Confirms It: We Really Are Getting Dumber

Born this way: Evolution has hardwired us to be lazy

If you look at many of the technologies that are becoming popular, such as Ansible and Wi-Fi, they exist mainly so that people in the IT sector have to do less work (convenience).

Ubuntu is one of the most popular supercomputing systems. And which one is the most user-friendly Linux? You can see that this (convenience) has become one of the biggest factors, although you would expect IT engineers to be able to work well with operating systems. That hasn't been the case for some time now. There are always exceptions.

Ubuntu is currently one of the slowest Linux distributions.

I'd say it's just the slowest right now, because at least openSUSE is still fast with Apache and NGINX, and openSUSE is used on supercomputers.
How many of these supercomputers use Clear Linux?

I guess I don't need to spell it out any further.

NanoVMs are faster and more secure than Ubuntu, but what percentage of engineers have made even the slightest effort to learn the basics of NanoVMs?
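
For anyone curious what those basics actually look like, here is a minimal sketch (my illustration, not from the original post), assuming the NanoVMs ops CLI is installed: an ordinary C program is compiled to a normal Linux binary and then booted as a unikernel.

    /* hello.c -- an ordinary C program. With the NanoVMs ops CLI it
     * can be booted as a unikernel instead of on a full OS, roughly:
     *   cc -o hello hello.c   (build a normal Linux ELF binary)
     *   ops run hello         (boot it as a unikernel; exact flags may
     *                          differ, check the ops documentation)
     */
    #include <stdio.h>

    int main(void) {
        printf("hello from a unikernel\n");
        return 0;
    }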
 
Wi-Fi gives you mobility; Ubuntu still lets you get under the hood. We all seek convenience as long as the abstractions do not stand in our way. A matter of choice.
 
Opinion: Why do supercomputers run Linux?

I see two factors combined:
  1. Linux has many more sponsors behind it pushing for it;
  2. Scholars and researchers barely know Linux and have never heard of FreeBSD (based on the assumption that desktop Linux is 2% of the total).
 
Because it is open source. The source code and a whole bunch of hardware drivers are available, and it can be modified at will. And since they can afford it, they pay for the licenses and the drivers.

And a Windows update would actually crash the supercomputer. Banks have proprietary operating systems, written from the ground up.
 
Because the original Beowulf cluster, which is the model most supercomputers run on nowadays, ran Linux.

Also, in my opinion, because IBM ported Linux to its mainframe range very early, at the end of the 1990s. And because supercomputers are mostly found in academic environments, where Linux, and not so much FreeBSD, is normally the standard.

Also: better NUMA/SMP support and a lot of other features in Linux at the beginning of that switch, which are important for HPC. FreeBSD simply was not ready back then, but Linux already was, or was well on its way.
 
"I'm not going to claim I know the answer to that question, but I'd say that incompetence and laziness could be a factor."
Nonsense. Places that buy supercomputers typically spend tens or hundreds of millions of dollars on them; the largest ones are approaching a billion. They typically have teams of dozens or hundreds of highly skilled people to plan, evaluate, specify, and operate them. They are absolutely not lazy or incompetent.

"Ubuntu is one of the most popular supercomputing systems."
Wrong; the top of the list is dominated by Red Hat and SUSE (often rebranded, for example as "Cray OS"). There are a few Ubuntu machines too. A lot of the OS installations on hero-class systems are highly modified and tuned, so which distro they are based on matters little.

"I see two factors combined: ... Linux has many more sponsors behind it pushing for it;"
The groups that buy and build these supercomputers have so much money, they don't need sponsors. They are the sponsors.

"The definitive answer in the mouth of another. ... (then pointing at Don Becker)"
That's an important part of the answer: the Linux networking stack has been very good since the mid-90s, and has gotten better and better. And when I say "networking stack", I don't just mean Ethernet device drivers or socket and select() calls, but much more exotic technologies, like user-space networking, memory-mapped I/O, InfiniBand, tight integration with MPI, zero copy, and so on. Supercomputers rely heavily on fast networking.
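
To make "tight integration with MPI" concrete, here is a minimal sketch (my addition, assuming an MPI implementation such as Open MPI is installed; compile with mpicc, run with mpirun -np 2): rank 0 sends a buffer to rank 1, and on a supercomputer those same two calls may run over InfiniBand with kernel-bypass, zero-copy transports without any change to the application code.

    /* minimal MPI point-to-point example (illustrative sketch) */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv) {
        int rank;
        char buf[64];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            strcpy(buf, "hello over the interconnect");
            /* which transport carries this (TCP, shared memory,
             * InfiniBand) is decided by the MPI library, not the app */
            MPI_Send(buf, (int)strlen(buf) + 1, MPI_CHAR, 1, 0,
                     MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received: %s\n", buf);
        }
        MPI_Finalize();
        return 0;
    }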

"Because it is open source."
There are many other open source OSes, which supercomputers do not use. As a matter of fact, until about 10 years ago there were supercomputers running BSD. Today there are none left.

On the contrary: The last non-Linux OS that was in use on supercomputers was closed source: AIX (by IBM). There were also a number of Windows installations!

"Also: better NUMA/SMP support and a lot of other features in Linux at the beginning of that switch, which are important for HPC. FreeBSD simply was not ready back then, but Linux already was, or was well on its way."
That's another important factor. HPC machines had many cores early on. Good SMP support is vital to them. The ability to carefully control NUMA (memory placement for threads and processes) can be very important; in modern architectures, the performance penalty of having the RAM attached to the "wrong" CPU can slow things down by a significant factor.
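
For a feel of what "carefully control NUMA" means in code, a hedged sketch using Linux's libnuma (my illustration, not from the original post; link with -lnuma): pin the thread to one node and allocate its memory on the same node, so it never pays the remote-access penalty described above. The same effect can often be had without code changes via numactl --cpunodebind=0 --membind=0 ./app.

    /* explicit NUMA placement with libnuma (illustrative sketch) */
    #include <numa.h>
    #include <stdio.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this system\n");
            return 1;
        }
        int node = 0;                 /* keep CPU and RAM on node 0 */
        numa_run_on_node(node);       /* bind this thread's CPUs    */

        size_t len = 64 * 1024 * 1024;
        double *a = numa_alloc_onnode(len, node);  /* node-local RAM */
        if (a == NULL)
            return 1;
        for (size_t i = 0; i < len / sizeof *a; i++)
            a[i] = 1.0;               /* touch pages: local accesses */
        numa_free(a, len);
        return 0;
    }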

I think in summary, it's quite easy: Linux has had really good networking stacks and NUMA support, for a long time. Many vendors and users have been able to further tune and optimize it. Sometimes, vendor- or user-specific optimizations get shared in the common source base upstream, which doesn't happen for closed-source systems. But that isn't a large factor: A big manufacturer of supercomputers (such as Cray or IBM) has enough staff, they can do these things themselves.
 