That is the capability that used to be critical, of course. Even today, if you take FreeBSD as an example and you want to install and run compiled binary packages, then CPU binary compatibility is essential.
No, it is not. Where do you get your compiled binary packages from? From an automated package repository. You simply say "pkg install foo" (on a Linux machine you say "apt install foo"). When running on an Intel machine (architecture amd64), the pkg command will download the appropriate package for this architecture and install it, and it will work (or not, depending on bugs). On a Raspberry Pi (architecture aarch64), it will do exactly the same thing, and the result will most likely be exactly the same. The pkg and apt tools know how to download packages for the correct architecture. At home I have a mix of Intel and Arm (Raspberry Pi) machines, and from a user's point of view they are nearly indistinguishable (other than a huge performance difference).
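To make that concrete, here is roughly what it looks like on the command line (the package name "foo" is just a placeholder); the tools work out the architecture themselves:

```
# FreeBSD: pkg derives the ABI from the running system and fetches
# the matching build from the repository.
uname -m          # machine architecture, e.g. amd64 or arm64
pkg config abi    # the ABI string pkg matches against, e.g. FreeBSD:14:amd64
pkg install foo   # installs the foo package built for exactly that ABI

# Debian/Ubuntu: apt does the same based on the dpkg architecture.
dpkg --print-architecture   # e.g. amd64 or arm64
apt install foo             # installs the foo build for that architecture
```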
The only time this problem arises is if I compile something myself on machine A, then copy the resulting binary to machine B and try to run it there. For most people, copying compiled binaries (and object files) around is a complete non-issue, since we either (a) install from binary packages, or (b) have the source available and compile it locally.
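If you do copy binaries around, the mismatch is easy to spot. A rough illustration (the program name "hello" is just an example):

```
# Compile a trivial program on machine A and see what it was built for.
cc -o hello hello.c
file hello    # reports the target, e.g. an x86-64 ELF executable on an
              # Intel machine, or an ARM aarch64 one on a Raspberry Pi

# Copy that amd64 binary to an aarch64 machine and try to run it, and the
# kernel refuses it with something along the lines of "Exec format error".
```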
There is a small exception to the above rule: people who have compiled something before and then change the CPU in their computer to a new instruction set. This, for example, happened on the Mac twice, when Apple changed PowerPC -> Intel -> Arm. It allowed people to copy their disks from the old to the new Mac, including already compiled (or installed) programs. Apple dealt with the architecture change by shipping emulators, shipping fat binaries, and taking years for a mostly smooth transition. One of the things that made it easier for them is that Apple created a relatively closed ecosystem (the walled garden), where binaries only very rarely come from unexpected places. But that approach does not work everywhere, such as ...
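For the curious, this is roughly how a fat ("universal") binary is produced and inspected on a modern Mac; a sketch assuming Apple's command-line tools are installed, with "hello" again just a placeholder:

```
# Build one executable that contains both an Intel and an Arm slice.
clang -arch x86_64 -arch arm64 -o hello hello.c

# Ask lipo which architectures are inside the file.
lipo -info hello    # should list x86_64 and arm64

# Either kind of Mac runs the same file natively; the emulator (Rosetta)
# only steps in for binaries that lack a native slice.
```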
And I would imagine backward software compatibility must be a critical selling feature for IBM Z.
Absolutely. That's because Z (or mainframes in general) is sold into markets that are highly reliant on their computer infrastructure being completely reliable. Say a typical mainframe customer (bank, insurance, ...) decides that its 5-year-old mainframe model is obsolete and wants to roll in a new one. They will NOT take the risk of recompiling stuff just because there is a hardware upgrade. As a matter of fact, if they recompile a program, they probably have to run a 4-week or 3-month testing program to make sure the newly compiled program passes their battery of tests. They may also have "dusty decks", programs for which the source code has long been lost, and they want to keep running the old binaries. Note: the bank probably has TWO OR THREE mainframe computers; on each they probably run a dozen VMs, but their production probably relies mostly on a single production VM on each of the hardware machines (which cost in excess of $1M each, $10M once you add all the accessories).
And some of the same argument applies to technical computing (the machine that controls all the robots in a giant automated warehouse) and embedded computing (the thousands of CPUs in a Boeing 787): in places where extremely high quality standards are needed, where older software needs to run for many years (because changing it would bring risk), and where testing new software versions is extremely expensive, binary compatibility is vital.
So it's interesting that you say binary software compatibility is becoming less important (or perhaps I've misunderstood what you've said).
Exactly. Let's look at a very different customer, for example Facebook (a.k.a. Meta) or a similar hyperscale computer user. I'm sure they use a very fast CI/CD system, where source code changes hit production within days. They probably have a MILLION OR TEN MILLION servers (remember, the bank had TWO OR THREE), and each of the rack-mount servers costs them $1K. They recompile their code every few hours, and probably put it into some sort of containerization system such as Docker. They have some sort of complex scheduling system, which distributes millions of tasks onto millions of servers, and tasks get restarted regularly. In such a place, it's perfectly fine to replace 10% of all machines with a new CPU architecture tomorrow: the newly compiled code will be deployed on them and started by the giant scheduling system. No CPU will ever run code that was compiled longer than a month ago. As a matter of fact, at the FAANGs, machines get rebooted regularly (weekly or monthly) to make sure OS fixes have been installed.
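As a sketch of why that works: with a multi-architecture container build, the same source is simply compiled once per target, and the scheduler pulls whichever variant matches the machine a task lands on. The image name and registry below are made up:

```
# Build and push the same service for two CPU architectures in one go.
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    -t registry.example.com/myservice:latest \
    --push .

# A node added tomorrow with a different CPU just pulls the matching image
# variant; nothing compiled last month ever needs to run on it.
```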
So, for consumers and for cloud-native apps, binary compatibility is (mostly) irrelevant. For some "niche" parts of computing, it is highly important. And while those parts may sound unimportant, from a $$$ point of view they are a huge part of the computer industry.
Little anecdote: about 25 years ago, I went to a presentation by a Stanford research group about the state of software engineering. They started with the question: what is the largest software company in the world? Defined as the company that employs the most software engineers and writes the most software. Obviously, this being 1998 or 2000, the members of the audience guessed things like "Microsoft" or "Sun" or "Amazon". Totally wrong. The exact answer was not actually known, because big companies keep such numbers secret, but the two in places 1 and 2 were probably General Electric and General Motors. That's because those were giant companies with a wide spectrum of products, each of which is to a large extent a software product. What was clear was that Boeing was in place 3. Remember, the Boeing 767 was the first airplane that was not able to lift its own software documentation when printed on paper, and that was long before the year 2000. Today, building an airplane is at least 50% a software engineering task!

Of the "classical" computer companies, only one made it into the top 10 of the largest software companies: IBM. And not because of the software it wrote for sale (operating systems such as OS/2, AIX, or z/OS), nor because of applications such as Catia or DB2, but because IBM employed several hundred thousand (!) consultants, whom it rented out to other companies (such as banks, insurance companies, and government agencies) and who did all the software development for those companies. Of the "hip and cool" computer companies around that time, Microsoft was the highest on the list of software companies, at about place 45. Today, Google (which employs several hundred thousand software engineers) would make the list too, but I bet that Boeing and Airbus still give it a good run for its money.
What's the point of this anecdote? We (as advanced computer hobbyists) tend to think of the software world as the things we download and install: an OS (FreeBSD), a GUI/DE (KDE, Gnome, whatever), and some apps (such as spreadsheets and video editors, and definitely a browser). All of that together is a tiny part of the worldwide software business. A lot more lines of code are written for things like updating the interest on bank accounts, processing medical claims at insurance companies, and making sure the little light on the dishwasher blinks at the correct time, when the dishes are done and not before. And for keeping planes, trains, and automobiles running. And then there is all the software in cloud companies, networking equipment, storage, and so on that keeps the internet working.