What if Oracle hadn't bought Sun?

Well, the 68K had a 32-bit instruction set with 32-bit registers, but a 16-bit internal data bus and a 16-bit ALU. That’s where the Atari ST got its name - Sixteen/Thirty-two. So it’s still not fair to compare the 68000 with full 32-bit CPUs 😁
I love looking back at the specs on stuff like this. "Direct access of up to 64KB memory, clock frequency up to 1MHz with later versions up to 2MHz".
Couples perfectly with a 300-baud acoustic modem.
 
I love looking back at the specs on stuff like this. "Direct access of up to 64KB memory, clock frequency up to 1MHz with later versions up to 2MHz".
Couples perfectly with a 300-baud acoustic modem.
Do check out the 6809, it was a much better CPU; pity that Motorola dropped it and decided that the 68008 would be their low-cost CPU going forward.

Just had a funny thought: Motorola missed the chance for what could have been a hilarious joke – they could have named the 68010 the 68009, and then carried that on all the way up to the 68069.

You know how Apple used the suffix ‘x’ in the names of their 32-bit Macintosh line, except for the SE/30 - for some strange reason they decided not to use the ‘x’ suffix for the 32-bit SE; now imagine if there had been a 68069 upgrade for it 🤣
 
I wish there were a way that Sun could be given away or sold by Oracle to become an independent nonprofit foundation. Maybe to former people at Sun Microsystems. Ellison may have no reason to, but he's already the richest person on the planet. What does he need more money for? IBM let Eclipse be its own foundation.

IBM started Eclipse, then let it become an independent nonprofit organization. While independent of each other now, they still work closely together.
If IBM bought Sun, it would only be to strip the assets and close them down as a competitor, IMHO. And IBM had already grabbed the only piece of technology that Sun had that IBM actually wanted, namely Java.
Most of what Eclipse does is in Java.


Sun Microsystems may no longer be around, but they left a legacy in both the open-source software and hardware worlds: ZFS, the influence of the CDDL on licensing, SPARC processors, and OpenOffice, a suite of programs still around. SPARC may not have been popular, but it had open source in its design. RISC-V is open source, and Arm licenses its designs for a fee so you can produce your own hardware. I don't know if SPARC could compete with RISC-V now for open-source hardware, or if the SPARC design can live alongside RISC-V.
 
That is the capability that has always been critical, of course, at least in the past. Even today, if you take FreeBSD as an example, and you want to install and run compiled binary packages, then CPU binary compatibility is essential.
No, it is not. Where do you get your compiled binary packages from? Some automated web site. You simply say "pkg install foo" (on a Linux machine you say "apt install foo"). When running on an Intel machine (architecture=amd64), the pkg command will download the appropriate package for this architecture and install it. And it will work, or not, depending on bugs. On a Raspberry Pi (architecture=aarch64), it will do exactly the same thing, and the result will most likely be exactly the same. The pkg and apt tools know how to download packages for the correct architecture. At home I have a mix of Intel and Arm (Raspberry Pi) machines, and from a user's point of view they are nearly indistinguishable (other than a huge performance difference).
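To make that concrete, here is roughly what the architecture selection looks like on both kinds of machines (a minimal sketch; "foo" is just a stand-in package name):

    # FreeBSD: pkg knows which ABI it fetches packages for
    uname -p                     # e.g. amd64 on the Intel box, aarch64 on the Pi
    pkg -vv | grep -i abi        # e.g. ABI = "FreeBSD:14:amd64"
    pkg install -n foo           # dry run: shows which package would be fetched

    # Debian/Ubuntu: apt does the same via dpkg's architecture
    dpkg --print-architecture    # e.g. amd64 or arm64
    apt install foo              # fetches the build for that architecture

The same command, typed on either machine, quietly does the right thing on each.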

The only time this problem would arise is: if I compile something myself on machine A, then copy the resulting binary to machine B and try to run it there. For most people, copying compiled binaries (and object files) is a complete non-issue, since we either (a) install from binary packages, or (b) have the source available and compile it locally.
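And if you do copy a binary across, the failure is at least immediate and obvious. A sketch of what that looks like ("hello" and the host alias "pi" are made up for illustration):

    # on the amd64 machine
    cc -o hello hello.c
    file hello            # ELF 64-bit LSB executable, x86-64 ...
    scp hello pi:         # copy it over to the aarch64 machine

    # on the aarch64 machine
    ./hello               # rejected with something like "Exec format error"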

There is a small exception to the above rule: people who have compiled something before and want to CHANGE the CPU in their computer to a new instruction set. This, for example, happened on the Mac twice, when Apple changed PowerPC -> Intel -> Arm. Apple let people copy their disks from the old to the new Mac, including already compiled (or installed) programs, and dealt with the architecture-change problem by shipping emulators, shipping fat binaries, and taking years for a mostly smooth transition. One of the things that made it easier for them is that Apple created a relatively closed ecosystem (the walled garden), where binaries only very rarely come from unexpected places. But that approach does not work everywhere, such as ...
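As an aside, you can still see the fat-binary half of that machinery on a current Mac (a small sketch; the last line assumes Rosetta 2 is installed):

    file /bin/ls             # Mach-O universal binary with 2 architectures
    lipo -archs /bin/ls      # e.g. "x86_64 arm64e" - both slices in one file
    arch -x86_64 /bin/ls     # force the x86_64 slice; Rosetta 2 emulates it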

And I would imagine backwards software compatibility must be a critical selling feature for IBM Z.
Absolutely. That's because Z (or mainframes in general) are sold into markets that are highly reliant on their computer infrastructure being completely reliable. Say a typical mainframe customer (bank, insurance, ...) decides that its 5-year-old mainframe model is obsolete and wants to roll a new one in. They will NOT take the risk of recompiling stuff just because there is a hardware upgrade. As a matter of fact, if they recompile a program, they probably have to run a 4-week or 3-month testing program to make sure the newly compiled program passes their battery of tests. They may also have "dusty decks", programs for which the source code has long been lost, and they like to keep running the old binaries. Note: the bank probably has TWO OR THREE mainframe computers; on each they probably run a dozen VMs, but their production probably relies mostly on a single production VM on each of the hardware machines (which cost in excess of $1M each, $10M once you add all the accessories).

And some of the same argument applies to technical computing (the machine that controls all the robots in a giant automated warehouse) and embedded computing (the thousands of CPUs in a Boeing 787): in places where extremely high quality standards are needed, where older software needs to run for many years (because changing it would bring risk), and where testing new software versions is extremely expensive, binary compatibility is vital.

So it's interesting that you say binary software compatibility is becoming less important (or perhaps I've misunderstood what you've said).
Exactly. Let's look at a very different customer, for example Facebook (a.k.a. Meta) or a similar hyperscale computer user. I'm sure they use a very fast CI/CD system, where source-code changes hit production within days. They probably have a MILLION OR TEN MILLION servers (remember, the bank had TWO OR THREE), and each of the rack-mount servers costs them $1K. They recompile their code every few hours, and probably put it into some sort of containerization system such as Docker. They have some sort of complex scheduling system, which distributes millions of tasks onto millions of servers, and tasks get restarted regularly. In such a place, it's perfectly fine to replace 10% of all machines with a new CPU architecture tomorrow: the newly compiled code will be deployed on them, and started by the giant scheduling system. No CPU will ever run code that was compiled longer than a month ago. As a matter of fact, at the FAANG companies, machines get regularly rebooted (weekly or monthly) to make sure OS fixes have been installed.
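That is also why building the same container image for several architectures is a routine one-liner in shops like that. A sketch (the image name is made up, and it assumes a buildx builder configured for both platforms):

    # build the image for two architectures in one go and push both;
    # the registry then serves whichever variant matches the pulling machine
    docker buildx build \
        --platform linux/amd64,linux/arm64 \
        -t registry.example.com/myapp:latest \
        --push .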

So, for consumers and for cloud-native apps, binary compatibility is (mostly) irrelevant. For some "niche" parts of computing, it is highly important. And while those parts may sound unimportant, from a $$$ point of view they are a huge part of the computer industry.

Little anecdote: about 25 years ago, I went to a presentation by a Stanford research group about the state of software engineering. They started with the question: what is the largest software company in the world? Defined as the company that employs the most software engineers and writes the most software. Obviously, this being 1998 or 2000, the members of the audience guessed things like "Microsoft" or "Sun" or "Amazon". Totally wrong. The exact answer wasn't actually known, because big companies keep such numbers secret, but the two in places 1 and 2 were probably General Electric and General Motors. That's because those were giant companies with a wide spectrum of products, each of which is to a large extent a software product. What was clear was that Boeing was in place 3. Remember, the Boeing 767 was the first airplane that was not able to lift its own software documentation when printed on paper, and that was long before the year 2000. Today, building an airplane is at least 50% a software engineering task!

Of the "classical" computer companies, only one made it into the top 10 of the largest software companies: IBM. And not because of the software it wrote for sale (operating systems such as OS/2 or AIX or z/OS), nor because of applications such as Catia or DB2, but because IBM employed several hundred thousand (!) consultants, whom it rented out to other companies (such as banks and insurance companies and government agencies) to do all the software development for those companies. Of the "hip and cool" computer companies around that time, Microsoft was the highest on the list of software companies, at about place 45. Today, Google (which employs several hundred thousand software engineers) would make the list too, but I bet that Boeing and Airbus still give it a good run for its money.

What's the point of this anecdote? We (as advanced computer hobbyists) tend to think of the software world as the thing we download and install, like an OS (FreeBSD), a GUI/DE (KDE, Gnome, whatever), and some apps (such as spreadsheets and video editors, and definitely a browser). All of that together is a tiny part of the worldwide software business. A lot more lines of code are written for things like updating the interest on bank accounts, processing medical claims at insurance companies, and making sure the little light on the dishwasher blinks at the correct time, when the dishes are done, and not before. And keeping planes, trains and automobiles running. And the amount of software in cloud companies, networking equipment, storage, etc. that keeps the internet working.
 
Do check out the 6809, it was a much better CPU; pity that Motorola dropped it and decided that the 68008 would be their low-cost CPU going forward.
Go listen to the history of the 68000; there are lots of interviews with the product and design teams on the Computer History Museum web site. In those days, Motorola was a mid-sized player in the CPU market; the large ones were Intel and Zilog. Motorola knew it had to do a 16-bit machine to compete with the 8086 and Z8000, but they didn't have the money. The 6809 was a way to squeeze some more revenue out of the existing 6800 design, with "minimal" investment.

But quite a few engineers inside Motorola knew how to build a 16/32-bit machine, and eventually a customer was found that was going to buy a lot of them, which green-lit the development of the 68K. Want to guess who that customer was? General Motors! While we all talk about Apple, Atari, Amiga, and the Sinclair, the real usage of computers was not in the things computing amateurs have in their living rooms or bedrooms, but in either embedded computing or commercial IT (back then known as DP). And because Motorola knew they were behind competitors like Intel and Zilog, they had to make their new design particularly good: twice the performance/dollar of Intel and Zilog. And a much cleaner architecture, which allowed users to deploy larger systems and larger programs (that's why the 24-bit address bus), and have a roadmap to faster and more powerful CPUs (such as ones that were internally fully 32-bit, not using a 16-bit ALU twice). And the result was the 68K.

Once the 68K existed, there was no point putting any effort into the 6809 any longer. The 68008 could do anything the 6809 could do (admittedly, with 8 more pins), and do it faster and better. A friend had a 6809 home computer (the Eltec Eurocom-II), and around 1980 that became a dead end; it was still a great machine (memory-mapped video with multiple banks made for a great user interface).

I've had the good fortune to hear this story from a friend who was at the center of it (chief architect of the 68K). He's also the guy who roasted Andy Grove at the Microprocessor Forum.
 
My microprocessor engineering course at uni was taught on Motorola kit... the 6800 series, then the 68000, EXORmacs, etc. We did the Z80 too, but most of it was on Motorola. I remember the lecturers preferred the Motorola kit and despised the Intel crap. It's a real shame they've gone. I remember moving from the 68000 to the 8086 and the nightmare of the NEAR and FAR crap... the Motorola stuff was much better engineered. Don't even mention the bloody 8051...
 
No, it is not. Where do you get your compiled binary packages from? Some automated web site. You simply say "pkg install foo" (on a Linux machine you say "apt install foo"). When running on an Intel machine (architecture=amd64), the pkg command will download the appropriate package for this architecture and install it. And it will work, or not, depending on bugs. On a Raspberry Pi (architecture=aarch64), it will do exactly the same thing, and the result will most likely be exactly the same. The pkg and apt tools know how to download packages for the correct architecture. At home I have a mix of Intel and Arm (Raspberry Pi) machines, and from a user's point of view they are nearly indistinguishable (other than a huge performance difference).
Well, I can see what you're getting at, but supporting multiple instruction sets and hardware architectures rapidly becomes a major burden in production; it's bad enough having to support multiple iterations of x86 itself - just ask Microsoft! :) Java was yet another attempt to obviate that burden, but the world's computing still mostly runs on compiled code, not JVMs; remember when operating systems were going to be written in Java and run on top of JVMs... which themselves ran on Sun's JVM-optimised chips, the ones they gave away in the 'Java ring'? I had one of those rings once... I wish I'd kept it, they are probably a collector's item now, like my genuine PC-AT from 1985. Ah, yes... and thin clients... 😁

Yes, maybe the particular hardware a piece of software is being run on is slowly becoming less important. But I only have to point to Microsoft's multiple failed attempts to get desktop Windows onto ARM variants (or, longer ago, Alpha, and perhaps POWER if they had cooperated with IBM on that) for an example of the power of an entrenched market standard, in this case x86 PCs. None of those other architectures ever got any real market share compared to x86, despite the much lower cost of ARM processors.

Even Intel themselves have made attempts to supersede x86. I remember a talk by an Intel tech marketing rep some time around the early 90s telling us that x86 was 'legacy' and the future would be i860 'super PCs' (that was the term he used), and that we needed to move our software to that platform; but the i860 has disappeared into the mists of time, while we are still running on x86 - I've got one sitting on my desk in front of me right now. MS and ARM are trying again now with the Snapdragon X, but it remains to be seen what kind of market share it will achieve.

So I think I will have to respectfully disagree with your argument, at least to some extent. But I will concede that, particularly in the datacentre, through the deployment of VMs and containers, platforms are becoming more architecture-agnostic, and factors like power cost and cooling are highly significant. If datacentre tech like VMs and software distribution using containers moves onto the desktop, then yes, that will enable competing desktop architectures versus x86, and perhaps that is already starting to happen.
 

Of course this is a projection. I remember being shown similar graphs back around 1988, when Acorn were first going around the country giving demonstrations of their ARM RISC chipset and were able to demonstrate superior performance to the current x86 chip, probably a 386 at the time. I think it's more of a 'hope' than a 'fact'. And this takes no account of the Chinese push to RISC-V, which, if it actually happens, may well turn out to be a significant factor given their large-scale manufacturing capacity - and if we accept your contention that the target architecture is not important. If we believe this graph, ARM had only about 10% market share versus x86 at 90% in 2023.
[attached graph: projected ARM vs x86 market share]


Meanwhile, at the present time, Intel's and AMD's x86 market shares of the desktop space look like this. To which I say 'QED', at least for the present time. Once a particular protocol becomes established in the market, it becomes very difficult to replace it with an alternative equivalent protocol that competes with the established one - think of MS Windows versus Mac, for example, or VHS versus Betamax; and that rule applies to all kinds of fields of human endeavour, not just computing. :)

[attached chart: Intel vs AMD desktop x86 market share]
 
ARM had only about 10% market share versus x86 at 90% in 2023.
That's because hardly anyone realizes that ARM includes Raspberry Pis. When SBCs take off, there will be more. Most of the BSDs, except NetBSD, understand the importance of GPU drivers for ARM; they carry drivers for the VideoCore. We're still stuck with the assumption that everything uses Nvidia, Radeon, or Intel GPUs, but the Raspberry Pi uses Broadcom's VideoCore for its GPU.

It's also a different market. SBCs will grow for low-cost 32-bit retro computing; there will be a niche for that for decades, and they will replace most old 32-bit physical computers in day-to-day use. As long as there's a use for a DOS box, a terminal computer, a specialist machine, a router/firewall, retro gaming, a NAS, or a minimal desktop computer, 32-bit ARM will be relevant.

While FreeDOS isn't available for ARM, PDOS is, and it can replace most full-screen Windows programs for 32-bit computing.

For routers on Arm32, there's the BSD-based ZRouter.

I'd like to see the use of 32-bit ARM and 32-bit RISC-V expand for routers, NAS boxes, terminal use, full-screen graphics use, and retro gaming, so that anyone around the world can buy one of these and have a hobbyist or productive-use system.
 
I was actually surprised it was only 10%. There must be many more phones than PCs shipped nowadays, and x86 got nowhere on the phone - I think Intel did try with some Atom variants, but they got no market share with them. And for that matter, MS got nowhere with Windows ME either; remember that fiasco. Perhaps that study was only looking at compute, i.e. desktops and servers, and, as you say, not including rPis; I don't think they considered embedded either. Well, it's just a market projection, there are always plenty of those :)
 
speculation:
x86 will eventually be kicked out of everything. the "windows [gaming] pc" will be the last to move to arm/riscv/whatever.
 
But I only have to point to Microsoft's multiple failed attempts to get desktop Windows onto ARM variants (or, longer ago, Alpha, and perhaps POWER if they had cooperated with IBM on that)
DEC Alpha was fully supported by NT; MS had to pay ~$100M not to be sued, and to promise to maintain NT support for the Alpha processor. MS was forced to do that because Cutler and the engineers who had previously worked at DEC copied a lot of VMS features and almost the whole of MICA. It was Compaq who ditched NT for Alpha. Please see The Rest of the Story.

Also, NT worked on PowerPC, specifically PReP (PowerPC Reference Platform) - check out the ThinkPad Power Series. BTW, Gates was negotiating with Apple, but in the end Apple didn’t make the PowerPC Macs PReP-compliant.
Even Intel themselves have made attempts to supersede x86. I remember a talk by an Intel tech marketing rep some time around the early 90s telling us that x86 was 'legacy' and the future would be i860 'super PCs' (that was the term he used), and that we needed to move our software to that platform; but the i860 has disappeared into the mists of time, while we are still running on x86 - I've got one sitting on my desk in front of me right now.
Fool me once… Well, how Intel managed to fool themselves (and everybody else) twice is beyond me. The i860 had the same type of problem that Itanium had – it would work great only if there were some compiler that would somehow automagically optimize for it. 🤦‍♂️ No one ever made such a compiler, and everyone was left with two snails (or a snail and a potato; they weren’t close relatives).
 
Covacat said:
"speculation:
x86 will eventually be kicked out of everything. the "windows [gaming] pc" will be the last to move to arm/riscv/whatever."

I think it will be a sad day if that happens, if it means a return to closed architectures. I am very pleased that we DO have the x86/PC industry standard; I think it's the best thing that has happened to computing in my lifetime, and it is a very strong democratizing factor in enabling world-wide access to computers. I still remember what the world was like BEFORE the PC, namely a hell of multiple incompatible competing architectures from different companies, everything super expensive. Once the PC architecture became established, any company with the capability could start to manufacture them, and that created a global mass market with a single open standard architecture... and that is still true today (although they are trying to close it, or raise the barriers to entry). The end result is that I can buy a great little machine with every feature you can imagine for just over $100-200, which knocks the equivalently priced closed-architecture rPi (for example) into the ground for software compatibility and hardware capability. Having an open architecture might not be good for the profits of companies like IBM, who after all tried to kill it with the PS/2, but it's good for everyone else. If IBM (and the rest, they're all the same) still had their way, you would be paying $10,000 for that same machine, not $100.
 
DEC Alpha was fully supported by NT;
Yes, agreed - what I was trying to say was that the Alpha was a failure in the marketplace, despite Microsoft's support. The Alpha did not take any substantial market share from x86 when it was launched, despite being technically superior, and eventually disappeared when DEC went bust.
 
x86 will eventually be kicked out of everything. the "windows [gaming] pc" will be the last to move to arm/riscv/whatever.
RISC-V comes in both 32-bit and 64-bit variants. I want to see x86 replaced largely by 32-bit RISC-V and 32-bit Arm.

I wish the Solaris or Sun name would be sold or given to the illumos projects. Then I'd like to see a more reusable, universal variant of CDDL 1.1 for these projects. OpenOffice was given to the Apache Foundation by Oracle.

And that SPARC gets revived to work alongside RISC-V, even for specialized purposes. RISC-V is stewarded by the nonprofit RISC-V International; SPARC is controlled by its own consortium, SPARC International.

Sun Microsystems' past projects need to live on, even if the company's name isn't revived as a nonprofit foundation.
 
Yes, agreed - what I was trying to say was that the Alpha was a failure in the marketplace, despite Microsoft's support. The Alpha did not take any substantial market share from x86 when it was launched, despite being technically superior, and eventually disappeared when DEC went bust.
Alpha outlived DEC - Compaq was promoting it heavily; it was HP that ditched Alpha for the promise of a “glorious” and “superior” CPU from Intel, which turned out to be a potato, not a CPU.

I’m still pissed at HP for killing Alpha, as much as I’m pissed at Adobe for killing FreeHand! 🤬
 
Still, if I'm wrong and ralphbsz is right after all, especially in what he says about how the FAANG companies work, then the writing may be on the wall for Intel and x86. It's hard to know. Will a world of ARM machines be any better? Or RISC-V? And will China take total control of this strategic industry? Once it's gone, it will be gone for good, like the rest. That would not be a good outcome for the West, IMHO. You don't know what you've got till it's gone.
 
Go listen to the history of the 68000; there are lots of interviews with the product and design teams on the Computer History Museum web site. In those days, Motorola was a mid-sized player in the CPU market; the large ones were Intel and Zilog. Motorola knew it had to do a 16-bit machine to compete with the 8086 and Z8000, but they didn't have the money. The 6809 was a way to squeeze some more revenue out of the existing 6800 design, with "minimal" investment.
The 68000 had a lovely logical design - all registers equivalent, much nicer to program than the 8086. My uni lecturers loved it because it was so much easier to use as a teaching tool, and for developing hardware designs; we had quite a few 68k VMEbus hardware projects on that course. I personally had both 6809 and 68008 micros: the 6809 was a Dragon, which was similar to the Tandy (can't remember the Tandy model number) - I had the 64K version, and that was actually a nice machine. The 68008 micro I had was a Sinclair QL, which, if only the implementation had been better, could have really been a milestone; but in typical Sinclair fashion they cheaped out on the build quality, so they had all kinds of problems once they shipped them. Of course it's kind of sad from a personal perspective that there aren't British companies producing computers like this now. I guess at least we still make the rPi!

For a while it really looked like 68k was going to be a mainstream architecture, with companies like Sun using it in their workstations. I think the 68k lived on in the 'DragonBall' processors that were used in some handhelds, but I'm not sure; once I got onto working on Intel I lost touch with the world of 68k. But I always thought it was a much more logical design than the Intel stuff.

Some ancient history... :)

[two attached images]
 
And this is what you can get nowadays for your $100....
When you think how many hours of work it takes to earn the money to buy it, it's truly incredible what you get nowadays, compared to the past. Runs FreeBSD, too! :)



Today's price is 83 GBP for 8 GB RAM and a 256 GB SSD... Google says that's $112. Crazy. They'll be giving them away in packets of cornflakes next. I think there are even cheaper ones too... the GMKtec ones are quite good quality; I've got a couple of them. You can connect three 4K monitors to it..! The N100 is a nice chip - Intel still has some nice products, they're not all junk. It's a shame they seem to have screwed up the P-cores on their recent higher-performance CPUs.
 
the 6809 was a Dragon, which was similar to the Tandy (can't remember the Tandy model number)
TRS-80 Color Computer, later the Tandy Color Computer, better known as the CoCo (also 2 and 3).

Not to be confused with the TRS-80 Model I/II/III, which were Z80 machines.

The Dragon was very similar, but not entirely compatible; CoCo BASIC needed to be re-tokenized, for example, but it was possible to swap ROMs and make one machine behave exactly like the other (plus some rewiring, but I forget the details).

RoboNuggie also had a Dragon 😉
 
TRS-80 Color Computer, later the Tandy Color Computer, better known as the CoCo (also 2 and 3).

Not to be confused with the TRS-80 Model I/II/III, which were Z80 machines.

The Dragon was very similar, but not entirely compatible; CoCo BASIC needed to be re-tokenized, for example, but it was possible to swap ROMs and make one machine behave exactly like the other (plus some rewiring, but I forget the details).

RoboNuggie also had a Dragon 😉
Ah yes, that was it :cool: Yes, I was getting confused, remembering the Tandy Z80 boxes. The Dragons were made at a factory in Wales, like the rPi is today. It was a nice machine - lots of ports and a high-quality keyboard. The BBC Micro was still superior, but at twice the price of the Dragon. And arguably the 6809 was a better chip than the 6502.
 
How about this one: what if Apple had chosen BeOS instead of NeXTSTEP as the basis for the successor to the classic MacOS? 🍿
They would probably be long gone by now, considering that Jobs practically resurrected Apple from the edge of bankruptcy, even though his first official role when he got back was only as a consultant (he became CEO again only three years later); in the meantime he made a lot of crucial decisions for Apple's survival and growth. IMHO, it's a good thing that Jean-Louis Gassée was asking for more than twice what Apple was ready to pay, and Apple told him to go pound sand.
 
They would probably be long gone by now, considering that Jobs practically resurrected Apple from the edge of bankruptcy, even though his first official role when he got back was only as a consultant (he became CEO again only three years later); in the meantime he made a lot of crucial decisions for Apple's survival and growth. IMHO, it's a good thing that Jean-Louis Gassée was asking for more than twice what Apple was ready to pay, and Apple told him to go pound sand.
Yeah, my thoughts exactly. They absolutely needed Jobs to come back when he did; they were teetering on the brink of oblivion, with no vision and a muddled product line. And basing Mac OS X on a robust, mature UNIX-like OS was definitely a better move than the immature BeOS, as cool as it was.
 