FreeBSD on zSeries

Reading Dr. Colin Percival's blog post on running FreeBSD on AWS Firecracker makes me wish someone would port FreeBSD to the zSeries architecture.

In the first decade of the 2000s, a company called Sine Nomine Associates was working on porting OpenSolaris to the zSeries, and had gotten as far as releasing a public demo. Then Oracle bought Sun Microsystems and killed off OpenSolaris, and the project was cancelled.

The FreeBSD modifications for Firecracker reminded me of that project, because just as FreeBSD on Firecracker does I/O using simplified virtio devices, the Sine Nomine port did I/O using DIAGNOSE instructions that talked to the underlying z/VM.

The DIAGNOSE instruction was originally a model-dependent instruction used by operating systems like MVS to do model-specific things like reconfiguring the hardware. The IBM z/Architecture Principles of Operation, which documents the zSeries architecture, says: "The CPU performs built-in diagnostic functions, or other model-dependent functions. The purpose of the diagnostic functions is to verify proper functioning of equipment and to locate faulty components. Other model-dependent functions may include disabling of failing buffers, reconfiguration of CPUs, storage, and channel paths, and modification of control storage." However, z/VM repurposed the DIAGNOSE instruction as a way to talk to the hypervisor. Normally, programs use a Supervisor Call (SVC) instruction to context-switch into the kernel, but using DIAGNOSE to talk to the hypervisor doesn't interfere with the guest operating system's use of the SVC instruction.
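
For a flavor of what a DIAGNOSE hypercall looks like from inside a guest, here is a minimal sketch in C with inline assembly, assuming GCC on s390x under z/VM. It shows DIAGNOSE X'44' ("voluntary time-slice end"), the simplest of the z/VM DIAGNOSE calls and the one Linux on Z has long used to yield the CPU; the function name is mine, and DIAGNOSE is privileged, so this only works in supervisor state:

    /* Minimal sketch (not from the Sine Nomine port): a z/VM hypercall.
     * DIAGNOSE X'44' tells the hypervisor "I have nothing to do, give my
     * time slice to another virtual CPU". The guest's own SVC handling
     * is untouched, since no SVC instruction is involved. */
    static inline void diag44_yield(void)
    {
            asm volatile("diag 0,0,0x44" : : : "memory");
    }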

The Sine Nomine port used DIAGNOSE instructions for block I/O, network I/O, I/O discovery, and memory discovery, which greatly reduced the porting effort.

I really wish someone who likes FreeBSD and has money to spare would hire Sine Nomine Associates to resurrect the work they did on the OpenSolaris port and do a FreeBSD port. (Hey, it makes more sense than paying to send a car to Mars, at least to me.)

One can only hope.
 
Well, you can keep your eyes peeled for a wishlist. I remember someone from the Foundation did exactly that a few years ago right here on these Forums - started a thread and asked Forums users for their FreeBSD wishlist.

Just keep in mind, this is Open Source. People work on stuff if they feel like it, if they have the time, skills, and the appropriate hardware. There's nothing stopping you from setting up shop, getting some working 'proof of concept', spreading the word, and getting the code accepted by the primary developer group. But it takes skills, guts and time to dive into a major project like that.

Many years ago, I saw claims that NetBSD is easier to port to a new architecture - after all, it was NetBSD that someone installed on a toaster in Los Angeles back in 2003! I remember reading an interview with that guy, and he said that NetBSD's sources were very cleanly separated into machine-independent parts (most of userland) and machine-dependent parts (anything tied to the instruction set and compiler), even more so than in other OSes.
 
The problem with doing any development work on a z is getting access to the hardware, and getting z/OS for it. If you have a large amount of money, that's easy: buy a z, get the OS license, get support, and start playing. I don't know how much "large amount" is, but even within IBM, getting access to physical z hardware is difficult and requires lots of internal funding. I also don't know how good the Hercules emulator is at emulating modern z models, nor how to get a copy of current z/OS from outside IBM.

The good folks at Sine Nomine are old-time mainframe people, who do a lot of z development. They could do things like this, but being a commercial company, they probably would need to see the market and revenue in it.
 
The 2008 Sine Nomine demo said it would run on Hercules 3.07, but without network support. But you would still need a z/VM license.

I agree, this is not something a bunch of volunteers are likely to be able to pull off. And I don't know whether you could get a bunch of companies to fund it speculatively. A lot of companies that run servers are fond of FreeBSD. Perhaps IBM could fund it as a way to encourage people to move their servers to zSeries, as they have done with Linux. (This would be similar to NVIDIA buying PGI, which made a compiler for parallel-processing Fortran programs that ran on CUDA devices. The compiler is now free, but it encourages people to use more NVIDIA GPU cards.)

Ideally, whoever funded it would hire Sine Nomine to do it, since they could probably reuse a lot of the work they did for OpenSolaris.
 
But you would still need a z/VM license.
Does anyone know how much that license would cost?

The last time I priced it was inside of IBM, and even with the "one IBM division sells it to another IBM division" discount, the price for a simple single-mode z/VM and z/OS license, including tech support, was a 5-digit amount per month when deployed on an emulated mainframe (using Hercules on an Intel chip). Inside IBM, there was an option of getting it for free without any support, but that wouldn't work for outsiders.

And what incentive would IBM have to support this? How many people run FreeBSD servers, and what fraction of them would want to move to running on mainframes?
 
Just reading this thread makes me see the Open Source movement in a different light. The Open Source movement had its roots in RMS getting pissed that a colleague at MIT would not share the source code to a printer driver.

How many people even know about z/OS' very existence, let alone how much a license would cost, or how much compatible hardware would cost? The general public may have heard of Linux. Among Linux users, not everybody has even heard of the BSDs.

With this kind of backdrop, I'm seeing 'Open Source software running on commodity hardware' as on the opposite end of the spectrum from 'IBM's z/OS running on mainframes, preferably zSeries ones'.

The funny thing about that spectrum is that when everything is said, done, compiled, assembled, rendered, and computed, both ends are capable of accomplishing pretty much the same results, and the only real difference is how much money was blown to accomplish the same said result. 😂
 
and the only real difference is how much money was blown to accomplish the same said result. 😂
IBM claims (with reasonable justification) that mainframes are the most cost-effective way to run Linux servers. I think a few years ago they set a world record, with one mainframe running 35,000 (thirty-five thousand) Linux VMs on a single host. Admittedly, that host occupies a whole rack and costs about a million dollars.

IBM put how many Billion$ into Red Hat? I'd gladly take a tiny fraction of that for FreeBSD.
When I worked at IBM, before they acquired Red Hat, IBM's "Linux Technology Center" had many thousands of employees, of whom dozens were kernel developers and at least a few thousand were application-level developers. Open source is no longer about a college student spending evenings (between bottles of beer) hacking at their little hobby project. It's big industrial-scale software development.
 
Open source is no longer about a college student spending evenings (between bottles of beer) hacking at their little hobby project. It's big industrial-scale software development.
When software development gets to the point of being big and industrial-scale, it kind of stops being Open Source - the freedom to work on what you want, when you want, and how you want gets traded for a paycheck.

Admittedly, there are benefits to this - some stuff like office suites (LibreOffice comes to mind) would probably never materialize without commercial-grade efforts. One would need to organize the development process into something coherent, complete with quality/security audits, filtering out useless code, telling people 'Sorry, we need real skills, and it is abundantly clear you do not have those skills'. An Open Source project doesn't always have the resources to be organized to such an extent.

And that comes in addition to the licensing, which is another can of worms.

My take is: it's kind of fun to have an online conversation about the challenges facing an Open Source project. Basically, educating the world about the harsh realities that await anyone who wants to dive into the Open Source world. A bit like swimming in the ocean - one can take it as a personal attack when somebody else points out 'you can't swim', or one can take swimming lessons before trying to figure out what's so fun about playing in the waves.
 
Open source is no longer about a college student spending evenings (between bottles of beer) hacking at their little hobby project. It's big industrial-scale software development.
The way I see it is you have two sides to open-source.
  • The stuff that only large commercial entities can pull off (e.g. bringing up a new cutting-edge architecture in the kernel)
  • The stuff that only students with their hobby projects can pull off (e.g. writing drivers for a 10-year-old printer)
They rarely cross over. A commercial company often has far too much bureaucracy to do small-scale (sometimes innovative) things. Likewise, the independent developer can hardly undertake a massive, industry-shaking project on their own. And yet, open-source *needs* both to survive. The digital landscape is messy, and sometimes that 10-year-old printer is a key piece of infrastructure.

So I would suggest open-source is still about college students improving the digital world. Only now the commercial world is starting to mature and understand that private source-code isn't quite so valuable anymore (license exploitation is much more lucrative ;)).

The last time I priced it was inside of IBM
What about the old stuff that is pretty much obsolete? Does IBM really just landfill it all or does it have any kind of second hand market?
 
When software development gets to the point of being big, industrial-scale, it kind of stops being Open Source - the freedom to work on what you want, when you want, and how you want - that is traded for a paycheck.
There are several orthogonal axes here. One is whether the source code of the result is released and freely usable (let's not quibble over license differences such as BSD versus GPL). Related to that is whether one needs to pay for a ready-to-install copy or not. And whether paid high-quality support is available.

Another set of axes is the development model. Are developers paid for the open-source work? Is the development done by a single person, or by a large group? Are there schedules, goals, and management, or do developers work on whatever they want or feel is important? Are participants evaluated (interviewed) before joining the project? Is their work checked before being accepted into the main source repository?

And yet another question: is the work split into development and test, with dedicated test engineers, quality goals, and test harnesses? Is there a single official repository, or is it forked all over? Is there one chief architect, czar, or pilot?

A lot of combinations can exist. The traditional open-source model (a college student in their dorm room with a pack of beer bottles, coding whatever they feel like) is one extreme.
 
does it have any kind of second hand market?
Well, there are private hobbyists out there who keep old tech alive. On Discord, I've seen SGI enthusiasts who not only scour eBay for hardware, but actually maintain repos of software compiled for the Irix flavor of UNIX. And there's still a nice market for secondhand HP Z workstations.

For such things to survive, they kind of have to share their source code in a meaningful fashion, simply because there aren't that many people around who are even interested in that stuff, let alone know how to maintain it.

I've heard of https://www.computermuseumofamerica.org/ (in Roswell, GA), and https://www.computer-museum.org/ (Closed up, collection donated to https://www.sdsu.edu/ library).

Thing is, just about any field of expertise has a fascinating history, with much to be learned - but not very many enthusiasts willing to maintain the archives. At some point, you gotta decide if it's worth your own time and effort to maintain those archives.
 
The stuff that only large commercial entities can pull off (e.g. bringing up a new cutting-edge architecture in the kernel)
The port of the Linux kernel to Itanium (Itanic?) was mostly done by a single person (who worked in the same building I did, and with whom I sometimes had lunch). It took less than a year, but intense effort. It relied heavily on another large group having already implemented a CPU and system simulator, because actual machines were few and far between.

What about the old stuff that is pretty much obsolete? Does IBM really just landfill it all or does it have any kind of second hand market?
There is a small second-hand market for used mainframe and large server gear. A lot of it is handled by the vendor itself, with second-tier customers or internal groups getting used equipment for pennies on the dollar. Some of it happens through specialized used equipment dealers, or channel partners. Occasionally, you find large server gear showing up on eBay. But for the most part, large servers (including mainframes and storage systems) are bought by the end user, utilized as long as economically viable, and then destroyed. The destruction part is important, because disks, motherboards, SSDs etc. could all hold sensitive information, so they are typically not sold used, but instead shredded on site.

The rise of the cloud as the dominant form of IT deployment has not changed this logic, just centralized it in fewer places.
 
I've heard of https://www.computermuseumofamerica.org/ (in Roswell, GA), and https://www.computer-museum.org/ (Closed up, collection donated to https://www.sdsu.edu/ library).
Apparently, the garage mainframe is a thing.
The electric bill would frighten me; no lie.
 
I spent 19 years of my career on IBM mainframes (MVS systems programming). I'd be the first to say let's do it. And I'd be one of the first to volunteer, because of my knowledge of the architecture and of FreeBSD. But the rational side of me asks: to what end? There's not a lot of Linux on zSeries, and I doubt there would be even a little interest in FreeBSD. There'd have to be a compelling reason for businesses to want FreeBSD on zSeries. Because, as much as you and I might want to do it, I certainly wouldn't be running one of these at home, and I doubt I could be persuasive enough to convince enterprises to run it.

Nice idea but not practical unless some business feels they need it.
 
What's important shouldn't be getting FreeBSD or other OSes to fully run on zSeries, but making PCs for end users (consumers) become just like zSeries in architecture (assuring stability and security in hardware whenever it is not impossible) at cheap prices. FreeBSD on zSeries would be an important step toward that future, though.
 
With the power that even consumer-grade devices wield these days (a Threadripper costs just a few grand), it's difficult to make a business case for an up-to-date IBM zSeries mainframe, unless you're running something that's obviously beyond what a Threadripper/EPYC and a few high-end GPUs can handle. Even major Hollywood studios do their renders on farms of consumer-grade (albeit high-end) devices. And FreeBSD runs fine on even new-ish consumer stuff. I have a high-end GPU (an RX 6900 XT), and have had no issues with it under FreeBSD.

I think you gotta ask yourself, "What do I want to accomplish with that?". Most computing tasks can very well be accomplished by hardware that is far more easily available than a zSeries mainframe.

On Discord, I once asked someone why they were investing in a homebrew rackmount server. The answer was, "I want to play around and feel like an admin". I'm not gonna pass judgement - nothing wrong with that; if they have the time, skill, and money to pull it off, and that's what satisfies them, great. It's just that it's a different ballgame if you're a business, because (as the old adage goes) time is money.
 
What's important shouldn't be getting FreeBSD or other OSes to fully run on zSeries, but making PCs for end users (consumers) become just like zSeries in architecture (assuring stability and security in hardware whenever it is not impossible) at cheap prices. FreeBSD on zSeries would be an important step toward that future, though.
IBM 360, 370, 370/XA, 370/ESA, and z/Architecture don't have a stack. The simple fact that they don't have a stack makes their O/S's (z/OS and z/VM) a little less penetrable. The programming paradigm on z/Architecture - in assembler and in the code that compiled languages produce - uses a linked list of save areas. Parameters are passed from function to function (subroutine, in IBM mainframe parlance) via an array of addresses, similar to argv, pointed to by a register (usually register 1). And instead of a stack, the common practice of using registers 0, 1, 13, 14, and 15 for program linkage was adopted by all programmers.
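
To make that concrete, here is a rough C model of the conventional 72-byte save area - a sketch of the classic layout as used with 31-bit addressing, not code from any IBM product, and the struct and field names are mine:

    /* Sketch of the classic S/360..z register save area (18 fullwords).
     * There is no push-down stack: each routine owns one of these, and
     * caller and callee chain them into a doubly linked list. R13 points
     * at the current save area; R14 holds the return address; R15 the
     * entry point; R1 the address of the parameter (argv-like) list. */
    struct save_area {
        unsigned int      lang;       /* word 0: reserved (PL/I language use) */
        struct save_area *back;       /* word 1: caller's save area           */
        struct save_area *fwd;        /* word 2: callee's save area           */
        unsigned int      r14;        /* word 3: caller's return address      */
        unsigned int      r15;        /* word 4: callee's entry point         */
        unsigned int      r0_r12[13]; /* words 5..17: registers R0..R12       */
    };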

Most C compilers on zSeries implement a stack using store and load instructions. In that sense, programs compiled by them are just as vulnerable as any other. The fact that z/Architecture is more secure comes partially from that convention, and partially because neither of those O/S's nor their apps are laid out like modern apps and O/S's. Put FreeBSD, compiled in C, on it, and the stack emulation would be just as vulnerable.

Other vulnerabilities, not reliant on a stack, are also more difficult to exploit because memory is laid out much differently. I'm not saying it's invulnerable, but the fact that it's so different makes the O/S and the underlying architecture less vulnerable, both through the architecture itself and because most people don't understand it.

Personally, I think architectures like ARM and RISC-V are the future, especially if they're more cost-effective than what we currently have today. That's why ARM is so popular with vendors and manufacturers.

On a personal note, I could do a lot more, and more elegantly, with IBM 360 & 370 machine language (assembler) than with Intel assembler or even C. The instructions used to translate EBCDIC to ASCII and back could be fed a table of multiple-of-four offsets - kind of like building a case construct (as in C), but in machine language, using a single instruction followed by a list of branches: a branch table. That's something that can't be done on any other architecture without many more instructions. But I digress.
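
For readers who never wrote S/360 assembler: the instruction in question is TR (translate), which replaces each byte of a buffer with the byte at that index in a 256-byte table. A rough C equivalent of what a single TR does (the function is mine, for illustration; the real instruction handles up to 256 bytes per execution):

    #include <stddef.h>

    /* Rough C equivalent of the TR (translate) instruction: each byte
     * indexes a 256-byte table and is replaced by the table entry.
     * With a table of EBCDIC-to-ASCII mappings this converts a buffer;
     * with a table of branch offsets, the same trick drives a branch table. */
    void translate(unsigned char *buf, size_t len,
                   const unsigned char table[256])
    {
        for (size_t i = 0; i < len; i++)
            buf[i] = table[buf[i]];
    }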

It's a different architecture. The vulnerabilities are different.
 
I have always been meaning to have a play with z/OS in Hercules. Mainly to experiment with the UNIX subsystem and see what it provides.

I suppose if FreeBSD was ported to the zSeries, it would probably be the first proper UNIX-based OS available on it?
 
One could conclude from the above post by cy@ that not having the stack as a basic data structure makes z/OS more secure, even though it's not quite that simple.

Also, programmatically speaking, where would 'lack of a stack data structure' be a problem? Most programming languages offer a stack as a basic data structure, or it can be implemented using an array or a linked list in the language itself. I guess we are talking about different levels of programming (systems programming vs. application programming, where application can mean basic userland utilities like ls(1)).

Systems programming is an important skill to maintain and keep alive, no doubt about it. As long as people are coming out with architectures that address some issue with predecessors (be it cost, speed, amount of electricity used, and more), there's gonna be a need for compilers and assemblers that properly take advantage of those improvements and hopefully don't break the higher-level software too badly.
 
To Cy's point: yes, it is a different architecture, pretty radically so. To begin with, it has a LARGE number of registers compared to Intel, which makes the way programs are implemented (either by the programmer when using assembly, or by the compiler) quite different. It also has a VERY complex and orthogonal instruction set (Intel is CISC, but not terribly orthogonal), and again, modern compiler design has learned a lot from RISC. And the traditional convention for subroutine entry/exit (function calls, in modern language) is neither reentrant nor recursive, but highly efficient; for C programs that doesn't matter, though, as they use a stack-based implementation.

The way to use a stack on the 360...z architecture is ultimately not radically different from microprocessor instruction sets. No, there are no push and pop instructions. Instead, one sets one of the many registers aside as a stack pointer, and then uses increment/decrement instructions together with register-relative addressing to manipulate the stack. Since register instructions are very fast on these CPUs (just like most modern CPUs, which are memory-limited rather than instruction-limited), this works just fine. There is also a set of special instructions called BAKR, PC and PR that maintain a hardware-managed linkage stack, but they have a well-deserved reputation for being slow. The trick is that the compiler and linker have a very smart, register-aware calling convention, so native C and C++ code actually runs very fast (and traditional PL/1, FORTRAN and COBOL code runs even faster).
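
As an illustration of that register-as-stack-pointer technique on the modern s390x ELF ABI (which dedicates r15 as the stack pointer and r14 as the return register), here is roughly what a compiler does; the function is a made-up example, and the assembly in the comments is paraphrased from typical gcc -O2 output rather than copied from any real build:

    /* Hypothetical example: how compiled C code gets a "stack" on a
     * machine with no push/pop instructions. */
    long sum_twice(long x)
    {
        /* A non-leaf function's prologue looks roughly like:
         *   stmg %r14,%r15,112(%r15)  # save return addr + old SP in the
         *                             # caller-provided save area slots
         *   aghi %r15,-160            # "push" a frame: decrement r15
         * and the epilogue reloads them and branches via r14.
         * A leaf function like this one needs no stack at all:
         *   sllg %r2,%r2,1            # x * 2 in the argument/return reg
         *   br   %r14                 # return via the link register */
        return x * 2;
    }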

Given the very different (and highly controlled!) memory layout and the very different instruction set, it is indeed a more secure architecture. The biggest reason for that is, however, different: there aren't very many mainframes; they typically don't run Linux (nor any other OS hackers are familiar with); they tend to be very well managed and secured, since they are typically in shops that can afford good staffing levels; they typically hold very sensitive data (often banking and insurance); and hackers can't afford to buy one to practice on.

I suppose if FreeBSD was ported to the zSeries, it would probably be the first proper UNIX-based OS available on it?
No, Linux is currently available and supported; AIX used to be available, but I don't know whether it still is.

Also, programmatically speaking, where would 'lack of a stack data structure' be a problem?
We're not talking about the stack as a data structure used within a program. We're talking about the fact that C and C++ have to be based on a stack during function calls, since by language definition functions can be recursive: a function can call itself, perhaps indirectly through other functions. In most traditional languages (such as PL/1, FORTRAN and COBOL) that is explicitly prohibited. The stack has to hold CPU registers during a recursive call, and the automatic variables. Because function calling is such a vital part of the performance of C-based programs, it has to be very efficient.
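
A two-line illustration (the function is hypothetical): each in-flight call below needs its own copy of n and its own return address, which a stack provides naturally and a single static save area per routine cannot:

    /* Recursion is why C linkage must be stack-based: every active
     * invocation needs private storage for n and a return address. */
    unsigned long factorial(unsigned long n)
    {
        return n <= 1 ? 1UL : n * factorial(n - 1);
    }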
 