Solved: Guidance for Windows/Linux User new to FreeBSD

I'm aware of ZFS on Linux, but as a kernel module it will never achieve the full potential performance of one built into kernel.
This is false, as others have already said.

The highest-performing file system on Linux that I know of is not only a kernel module (it cannot even be compiled statically into the kernel), it even runs most of its code in user space, using a dedicated file system daemon (no, it does not use FUSE). Similarly, the second-highest-performing file system is also a module, and last time I looked (which was admittedly about 10 years ago) it also relied on a large user-space library.

Modules make no performance difference at runtime, in practice. Whether they are a good or a bad thing is mostly a question of system administration: having more separate moving parts versus having all your eggs in one basket (both of which have positives and negatives).

The FUSE system has given user-space file systems a terrible reputation, but we have to remember that file system access in user space, or with the help of user space, does not require FUSE. And the design goals of FUSE were not high performance, but flexibility, ease of development, and in particular use as a systems research tool (look where it came from). FUSE is great at what it does ... and not so good at things it's not intended for.
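To give a feel for that ease of development, here's a minimal sketch of a toy FUSE file system, assuming libfuse 3.x (my illustration, not anything from the thread; readdir is omitted for brevity, so the single file is reachable by path only):

    /* toy_fuse.c - one read-only file at /hello.
     * Build (assumption): cc toy_fuse.c `pkg-config fuse3 --cflags --libs` */
    #define FUSE_USE_VERSION 31
    #include <fuse.h>
    #include <sys/stat.h>
    #include <string.h>
    #include <errno.h>

    static const char data[] = "hello\n";

    /* Every stat() on the mount becomes an upcall into this process. */
    static int toy_getattr(const char *path, struct stat *st,
                           struct fuse_file_info *fi)
    {
        (void)fi;
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;   /* the root directory */
            st->st_nlink = 2;
            return 0;
        }
        if (strcmp(path, "/hello") == 0) {
            st->st_mode = S_IFREG | 0444;   /* our single read-only file */
            st->st_nlink = 1;
            st->st_size = sizeof(data) - 1;
            return 0;
        }
        return -ENOENT;
    }

    /* Likewise, every read() is a kernel-to-user round trip. */
    static int toy_read(const char *path, char *buf, size_t size, off_t off,
                        struct fuse_file_info *fi)
    {
        size_t len = sizeof(data) - 1;
        (void)fi;
        if (strcmp(path, "/hello") != 0)
            return -ENOENT;
        if ((size_t)off >= len)
            return 0;
        if (size > len - (size_t)off)
            size = len - (size_t)off;
        memcpy(buf, data + (size_t)off, size);
        return (int)size;
    }

    static const struct fuse_operations toy_ops = {
        .getattr = toy_getattr,
        .read    = toy_read,
    };

    int main(int argc, char *argv[])
    {
        /* fuse_main() mounts the fs and drives the request loop. */
        return fuse_main(argc, argv, &toy_ops, NULL);
    }

Every getattr/read above costs a kernel/user round trip, which is precisely the flexibility-for-performance trade FUSE makes.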

It is a tragedy that they cannot sort out all these incompatible license schemes so that could change.
To begin with, this is not a simple engineering question, but one that involves lots of money, property rights, and lawyers. Second, there are humans involved, and some of the humans in this particular drama are among the most unreasonable, despicable and crazy people in the computer industry.

The biggest problem with earlier ZFS releases was that every operating system and distribution took the basic code and modified it to include their own preferred extras. That made portability and compatibility very difficult: you could not take a ZFS volume and safely access it with FreeBSD, any other BSD distro, OpenIndiana and related distributions, Linux distributions, and Windows without fear of corruption.
In their defense: Writing a file system that has a single on-disk format and can be accessed from multiple OSes, which are highly different in their design, is very difficult. In particular for a file system that has all the complexity that we require or want today. It can be done, but it requires not only significant manpower, but also really good engineering planning and solid design up front. The original ZFS had that, but that version came to an end when Sun died an untimely death. The fragmented versions were done in typical open source manner, meaning without good centralized organization. This makes sense ... for example, the people who ported ZFS to operating system "X" were not paid, incentivized, or interested in making it work on operating system "Y". As a matter of fact, a lot of open source people are motivated by hatred for other OSes (disgust with Windows is the biggest driver of working on Linux, and systemd is probably now the biggest driver of people using other FOSS operating systems), so there is little incentive to make things better for other OSes.

Data integrity and portability is more important to me than anything else and the lack of a proper standard previously gave me pause in considering the File System for usage
In terms of data integrity (and durability and reliability), ZFS is by far the best file system one can use among the free ones. That alone should be a reason for most people to use it. The lack of portability is shared with most other good file systems: You should not use ext4, XFS or btrfs on any OS other than Linux (the various black-box implementations used on other OSes are experimental tools, to be considered for read-only access at most), APFS on anything other than a Mac, and NTFS on anything other than Windows (again, the non-Windows versions are unsafe). It would be nice if there were a shareable modern file system for many OSes, but that's not the world we live in.
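The heart of that integrity claim is end-to-end checksumming: ZFS stores each block's checksum in the parent block pointer and verifies it on every read. A toy sketch of the idea (mine, vastly simplified; the real thing uses fletcher4 or SHA-256 inside its block pointers):

    #include <stdint.h>
    #include <string.h>

    /* Toy "block pointer": the parent stores the child's checksum. */
    struct blkptr {
        uint64_t addr;      /* where the child block lives */
        uint64_t checksum;  /* checksum of the child's contents */
    };

    /* FNV-1a, standing in for fletcher4/SHA-256 in this toy. */
    static uint64_t toy_checksum(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint64_t h = 14695981039346656037ULL;
        while (len--) { h ^= *p++; h *= 1099511628211ULL; }
        return h;
    }

    /* Hypothetical backing store; a real fs would do disk I/O here. */
    static uint8_t disk[1 << 16];
    static void fetch_block(uint64_t addr, void *buf, size_t len)
    {
        memcpy(buf, disk + addr, len);
    }

    /* On every read, the block is verified against the checksum kept
     * in its parent, so silent corruption cannot go unnoticed. */
    int read_verified(const struct blkptr *bp, void *buf, size_t len)
    {
        fetch_block(bp->addr, buf, len);
        return toy_checksum(buf, len) == bp->checksum ? 0 : -1;
    }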
 
… good engineering planning and solid design up front. The original ZFS had that, but that version came to an end when Sun died an untimely death. The fragmented versions were done in typical open source manner, meaning without good centralized organization. …

I should add: OpenZFS is well-organised.

We don't have the central point that was envisaged in the early days of OpenZFS. We have a central point that was agreed upon through people and groups coordinating and organising themselves.
 
In terms of data integrity (and durability and reliability), ZFS is by far the best file system one can use among the free ones. That alone should be a reason for most people to use it. The lack of portability is shared with most other good file systems: You should not use ext4, XFS or btrfs on any OS other than Linux (the various black-box implementations used on other OSes are experimental tools, to be considered for read-only access at most), APFS on anything other than a Mac, and NTFS on anything other than Windows (again, the non-Windows versions are unsafe). It would be nice if there were a shareable modern file system for many OSes, but that's not the world we live in.
I disagree about btrfs; the real statement should be: you should never, ever use btrfs for anything other than throwaway data! It will shoot you in the foot sooner or later, and it will definitely eat your data at some point.
 
If you are looking for some books:

Absolute FreeBSD has already been pointed out, but there is more ...

FreeBSD Mastery - Storage Essentials
FreeBSD Mastery - ZFS
FreeBSD Mastery - Advanced ZFS

I can recommend all four of them; a great investment.
 
I don't recall anyone posting a link in this thread to numbers to back up the assertion that ZFS performs better in either scenario. Can somebody do that, please?
Numbers aren't really needed: the claim flies in the face of what kernel modules are. When you load a module, be it on Linux or FreeBSD, you're inserting code into the running kernel image, and all the code contained within the module has the exact same privileges and capabilities as all other parts of kernel code. It runs in kernel space, in ring 0 on x86, and there are no performance drawbacks.
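To see how thin the "module" wrapper really is, here's a minimal FreeBSD KLD skeleton (my sketch, modeled on the FreeBSD handbook's example; nothing ZFS-specific). Once you kldload it, its printf is the kernel's own printf, with exactly the same privileges as statically compiled-in code:

    /* hello_kld.c - minimal FreeBSD kernel module skeleton. */
    #include <sys/param.h>
    #include <sys/module.h>
    #include <sys/kernel.h>
    #include <sys/systm.h>

    static int
    hello_modevent(module_t mod, int event, void *arg)
    {
        switch (event) {
        case MOD_LOAD:
            /* Runs in kernel space, ring 0; no user/kernel boundary. */
            printf("hello: loaded into the running kernel\n");
            return (0);
        case MOD_UNLOAD:
            printf("hello: unloaded\n");
            return (0);
        default:
            return (EOPNOTSUPP);
        }
    }

    static moduledata_t hello_mod = {
        "hello",            /* module name */
        hello_modevent,     /* event handler */
        NULL                /* extra data */
    };

    DECLARE_MODULE(hello, hello_mod, SI_SUB_DRIVERS, SI_ORDER_MIDDLE);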

The only major drawback of using OpenZFS on Linux is its development as an out-of-tree kernel module. All this amounts to is that sometimes Linux changes its kernel interfaces, so that you cannot run ZFS on a new release until work is done on OpenZFS to port it to the new version (more often than not, this work is done well in advance, while Linux releases are in the RC stage, rather than waiting until after release). If you stick to packages provided by Linux distributions (e.g., the ZFS modules provided by Debian or Ubuntu), you probably won't even notice that hitch.
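That porting work is mostly mechanical compatibility shims keyed off the kernel version. A sketch of the pattern, with an invented function and version cutoff for illustration (only <linux/version.h> and its macros are real; this is not taken from OpenZFS):

    /* compat_shim.h - the pattern out-of-tree modules use to span kernel
     * versions. Hypothetical API: suppose register_thing() grew a flags
     * argument in (made-up) Linux 6.3. */
    #include <linux/version.h>

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 3, 0)
    #define compat_register_thing(t)  register_thing((t), 0)
    #else
    #define compat_register_thing(t)  register_thing(t)
    #endif

    /* The rest of the module calls compat_register_thing() everywhere,
     * and only this header changes when the kernel interface does. */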

OpenZFS on FreeBSD has a significant advantage simply because developers aren't going to break the interfaces it depends on (or, at least, they will change both sides at once: core FreeBSD and OpenZFS). It's a good bet most of the developers are running ZFS anyway.
 
The only major drawback of using OpenZFS on Linux is its development as an out-of-tree kernel module. All this amounts to is that sometimes Linux changes its kernel interfaces, so that you cannot run ZFS on a new release until work is done on OpenZFS to port it to the new version (more often than not, this work is done well in advance, while Linux releases are in the RC stage, rather than waiting until after release). If you stick to packages provided by Linux distributions (e.g., the ZFS modules provided by Debian or Ubuntu), you probably won't even notice that hitch.
Are you saying that the Linux kernel API keeps changing that easily? I'd think that if that were the case, OpenZFS devs would give up after a while trying to keep up. And the same would go for any other out-of-tree kernel module. Considering that Linux has surprisingly good support for recent hardware (an area where FreeBSD is lagging, unfortunately), I have a hard time buying the assertion that an out-of-tree kernel module has difficulty maintaining API/ABI compatibility.
 
I once had the port of OpenZFS not work with FreeBSD-CURRENT (boot failure).

I didn't save notes on the incident, because it was so easily worked around, but IIRC it was due to things being outdated on the port side (not the OS).
 
Are you saying that the Linux kernel API keeps changing that easily?

Yes

I'd think that if that were the case

It is, and you need only pay attention to the OpenZFS development to see. Almost every release note contains a mention of Linux kernel version compatibility: https://github.com/openzfs/zfs/releases

The benefits of ZFS clearly outweigh the disadvantages of developing an out-of-tree kernel module. Yes, it annoys mainline Linux developers, because they would prefer that out-of-tree modules weren't a thing, but for as long as they err on the assumption that CDDL and GPL cannot be combined, it's reality. (And honestly, OpenZFS's development model at present probably benefits them more than ZFS being integrated in the mainline Linux tree would...)

Considering that Linux has surprisingly good support for recent hardware (an area where FreeBSD is lagging, unfortunately), I have a hard time buying the assertion that an out-of-tree kernel module has difficulty maintaining API/ABI compatibility.

Out-of-tree modules are exceedingly rare. Nearly the entirety of Linux's hardware support is from in-tree drivers in the Linux kernel. It's useless as an example of maintaining API compatibility (when the API is changed, the drivers are changed at the same time).
 
Are you saying that the Linux kernel API keeps changing that easily?
Obviously a personal opinion but: Almost everything in Linux is changing "that easily". That's one of the main "issues" I have with Linux. It's a maintenance nightmare - especially if you're a small company or individual developing stuff to work on Linux. The frequency of "breaking changes" is just ridiculous in my opinion. I stopped writing drivers and other kernel-facing stuff for Linux a long time ago.
I find it especially amusing if people compare the "development manpower" between Linux and FreeBSD and conclude that FreeBSD has less going on: Yes, that is true, but if things are well designed rather than just hastily implemented and then changed every X months there's also not the need for a battalion of developers.
 
I find it especially amusing if people compare the "development manpower" between Linux and FreeBSD and conclude that FreeBSD has less going on
Indeed.

Windows and macOS have a fraction of the developers that even FreeBSD has, and yet they seem to be seen as cutting edge.
Arguably it could be that they are *not* cutting edge and not moving forward as quickly as open-source competitors, *or* they are indeed better planned, so there is less wasted manpower and they are moving forward at a good pace with a smaller team. This does tend to be the case with paid development.
 
Out-of-tree modules are exceedingly rare. Nearly the entirety of Linux's hardware support is from in-tree drivers in the Linux kernel. It's useless as an example of maintaining API compatibility (when the API is changed, the drivers are changed at the same time).
Linus Torvalds hates driver APIs; there is no stable driver API in Linux. Most kernel developers take the same stance.

Here's an official statement (quoted below) from a major senior Linux kernel developer, Greg Kroah-Hartman, on why you don't want a binary kernel module or a stable kernel interface. Why it is utter nonsense. And basically why they know better than you do what you really need.

IMHO the lack of such a driver interface is one of the major reasons why Linux on the desktop never took off: maintaining out-of-tree kernel modules is just too burdensome for most hardware developers. Nvidia has been playing the same game for ages: a new kernel version needs a new driver. And every driver will only run on a certain range of kernel versions, that's it. Another reason why Linux never took off is that there are way too many distributions.

Executive Summary

You think you want a stable kernel interface, but you really do not, and you don’t even know it. What you want is a stable running driver, and you get that only if your driver is in the main kernel tree. You also get lots of other good benefits if your driver is in the main kernel tree, all of which has made Linux into such a strong, stable, and mature operating system which is the reason you are using it in the first place.

Intro

It’s only the odd person who wants to write a kernel driver that needs to worry about the in-kernel interfaces changing. For the majority of the world, they neither see this interface, nor do they care about it at all.
First off, I’m not going to address any legal issues about closed source, hidden source, binary blobs, source wrappers, or any other term that describes kernel drivers that do not have their source code released under the GPL. Please consult a lawyer if you have any legal questions, I’m a programmer and hence, I’m just going to be describing the technical issues here (not to make light of the legal issues, they are real, and you do need to be aware of them at all times.)
So, there are two main topics here, binary kernel interfaces and stable kernel source interfaces. They both depend on each other, but we will discuss the binary stuff first to get it out of the way.

Binary Kernel Interface

Assuming that we had a stable kernel source interface for the kernel, a binary interface would naturally happen too, right? Wrong. Please consider the following facts about the Linux kernel:
  • Depending on the version of the C compiler you use, different kernel data structures will contain different alignment of structures, and possibly include different functions in different ways (putting functions inline or not.) The individual function organization isn’t that important, but the different data structure padding is very important.
  • Depending on what kernel build options you select, a wide range of different things can be assumed by the kernel:
    • different structures can contain different fields
    • Some functions may not be implemented at all, (i.e. some locks compile away to nothing for non-SMP builds.)
    • Memory within the kernel can be aligned in different ways, depending on the build options.
  • Linux runs on a wide range of different processor architectures. There is no way that binary drivers from one architecture will run on another architecture properly.
Now a number of these issues can be addressed by simply compiling your module for the exact specific kernel configuration, using the same exact C compiler that the kernel was built with. This is sufficient if you want to provide a module for a specific release version of a specific Linux distribution. But multiply that single build by the number of different Linux distributions and the number of different supported releases of the Linux distribution and you quickly have a nightmare of different build options on different releases. Also realize that each Linux distribution release contains a number of different kernels, all tuned to different hardware types (different processor types and different options), so for even a single release you will need to create multiple versions of your module.
Trust me, you will go insane over time if you try to support this kind of release, I learned this the hard way a long time ago…

Stable Kernel Source Interfaces

This is a much more “volatile” topic if you talk to people who try to keep a Linux kernel driver that is not in the main kernel tree up to date over time.
Linux kernel development is continuous and at a rapid pace, never stopping to slow down. As such, the kernel developers find bugs in current interfaces, or figure out a better way to do things. If they do that, they then fix the current interfaces to work better. When they do so, function names may change, structures may grow or shrink, and function parameters may be reworked. If this happens, all of the instances of where this interface is used within the kernel are fixed up at the same time, ensuring that everything continues to work properly.
As a specific example of this, the in-kernel USB interfaces have undergone at least three different reworks over the lifetime of this subsystem. These reworks were done to address a number of different issues:

  • A change from a synchronous model of data streams to an asynchronous one. This reduced the complexity of a number of drivers and increased the throughput of all USB drivers such that we are now running almost all USB devices at their maximum speed possible.
  • A change was made in the way data packets were allocated from the USB core by USB drivers so that all drivers now needed to provide more information to the USB core to fix a number of documented deadlocks.
This is in stark contrast to a number of closed source operating systems which have had to maintain their older USB interfaces over time. This provides the ability for new developers to accidentally use the old interfaces and do things in improper ways, causing the stability of the operating system to suffer.
In both of these instances, all developers agreed that these were important changes that needed to be made, and they were made, with relatively little pain. If Linux had to ensure that it will preserve a stable source interface, a new interface would have been created, and the older, broken one would have had to be maintained over time, leading to extra work for the USB developers. Since all Linux USB developers do their work on their own time, asking programmers to do extra work for no gain, for free, is not a possibility.
Security issues are also very important for Linux. When a security issue is found, it is fixed in a very short amount of time. A number of times this has caused internal kernel interfaces to be reworked to prevent the security problem from occurring. When this happens, all drivers that use the interfaces were also fixed at the same time, ensuring that the security problem was fixed and could not come back at some future time accidentally. If the internal interfaces were not allowed to change, fixing this kind of security problem and ensuring that it could not happen again would not be possible.
Kernel interfaces are cleaned up over time. If there is no one using a current interface, it is deleted. This ensures that the kernel remains as small as possible, and that all potential interfaces are tested as well as they can be (unused interfaces are pretty much impossible to test for validity.)

What to do

So, if you have a Linux kernel driver that is not in the main kernel tree, what are you, a developer, supposed to do? Releasing a binary driver for every different kernel version for every distribution is a nightmare, and trying to keep up with an ever changing kernel interface is also a rough job.
Simple, get your kernel driver into the main kernel tree (remember we are talking about drivers released under a GPL-compatible license here, if your code doesn’t fall under this category, good luck, you are on your own here, you leech). If your driver is in the tree, and a kernel interface changes, it will be fixed up by the person who did the kernel change in the first place. This ensures that your driver is always buildable, and works over time, with very little effort on your part.
The very good side effects of having your driver in the main kernel tree are:

  • The quality of the driver will rise as the maintenance costs (to the original developer) will decrease.
  • Other developers will add features to your driver.
  • Other people will find and fix bugs in your driver.
  • Other people will find tuning opportunities in your driver.
  • Other people will update the driver for you when external interface changes require it.
  • The driver automatically gets shipped in all Linux distributions without having to ask the distros to add it.
As Linux supports a larger number of different devices “out of the box” than any other operating system, and it supports these devices on more different processor architectures than any other operating system, this proven type of development model must be doing something right :)
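To make the build-options point above concrete, here's a tiny made-up illustration (mine, not from Greg's text): the same struct gets different sizes and field offsets under different kernel configurations, so a binary module built against one config cannot safely be loaded into another.

    /* Hypothetical kernel-style structures whose layout depends on the
     * build configuration (the CONFIG_* names are invented here). */
    #ifdef CONFIG_SMP
    typedef struct { volatile int locked; } demo_spinlock_t;  /* 4 bytes */
    #else
    typedef struct { } demo_spinlock_t;  /* compiles away (GCC: size 0) */
    #endif

    struct demo_device {
        demo_spinlock_t lock;
    #ifdef CONFIG_DEMO_STATS
        unsigned long io_count;  /* only exists if stats are configured in */
    #endif
        int state;               /* its offset depends on everything above */
    };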
 
I find it especially amusing if people compare the "development manpower" between Linux and FreeBSD and conclude that FreeBSD has less going on: Yes, that is true, but if things are well designed rather than just hastily implemented and then changed every X months there's also not the need for a battalion of developers.

Windows and macOS have a fraction of the developers that even FreeBSD has, and yet they seem to be seen as cutting edge.
Arguably it could be that they are *not* cutting edge and not moving forward as quickly as open-source competitors, *or* they are indeed better planned, so there is less wasted manpower and they are moving forward at a good pace with a smaller team. This does tend to be the case with paid development.
If Windows and MacOS have a fraction of the devs that even FreeBSD has, then how come they are the golden standard for hardware support? As an example, a WinXP driver for a scanner can actually run on Win10 even without being installed in 'Compatibility Mode' (Personal example, BTW).

One would think that with a bigger crowd of devs working on Open Source code, there's a better chance that a well-designed driver for your hardware even exists. But with my scanner (Epson Perfection V19, not the most obscure model by Epson), I know that's not the case. FWIW, I could never get my scanner working under Linux, either, and even though SANE has more drivers now than then, my scanner is still unsupported under SANE. Insane, I know.
 
If Windows and MacOS have a fraction of the devs that even FreeBSD has, then how come they are the golden standard for hardware support? As an example, a WinXP driver for a scanner can actually run on Win10 even without being installed in 'Compatibility Mode' (Personal example, BTW).
Strangely, I experience the opposite. Windows 10 is missing loads of drivers when you try to install it on an XP-era machine, and since the vendors don't provide them as 3rd-party drivers either, you are out of luck. FreeBSD, however, still supports even Windows 98 era hardware.

So I would say that Windows and macOS have so few developers that their backwards compatibility of hardware greatly suffers compared to Linux / BSD.

Likewise my old Canoscan LiDE works fine on Linux/BSD with the SANE drivers. It stopped working on Windows 7. Of course I have had printers that never got open-source drivers. But that is due to lack of hardware documentation. You could chuck 1000 developers at a problem and still end up with no results if you don't have the correct hardware info from the vendor; they can't all just trial and error send random things on the pins! ;)
 
If Windows and MacOS have a fraction of the devs that even FreeBSD has, then how come they are the golden standard for hardware support?

Are you serious? Mac easily has the worst hardware support of the lot of 'em (compared to FreeBSD, Linux, Windows), while Linux has a clear advantage. I'd wager Windows in second place, but there's no way it could be number one.

As an example, a WinXP driver for a scanner can actually run on Win10 even without being installed in 'Compatibility Mode'

Honestly, I highly doubt it. MS changes driver models more often than you change underpants. Windows is notorious for dropping compatibility with older drivers. (It wasn't even that long ago that I heard someone complaining in person about how their W7/8 driver couldn't work on 10.)
 
Strangely, I experience the opposite. Windows 10 is missing loads of drivers when you try to install it on an XP-era machine, and since the vendors don't provide them as 3rd-party drivers either, you are out of luck. FreeBSD, however, still supports even Windows 98 era hardware.
I don't get you, kpedersen ... Are you trying to install stuff clearly labeled for win10 on a winXP machine??? Then of course you're gonna have problems. If you go the other way (winXP-era drivers on a win10 machine), that's less problematic. Good luck finding a computer that still runs winXP, before trying to install the latest scanner or printer on that. There's such a thing as backwards compatibility, y'know. What you're trying to pull off is forward compatibility.
 
I don't get you, kpedersen ... Are you trying to install stuff clearly labeled for win10 on a winXP machine???
I am trying to use an old machine. The "labels" came off years ago ;)

Is there a reason why Windows 10 "clearly" won't work and yet the very latest FreeBSD manages just fine? Does Microsoft get a free ride when it comes to backwards / forwards compatibility? Not a chance. That is the wrong mindset.

The specific machine is a 2009 Thinkpad (X61). These things work great and are nowhere nearly ready for the landfill. The latest Linux and BSD work great on them. Windows 10 fails to support it (Intel GMA 965 is just one missing driver).

What you're trying to pull off is forward compatibility.
I pull it off very nicely with FreeBSD. It's not hard either. I just pop in the disk and select install!
 
Here's an official statement (quoted below) from a major senior Linux kernel developer, Greg Kroah-Hartman, on why you don't want a binary kernel module or a stable kernel interface. Why it is utter nonsense. And basically why they know better than you do what you really need.
Ah Greg K-H doing what he does best, spreading FUD. Gotta love how that screed basically claims to invalidate 30 years of good software engineering practice. He doesn't even try to hide the Orwellian nature of his spin "you think you want X, but trust me, and not your lying eyes."

Greg K-H is also infamous for being Lennart's chosen one to get DBUS into the kernel. Apparently that was a bridge too far even for Linus.
 
I am trying to use an old machine. The "labels" came off years ago ;)

Is there a reason why Windows 10 "clearly" won't work and yet the very latest FreeBSD manages just fine? Does Microsoft get a free ride when it comes to backwards / forwards compatibility? Not a chance. That is the wrong mindset.

The specific machine is a 2009 Thinkpad (X61). These things work great and are nowhere nearly ready for the landfill. The latest Linux and BSD work great on them. Windows 10 fails to support it (Intel GMA 965 is just one missing driver).


I pull it off very nicely with FreeBSD. It's not hard either. I just pop in the disk and select install!
Trying to install a modern OS on older hardware and discovering it works means that the modern OS has backward compatibility. If win10 on an old machine has a hard time with some older hardware, it's not that hard to find a winXP driver and install it using 'compatibility settings'. Win10 does have limited backward compatibility, that's not surprising. But it will support old drivers easily.
--
Try installing graphics/drm-kmod on FreeBSD 5.2-RELEASE. That's an example of the 'forward compatibility' that you seem to be trying to pull off here (if I were to take your posts at face value). Probably not impossible if you have the right hardware with 5.2-RELEASE actually running on it. But if you have 5.2-RELEASE installed on hardware of the same age - good luck getting graphics/drm-kmod to run - that port is aimed at much newer cards than those from 2004. Hope I make sense with my example.
 
If Windows and MacOS have a fraction of the devs that even FreeBSD has, then how come they are the golden standard for hardware support? As an example, a WinXP driver for a scanner can actually run on Win10 even without being installed in 'Compatibility Mode' (Personal example, BTW).

One would think that with a bigger crowd of devs working on Open Source code, there's a better chance that a well-designed driver for your hardware even exists. But with my scanner (Epson Perfection V19, not the most obscure model by Epson), I know that's not the case. FWIW, I could never get my scanner working under Linux, either, and even though SANE has more drivers now than then, my scanner is still unsupported under SANE. Insane, I know.
Because it's not true that Windows and MacOS have a fraction of the devs that FreeBSD has. The opposite is true: FreeBSD has only a fraction of the developers Windows has. At Microsoft, around 2000 developers were working on Windows 7 simultaneously. I doubt that as many developers are working on FreeBSD.

Aside from that: market share. The bigger the market share, the greater the need for a hardware vendor to write drivers for a certain operating system. Which means on the scanner market you are dead when not providing Windows and MacOS drivers. For the rest, most scanner manufacturers don't care, because there is no demand.
 
Ah Greg K-H doing what he does best, spreading FUD. Gotta love how that screed basically claims to invalidate 30 years of good software engineering practice. He doesn't even try to hide the Orwellian nature of his spin "you think you want X, but trust me, and not your lying eyes."

Greg K-H is also infamous for being Lennart's chosen one to get DBUS into the kernel. Apparently that was a bridge too far even for Linus.
Well, yes. But please note that this statement is published on kernel.org, so it is effectively the official position of the Linux kernel on that matter.

Basically Greg K-H has this arrogant GNOME attitude, namely "we know better what you really need than you do." And he wants Linux to be the Hotel California of hardware drivers: drivers can check in, but they can never check out...
 
If Windows and MacOS have a fraction of the devs that even FreeBSD has, then how come they are the golden standard for hardware support? As an example, a WinXP driver for a scanner can actually run on Win10 even without being installed in 'Compatibility Mode' (Personal example, BTW).
"Hardware support" is a very ill-defined metric anyways. What are you referring to, a system coming with lots of drivers, or lots of manufaturers offering drivers for that system?

Your second sentence deals with API stability / backwards compatibility. You might want to call Windows the "gold standard" for that; they take it to an extreme. But this comes at a price as well. Some of the silliness I found is kind of amusing (like: don't use DEFAULT_GUI_FONT, it's not the default GUI font*). FreeBSD also has a somewhat strong focus on API stability. Linux is special here: they aim to keep the userspace(!) ABI of the kernel(!) somewhat stable, while anything else breaks every few seconds. API stability helps manufacturers offer their own drivers for a system. Whether they want to do that at all depends on whether they expect their target group of buyers to actually use that system.

So, correlating "hardware support" with "number of active devs" can only make some sense if you're talking about drivers that are part of the system only. Lots of hardware isn't well documented, so the only way to learn how it works is re-engineering existing drivers, which makes creating your own drivers quite difficult oftentimes.

---
*) This was meant to describe the default GUI font. It was "Arial" for Windows 9x. Obviously, a lot of software back then just assumed Arial's font metrics and showed a broken UI when a different font was used. So, MS decided to never change it again and introduced a new mechanism to determine the "real" default font instead. The notion of backwards compatibility here includes compatibility with software doing stupid and wrong shit. Things like that pile up in Windows' APIs. Their solution is the onion: add another layer hiding all that crap (e.g. MFC, nowadays .NET) and expect application devs to just use that and never look into the ugly mess below...
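For the curious, the footnote in code form; a minimal sketch against the Win32 API, where GetStockObject(DEFAULT_GUI_FONT) is the frozen-in-time object and SystemParametersInfo is the newer mechanism for the font actually in use:

    #include <windows.h>

    /* Returns the font a message box actually uses today. */
    HFONT get_ui_font(void)
    {
        /* The frozen-in-time stock object, kept only for compatibility:
         *   HFONT f = (HFONT)GetStockObject(DEFAULT_GUI_FONT);
         * The newer mechanism queries the current non-client metrics: */
        NONCLIENTMETRICSW ncm = { sizeof(ncm) };  /* cbSize must be set */
        if (SystemParametersInfoW(SPI_GETNONCLIENTMETRICS,
                                  sizeof(ncm), &ncm, 0))
            return CreateFontIndirectW(&ncm.lfMessageFont);
        return (HFONT)GetStockObject(DEFAULT_GUI_FONT);  /* fallback */
    }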
 
Try installing graphics/drm-kmod on FreeBSD 5.2-RELEASE. That's an example of 'forward compatibility' that you seem to be trying to pull off here
Running a new driver on an old operating system isn't really relevant to what I am doing.

Win10 does have limited backward compatibility, that's not surprising
Why is it not surprising? Other operating systems manage, not least because open-source platforms have a worldwide pool of developers working on stuff like this, which is pretty much the exact point I am making. Microsoft's core Windows development team is tiny compared to open source.

I am running a recent FreeBSD on old (but not ancient) hardware. You can't dream of this kind of support (driver or otherwise) from Windows 10.
 
For ancient hardware there's NetBSD, of course, which then has another problem: it will probably run on it, but it doesn't have that many drivers.
*) This was meant to describe the default GUI font. It was "Arial" for Windows 9x. Obviously, a lot of software back then just assumed Arial's font metrics and showed a broken UI when a different font was used. So, MS decided to never change it again and introduced a new mechanism to determine the "real" default font instead. The notion of backwards compatibility here includes compatibility with software doing stupid and wrong shit. Things like that pile up in Windows' APIs. Their solution is the onion: add another layer hiding all that crap (e.g. MFC, nowadays .NET) and expect application devs to just use that and never look into the ugly mess below...
Which is probably why BeOS, back in the good old days, was blazingly fast compared to other OSes on the same hardware: it had no compatibility baggage to care about. It was completely devoid of all that cruft which Windows and MacOS carry around, and already did back then.

Obviously Microsoft cares a lot more about compatibility between major versions than Apple does, though. Under Apple it is common practice to delay updating to the newest release (which comes yearly) until all the major programs you need to work with have been updated to run under it.
 