First off, ralph, I respectfully ask that you stop addressing me in a patronizing tone, as if I don't understand these things myself.
If it is so good, why isn't it the default on Linux?
It is, on several distros. Politics, and the fact that ext is the traditional filesystem, keep ext4 the default on most of them, but almost all offer XFS as an option, and the ones that don't aren't worth their salt.
RHEL, one of the most prestigious/gold-standard GNU/Linux distros, uses it, and Red Hat heavily develops it now that SGI hasn't been a thing in 10+ years (I don't count the successor company, Silicon Graphics International, as the same).
(Personal comment: I actually think that XFS is a pretty good file system, and I know some of the people who developed it personally, some are neighbors, some are colleagues. One of its great advantages is that it was developed by serious professional software engineers, not by fly-by-night scam artists / murderers like Hans Reiser, unpaid volunteers like early Linux file systems, or nut cases like some of the hip file systems such as Hammer or Butter. That particularly shows in the quality and readability of its source code and documentation. But other Linux file systems have had hundreds of person-years more tuning and development, while XFS has been mostly dormant in development for two decades, so it has fallen way behind. In particular, the ext series of file systems is today extremely good, very well understood and documented, and has fine performance for most workloads. There is a reason it is the default file system for professional users that care about reliability and performance.)
Hans Reiser was a brilliant mind troubled by mental health problems, problems that are all the more relevant in the modern day. I'm not sympathetic to murderers, don't misunderstand. But I can see a talented mind, and how Hans probably had underlying causes that led to him murdering his wife and ruining his kids' lives. Nobody does that unless there's something deeply wrong with them mentally.
XFS is not dormant -- it's being actively supported and developed by Red Hat. You're what, 5-6 years out of date? When did RHEL 7 come out? That's when it regained relevancy. The only other relevant in-tree players are ext4, which is an abortion of a filesystem with a codebase worse than even UFS2's, and btrfs, which after over a decade of development is still a pile of rubbish. Reiser3 (Reiser4 never made it into the kernel), ext3, JFS, etc. are far lower on the development totem pole. What did I miss?
That statement is nonsense. Not only that, it is offensive nonsense. While it is true that UFS has, like XFS, not terribly much tuning and adaptation to modern IO hardware, it is still exceedingly well engineered (you see the care that was put into coding it, by professionals rather than amateurs), and has reasonable performance for a large fraction of workloads. From a reliability viewpoint, it does very well, with remarkably few data loss incidents if you stress it.
Anecdotes, man. Is there a white paper for UFS like XFS has? Have you /looked/ at the codebase (if you're not a dev) and tried to understand both the ffs and ufs layers?
I take umbrage at "many". It does well for mixed workstation and development workloads. Being log-structured, it has problems with certain update-in-place workloads, but those are today getting to be corner cases (relatively little storage is used for things like MySQL transaction processing).
I can name many workloads where ZFS is absolutely unnecessary or ill-suited:
DNS servers, which can oftentimes run in tiny (sub-128M) memory footprints but need fast network response and updates. ZFS doesn't do well in a space that tight, and while UFS performs /okay/ there, the lack of sophistication in its design hamstrings it compared to XFS, in my testing.
Embedded web servers. I run a lot of ARM SBCs in NetBSD, for instance, and I'm basically running simple httpd instances. ZFS is overkill.
Archival servers (not the same as storage servers), because ZFS's feature set means more pool scrubbing and other I/O-intensive tasks on a regular basis. A typical archival server, as I run it, has two main functions: redundancy for the primary storage pool servers, and Ultrium backups. For both of these I want a simple, reliable filesystem that isn't going to take up much I/O time, and that won't act as an egg in the same basket -- my primary storage pool servers are on 12.2 (they won't be going to 13; I'll move them to illumos or something, or even off ZFS entirely, rather than go to ZoL-based trash, for reasons I'm not going to express here).
A simple travel laptop such as my X131e. I really really don't need ZFS on that.
Embedded applications, but these aren't really relevant here.
XFS is not, fwiw, a stellar leader in performance. It's a good all-around filesystem with extensive existing documentation, and unlike, say, btrfs, hammer2, or the other non-ZFS CoW systems that are at least halfway mature, it has no processor interop issues. On btrfs, for instance, you can't mount a volume that was made on a system with a different page size. ZFS was remarkably decent here, since it was designed for both x86 and SPARC, fwiw, but that's a non sequitur. The other mature, in-kernel options are JFS, ext4, and Reiser3. ext4 is unsuitable: it's less well documented and heavily tied to the Linux kernel, so reimplementing it would be a bear. Reiser3 is slow, has issues with its tail-packing routines, and is basically abandoned. JFS is the dormant one; it's a port of the AIX filesystem JFS2. It's a great filesystem for what it is, but I would never put it in the same category as XFS, and it's less well developed.
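To make the btrfs page-size point concrete: the page size is a property of the running kernel, not of the volume, so a filesystem whose sector size was fixed at mkfs time can become unmountable on different hardware. A quick shell sketch (the device path in the comment is a placeholder, not a real disk):

```shell
# Page size is a property of the running kernel/CPU configuration:
getconf PAGESIZE    # 4096 on typical x86-64; ARM or POWER kernels may use 16K or 64K

# btrfs bakes a sectorsize into the filesystem at mkfs time, and kernels have
# historically refused to mount a volume whose sectorsize differs from their
# page size. To check an existing btrfs device (path is a placeholder):
#   btrfs inspect-internal dump-super /dev/sdX | grep sectorsize
```

So a volume created on a 4K-page x86 box could fail to mount on a 64K-page ARM box, which is exactly the interop issue described above.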
If you have a better idea of what to replace UFS2 with, by all means, the jury is in session.
That statement is also completely nonsensical. Of all file systems available for free, it is pretty much the only production-ready one that offers checksums. That is not overkill, it is today a vital part of getting reliability up to where it needs to be. The integration with RAID is also a highly desirable feature (which again improves real-world reliability, by reducing rebuild times), and the ease of administration when using interesting storage setups (like mirroring or RAID or snapshots or remote synchronization). In my (not humble) opinion, everyone who cares about data durability should be using checksums, and that makes ZFS the opposite of "overkill".
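The checksum argument can be shown in miniature with plain shell -- this is not ZFS itself (ZFS checksums every block automatically with fletcher4 or sha256), just the principle that a checksum stored alongside the data turns silent corruption into a detected error:

```shell
# Stand-in for one data block; ZFS records a checksum like this per block.
dd if=/dev/urandom of=/tmp/block.bin bs=1M count=1 2>/dev/null
sha256sum /tmp/block.bin > /tmp/block.sum

# Simulate a silent flip from a failing disk, controller, or cable:
dd if=/dev/zero of=/tmp/block.bin bs=1 count=16 seek=12345 conv=notrunc 2>/dev/null

# Without a checksum the bad data is returned as-is; with one, it's caught:
sha256sum -c /tmp/block.sum || echo "corruption detected"
```

A filesystem without block checksums would happily hand back the corrupted block; one with them can report the error and, with redundancy, repair it from a good copy.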
Not everyone needs checksums, and I don't know what universe you live in where your data flips bits on a regular basis. If it does, and it corrupts stuff, that's probably not the OS's or the filesystem's fault. Well before I ever ran ZFS in production, I ran a RAID10 Fibre Channel array on XFS. Never had a problem.
File systems use memory for caching and buffering. Fact of life. You want performance? You give your file system memory. With ZFS, that memory usage is visible much more explicitly, and can be tuned. You can run ZFS with very little memory (I only have 3 GiB on my system), and performance will be adequate, comparable to other file systems on low-memory hardware.
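For what it's worth, capping ZFS's memory appetite is a couple of tunables away. A FreeBSD 12.x /boot/loader.conf sketch -- the values are illustrative, not a recommendation, and OpenZFS in FreeBSD 13 renames these knobs (vfs.zfs.arc.max / vfs.zfs.arc.min):

```
# /boot/loader.conf -- cap the ZFS ARC (illustrative values)
vfs.zfs.arc_max="512M"   # upper bound on ARC size
vfs.zfs.arc_min="64M"    # floor the ARC can shrink to under memory pressure
```

With arc_max set, ZFS's cache competes for memory on terms you chose, rather than growing to most of RAM by default.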
All filesystems can be tuned for memory usage. ZFS is still a pig, and Wirth's law is on my side here. I can run ZFS on 512M, sure, but 32M? 8? Nope. And yes, I have systems with that little memory running UNIX-type OSes. They're not new, for sure, but Wirth's law is a biggie here.
ZFS consistently uses more memory for normal operation on average than other FSes. If your response to my arguments and a case for something to replace or even just complement the selection we have for FreeBSD is "JUST USE ZFS" or "UFS IS FINE AND YOU CAN'T OFFER PROOF" I'm sorry, this is going to stall really quickly into a nasty deadlock.
ZFS is a decent filesystem, but I won't accept that kind of an answer. It's a cop-out.
To be clear: I have nothing against porting XFS to FreeBSD, if someone wants to do it. A starting point would have to be checking the license situation; if XFS is today encumbered by the GPL or an even more restrictive license, this might be a non-starter. A native and well-maintained XFS implementation would solve the common problem of wanting to share a disk between Linux and FreeBSD. But let's not start that project by insulting the existing file systems, in particular with farcical arguments.
XFS is encumbered by the GPL only in the Linux kernel driver and xfsprogs. That does not preclude a port of XFS to another OS under an entirely different license. This is legal.
From the docs out there, it's entirely possible for a small team to write a BSD-licensed XFS driver for FreeBSD that conforms to directory structure v2 (I think Linux's on-disk format is at v5 now; IRIX stopped at v2). Restoring/adding support for the later formats is not trivial, but not impossible either: reverse engineer the data structures in Ghidra or something, have someone document them, then pass that documentation to the other members of the team. The same team, once they finish the kernel port, could build a BSD-licensed xfsprogs.
I'm not a copyright attorney, but there's nothing illegal about taking documentation and reimplementing something from it. It's done in the BSDs all the time with GPL-encumbered code.