XFS on BSD

My application is a shipborne computer. This system runs off the ship's batteries and is subject to infrequent, random power failures caused by emergency starts of the main engine.

For the past 15 years jfs has proven reliable in this application, with only one data-loss incident in that entire period, and 99.9% of the data from that incident was recovered.

To me, ZFS is overkill. The system has 4 drives, one of which is an SSD boot drive; the rest are data spindles.

I found evidence that there is support for XFS.

What about XFS??

INDY
 
Two questions, both only out of curiosity: when you say "jfs", which file system do you mean? (There are a handful that go by that name.) And what makes you think that XFS is particularly good at handling sudden crashes?
 
Hello,

To me, ZFS is overkill.

I strongly disagree. ZFS has lots of capabilities, and they come at essentially no cost.
What's more, ZFS is the natively supported filesystem in FreeBSD. If XFS gives you something that your application cannot go without, then you will have to build an exotic setup with third-party, "black magic" drivers and modules that are not very well supported.
And every time you upgrade, it's the one thing that will probably break, or it will require yet another exotic setup with manually compiled drivers for the new kernel.

But if it's just that XFS worked great for you for many years, and you don't like ZFS merely because it has many capabilities and looks like 'overkill', then I think you should reconsider.
It's always best to choose the most natural, out-of-the-box supported software when possible, and to jump into exotics only when that is not possible.
 
XFS is probably the best-documented Linux filesystem in existence.

I posted a bit of my reasoning on why UFS2 is trash.


ZFS is unsuitable for many workloads, and is absolutely overkill, despite gnoma's protests. It's a memory hog in all but some webserver applications. UFS2 continues to be the default for FreeBSD.

If someone wants to actually get a team going on porting XFS via reimplementation, I can supply documentation, and I would be more than willing to help. As it stands, I cannot do it by myself.
 
XFS on BSD

There is an XFS FUSE module but unfortunately no FreeBSD port yet.

There is (only) read/write userspace XFS, via fusefs-lkl:

sysutils/fusefs-lkl: Full-featured Linux BTRFS, Ext4, XFS as a FUSE module

The port's addition in 2015 was sponsored by EMC / Isilon Storage Division: <https://cgit.freebsd.org/ports/comm...e?id=eda13277d8a19403893b67dc79512872a9f3832f>
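
For anyone wanting to try it, a minimal sketch (the device name /dev/da1p1 is hypothetical, and the exact lklfuse invocation should be checked against the port's own documentation):

    # install the port and load the FUSE kernel module ("fuse" on older releases)
    pkg install fusefs-lkl
    kldload fusefs
    # mount an XFS partition read/write in userspace
    mkdir -p /mnt/xfs
    lklfuse -o type=xfs /dev/da1p1 /mnt/xfs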

Detailed discussion of XFS might continue here

OT from XFS,

… ZFS is … a memory hog in all but some webserver applications.

OpenZFS in FreeBSD is:
  1. tunable
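
For example, the ARC can be capped; a minimal sketch with an assumed 512 MiB ceiling:

    # in /boot/loader.conf, cap the ARC at boot (value in bytes)
    vfs.zfs.arc_max="536870912"

    # or set it at runtime on FreeBSD 13 and later
    sysctl vfs.zfs.arc_max=536870912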

Use of memory for ZFS is sometimes misunderstood.
 
If someone wants to actually get a team going on porting XFS via reimplementation, I can supply documentation, and I would be more than willing to help. As it stands, I cannot do it by myself.
Meh. You want it, you implement it. You aren't going to convince anyone who is already happy with their UFS/ZFS performance, especially since you aren't interested in discussing specific workloads and give no indication of which UFS design choices make it an inherently (as in "can't be optimized") poor performer.

By the way, ZFS is also not the fastest thing by design, so performance is not an argument against that FS unless it's really atrocious.
 
That's not entirely harmonious with the Foundation's call.
The Foundation doesn't have unlimited resources. "UFS is slow" (outdated, whatever) is just not going to cut it. There are very few FS-implementation experts there, and they are busy working on OpenZFS.
 
XFS is probably the best-documented Linux filesystem in existence.
If it is so good, why isn't it the default on Linux?

(Personal comment: I actually think that XFS is a pretty good file system, and I know some of the people who developed it personally, some are neighbors, some are colleagues. One of its great advantages is that it was developed by serious professional software engineers, not by fly-by-night scam artists / murderers like Hans Reiser, unpaid volunteers like early Linux file systems, or nut cases like some of the hip file systems such as Hammer or Butter. That particularly shows in the quality and readability of its source code and documentation. But other Linux file systems have had hundreds of person-years more tuning and development, while XFS has been mostly dormant in development for two decades, so it has fallen way behind. In particular, the ext series of file systems is today extremely good, very well understood and documented, and has fine performance for most workloads. There is a reason it is the default file system for professional users that care about reliability and performance.)

I posted a bit of my reasoning on why UFS2 is trash.
That statement is nonsense. Not only that, it is offensive nonsense. While it is true that UFS has, like XFS, not terribly much tuning and adaptation to modern IO hardware, it is still exceedingly well engineered (you see the care that was put into coding it, by professionals rather than amateurs), and has reasonable performance for a large fraction of workloads. From a reliability viewpoint, it does very well, with remarkably few data loss incidents if you stress it.

ZFS is unsuitable for many workloads, ...
I take umbrage at "many". It does well for mixed workstation and development workloads. Being copy-on-write, it has problems with certain update-in-place workloads, but those are today becoming corner cases (relatively little storage is used for things like MySQL transaction processing).

... and is absolutely overkill, despite gnoma's protests.
That statement is also completely nonsensical. Of all file systems available for free, it is pretty much the only production-ready one that offers checksums. That is not overkill, it is today a vital part of getting reliability up to where it needs to be. The integration with RAID is also a highly desirable feature (which again improves real-world reliability, by reducing rebuild times), and the ease of administration when using interesting storage setups (like mirroring or RAID or snapshots or remote synchronization). In my (not humble) opinion, everyone who cares about data durability should be using checksums, and that makes ZFS the opposite of "overkill".
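
To make the checksum point concrete: verifying every block on a pool is one command, and any checksum errors are reported per device (the pool name "tank" is hypothetical):

    # walk the whole pool and verify every block against its checksum
    zpool scrub tank
    # report progress and any devices with checksum errors
    zpool status -v tank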

It's a memory hog in all but some webserver applications.
File systems use memory for caching and buffering. Fact of life. You want performance? You give your file system memory. With ZFS, that memory usage is visible much more explicitly, and can be tuned. You can run ZFS with very little memory (I only have 3 GiB on my system), and performance will be adequate, comparable to other file systems on low-memory hardware.
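
For the curious, that explicit memory usage is directly observable with stock FreeBSD sysctls (no assumptions here beyond a running ZFS system):

    # current ARC footprint and its configured ceiling, in bytes
    sysctl kstat.zfs.misc.arcstats.size
    sysctl vfs.zfs.arc_max

top(1) on FreeBSD also shows an ARC line once ZFS is loaded.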

To be clear: I have nothing against porting XFS to FreeBSD, if someone wants to do it. A starting point would have to be checking the license situation; if XFS is today encumbered by the GPL or an even more restrictive license, this might be a non-starter. A native and well-maintained XFS implementation would solve the common problem of wanting to share a disk between Linux and FreeBSD. But let's not start that project by insulting the existing file systems, in particular with farcical arguments.
 
First off, ralph, I respectfully ask that you stop taking a patronizing tone with me, as if I don't understand these things myself.

If it is so good, why isn't it the default on Linux?

It is, on several distros. Politics, and the fact that ext is the traditional filesystem, keep ext4 the default on most, but almost all of them offer XFS as an option, and the ones not offering it are not worth their salt.

RHEL, one of the most prestigious, gold-standard GNU/Linux distros, uses it, and Red Hat heavily develops it now that SGI hasn't been a thing in 10+ years (I don't count the successor company, Silicon Graphics International, as the same).

(Personal comment: I actually think that XFS is a pretty good file system, and I know some of the people who developed it personally, some are neighbors, some are colleagues. One of its great advantages is that it was developed by serious professional software engineers, not by fly-by-night scam artists / murderers like Hans Reiser, unpaid volunteers like early Linux file systems, or nut cases like some of the hip file systems such as Hammer or Butter. That particularly shows in the quality and readability of its source code and documentation. But other Linux file systems have had hundreds of person-years more tuning and development, while XFS has been mostly dormant in development for two decades, so it has fallen way behind. In particular, the ext series of file systems is today extremely good, very well understood and documented, and has fine performance for most workloads. There is a reason it is the default file system for professional users that care about reliability and performance.)

Hans Reiser was a brilliant mind troubled by mental-health problems, problems that are all the more relevant in the modern day. I'm not sympathetic to murderers, don't misunderstand. But I can see a talented mind, and how Hans probably had underlying causes that led to him murdering his wife and ruining his kids' lives. Nobody does that unless something is deeply wrong with them mentally.

XFS is not dormant -- it's being actively supported and developed by Red Hat. You're what, 5-6 years out of date? It regained relevance around the release of RHEL 7. The only other relevant in-tree players are ext4, which is an abortion of a filesystem with a codebase worse than even UFS2's, and Btrfs, which after over a decade in development is still a pile of rubbish. Reiser3 (Reiser4 is not in the kernel), ext3, JFS, etc. are far lower on the development totem pole. What did I miss?

That statement is nonsense. Not only that, it is offensive nonsense. While it is true that UFS has, like XFS, not terribly much tuning and adaptation to modern IO hardware, it is still exceedingly well engineered (you see the care that was put into coding it, by professionals rather than amateurs), and has reasonable performance for a large fraction of workloads. From a reliability viewpoint, it does very well, with remarkably few data loss incidents if you stress it.

Anecdotes, man. Is there a white paper for UFS like XFS has? Have you /looked/ at the codebase (if you're not a dev) and tried to understand both the FFS and UFS layers?

I take umbrage at "many". It does well for mixed workstation and development workloads. Being copy-on-write, it has problems with certain update-in-place workloads, but those are today becoming corner cases (relatively little storage is used for things like MySQL transaction processing).

I can name many workloads for which ZFS is absolutely unnecessary or ill-suited:

DNS servers, which can often run in tiny (sub-128M) memory footprints and need fast network responses and updates. ZFS doesn't do well in a space that tight, and UFS, while it performs okay, is hamstrung compared to XFS by the lack of sophistication in its design, in my testing.

Embedded web servers. I run a lot of ARM SBCs on NetBSD, for instance, basically running simple httpd instances. ZFS is overkill there.

Archival servers (not the same as storage servers), because ZFS's feature set means more pool scrubbing and other I/O-intensive tasks on a regular basis. A typical archival server, as I run it, has two main functions: redundancy for the primary storage-pool servers, and Ultrium backups. For both of these I want a simple, reliable filesystem that isn't going to take up much I/O time and won't put all my eggs in the same basket. My primary storage-pool servers are on 12.2 (they won't be going to 13; I'll move them to illumos or even off ZFS entirely rather than go to ZoL-based trash, for reasons I'm not going to get into here).

A simple travel laptop such as my X131e. I really, really don't need ZFS on that.

Embedded applications, but these aren't really relevant here.

XFS is not, FWIW, a stellar leader in performance. It's a good all-around filesystem with extensive existing documentation, and unlike, say, Btrfs, HAMMER2, or other non-ZFS CoW systems that are at least halfway mature, it has no processor-interop issues. On Btrfs, for instance, you can't mount a volume if it was created on a system with a different page size. ZFS was remarkably decent here, since it was designed for both x86 and SPARC, but that's a non sequitur. The other mature, in-kernel candidates are JFS, ext4, and Reiser3. ext4 is unsuitable, as it's less well documented and heavily tied to the Linux kernel, so reimplementing it would be a bear. Reiser3 is slow, has issues with its tail-packing routines, and is basically abandoned. JFS is the dormant one; it's a port of the AIX filesystem JFS2. It's a great filesystem for what it is, but I would never put it in the same category as XFS, and it's less well developed.

If you have a better idea of what to replace UFS2 with, by all means, the jury is in session.
That statement is also completely nonsensical. Of all file systems available for free, it is pretty much the only production-ready one that offers checksums. That is not overkill, it is today a vital part of getting reliability up to where it needs to be. The integration with RAID is also a highly desirable feature (which again improves real-world reliability, by reducing rebuild times), and the ease of administration when using interesting storage setups (like mirroring or RAID or snapshots or remote synchronization). In my (not humble) opinion, everyone who cares about data durability should be using checksums, and that makes ZFS the opposite of "overkill".

Not everyone needs checksums, and I don't know what universe you live in where your data flips bits on a regular basis. If it does, and it corrupts things, that's probably not the OS's or the filesystem's fault. Well before I ever ran ZFS in production, I ran a RAID10 Fibre Channel array with XFS. Never had a problem.

File systems use memory for caching and buffering. Fact of life. You want performance? You give your file system memory. With ZFS, that memory usage is visible much more explicitly, and can be tuned. You can run ZFS with very little memory (I only have 3 GiB on my system), and performance will be adequate, comparable to other file systems on low-memory hardware.

All filesystems can be tuned for memory usage. ZFS is still a pig, and Wirth's law is on my side here. I can run ZFS on 512M, sure, but on 32M? 8? Nope. And yes, I have systems with that little memory running UNIX-type OSes. They're not new, for sure, but Wirth's law is a biggie here.

ZFS consistently uses more memory for normal operation, on average, than other FSes. If your response to my arguments, and to the case for something to replace or even just complement the selection we have on FreeBSD, is "JUST USE ZFS" or "UFS IS FINE AND YOU CAN'T OFFER PROOF", then I'm sorry, this is going to stall really quickly into a nasty deadlock.

ZFS is a decent filesystem, but I won't accept that kind of an answer. It's a cop-out.

To be clear: I have nothing against porting XFS to FreeBSD, if someone wants to do it. A starting point would have to be checking the license situation; if XFS is today encumbered by the GPL or an even more restrictive license, this might be a non-starter. A native and well-maintained XFS implementation would solve the common problem of wanting to share a disk between Linux and FreeBSD. But let's not start that project by insulting the existing file systems, in particular with farcical arguments.

XFS is only encumbered by the GPL in its Linux kernel drivers and xfsprogs. That does not preclude a port of XFS to another OS under an entirely different license. This is legal.

From the docs out there, it's entirely possible for a small team to write an XFS driver for FreeBSD that is BSD-licensed and conforms to directory-structure v2 (I think Linux is on v4 now; IRIX stopped at v2). Restoring/adding v3/v4 support is not trivial, but not impossible either: reverse-engineer the data structures in Ghidra or something, have someone document them, then pass that documentation to the other members of the team. The same team, once it finishes the kernel port, could build a BSD-licensed xfsprogs.
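
As a trivial illustration of working from the published on-disk format (the device name /dev/da1 is hypothetical): the XFS superblock occupies the first sector and begins with the magic bytes "XFSB", which any BSD can confirm with stock tools:

    # read the first sector and look at the superblock magic
    dd if=/dev/da1 bs=512 count=1 2>/dev/null | hexdump -C | head -n 1

which should print something like this for a 4 KiB-block filesystem:

    00000000  58 46 53 42 00 00 10 00  ...  |XFSB............|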

I'm not a copyright attorney, but there's nothing illegal about taking documentation and reimplementing something. It's done in the BSDs all the time with Linux-encumbered code.
 
If it is so good, why isn't it the default on Linux?
No filesystem-shrink support: on top of LVM you can only expand XFS filesystems, never shrink them. Work on that has only recently started. ext4, on the other hand, has been able to shrink for ages.
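
Concretely (Linux-side commands; the volume path /dev/vg0/data and the mountpoints are hypothetical): ext4 can be shrunk offline with resize2fs, while xfs_growfs only ever grows:

    # ext4: shrink an unmounted filesystem to 10 GiB (fsck is required first)
    umount /mnt/data
    e2fsck -f /dev/vg0/data
    resize2fs /dev/vg0/data 10G

    # XFS: growing a mounted filesystem works; there is no shrink
    xfs_growfs /mnt/xfs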

Also, ext4 is the direct descendant of the previous default filesystems, ext2 and ext3, while XFS is a third-party thing. Aside from that, some people still believe that XFS will eat data in a power outage, compared to ext4, because they never understood delayed writes and the need for a UPS. It also had some other issues, but those were more exposed and publicly discussed than the ones ext4 had.

Also, with directories holding lots of small files in them (e.g. some cache directories), ext4 actually performs way better than XFS.

On desktop workloads, ext4 and XFS perform more or less alike; XFS really shines when it comes to heavy use and multithreading. But those are not typical desktop workloads.

Overall, XFS is the filesystem of choice for RDBMSes under Linux, e.g. Postgres.
 
One must be nuts to use Btrfs under Linux for anything beyond the most basic stuff. Btrfs just sucks at most of its features, and probably always will.

There's a reason a second native COW filesystem for Linux, bcachefs, has been under development since 2015. And even Dave Chinner seems to like what he's seeing there so far.
 
Note: in the past I used jfs.
I could pull the power, and after an fsck everything was always fixed, meaning jfs was really a good filesystem. But it's no longer popular.
Currently I always use XFS on Linux.
 
JFS was never popular under Linux, despite its solid heritage from IBM. It never took off, and it still lacks features that are part of AIX, like defragmentation.
IMO, the issue with JFS on Linux is a lack of interest from the developers, because it shines in many scenarios in the AIX world. I saw JFS2 survive several horrors, not to mention years of uptime on an archive server (by years, I mean more than 5), and when you run fsck to check before updating, the filesystem is still clean, without a single problem. That isn't possible with ext4 (ext4 is well known for its silent issues, but people choose to ignore them).
 
Dave Chinner, the XFS developer at Red Hat, gave a talk at linux.conf.au 2014 called "Linux filesystems: where did they come from?", basically a historical talk about where the filesystems came from and how they developed in the Linux kernel since the time they were officially considered stable.

It has nice charts showing the changes to each filesystem since its introduction into the mainline kernel. ext4 is very special because, although it has been considered stable since 2008, it has been in ongoing development ever since, contrary to other filesystems, which means it's an unfinished thing (or was back then).

Of course he also covers XFS history, at around 39:40, right after Btrfs, and slaps Btrfs around a bit as well.

View: https://youtu.be/SMcVdZk7wV8?t=1954
 
Is there a white paper for UFS like XFS has?
If that's the only problem with it, I guess we can convince the Foundation to sponsor a white paper. Probably.

DNS servers, which can often run in tiny (sub-128M) memory footprints and need fast network responses and updates. … UFS, while it performs okay, is hamstrung compared to XFS by the lack of sophistication in its design, in my testing.
What kind of sophistication? What makes you think native XFS would perform better? What if the bottleneck is in another layer? (How often does a DNS server even write anything to disk? It's not like it's a full ACID database.)

Also, please don't switch to attacking ZFS when convenient; you are arguing that UFS sucks, so stick to that.
 
If that's the only problem with it, I guess we can convince the Foundation to sponsor a white paper. Probably.

Unlikely. The code is ancient, and it's basically in maintenance mode at this point. All eyes are on ZFS, which is a cop-out.

What kind of sophistication? What makes you think native XFS would perform better? What if the bottleneck is in another layer? (How often does a DNS server even write anything to disk? It's not like it's a full ACID database.)

Also, please don't switch to attacking ZFS when convenient; you are arguing that UFS sucks, so stick to that.

I mostly use PowerDNS; BIND9 is ancient trash, unfortunately. PowerDNS's reload and update path is pretty robust, but it does involve a decent number of regular writes to a database or to zone files and logs. UFS has no particular sophistication when it comes to disk caching, I'm basically convinced that soft updates don't actually provide acceptable performance, and combining them with a journal band-aid just increases writes to the disk further.
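
For reference, the soft-updates-plus-journal ("SU+J") combination referred to above is toggled per filesystem with tunefs(8); a minimal sketch, assuming an unmounted filesystem on the hypothetical /dev/ada0p2:

    # enable soft updates, then the soft-updates journal
    tunefs -n enable /dev/ada0p2
    tunefs -j enable /dev/ada0p2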
 
XFS could likely be implemented by a team of 3-5 people in a year or less if they utilized existing docs. I would be able to help, but I have three issues:
Quote is from the other thread.

That would cost about a million bucks in the Bay Area. I think most other places are cheaper.
 
UFS "white paper" go find a copy of "The Design and Implementation of the FreeBSD Operating System" by Marshall Kirk McKusick and George Neville-Nell and look at Chapter 8.
 
My application is a shipborne computer. This system runs off the ship's batteries and is subject to infrequent, random power failures caused by emergency starts of the main engine.

For the past 15 years jfs has proven reliable in this application, with only one data-loss incident in that entire period, and 99.9% of the data from that incident was recovered.

To me, ZFS is overkill. The system has 4 drives, one of which is an SSD boot drive; the rest are data spindles.

I found evidence that there is support for XFS.

What about XFS??

INDY
You can use the sysutils/fusefs-lkl port to mount XFS read/write on FreeBSD.
 
I once ran FreeBSD's UFS fsck on an OpenBSD partition. The result was disaster for the OpenBSD partition. This incompatibility is, in my opinion, a serious design flaw.
PS: I think GEOM journaling (gjournal) also contains old code, as I was able to use it to crash the kernel.
Note: UFS is very useful on systems low on memory, where ZFS is not an option.
 