Other XFS on BSD

Yeah, none of us are claiming XFS doesn't have any merits. It's rather that supporting multiple good-enough filesystems is not necessarily a win for a project with severe resource constraints. The rest of this meltdown is just too nasty to comment on.
 
I think UFS is a nightmare to maintain and lacks the performance necessary to compete, ...
... the trash that is UFS because UFS is an unreadable, unmaintainable nightmare from the 1970s.
Quoted for posterity.

You clearly didn't read the documentation I linked in the thread this originated from. It describes the structure of IRIX-era XFS (which, apart from directory structure versions, is similar to the modern Linux XFS).
So you are proposing to take a 20-year-old description of XFS data structures, and build a kernel filesystem based purely on that one ancient view of static metadata structures, without inspecting the semantics of metadata updates? Sorry to say it, but the result would pretty much be guaranteed to work rather badly, even if one started with up-to-date documentation of today's on-disk structures. File systems are much more complex than just data structures. Transitions matter, the semantics of the data matter, and interactions between fields matter.
 
Anything can be unreadable and/or unmaintainable if you don't think through what you are doing. For example, take 1,000 files. Do you just throw them all in one spot with generic numbers and expect to find a specific one? No, you give them descriptive names and sort them into folders, potentially grouping those folders into other folders. The same applies to UFS. It is a perfectly maintainable filesystem if you think through what you need to do; something any admin does when they set up a system. As for those who complain about having to manage disk capacity: that has been a normal admin task for many years.

The other thing to consider is where UFS is the better fit. An easy and prime example would be an RPi running a single service (like DNS/DHCP/etc...). For something that doesn't use much disk, you are more likely to have a single disk, and UFS would be more than sufficient. Otherwise, anything can be set up/designed to be as efficient or inefficient as you want; it just depends on you taking the time to do it. The default settings are merely set for a common/basic environment.

You also need to compare versions that are the same, or as close to the same as possible: a benchmark from 5+ years ago may give completely different results on more recent versions, as performance improvements/changes may have addressed the very issue the old benchmark pointed out. Even then, as it's been said, benchmarks are biased; so you should run your own benchmarks and see what the actual situation is.
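Since "run your own benchmarks" is the actionable takeaway here, a minimal sketch of a crude sequential-write smoke test using only dd (GNU coreutils flags shown; BSD dd spells the block size `1m`). The `TESTDIR` variable and file name are placeholders I picked for illustration, and this is only a rough sanity check, not a substitute for a proper benchmark tool such as fio:

```shell
# Point TESTDIR at the mountpoint of the filesystem under test.
TESTDIR=${TESTDIR:-/tmp}

# Write 16 MiB of zeros and force it to disk; dd prints the throughput
# it measured when it finishes.
dd if=/dev/zero of="$TESTDIR/ddtest.bin" bs=1M count=16 conv=fsync

# Clean up the test file.
rm -f "$TESTDIR/ddtest.bin"
```

Repeat the run a few times, on each filesystem and OS version you want to compare, since a single run is noisy.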
 

For any software, there's always gonna be maintenance overhead. On the Linux side, even the floppy disk driver was still getting maintenance in 2021 :p
 