Anything can be unreadable and/or unmaintainable if you don't think through what you are doing. For example, take 1,000 files. Do you just throw them all in one spot with generic numbers and expect to find a specific one? No, you give them descriptive names and sort them into folders, potentially grouping those folders into other folders. The same applies to UFS. It is a perfectly maintainable filesystem if you think through what you need to do, which is something any admin does when they set up a system. For those who complain about having to manage disk capacity: that has been a normal admin task for many years.

The other thing to consider is where UFS is the better fit. An easy, prime example would be an RPi running a single service (like DNS/DHCP/etc.). For something that doesn't use much disk, you are likely to have a single disk, and UFS would be more than sufficient. Otherwise, anything can be set up/designed to be as efficient or inefficient as you want; it just depends on you taking the time to do it. The default settings are merely tuned for a common/basic environment.

You also need to compare versions that are the same, or as close to the same as possible: a benchmark from 5+ years ago may give completely different results on more recent versions, since performance improvements/changes may have addressed whatever issue the old benchmark pointed out. Even then, as has been said, benchmarks are biased, so you should run your own benchmarks and see what the actual situation is.