UFS vs ZFS

What would be the benefit of ZFS over UFS on the FreeBSD Desktop system?
I recently chose to go with UFS over ZFS for my FreeBSD installation. I don't see a difference, other than the fact that mounting UFS drives is much easier than ZFS. So with my limited knowledge, UFS seems the way to go.
 
Yes, although that was nearly 12 years ago. Has nothing relevant to the question really changed in the meantime?
Not really. Both filesystems are fairly stable.

Personally I use UFS on my laptops. ZFS is great, but on a laptop I rarely make use of its features, and laptops are also fairly memory constrained. ZFS does take up more memory.
 
ZFS does take up more memory
I had a better experience with UFS, and maybe that's the reason why. My system is powerful, but I felt a noticeable difference between UFS and ZFS. I believe UFS was faster and more stable for me.
 
Yes, although that was nearly 12 years ago. Has nothing relevant to the question really changed in the meantime?
I agree with kpedersen on this; fundamentally nothing has changed. Bugs have been fixed, features have been added.

About the biggest thing I use ZFS for is Boot Environments. One of my systems is an Intel NUC; while it's not a laptop, it is limited in upgradability, RAM size and disk size. Nothing fancy, but ZFS lets me upgrade with less trepidation. Another system is a more typical desktop tower with more memory, power and space, so I use it more for storage. So it gets mirrors.

Under certain workloads, I can believe that UFS would be faster than ZFS. I think the stability is a wash between the two (at least I've not noticed a difference).
 
I agree with mer.

ZFS is much more complex, and has some significant downsides, e.g. it's resource hungry, and pools need to be kept less than 80% full. However, the really important features of ZFS for me are:
  • boot environments allow you to upgrade the operating system without fear of trashing your system; and
  • ZFS file systems share a common pool of unused disk space, within a pool.
To address the upgrade risk with UFS, I used to keep dual root file systems, and switch from one to the other with each upgrade. No need for that any more with ZFS.
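For anyone who hasn't tried boot environments, a minimal sketch of that upgrade cycle with bectl(8) (the BE name here is only an example):

# capture the current root in a new boot environment before upgrading
bectl create 13.1-before-upgrade

# ...run the upgrade as usual (freebsd-update, pkg upgrade, etc.)...

# if the upgrade misbehaves, reactivate the pre-upgrade environment and reboot
bectl activate 13.1-before-upgrade
shutdown -r now

# once satisfied that everything works, clean up
bectl destroy 13.1-before-upgrade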

Having said that, all my FreeBSD systems, even the virtual ones, have adequate CPU, disk, and memory resources. If things were very tight, I would consider UFS (which has stood the test of time).
 
I just found this thread because I've been investigating the tradeoffs. I'm trying to figure out whether I have a use case for ZFS. All the systems I use are virtual. That is, I'm running VMs inside Xen or I'm running EC2 instances on AWS. I have hardware RAID for physical storage, which Xen sees as one big disk. Xen creates virtual disks. If I want to add space, I do it at a layer below the operating system (e.g. resize the virtual disk/EBS volume). When I read about ZFS on the FreeBSD ZFS page, the early discussion talks about giving the filesystem knowledge of the underlying volumes. But in all my cases, everything is virtualised. So there are no physical hard disks or physical volumes for the OS to know about.

Today, I'm just running UFS everywhere because I've been doing BSD since 1993. If it ain't broke... But I'm trying to determine the advantages ZFS has to offer, if you take away the whole physical/logical connection. Snapshots sound pretty awesome, but I'm not fluent in how they work. I tend to do disaster-level backups by backing up the whole VM/volume (outside the OS). But ZFS sounds like it could enable some "oops" restorations really easily (a capability I don't really have right now). There's some discussion of performance (e.g., "MySQL runs faster"), so should I be putting my MariaDB on a ZFS volume, even though the underlying disk is virtual?

The 12-year-old original thread doesn't contain practical considerations that someone might use to judge their own workload and decide which suits them better. It's a lot of gut hunches, personal preferences, and educated guesses.
 
If you don't know the answer to "UFS or ZFS?", the choice should always be freebsd-ufs.
That should not feel like a "reduction" or a "downgrade", because it's not!
UFS is a mature, fast, powerful, stable, reliable, and (very) easy to use filesystem.
Unsophisticated, straightforward, and fully capable of covering more than enough storage needs.

Since UFS provides more usable space per partition, you'd be wasting capacity by using ZFS on a single drive/partition.

Simply summarized:
ZFS
is for assembling partitions (drives) into storage pools,
to get large(r) (and growable) and/or redundant storage.
There are additional benefits,
but most of them don't make sense or even work, such as RAID, on a single drive or on virtual drives within a VM.

If there is only a single drive (a laptop, or a single-drive desktop machine),
or you are inside a VM, where everything depends on the underlying host OS's filesystem anyway,
I don't see the point in using ZFS.
Unless you're well versed in ZFS and know exactly what you're doing and why.

Even though ZFS is no rocket science, it's a bit more complicated to install and use than any other filesystem.
And it uses a bit more resources (CPU, RAM).
So unless you're using more than one drive natively,
I don't see any advantage in using it at all.
You may only lose performance and gain extra effort.
 
Today, I'm just running UFS everywhere because I've been doing BSD since 1993. If it ain't broke... But I'm trying to determine the advantages ZFS has to offer, if you take away the whole physical/logical connection. Snapshots sound pretty awesome, but I'm not fluent in how they work. I tend to do disaster-level backups by backing up the whole VM/volume (outside the OS). But ZFS sounds like it could enable some "oops" restorations really easily (a capability I don't really have right now). There's some discussion of performance (e.g., "MySQL runs faster"), so should I be putting my MariaDB on a ZFS volume, even though the underlying disk is virtual?

The 12-year-old original thread doesn't contain practical considerations that someone might use to judge their own workload and decide which suits them better. It's a lot of gut hunches, personal preferences, and educated guesses.
ZFS's main advantages are: subvolumes with snapshots, portability of storage pools due to independence from hardware RAID controllers, zfs send for fast backups, bitrot detection with self-healing if the data is stored at least twice, and really high redundancy if you want to set it up that way.
 
Apropos trying out system upgrades in a ZFS boot environment first: on UFS2 a similar environment can be created with the gunion(8) control utility (new in 14.0-CURRENT).

The major difference from ZFS BEs is that, to work with gunion(8), an extra physical medium or an md(4) disk at least the size of the original disk is required.


I personally use ZFS on all my (single-disk desktop) systems because of, to list a few:
  • ZFS boot environments
  • ZFS snapshots, to back up and easily roll back the file system to a certain state.
  • individual ZFS dataset properties
  • on a laptop, an easy, automated geli-encrypted Root-on-ZFS installation from the official installer.
Also, I don't experience any performance difference between the two file systems, running mostly desktop applications.
 
Even though I do not use ZFS at its best (2 SSDs in striped mode), the "boot environments" have saved my life several times already...
 
subvolumes with snapshots,
Couple this with clones and you can have some fun with "development and production" datasets for things like websites and databases.
portability of storage pools due to independence from hardware RAID controllers,
Good if you are moving devices between machines. I would say the independence from hardware RAID controllers is actually the more important part. I've been bitten in the past where a HW RAID controller failed and the only solution was to reinstall from scratch, because even if you replaced it with the "same" controller, it would not work. Lots of motherboards come with RAID capability built in; if that fails, what do you do, get a new mobo?
zfs send for fast backups,
Snapshots and zfs send/receive are a good basis for doing backups.
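A rough sketch of both ideas (the pool, dataset, and host names here are invented):

# snapshot the "production" dataset, then clone it into a writable dev copy
zfs snapshot tank/www@before-redesign
zfs clone tank/www@before-redesign tank/www-dev

# replicate the same snapshot to another pool, or to another host over ssh
zfs send tank/www@before-redesign | zfs receive backup/www
zfs send tank/www@before-redesign | ssh backuphost zfs receive backup/www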
 
Comparing the ZFS paradigm with other filesystems and volume managers: ZFS is like a filesystem and volume manager wrapped up in one. Typically in FreeBSD a person would need to use vinum or the newer gvinum (both of which are "clones" of Veritas Volume Manager -- VxVM), or a combination of gmirror and gconcat, then put UFS onto the logical volumes or logical devices.

In Linux people do this using LVM (which is a clone of HP-UX LVM), putting EXT, EXT2, EXT3, EXT4, or XFS filesystems into the logical volumes.

Or in Solaris, before ZFS, one would build a Solstice DiskSuite volume (a rudimentary thing like our gmirror), putting UFS into the logical device.

ZFS combines the function of volume manager and filesystem into one. Instead of at least 4+ commands to set up an EXT4 volume in Linux or 2+ commands to do the same with gmirror+UFS, once you've done the zpool create, setting up new filesystems (they're actually called datasets) is one simple zfs create command. ZFS simplifies management of your storage farm.
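As a sketch (the disk and pool names are placeholders), the gmirror+newfs equivalent collapses to:

# one command builds the mirrored pool...
zpool create tank mirror ada0 ada1

# ...and each new filesystem (dataset) is one more command; no newfs, no fstab
zfs create tank/home
zfs create tank/var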

On the flip side, yes, UFS is much simpler and uses much less memory. In a memory constrained environment you're better off with UFS. However I did use ZFS on a 768 MB heavily tuned i386 laptop for a long time. This is not recommended unless you're willing to fiddle around just to get it right.
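If you do want to fiddle, the usual first knob is capping the ARC in /boot/loader.conf. A sketch (the 512M value is only an example for a small machine; on recent OpenZFS the tunable is also spelled vfs.zfs.arc.max):

# /boot/loader.conf -- limit the ZFS ARC to 512 MB
vfs.zfs.arc_max="512M"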

You need to compare them feature by feature, like ZFS compression or UFS simplicity and small footprint to determine which is better for your application. One size does not fit all.
 
I'm surprised that you find UFS faster. I did tests with both a few years ago and ZFS was clearly faster for me.
That was on HDDs, basic single-disk setups. I didn't compare the speed on a SSD.

I always use ZFS. It doesn't need multiple disks and complex setups to be useful.

I especially like:

* checksumming and zpool-scrub(8)
Instead of silent data corruption, you can know if a file has been damaged and needs to be restored from backup.

* different datasets without the need to partition the disk
That's way more flexible than disk-level partitioning. You can set (or not) minimum and maximum sizes for any dataset, and adjust these values at any time. You can delete a dataset and all the space it used is instantly available to the others again. Datasets can be mounted with different options just like regular partitions, while still residing on the same physical partition of the disk.

* copies=2 (or 3)
Not something I would enable everywhere, but really handy to automatically keep two or three copies of each file, for important data on dedicated datasets or backup disks.
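Roughly what those three look like on the command line (the pool, dataset names, and sizes are examples):

# verify every block against its checksum, then inspect the result
zpool scrub tank
zpool status -v tank

# cap one dataset and guarantee space for another; both adjustable at any time
zfs set quota=20G tank/ports
zfs set reservation=5G tank/home

# keep two copies of every block on a dataset holding important data
zfs set copies=2 tank/documents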

mounting UFS drives is much easier than ZFS
It's not that complicated either, and in normal daily use you shouldn't be mounting your root disk manually too often anyway :)
 
I had a better experience with UFS, and maybe that's the reason why. My system is powerful, but I felt a noticeable difference between UFS and ZFS. I believe UFS was faster and more stable for me.
I'm surprised that you find UFS faster. I did tests with both a few years ago and ZFS was clearly faster for me.
That was on HDDs, basic single-disk setups. I didn't compare the speed on a SSD.
It depends on the situation (specific hardware/software/task).
Sometimes ZFS is faster, sometimes UFS is faster.
For specific tasks, one or the other can be clearly faster.


My guess is that ZFS is currently faster than UFS on average, and this was the case a long time ago: https://blogs.oracle.com/solaris/post/zfs-to-ufs-performance-comparison-on-day-1
Looking ahead to our results we find that of our 12 Filesystem Unit tests that were successfully run:
  • ZFS outpaces UFS in 6 tests by a mean factor of 3.4
  • UFS outpaces ZFS in 4 tests by a mean factor of 3.0
  • ZFS equals UFS in 2 tests.
ZFS on Linux is slow.

But on FreeBSD ZFS is fast (frequently faster than EXT4/Btrfs are on Linux).

The idea that ZFS needs a lot of RAM is also a myth, by the way. I've been using ZFS for 4 years on a system that has 4GB of system RAM, and I've never had any problems. When I open 200 tabs in Chromium the browser becomes slow and some tabs crash, but if I close about thirty tabs, Chromium becomes responsive again. I've never had any data loss or anything like that. I haven't had any stability issues in games either, so I'd say ZFS works fine if you only have 4GB of system RAM available.

In terms of reliability and stability, I would say that ZFS is going to beat UFS.

ZFS vs UFS and power loss https://forum.netgate.com/topic/120393/zfs-vs-ufs-and-power-loss
Yes, it is about a zillion times better than UFS. Switching to ZFS should be a complete no-brainer with anything that has 4GB of RAM or better. I'd still go for it even with 2GB boxes; I had nothing but pain with UFS for years. Garbage filesystem.

UFS is still quite resilient to actual data corruption, but it often requires a manual fsck after a power loss to fix the filesystem metadata (not the actual stored data but the filesystem bookkeeping information). It's actually better without journaling, as mentioned; keep soft updates on though for reasonable performance. The downside is that a manual fsck can take a long time, but it will fix the filesystem unless something is completely corrupted or there is an actual hardware fault on the disk.
ZFS is miles ahead in this department though, I have never experienced any power loss related problems with ZFS.
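For reference, a sketch of the UFS side of this (the device name is a placeholder); soft updates are toggled with tunefs(8) on a filesystem that is not mounted read-write:

# enable soft updates (run while the filesystem is unmounted or read-only)
tunefs -n enable /dev/ada0p2

# after an unclean shutdown, repair the metadata manually
fsck -y /dev/ada0p2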


ZFS also offers more features than UFS. ZFS has important features and characteristics that even Btrfs doesn't offer.
 
Voltaire, good stuff. I haven't done any looking, but have you run across any benchmarking of ZFS on FreeBSD that compares "FreeBSD Native ZFS" (the version in 12.x) vs OpenZFS on FreeBSD (the version in 13.x)? From a pure user-experience standpoint I've seen no difference in my daily use, which is not really disk intensive.

RAM usage: I think a lot may depend on the specific workload (I know, amazing). As for the number of tabs open in Chromium, I wonder if that's more about Chromium than the filesystem.

Regardless, thanks for posting the info.
 
Voltaire I haven't done any looking, but have you run across any benchmarking of ZFS on FreeBSD that compares "FreeBSD Native ZFS" (the version in 12.x) vs OpenZFS on FreeBSD (the version in 13.x)?
You can also install version 2.1 on FreeBSD 12 and do a performance comparison. There are few or no benchmarks to be found that make this comparison. All I can say is that version 2.1 seems likely to be faster than the version FreeBSD 12 uses by default:

OpenZFS 2.1 performance and reliability improvements

Improved zfs receive performance with lightweight write: This change improves performance when receiving streams with small compressed block sizes.

Distributed RAID (dRAID) is an entirely new vdev topology we first encountered in a presentation at the 2016 OpenZFS Dev Summit.
In the chart at the top of this section, we can see that, in a pool of ninety 16TB disks, resilvering onto a traditional, fixed spare takes roughly 30 hours no matter how we've configured the dRAID vdev—but resilvering onto distributed spare capacity can take as little as one hour. The fast resilvering is fantastic—but draid takes a hit in both compression levels and some performance scenarios due to its necessarily fixed-length stripes.


Alexander Motin and other OpenZFS developers are currently working on various micro-optimizations in areas like atomics, counters, scalability and memory usage. Release candidate testers already report improved performance compared to OpenZFS 2.0 and previous releases.
 
That last one, on micro-optimizations: people often say "don't do that", but sometimes, if they are part of the 80% of the code getting run, they add up.
 
ZFS on Linux is slow.

But on FreeBSD ZFS is fast (frequently faster than EXT4/Btrfs are on Linux)
Now that's quite a daring statement given the fact that Linux and FreeBSD are using the same codebase nowadays, namely OpenZFS.

What are your sources to support this statement? Show me where the meat is, Voltaire.
 
Today, I'm just running UFS everywhere because I've been doing BSD since 1993. If it ain't broke... But I'm trying to determine the advantages ZFS has to offer, if you take away the whole physical/logical connection. Snapshots sound pretty awesome, but I'm not fluent in how they work. I tend to do disaster-level backups by backing up the whole VM/volume (outside the OS).
The 12-year-old original thread doesn't contain practical considerations that someone might use to judge their own workload and decide which suits them better. It's a lot of gut hunches, personal preferences, and educated guesses.
OK, here I am.
I manage (about) 100 FreeBSD servers, physical & virtual, around the world; a UNIX user for (about) 30 years (Solaris in fact).
Data storage, MariaDB, sphinx-server, nginx, whatever.

Short version from the virtual world (off topic with respect to "What would be the benefit of ZFS over UFS on the FreeBSD Desktop system?"), but across maybe 500 FreeBSD machines (over the years, of course) I think I have installed X maybe... 2 times. So I have almost zero experience with FreeBSD clients.

Snapshots: enormously faster than those of hypervisors. Normally less than a second, even on large and busy machines.
They allow you to make backups of the .vmdk files directly from ZFS snapshots.
Unmatched by vSphere, VBox etc. snapshots (... on par... with... VMware Workstation!)

No chkdsk / scandisk / whatever (if you DO NOT use deduplication).
This reason alone is enough to abandon filesystems that require it.

Scrub (data integrity check).
Especially in the virtual field it makes the difference between "maybe" the data of a broken machine was copied correctly and "it certainly works".

Resilvering. The whole system is basically a gigantic "RAID controller", with 8 or 16 CPUs and maybe 128GB or more of RAM.
Nothing like failure-prone HW RAID systems.

Compression, very good and very fast (LZ4)

Reasonably fast (considering everything it does). Not a big deal with today's machines

Mirroring of NVMe drives without the slightest problem, out-of-the-box
Even this alone is enough to abandon SATA / SAS HW RAID controllers

Plus a whole host of other things.
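To make that concrete, a sketch of the kind of setup described above (device, pool, and dataset names are invented):

# mirror two NVMe drives out of the box, no RAID controller involved
zpool create vmpool mirror nvd0 nvd1

# LZ4 compression on the dataset holding the VM images
zfs create -o compression=lz4 vmpool/vmdk

# sub-second recursive snapshot of all VM datasets, then an integrity check
zfs snapshot -r vmpool@nightly
zpool scrub vmpool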

Final judgment: it "pays" to use FreeBSD not because it is the "best" operating system,
but because it is the best operating system to run ZFS on (and it can become a Samba PDC master, for the very common Windows clients).

Note: I am referring to the ZFS in FreeBSD 11-12.
Its defects and their solutions are known (which is essential when you cannot go on site).

13.x (with the new OpenZFS) does not convince me at all.
Too many teething problems, too many oddities for production use.

I expect that, of course, the situation will improve over time.
But today I refuse to use FreeBSD 13.x in production.
 
A word about benchmarks.

They say almost nothing (in production); they are almost useless.
Having a system that runs 3.6% faster, but whose data you cannot verify, is the difference between hobbyist and professional use.

Of course, not everyone scrubs their laptop every day; I understand that.

But if we consider the "server world" rather than the desktop world, there is not the slightest doubt (UFS vs ZFS, or rather ZFS vs everything else).

There is some doubt with Solaris/OmniOS/Nexenta (for a domain-client fileserver with Solaris ACLs), and even with Debian+ZFS (if you want to use the OpenZFS version it doesn't change much, and you need Linux software).

These are my opinions, which however are based on facts, on evidence, and on decades of experience.
 
I could also add more interesting things about verifying backups with minimal wear on the backup media, choosing expendable drives and activating deduplication on them (in this case, yes, there is a sort of "fsck" with ZFS, but on a temporary drive that you can detach and recreate).

I don't know if it matters to anyone; I have to be careful, otherwise it sounds like I can "suck" credit cards remotely :)
 
Now that's quite a daring statement given the fact that Linux and FreeBSD are using the same codebase nowadays, namely OpenZFS.

What are your sources to support this statement? Show me where the meat is, Voltaire.
There is a grain of truth in it.
On Linux, historically, ZFS ran via FUSE, so there was some extra overhead.
The overhead is actually modest.
However, as mentioned, benchmarks between filesystems with different feature sets are of little significance.
FAT32 is much, much, much faster than both ZFS and UFS,
because it is much simpler; it does practically nothing.
Of course, I still prefer UFS over FAT32.
 