UFS vs ZFS

Now that's quite a daring statement given the fact that Linux and FreeBSD are using the same codebase nowadays, namely OpenZFS.

What are your sources to support this statement? Show me where the meat is, Voltaire.
It's these kinds of results that give me this impression: https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=225b6b2&p=2

The results that I've seen on Phoronix over the years give me the impression that Linux with ZFS used to be less than half as fast as FreeBSD with ZFS. Phoronix sometimes runs benchmarks that matter a lot, and sometimes less important ones. What I've also seen frequently over the years is that FreeBSD with ZFS gets much higher IOPS than Linux with EXT4/F2FS in certain situations, sometimes more than 5x higher IOPS in Fio.

You can easily test it yourself. Install FreeBSD on your hardware, run multiple tests in Fio, the most reliable benchmark tool. See what your IOPS are. And then see what Linux gets with EXT4 or F2FS. My impression is that there are important scenarios where FreeBSD gets much higher IOPS.
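If anyone wants to reproduce this, a minimal Fio random-read job could look something like the sketch below; the file path, size, I/O engine and queue depth are placeholder choices of mine, not settings taken from the linked benchmarks.
Code:
# 4k random reads, 4 jobs, queue depth 32, 60 seconds, aggregate reporting
fio --name=randread --filename=/mnt/test/fio.dat --size=4g \
    --rw=randread --bs=4k --ioengine=posixaio \
    --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
Run the same job on FreeBSD and on Linux against the filesystems you care about and compare the reported IOPS.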
 
Now that's quite a daring statement given the fact that Linux and FreeBSD are using the same codebase nowadays, namely OpenZFS.

What are your sources to support this statement? Show me where the meat is, Voltaire.
In my previous post I already gave a strange performance difference, and you can directly compare it with ZoL.
And I mentioned the weird differences in Fio, so I mean specifically these results:

Furthermore, you also have these relevant results where Linux ZoL is much slower in the best SQL database:

And PostgreSQL is not slow on FreeBSD + ZFS compared to Linux: https://redbyte.eu/en/blog/postgresql-benchmark-freebsd-centos-ubuntu-debian-opensuse/
 
It's these kinds of results that give me this impression: https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=225b6b2&p=2

The results that I've seen on Phoronix over the years give me the impression that Linux with ZFS used to be less than half as fast as FreeBSD with ZFS. Phoronix sometimes runs benchmarks that matter a lot, and sometimes less important ones. What I've also seen frequently over the years is that FreeBSD with ZFS gets much higher IOPS than Linux with EXT4/F2FS in certain situations, sometimes more than 5x higher IOPS in Fio.

You can easily test it yourself. Install FreeBSD on your hardware, run multiple tests in Fio, the most reliable benchmark tool. See what your IOPS are. And then see what Linux gets with EXT4 or F2FS. My impression is that there are important scenarios where FreeBSD gets much higher IOPS.
You are comparing apples with onions here; as I stated, FreeBSD nowadays uses the same ZFS codebase as Linux does. More specifically, that change happened with the release of FreeBSD 13.

But your benchmark shows results for FreeBSD 12, which used another ZFS implementation that is no longer in use.

So this benchmark is pretty much useless for supporting your statement, because it only shows the performance profile of the past, not the present.
 
ZFS codebase being the same as on Linux: OK, that can eliminate differences due to ZFS itself, but it can still show differences in how ZFS interacts with the kernel. So it's not invalid; one just has to be aware of what they are looking at.
In theory, for OpenZFS 2.x on FreeBSD vs the same on Linux, the difference is the kernel interface. Features, checksumming, block management: that is all in the OpenZFS code itself. Kernel interfaces (allocate this, write that, read this) wind up being where the differences are, and those differences may result in different optimizations being done in the OpenZFS code.

FreeBSD-12, ZFS. To the best of my knowledge FreeBSD-12 is not EOL, so it's not really "the past". Comparisons of FreeBSD-12/NativeZFS vs Linux+ZoL indicate performance of the "system", not just ZFS. FreeBSD-12+NativeZFS vs FreeBSD-12+UFS indicates differences between ZFS and UFS. FreeBSD-12 also has the option of running with OpenZFS, so one can do FreeBSD-12+NativeZFS vs FreeBSD-12+OpenZFS, which will indicate differences between NativeZFS and OpenZFS and could provide data to prove/disprove "OpenZFS is faster/slower than NativeZFS on FreeBSD-12", but nothing more than that. I don't know if there is a "NativeZFS on FreeBSD-13" port (like the OpenZFS port for FreeBSD-12), but if so, that would provide data as to "OpenZFS is faster/slower than NativeZFS on FreeBSD-13".
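For what it's worth, a rough sketch of how one could try OpenZFS on a FreeBSD-12 box from packages; I'm assuming the sysutils/openzfs and sysutils/openzfs-kmod packages here, so treat it as a starting point and read the port's install message rather than taking it as a tested recipe.
Code:
# install the OpenZFS userland and kernel module from packages
pkg install openzfs openzfs-kmod

# load the OpenZFS module instead of the base system zfs.ko
echo 'openzfs_load="YES"' >> /boot/loader.conf

# after a reboot, make sure the OpenZFS tools in /usr/local/sbin
# are found before the base system tools
export PATH=/usr/local/sbin:$PATH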

My opinion is that the links posted of benchmarking are not useless, but one has to understand what is being looked at, just like every single benchmark ever done.

More "my opinion":
I can understand a reluctance toward OpenZFS on FreeBSD-13, simply because OpenZFS 2.0 is effectively ZFS on Linux. That can lead to caution in adopting FreeBSD-13.x, but that is what testing and waiting are for. If one is dead set against OpenZFS, then by definition they are stuck on FreeBSD-12.x until 12.x is EOL. My systems, my rules, my choices. Your systems, your rules, your choices.

NOTE:
All the above is my opinion, based on my experience. Feel free to agree, disagree, tell me I'm off my rocker, or tell me to shut up and keep my opinions to myself; it's all good.
ETA:
Sorry for writing too many words.
 
Question regarding UFS.
Are there use-cases to not do journaling?
Are there use-cases to not do soft-updates?
Journaling basically speeds up cleaning of dirty filesystems. I think there is also something about not being able to do journaling if you are doing snapshots (this is going by memory way back in the attic, think of it as swap space, so it may not be correct).

My understanding of softupdates is that it's similar to ZFS and the transaction groups where writes get grouped so that the data on the disk (data and metadata) is consistent. You can still lose data if power is lost/hard pulled at the "right" time, but the disk will be consistent.

Not using softupdates? Perhaps something like a database wants to do sync mounts so the data is actually on the disk.
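If it helps, soft updates and soft updates journaling (SUJ) are toggled per filesystem with tunefs(8) on an unmounted (or read-only mounted) filesystem; the device name below is just an example.
Code:
# enable soft updates, then soft updates journaling, on an unmounted UFS filesystem
tunefs -n enable /dev/ada0p2
tunefs -j enable /dev/ada0p2

# disable journaling again but keep soft updates
tunefs -j disable /dev/ada0p2

# print the current settings
tunefs -p /dev/ada0p2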
 
Question regarding UFS.
Are there use-cases to not do journaling?
Are there use-cases to not do soft-updates?
It is the "combo"
For use-cases for NOT (journaling && soft-updates) something here

As ZFS provides cheap snapshots, that is the filesystem of preference for folks that want snapshot functionality. The only remaining use for snapshots in UFS is the ability to do live dumps. Thus I have not been motivated to go to the effort to migrate the kernel code to fsck (and nobody has offered to pay me the $25K to have me do it).

I am not sure about newer versions of FreeBSD.

PS. McKusick, for those who do not know him, is the "father" of UFS:
Code:
#define FS_UFS1_MAGIC  0x011954   /* UFS1 fast file system magic number */
#define FS_UFS2_MAGIC  0x19540119 /* UFS2 fast file system magic number */
Yes, his birthdate :)
So I think it's really reliable
 
Personally I think snapshot functionality for UFS is not that useful.
In fact, as far as I'm concerned they can remove the snapshot code.
It would make the filesystem a bit simpler.
 
Personally I think snapshot functionality for UFS is not that useful.
In fact, as far as I'm concerned they can remove the snapshot code.
It would make the filesystem a bit simpler.
According to dump(8):
-L This option is to notify dump that it is dumping a live file
system. To obtain a consistent dump image, dump takes a snapshot
of the file system in the .snap directory in the root of the file
system being dumped and then does a dump of the snapshot.
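So a live dump that relies on that snapshot mechanism could look roughly like this; the dump level, flags and output path are only an example, not something from the thread.
Code:
# level 0 dump of the mounted (live) root filesystem to a file
dump -0aLuf /backup/root.dump /

# it would be read back later with restore(8), e.g.
# restore -rf /backup/root.dump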
 
What to use depends on more than just features and speed. It depends on your disk configuration; as I see it, UFS is for non-RAID configurations. If you need RAID or other ZFS features, use ZFS.
It also has to do with stability. UFS is getting almost no new features, bugs are getting fixed, and its limitations are quite well known. ZFS, on the other hand, is in quite active development, so the risk of some day being hit by serious bugs is much bigger, although so far that does not seem to have been the case.
 
Is your argument that zfs in FreeBSD 13 is slower than in FreeBSD 12.x?
What I am saying is that it no longer makes sense, when benchmarking ZFS, to pit FreeBSD 12.X against some Linux, because FreeBSD 12.X carries the past and therefore obsolete ZFS implementation of FreeBSD. The present ZFS implementation in FreeBSD since 13.0 is OpenZFS, and it probably will be for quite a long time.

When backing up bold statements like "ZFS on FreeBSD is 2x faster than on Linux", it nowadays only makes sense to compare FreeBSD >= 13.0 with the Linux distribution of choice, obviously.

My personal expectation is that there are some slight speed variances, depending on hardware and the benchmarks used. Sometimes Linux is probably quicker, sometimes FreeBSD. But FreeBSD being 2x faster would mean that something on Linux is fundamentally broken, which would be really hard to believe. It would also mean that something in OpenZFS is probably fundamentally broken, which I disregard as a possibility because I am pretty sure that otherwise the FreeBSD developers would not have bothered switching over to OpenZFS.

There was a talk by Michael Dexter at vBSDcon back in 2019, who compared the ZFS implementations of that time against each other using benchmarks. It's also listed on papers.freebsd.org. This is way more in line with what I would expect from something like that. Phoronix does lots of benchmarks, but not always in a sensible way.

https://www.youtube.com/watch?v=HrUvjocWopI
 
Oh, so ZFS in FreeBSD 12 will not just see security and bugfixes until EoL, despite the big change between FreeBSD 12.0 and 13.0? I sincerely doubt that.
 
I don't see a difference, other than the fact that mounting UFS drives is much easier than ZFS. So with my limited knowledge, UFS seems the way to go.
I agree. I just tried to reinstall and give ZFS a try but ran into the same issues I had over a year ago, ugh.

The only difference I notice with ZFS is that it consumes more RAM, and I have to figure out how to properly mount my hard disk because the installer doesn't import it. To me, the effort of reading pages upon pages of documentation just to take a shot at understanding how a different filesystem works doesn't outweigh the simplicity of UFS. I tried ZFS again for Poudriere, but it's still not my cup of tea.
 
Some ZFS is explained in books.
Once you take regular snapshots you master it.

ZFS normally takes only memory which is free, so it's not a problem except for embedded/small devices.
You can also tune it in sysctl.conf:
Code:
vfs.zfs.arc_min=1500000000              #0
vfs.zfs.arc_max=2500000000              #0

ZFS has a bit of a learning curve. But I did not use the installer to install my FreeBSD on ZFS; I just used command-line commands.
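If you want to see what the ARC is actually using before changing anything, the arcstats sysctls are a quick read-only check (nothing here changes any settings).
Code:
# current ARC size, its target maximum, and the configured limit, in bytes
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.c_max
sysctl vfs.zfs.arc_max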
 
From my experience, mounting with ZFS isn't difficult, and if anything it's easier than on Linux with various filesystems (including Btrfs). Even on one of my systems, where I ran into a race condition with ZFS (the race condition was my own fault), it still isn't difficult. I will admit I carried one thing over from Linux (not sure if it is needed or not, but it doesn't hurt having it): telling the kernel on startup what the root device/dataset is (the base / dataset, not /boot or /root). You only need to specify one device; ZFS is smart enough to see and load the other devices for the pool.

I will say, if you use multiple pools, the order in which the pools get mounted may not always be what you expect, and you may be better off telling one pool not to mount automatically.
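One way to do that last part, with a hypothetical dataset called zdata/home on the second pool, is roughly:
Code:
# keep this dataset from mounting automatically at boot
# (canmount is per-dataset and not inherited, so set it where needed)
zfs set canmount=noauto zdata/home

# mount it by hand, or from an rc script, once the other pool is up
zfs mount zdata/home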
 
ZFS on root is a problem for Linux. I use ZFS on Linux, but only for my data; most problems are related to kernel updates.
ZFS on root on FreeBSD works out of the box. It works flawlessly, and it is better than any other filesystem (JFS, XFS, etc.).
 
Some ZFS is explained in books.
Once you take regular snapshots you master it.

ZFS normally takes only memory which is free, so it's not a problem except for embedded/small devices.
You can also tune it in sysctl.conf:
Code:
vfs.zfs.arc_min=1500000000              #0
vfs.zfs.arc_max=2500000000              #0

ZFS has a bit of a learning curve. But I did not use the installer to install my FreeBSD on ZFS; I just used command-line commands.
You can do it at runtime too:
Code:
sysctl vfs.zfs.arc_meta_limit=whatever
sysctl vfs.zfs.arc_max=youlike

But, in fact, ZFS has no real learning curve.
You don't need to make any particular adjustments or anything like that.
It just works quietly.

When you enter situations such as "how to optimize the use of RAM for VirtualBox servers", you are much, much, much beyond the normal user.

I just tried to reinstall and give ZFS a try but ran into the same issues
??? issues ???
The "strangest" may be make a "stripe-0" for a single-disk zpool
There is nothing to choose or change or fix for zfs, just "next-next-next-OK"

Sure, you can, for example, put /home outside zroot and so on.
But 99% of users don't need that, and the 1% who do know what they're doing.

You don't have to import anything with zpool, you don't have to edit fstab, you don't need to set mountpoints, etc.
There are no "magical configurations" that will make it run 10x faster than the default settings

The "real curve" is in the zfs create and to set / get the compression setting, even atime is no more so critical (with non-spinning drives)
 
I will say, if you use multiple pools, the mounting order of which pool is mounted first may not always be in the order you expect and you may be better off telling one pool not to mount automatically.
I find this curious/interesting: I've never cared about what order pools are mounted in, nor have I cared what order normal partitions/UFS filesystems are mounted in. Pools are separate from each other, datasets are separate. Do you have an example of pools mounting in the wrong order causing you a problem?
 
To explain the setup I made on that system: it is two pools, one being zroot with the primary system installed. The second pool (I think I named it zdata; that system is offline so I can't check to be sure) is where I moved /var/lib, /var/db, /usr/home and other larger directories, since it has much more space available. The problem came when I rebooted: zdata's datasets got mounted first, then zroot's datasets were mounted afterwards, mounting over them. So when the system started up, the contents of all of zdata's datasets appeared empty; they were still there, but not available to access. Once I set zdata to legacy/non-mounting and mounted it through fstab, that corrected the issue.

I admit, that was my first time setting up ZFS, so it ended up serving as a lesson in remembering the KISS rule. I figure whenever I get around to replacing those drives, I'll just get rid of both pools and recreate everything as a single pool.
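For completeness, the legacy/fstab approach described above looks roughly like this; the dataset name zdata/home is made up to match the description.
Code:
# hand mount control of the dataset over to fstab
zfs set mountpoint=legacy zdata/home

# /etc/fstab entry, mounted in fstab order after the zroot datasets:
# zdata/home   /usr/home   zfs   rw   0   0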
 
Ahh ok. Root on ZFS has a couple of specific requirements to support Boot Environments. That means some directories related to /var, /usr and /usr/local want to be part of the zroot dataset. The best thing I can recommend is to do a default install (you can do it in a VM) and then zfs list to see what winds up in its own dataset (zpool history is also good). Moving /usr/home is usually never a problem; /var things like /var/db and /var/lib should be part of the root dataset.
This is from a basic root-on-ZFS install. Notice which subdirectories of /var are listed: they have their own datasets. Everything else under /var is part of the root dataset.
Code:
zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
zroot                       28.2G   195G    88K  /zroot
zroot/ROOT                  7.98G   195G    88K  none
zroot/ROOT/13.1-RELEASE-p0     8K   195G  7.45G  /
zroot/ROOT/13.1-RELEASE-p2  7.98G   195G  7.34G  /
zroot/tmp                   2.19M   195G  2.19M  /tmp
zroot/usr                   20.1G   195G    88K  /usr
zroot/usr/home              19.1G   195G  19.1G  /usr/home
zroot/usr/ports              988M   195G   988M  /usr/ports
zroot/usr/src                 88K   195G    88K  /usr/src
zroot/var                   1.93M   195G    88K  /var
zroot/var/audit               88K   195G    88K  /var/audit
zroot/var/crash               88K   195G    88K  /var/crash
zroot/var/log               1.45M   195G  1.45M  /var/log
zroot/var/mail               112K   195G   112K  /var/mail
zroot/var/tmp                120K   195G   120K  /var/tmp
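The reason those /var pieces stay in the root dataset is boot environments; bectl from the base system is the quick way to see and use them (the environment name below is just an example).
Code:
# list existing boot environments (the zroot/ROOT/* datasets above)
bectl list

# create a new one before an upgrade and activate it for the next boot
bectl create 13.1-pre-upgrade
bectl activate 13.1-pre-upgrade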
 
13.1-RELEASE
Code:
find /usr/src -type f -iname "*ufs*"   --> 52 files
find /usr/src -type f -name "*ufs*" -exec du -ch {} + | grep total$   --> 684 kB

find /usr/src -type f -iname "*ffs*"   --> 86 files
find /usr/src -type f -name "*ffs*" -exec du -ch {} + | grep total$   --> 1.5 MB

find /usr/src -type f -iname "*zfs*"   --> 1,065 files
find /usr/src -type f -name "*zfs*" -exec du -ch {} + | grep total$   --> 15 MB
 
You can also install version 2.1 on FreeBSD 12 and do a performance comparison. There are few or no benchmarks to be found that make that comparison. All I can say is that it seems version 2.1 is going to be faster than the version FreeBSD 12 uses by default:

OpenZFS 2.1 performance and reliability improvements

Improved zfs receive performance with lightweight write: This change improves performance when receiving streams with small compressed block sizes.

Distributed RAID (dRAID) is an entirely new vdev topology we first encountered in a presentation at the 2016 OpenZFS Dev Summit.
In the chart at the top of that article, we can see that, in a pool of ninety 16TB disks, resilvering onto a traditional, fixed spare takes roughly 30 hours no matter how the dRAID vdev is configured, but resilvering onto distributed spare capacity can take as little as one hour. The fast resilvering is fantastic, but dRAID takes a hit in both compression levels and some performance scenarios due to its necessarily fixed-length stripes.
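For anyone curious how that topology is spelled on the command line, a dRAID vdev is declared roughly like this; the pool name, disk names and exact layout are illustrative only, so check zpoolconcepts(7) before copying anything.
Code:
# draid<parity>[:<data>d][:<children>c][:<spares>s]
# single parity, 4 data disks per redundancy group,
# 11 children in total, 1 distributed spare
zpool create tank draid1:4d:11c:1s da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10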


Alexander Motin and other OpenZFS developers are currently working on various micro-optimizations in areas like atomics, counters, scalability and memory usage. Release candidate testers already report improved performance compared to OpenZFS 2.0 and previous releases.
Voltaire: summarizing your posts, we could conclude that you think ZFS is the best of all possible worlds. Sorry :cool:
 