Other ZFS and UFS differences

And for comparison with UFS: it's a solid "classic" filesystem. It provides journaling to protect data in case of crashes, power outages and the like. Very roughly speaking, it's comparable to Linux's ext4. It does NOT provide built-in RAID, checksumming, datasets, virtual volumes, snapshots, clones and all the other things ZFS can do.

So, when to still prefer UFS? IMHO two possible reasons:
  • You don't have the RAM needed for ZFS's ARC to work well. A rule of thumb I've often seen for a recommended minimum is 1 GB of RAM per TB of storage.
  • You have a special workload that performs much better on UFS. This should be pretty rare, but might happen.
 
They are two different filesystems, like apples and pears.

From the Wikipedia page on ZFS:
Copy-on-write transactional model
ZFS uses a copy-on-write transactional object model. All block pointers within the filesystem contain a 256-bit checksum or 256-bit hash (currently a choice between Fletcher-2, Fletcher-4, or SHA-256)[52] of the target block, which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, then any metadata blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and ZIL (intent log) write cache is used when synchronous write semantics are required. The blocks are arranged in a tree, as are their checksums (see Merkle signature scheme).
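As a small illustration of the checksumming described above: the algorithm is a per-dataset property you can set and inspect yourself. A minimal sketch, assuming a hypothetical pool named tank with a dataset tank/data:

Code:
# the checksum algorithm is a per-dataset property (fletcher4 is the usual default)
zfs set checksum=sha256 tank/data
# confirm what is in effect for this dataset
zfs get checksum tank/data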

From the Solaris documentation on UFS:
UFS On-Disk Format
UFS is built around the concept of a disk's geometry, which is described as the number of sectors in a track, the location of the head, and the number of tracks. UFS uses a hybrid block allocation strategy that allocates full blocks or smaller parts of the block called fragments. A block is a set of contiguous fragments starting on a particular boundary. This boundary is determined by the size of a fragment and the number of fragments that constitute a block. For example, fragment 32 and block 32 both relate to the same physical location on disk. Although the next fragment on disk is 33 followed by 34, 35, 36, 37 and so on, the next block is at 40, which begins on fragment 40. This is true in the case of 8-Kbyte block size and 1-Kbyte fragment size, where 8 fragments constitute a file system block.
On-Disk UFS Inodes
In UFS, all information pertaining to a file is stored in a special file index node called the inode (except for the name of the file, which is stored in the directory). There are two types of inodes: in-core and on-disk. The on-disk inodes, as the name implies, reside on disk, whereas the in-core inode is created only when a particular file is opened for reading or writing.
The on-disk inode is represented by struct icommon. It occupies exactly 128 bytes on disk and can also be found embedded in the in-core inode structure.
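To see those block/fragment parameters on a real filesystem, something like the following should work on FreeBSD; /dev/ada1p1 is just an assumed spare partition, not a specific recommendation:

Code:
# create a UFS filesystem with 8-Kbyte blocks and 1-Kbyte fragments (the ratio used in the example above)
newfs -b 8192 -f 1024 /dev/ada1p1
# dump the superblock: shows bsize, fsize, fragments per block, inode counts, etc.
dumpfs /dev/ada1p1 | head -n 20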
 
The main difference is that with ZFS you can know whether your data is corrupt or not.
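For example, a scrub re-reads every block and verifies it against its checksum (zroot is the FreeBSD installer's default pool name; yours may differ):

Code:
zpool scrub zroot        # walk all data and verify every block's checksum
zpool status -v zroot    # shows scrub progress/result and lists any files with errors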

Short version: if you have 4 GB of RAM or more, go with ZFS.
It's like comparing a baseball bat (UFS) to an ICBM (ZFS).
 
A load of theory and, IMO, nothing really useful.

The main difference between ZFS and UFS is that ZFS allows you to use several virtual partitions (the official term being datasets), which provide the often-needed separation between different sections of the system. Unlike with UFS, though, you never risk wasting disk space, because from a physical perspective you're still using one main filesystem (the so-called "pool").

Example: It's common practice to separate /var from the main system so that you don't risk logfile or database corruptions whenever a dumb user tries to fill up your system (this is also why you'd normally separate /home but...).

So what would happen if you dedicated 5 GB for /home, 2 GB for /var, and after a month of usage you suddenly realize that your users only gobble up 2 GB tops, whereas some of your system databases expand quite heavily?

There's not much you can do on UFS in such a situation, not without taking down the system and trying to change the whole partition table. On ZFS this wouldn't matter because although /home and /var are separate they'd still use the same ZFS pool. So you don't risk wasting any storage space.
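A rough sketch of how that looks in practice, assuming the FreeBSD installer's default dataset names zroot/var and zroot/usr/home (adjust to your own layout):

Code:
# cap /home at 5G and guarantee /var at least 2G, all from the same shared pool
zfs set quota=5G zroot/usr/home
zfs set reservation=2G zroot/var
# a month later, change your mind - no repartitioning, no downtime
zfs set quota=none zroot/usr/home
zfs set reservation=10G zroot/var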

This is, IMO, the main reason it makes sense to use ZFS instead of UFS. Everything else is icing on the cake.
 
Following is all my opinion, based on my own experience with FreeBSD, UFS and ZFS. Feel free to agree or disagree.

Another item on the practical side of the ledger is that with ZFS you get Boot Environments (BEs). If you've ever stuffed up a system upgrade, ZFS BEs show you how system upgrades are meant to be: simply reboot the system, stop at the boot loader, select the previous BE, continue booting, and you are back where you were before starting the upgrade. You can then get rid of the failed upgrade, or temporarily mount it using bectl or beadm and try to fix it.
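The day-to-day commands are simple; a minimal sketch with bectl, where the BE name pre-upgrade is just an example:

Code:
bectl create pre-upgrade       # snapshot the current boot environment before upgrading
bectl list                     # list BEs; the Active column shows what's running / what boots next
bectl activate pre-upgrade     # roll back: boot the old environment on next reboot
bectl mount pre-upgrade /mnt   # or temporarily mount a BE to poke at it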
As pointed out above, ZFS likes RAM, the more the better. It also works better on 64-bit systems.

UFS has the advantage of having been around the block, lots of times. The fact that it was able to be tweaked (instead of trashed) as devices grew bigger is a huge testament to the original design. How many things computer-related, from back in the 1980s, still have the original developer(s) keeping an eye on them? Kudos to Kirk for UFS. UFS has certainly proven to be a robust and safe (from a data perspective) filesystem. With soft updates and journaling, it can perform very well for a lot of workloads. With the GEOM framework you can create mirrors and other RAID types, achieving rough parity with native ZFS features. The only downside (for me) is the lack of BEs, but there are folks working on that (if you go to the Blog section of this website I think there are some links).
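For example, a two-disk UFS mirror with gmirror looks roughly like this (ada1 and ada2 are assumed spare disks; don't run this on disks holding data):

Code:
gmirror load                       # load the GEOM mirror module (geom_mirror.ko)
gmirror label -v gm0 ada1 ada2     # create a mirror named gm0 from the two disks
newfs -U /dev/mirror/gm0           # put UFS with soft updates on the mirror
gmirror status                     # check the mirror's health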

Shorter story, in practical terms:
ZFS if you have more than 4 GB of RAM, a 64-bit CPU and bigger disks. Most laptops today will meet these minimum specs.
UFS for truly embedded systems, or if you are truly limited on memory and aren't using petabytes of storage.
 
fraxamo Yep, that was one of the links I was talking about. It talks about more of a traditional Embedded A/B setup for doing upgrades, which is perfectly workable. Most people don't need to have more than a "current version" and "one back", even with ZFS BEs. Keeping too much old stuff around is clutter and can cause problems.

Thanks for posting that.

Oh, if you're a fan of Michael W Lucas's books, his two ZFS books with Allan Jude and his Storage Essentials book have all kinds of good information about UFS and ZFS, the differences, best uses and best practices.
 
With UFS, you'd have to plan your partition size and location at install time, and you're generally stuck with that (unless you plan to re-install from scratch).
With ZFS, you have datasets instead of partitions. You can adjust min/max sizes any time after install, and location/offset limitations/presets just aren't there.
 
zfs allows you to do something like this (actual log on my desktop):

Code:
cd /root

truncate -s 3G poolfile

zpool create newpool /root/poolfile

zpool status newpool
  pool: newpool
state: ONLINE
config:

    NAME              STATE     READ WRITE CKSUM
    newpool           ONLINE       0     0     0
      /root/poolfile  ONLINE       0     0     0

zfs create newpool/my_dataset

df -H newpool
Filesystem    Size    Used   Avail Capacity  Mounted on
newpool       2.8G     98k    2.8G     0%    /newpool

zfs list|grep new
newpool                          504K  2.62G       96K  /newpool
newpool/my_dataset                96K  2.62G       96K  /newpool/my_dataset

truncate -s 3G pool_mirror

zpool attach newpool /root/poolfile /root/pool_mirror

zpool status newpool
  pool: newpool
state: ONLINE
  scan: resilvered 780K in 00:00:00 with 0 errors on Fri Apr 30 22:21:24 2021
config:

    NAME                   STATE     READ WRITE CKSUM
    newpool                ONLINE       0     0     0
      mirror-0             ONLINE       0     0     0
        /root/poolfile     ONLINE       0     0     0
        /root/pool_mirror  ONLINE       0     0     0

errors: No known data errors

zpool destroy newpool

ls -ltr pool*
-rw-r--r--  1 root  wheel  3221225472 Apr 30 22:24 poolfile
-rw-r--r--  1 root  wheel  3221225472 Apr 30 22:24 pool_mirror

rm pool*

etc...
 
Argentum: Isn't this a bit of overkill for an answer? I'd suggest hiding it in a spoiler. I think it would make the forums a bit more readable.
 
A lot, but trying to keep it short and simple.

  • Data integrity features, especially targeting bitrot.
  • Compression.
  • Convenience (gone are the days of managing partition allocations).
  • Flexibility: so much is tunable with ZFS, including what data gets cached and how it's cached (a few examples below).
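A few illustrative one-liners for those points (tank/data and tank/db are made-up dataset names):

Code:
zfs set compression=lz4 tank/data      # transparent per-dataset compression
zfs get compressratio tank/data        # see how much space it actually saves
zfs set primarycache=metadata tank/db  # only cache metadata in ARC for this dataset
zfs set atime=off tank/data            # one of many other tunable properties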
 
Beastie7 thanks. I hope to find it.



To the opening post:

What is the main difference of UFS and ZFS. …

<https://askanydifference.com/differ...en_Zettabyte_File_System_and_Unix_File_System> – either the nameless author(s) were drunk, or someone dropped a doughnut in the artificial intelligence that wrote the page.

More soberly, from How Solaris ZFS Cache Management Differs From UFS and VxFS File Systems:

… ZFS' use of kernel memory as a cache results in higher kernel memory allocation as compared to UFS and VxFS filesystems. Monitoring a system with tools such as vmstat would report less free memory with ZFS and may lead to unnecessary support calls.

(I do have an Oracle sign-in, but not enough to view full details.)

Use of memory for ZFS is sometimes misunderstood. Plenty of discussion elsewhere, however XenForo (here) can't search for phrases such as ZFS, and Google doesn't find what's required.

<https://old.reddit.com/comments/pvsu2w/-/hecksww/?context=1> re:
  • vfs.zfs.arc.sys_free
  • vfs.zfs.arc_free_target
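If you want to see how much memory the ARC is actually using, or cap it, a minimal sketch follows; on recent FreeBSD the ceiling is vfs.zfs.arc.max (older releases spell it vfs.zfs.arc_max):

Code:
sysctl kstat.zfs.misc.arcstats.size    # current ARC size in bytes
sysctl vfs.zfs.arc.max                 # configured ceiling (0 = auto-sized)
sysctl vfs.zfs.arc.max=2147483648      # example: cap the ARC at 2 GiB
# put the last line's value in /etc/sysctl.conf or /boot/loader.conf to make it persistent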

For me, currently, one of the main benefits of ZFS is use of inexpensive USB flash drives for L2ARC, to reduce the bottleneck that would otherwise occur with a single hard disk drive in a notebook.
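Adding such a device is a one-liner (da0 is an assumed USB stick; check geom disk list first, since anything on it is lost):

Code:
zpool add zroot cache da0    # attach the stick as an L2ARC cache device
zpool iostat -v zroot        # it now shows up under a separate 'cache' section
zpool remove zroot da0       # cache devices can be removed again at any time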
 
Journaling vs. Soft Updates, a Usenix 2000 paper …
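For reference, soft updates journaling (SU+J) on UFS is toggled with tunefs on an unmounted filesystem; /dev/ada0p2 is just an assumed device name:

Code:
tunefs -p /dev/ada0p2          # print current tuning, incl. soft updates / journaling state
tunefs -j enable /dev/ada0p2   # enable soft updates journaling (filesystem must be unmounted)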

Also, to help explain what's missing from the FreeBSD Handbook:
I do not have anything against UFS, as I stated here - it still has its uses - https://is.gd/bsdstg - but you have to be REALLY low on memory to not use ZFS :)

For example, I have run a 2 x 2TB ZFS mirror with 512 MB RAM, alongside other services, for years, and it was stable as a rock.

People forget that ZFS without RAM is just as fast as the storage devices below it - with RAM it's just faster, thanks to the ZFS ARC cache.

👍 and L2ARC ("not an ARC") is a joy.

PS for other readers,
 
… file systems: Many of them have an identifiable "central father figure" (say Ted Ts'o, Kirk McKusick, Chris Mason, Sage Weil, ...),

Don Brady was:
(image attachment)

… they also have a deep bench of people who understand the "why" and the design. This makes their development process scalable.



Beyond the creators of ZFS: there's widespread understanding.

Not everyone will have the breadth and depth of understanding of a creator, but there's no lack of depth in specific areas.

… All eyes are at ZFS, which is a cop-out. …

👎

Not all.

ZFS is not a cop-out.
 