ZFS and UFS differences

fbsd_

Member

Reaction score: 2
Messages: 38

What is the main difference between UFS and ZFS? I couldn't find it in the Handbook.
 

Zirias

Daemon

Reaction score: 1,188
Messages: 2,161

And for comparison with UFS: this is a solid "classic" filesystem. It does provide journaling, to protect data in case of crashes, power outages and the like. Very roughly speaking, it's comparable to Linux's ext4. It does NOT provide built-in RAID, checksumming, datasets, virtual volumes, snapshots, clones, or all the other things ZFS can do.
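On FreeBSD, that journaling means soft updates journaling (SU+J). A minimal sketch of enabling it; the device names are placeholders, and tunefs(8) needs the filesystem unmounted:

Code:
# Enable soft updates journaling on an existing UFS filesystem
tunefs -j enable /dev/ada0p2

# Or create a fresh UFS filesystem with soft updates journaling from the start
newfs -j /dev/ada0p3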

So, when to still prefer UFS? IMHO two possible reasons:
  • You don't have the RAM needed for ZFS's ARC to work well. A rule of thumb for a recommended minimum I've often seen is 1GB per TB of storage (see the sketch after this list).
  • You have a special workload that performs much better on UFS. This should be pretty rare, but might happen.
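If you want to see how much the ARC is actually using, or cap it on a small machine, something like this should work (the 512M limit is only an example value; on recent FreeBSD the tunable is spelled vfs.zfs.arc.max, on older releases vfs.zfs.arc_max):

Code:
# Current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.size

# Configured maximum
sysctl vfs.zfs.arc_max

# To cap it, set a limit in /boot/loader.conf and reboot:
# vfs.zfs.arc_max="512M"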
 

Alain De Vos

Aspiring Daemon

Reaction score: 256
Messages: 984

They are two different filesystems, like apples and pears.

From the wikipage of ZFS,
Copy-on-write transactional model
ZFS uses a copy-on-write transactional object model. All block pointers within the filesystem contain a 256-bit checksum or 256-bit hash (currently a choice between Fletcher-2, Fletcher-4, or SHA-256) of the target block, which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, then any metadata blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and ZIL (intent log) write cache is used when synchronous write semantics are required. The blocks are arranged in a tree, as are their checksums (see Merkle signature scheme).
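To make that concrete: the checksum algorithm is an ordinary per-dataset property, and a scrub re-verifies every block in the pool against its checksum. A short sketch, with pool/dataset names as placeholders:

Code:
zfs get checksum pool/dataset
zfs set checksum=sha256 pool/dataset

# Walk the whole pool and verify every block against its checksum
zpool scrub pool
zpool status pool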

From Solaris on UFS,
UFS On-Disk Format
UFS is built around the concept of a disk's geometry, which is described as the number of sectors in a track, the location of the head, and the number of tracks. UFS uses a hybrid block allocation strategy that allocates full blocks or smaller parts of the block called fragments. A block is a set of contiguous fragments starting on a particular boundary. This boundary is determined by the size of a fragment and the number of fragments that constitute a block. For example, fragment 32 and block 32 both relate to the same physical location on disk. Although the next fragment on disk is 33, followed by 34, 35, 36, 37 and so on, the next block is at 40, which begins on fragment 40. This is true in the case of an 8-Kbyte block size and 1-Kbyte fragment size, where 8 fragments constitute a file system block.
On-Disk UFS Inodes
In UFS, all information pertaining to a file is stored in a special file index node called the inode (except for the name of the file, which is stored in the directory). There are two types of inodes: in-core and on-disk. The on-disk inodes, as the name implies, reside on disk, whereas the in-core inode is created only when a particular file is opened for reading or writing.
The on-disk inode is represented by struct icommon. It occupies exactly 128 bytes on disk and can also be found embedded in the in-core inode structure.
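A small sketch of the fragment/block arithmetic described above, using the 8-Kbyte block / 1-Kbyte fragment numbers from the text (plus ls -i to show a file's inode number):

Code:
# With 8 fragments per block, a block's number is its starting fragment,
# rounded down to a multiple of 8
frags_per_block=8
for frag in 32 33 39 40; do
    echo "fragment $frag lies in block $(( frag / frags_per_block * frags_per_block ))"
done
# -> block 32 for fragments 32, 33 and 39; block 40 for fragment 40

# The inode number backing a file can be shown with:
ls -i /etc/rc.conf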
 

fcorbelli

Member

Reaction score: 31
Messages: 93

The main difference is that with ZFS you can know whether your data is corrupt or not.

Short version: if you have 4GB of RAM or more, go with ZFS.
It's like comparing a baseball bat (UFS) to an ICBM (ZFS).
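In practice, "knowing" looks roughly like this ("tank" is a placeholder pool name):

Code:
# Prints "all pools are healthy" when no errors are known
zpool status -x

# Per-device read/write/checksum error counters, plus the names of any
# files known to be damaged
zpool status -v tank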
 

ShelLuser

Son of Beastie

Reaction score: 2,060
Messages: 3,773

A load of theory and, IMO, nothing really useful.

The main difference between ZFS and UFS is that ZFS lets you use several virtual partitions (the official term is datasets), which provide the often-needed separation between different parts of the system. But unlike with UFS, you never risk wasting disk space, because from a physical perspective you're still using one main filesystem (the so-called "pool").

Example: it's common practice to separate /var from the main system so that you don't risk logfile or database corruption whenever a dumb user tries to fill up your system (this is also why you'd normally separate /home but...).

So what would happen if you dedicated 5GB to /home and 2GB to /var, and after a month of usage you suddenly realize that your users only gobble up 2GB tops, whereas some of your system databases expand quite heavily?

There's not much you can do on UFS in such a situation, not without taking down the system and trying to change the whole partition table. On ZFS this wouldn't matter because although /home and /var are separate they'd still use the same ZFS pool. So you don't risk wasting any storage space.
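A sketch of that scenario with dataset quotas ("zroot" and the sizes are example values; quotas can be changed at any time, on a live system):

Code:
zfs create -o quota=5G zroot/home
zfs create -o quota=2G zroot/var

# A month later: shift the limits around, no repartitioning, no downtime
zfs set quota=2G zroot/home
zfs set quota=5G zroot/var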

This is, IMO, the main reason it makes sense to use ZFS instead of UFS. Everything else is icing on the cake.
 

mer

Member

Reaction score: 23
Messages: 46

Following is all my opinion, based on my own experience with FreeBSD, UFS and ZFS. Feel free to agree or disagree.

Another item on the practical side of the ledger is with ZFS you get Boot Environments (BEs). If you've ever stuffed up a system upgrade, ZFS BEs show you how system upgrades are meant to be.
Simply reboot the system, stop at the boot loader, select the previous BE, continue booting, and you are back where you were before starting the upgrade. You can then get rid of the failed upgrade, or temporarily mount it using bectl or beadm and try to fix it.
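Roughly like this with bectl(8); "pre-upgrade" is just an example name:

Code:
# Before upgrading: snapshot the running system into a new boot environment
bectl create pre-upgrade

# List BEs; the Active column marks what is running (N) and what boots next (R)
bectl list

# If the upgrade goes wrong: activate the old BE and reboot, or mount the
# broken one somewhere to try to fix it
bectl activate pre-upgrade
bectl mount pre-upgrade /mnt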
As pointed out above, ZFS likes RAM, the more the better. It also works better on 64-bit systems.

UFS has the advantage of having been around the block, lots of times. The fact that it could be tweaked (instead of trashed) as devices grew bigger is a huge testament to the original design. How many computer-related things from back in the 1980s still have their original developer(s) keeping an eye on them? Kudos to Kirk for UFS. UFS has certainly proven to be a robust and safe (from a data perspective) filesystem. With soft updates and journaling, it can perform very well for a lot of workloads. With GEOM you can create mirrors and other RAID types, achieving parity with native ZFS features (a sketch follows below). The only downside (for me) is the lack of BEs, but there are folks working on that (if you go to the Blog section of this website I think there are some links).
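For example, a simple UFS mirror with gmirror(8); ada1 and ada2 are placeholder disks whose contents will be destroyed:

Code:
gmirror load
gmirror label -v gm0 /dev/ada1 /dev/ada2
newfs -U /dev/mirror/gm0
gmirror status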

Shorter story, in practical terms:
ZFS if you have more than 4GB of RAM, a 64bit CPU and bigger disks. Most laptops today will meet these minimum specs.
UFS for truly embedded systems, or if you are truly limited on memory and aren't using petabytes of storage.
 

mer

Member

Reaction score: 23
Messages: 46

fraxamo Yep, that was one of the links I was talking about. It describes a more traditional embedded A/B setup for doing upgrades, which is perfectly workable. Most people don't need more than a "current version" and "one back", even with ZFS BEs. Keeping too much old stuff around is clutter and can cause problems.

Thanks for posting that.

Oh, if you're a fan of Michael W Lucas' books, his two ZFS titles co-authored with Allan Jude, plus FreeBSD Mastery: Storage Essentials, have all kinds of good information about UFS and ZFS: the differences, best uses and best practices.
 

astyle

Active Member

Reaction score: 33
Messages: 117

With UFS, you'd have to plan your partition size and location at install time, and you're generally stuck with that (unless you plan to re-install from scratch).
With ZFS, you have datasets instead of partitions. You can adjust min/max size any time after install, and location/offset limitations/presets are just not there.
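For instance, "min" and "max" are just the reservation and quota properties, changeable at any time (zroot/data and the sizes are placeholders):

Code:
zfs set reservation=1G zroot/data   # guarantee at least 1G of pool space
zfs set quota=10G zroot/data        # never let it grow beyond 10G
zfs get reservation,quota zroot/data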
 

Argentum

Well-Known Member

Reaction score: 183
Messages: 395

astyle said:
With UFS, you'd have to plan your partition size and location at install time, and you're generally stuck with that (unless you plan to re-install from scratch). With ZFS, you have datasets instead of partitions. You can adjust min/max size any time after install, and location/offset limitations/presets are just not there.
zfs allows you to do something like this (actual log on my desktop):

Code:
cd /root

truncate -s 3G poolfile

zpool create newpool /root/poolfile

zpool status newpool
  pool: newpool
state: ONLINE
config:

    NAME              STATE     READ WRITE CKSUM
    newpool           ONLINE       0     0     0
      /root/poolfile  ONLINE       0     0     0

zfs create newpool/my_dataset

df -H newpool
Filesystem    Size    Used   Avail Capacity  Mounted on
newpool       2.8G     98k    2.8G     0%    /newpool

zfs list|grep new
newpool                          504K  2.62G       96K  /newpool
newpool/my_dataset                96K  2.62G       96K  /newpool/my_dataset

truncate -s 3G pool_mirror

zpool attach newpool /root/poolfile /root/pool_mirror

zpool status newpool
  pool: newpool
state: ONLINE
  scan: resilvered 780K in 00:00:00 with 0 errors on Fri Apr 30 22:21:24 2021
config:

    NAME                   STATE     READ WRITE CKSUM
    newpool                ONLINE       0     0     0
      mirror-0             ONLINE       0     0     0
        /root/poolfile     ONLINE       0     0     0
        /root/pool_mirror  ONLINE       0     0     0

errors: No known data errors

zpool destroy newpool

ls -ltr pool*
-rw-r--r--  1 root  wheel  3221225472 Apr 30 22:24 poolfile
-rw-r--r--  1 root  wheel  3221225472 Apr 30 22:24 pool_mirror

rm pool*

etc...
 

Argentum

Well-Known Member

Reaction score: 183
Messages: 395

This looks a bit like a loop device.
Actually, you can have ZFS on almost anything but the kitchen sink. It works fine on files, and this is good for VMs. In fact, I have bootable VMs based on files with ZFS on them.
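If you want the loop-device analogy made literal, a backing file can also be attached through md(4), FreeBSD's equivalent; paths and names here are examples:

Code:
truncate -s 3G /root/poolfile
mdconfig -a -t vnode -f /root/poolfile   # prints the new device, e.g. md0
zpool create filepool /dev/md0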
 

astyle

Active Member

Reaction score: 33
Messages: 117

Argentum said:
zfs allows you to do something like this (actual log on my desktop):
[file-backed pool session log, quoted in full above]

Argentum: Isn't this a bit of overkill for an answer? I'd suggest hiding it in a spoiler; I think it would make the forums a bit more readable.
 

chrcol

Well-Known Member

Reaction score: 50
Messages: 471

A lot, but trying to keep it short and simple.

  • Data integrity features, especially targeting bitrot.
  • Compression.
  • Convenience (gone are the days of managing partition allocations).
  • Flexibility: so much is tunable with ZFS, including what data gets cached and how it's cached (a few examples below).
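A few of those knobs as one-liners (tank/data is a placeholder dataset):

Code:
zfs set compression=lz4 tank/data      # transparent compression
zfs get compressratio tank/data        # see what it actually saved

# Control what the ARC caches for this dataset: all, metadata, or none
zfs set primarycache=metadata tank/data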
 