Why I am using UFS in 2021.

I found UFS SU+J unreliable when briefly testing it on 11.0 (fsck(8) repeatedly found serious issues).
There was a bug that triggered such problems under certain circumstances. Kirk fixed it three years ago with this commit.
I particularly remember that commit because that code is one of the very few places where the EDOOFUS error code is used (errno 88 on FreeBSD). :)

UFS with SU+J has been very stable for me, and the performance is pleasant.
 
Why not UFS for the system and ZFS for data (/home, /var)?
This is exactly what I did on my home server: a geom mirror of two smallish SSDs for the boot volume and system directories, and a large ZFS pool for everything else. The beauty of geom mirrors is that I have, in effect, two boot drives. The most I have to do, should one of them fail, is switch to the other one in the BIOS*. It's also a nice place for your ZFS ZIL.
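For reference, here is a minimal sketch of what setting up such a two-disk gmirror(8) boot mirror could look like; the device names ada0/ada1 and the mirror name gm0 are assumptions for illustration, not details from the setup above:

Code:
# hypothetical example: mirror two empty SSDs before installing onto them
$ kldload geom_mirror
$ gmirror label -v gm0 /dev/ada0 /dev/ada1
$ echo 'geom_mirror_load="YES"' >> /boot/loader.conf
# the mirror now shows up as /dev/mirror/gm0 and can be partitioned
# and newfs'd like a single disk; check its health with:
$ gmirror status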

However, I probably would use ZFS for my boot volume too if I were to set that server up today. It was my first experience with ZFS and I wanted to go slow with it because I have Linux LVM PTSD.

* EDIT: Yes, I tested this
 
Why use ZFS, even on a laptop? Reason #1 for me: Checksums. I don't trust any hardware completely, because I know too much about it.
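If checksums are the main draw, the way to actually exercise them is a periodic scrub; a quick sketch (the pool name zroot is an assumption):

Code:
# re-read every block in the pool and verify its checksum
$ zpool scrub zroot
# check progress and any checksum errors that were found
$ zpool status zroot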

But my home server still uses UFS for the root/boot file systems, and ZFS only for /home and various data file systems. Why? When I installed it (about 5-6 years ago), booting from ZFS was still an art form; today it is routine. Just like Jose, my boot/root file systems are on two mirrored SSDs, except that I don't use geom mirrors; instead, I regularly hand-copy them with dd. That is not a good setup, and I don't recommend it, but again, it's for historical reasons.
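For illustration only (and, as said above, not recommended), such a hand-copy between two boot SSDs could look roughly like this; the device names are assumptions, and the target disk gets overwritten completely:

Code:
# copy the first SSD onto the second one, block for block,
# while ada1 is not mounted or otherwise in use (ada0/ada1 are hypothetical names)
$ dd if=/dev/ada0 of=/dev/ada1 bs=1m conv=sync,noerror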
 
The power of ZFS really shows when you have 5 identical disks, which is not a home configuration.
Booting is also easier from UFS.
So I use UFS for root and ZFS for data and external USB disks.
 
Please
  • post a photocopy of your electricity bill in "It's all about jokes, funny pics..."
  • also add an accurate measurement protocol of the noise in your home lab, using reliable instruments from vendors of reasonably good reputation.
Do you also have a Diesel-powered UPS?
Computers are my hobby as well as my profession. I'm lucky in this regard. I spend the money other people would spend on fancy cars, etc. on my power bill and I'm happy about it. Yeah, it's loud in here, but not so loud that it bothers me when I'm listening to something on my headphones.

All my UPS batteries eventually leaked their acid onto my floor, probably due to lack of maintenance. I'm on the same grid as a VA hospital, so I lose power very, very rarely, maybe three times in 25 years. A UPS is a low priority for me.
 
I don't have a single machine right now without such a restriction. The reason is that, in practice, the ARC is somewhat "reluctant" to return memory, and this can impair the performance of other things. It might be fine for a machine used only for storage, or if you have a lot more memory than you need. With an unrestricted ARC, my desktop (8GB RAM) ran into heavy swapping when left running for 2 or 3 days.
I might have to revisit these settings with FreeBSD 13. It turns out OpenZFS's ARC behavior differs a lot from the older implementation:
  • Performance suffers more when the ARC can't get the RAM it needs
  • On the other hand, OpenZFS's ARC returns memory much more quickly
So I have now doubled vfs.zfs.arc_max on my server to 24G, and so far it works flawlessly, with much improved file system performance. I will keep an eye on it for a while (let's see what really happens under memory pressure).
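For anyone wanting to try the same, this is roughly how the ARC cap can be set; the 24G figure simply mirrors the value above (24 GiB = 25769803776 bytes), so adjust it to your own RAM:

Code:
# change it at runtime (should work with OpenZFS on FreeBSD 13)
$ sysctl vfs.zfs.arc_max=25769803776
# make it persistent across reboots
$ echo 'vfs.zfs.arc_max="25769803776"' >> /boot/loader.conf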
 
I installed FreeBSD on a cheap 16G SanDisk stick under VirtualBox using a raw vmdk. I tried it with ZFS and it didn't fail me:

Code:
$ gpart show
=>      40  30031792  ada0  GPT  (14G)
        40      1024     1  freebsd-boot  (512K)
      1064       984        - free -  (492K)
      2048   4194304     2  freebsd-swap  (2.0G)
   4196352  25833472     3  freebsd-zfs  (12G)
  30029824      2008        - free -  (1.0M)

I found that with ZFS and enough RAM (4G is enough), compression lets me install more packages than UFS would. And thanks to ZFS caching, the performance is acceptable for this cheap USB 2.0 stick. I was really surprised how well it served me.
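The compression gain is easy to check; a quick sketch (the pool/dataset name zroot and the lz4 choice are assumptions):

Code:
# enable lz4 compression on the root dataset (child datasets inherit it)
$ zfs set compression=lz4 zroot
# see how much space compression is actually saving
$ zfs get compressratio zroot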

I think there is no use for UFS anymore, except for low-RAM systems like a cheap VPS.
 
ZFS does not play nicely with sendfile(2). It should be avoided for systems that depend on that syscall. Two that I know of are Kafka and Varnish.
That's interesting. Thanks for mentioning that.

I wonder if things have improved with OpenZFS. That said, this is just not going to be an issue on light workloads. I use sendfile() with ZFS all the time through nginx.

Now what was broken was sendfile() on ext2fs in 12.1. It would cause panics very easily.
 
I have been using zfs since its first appearance, and currently still do on Solaris servers. I manage numerous physical and virtual servers; in short, I am a storage manager. I see no real reason not to use zfs if the hardware (especially the RAM) allows it. We all know the reasons, so I won't repeat them. On the backup and security side (which is my area) I can give some suggestions. The mirroring, resilvering, and scrubbing machinery alone is reason enough. Add to that the absence of a scandisk-style check (which actually exists for deduplicated volumes), plus snapshots, etc. As for backup mechanisms, I normally use many of them: zfs replicas, the legendary rsync, zip, and my own fork of a versioned archiving program with advanced copy-integrity checking. In short, zfs is not perfect but, despite everything, I consider it far superior to UFS.
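As a concrete example of the zfs replicas mentioned above, a snapshot plus send/receive to a second pool might look like this (the pool, dataset, and snapshot names are made up for illustration):

Code:
# snapshot the dataset to be backed up
$ zfs snapshot tank/home@2021-07-01
# replicate it to a backup pool (this could also be piped over ssh to another host)
$ zfs send tank/home@2021-07-01 | zfs receive backup/home
# later, send only the changes since the previous snapshot
$ zfs send -i tank/home@2021-07-01 tank/home@2021-07-08 | zfs receive backup/home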

PS: Limiting the maximum size of the ARC is good and right, especially on machines that are used for virtualization. As mentioned above, it really does happen that you get freezes (actually virtual machines being terminated rather than the host system crashing) if you don't prevent memory contention.
 
If anyone is interested, I can write up my experience with BSD and zfs backups, so as to compare ideas and, perhaps, learn a little more.
I'm new to the forum and I don't want to make a gaffe.
Is it usual to continue in a thread like this, or is it better if I open an ad hoc one?
 
I finally found a use case for UFS with SU+J. It's "very old hardware" :D
 
If anyone is interested, I can write up my experience with BSD and zfs backups, so as to compare ideas and, perhaps, learn a little more.
I'm new to the forum and I don't want to make a gaffe.
Is it usual to continue in a thread like this, or is it better if I open an ad hoc one?

Given the topic of this thread, your experiences would fit better in their own thread. I think you could open one in the "Storage" forum. I'd be interested to read it.
 
TL;DR:
I was in the same boat some time back, with all my storage on UFS and using dump for backups because it was painless. But having moved all my services into individual jails, I have moved everything except root (the system) to ZFS.
I am not sure why the OP is so worried about a minute or two of difference between the backup options. I only run my tape backups once a week.

What I found difficult with dump was restoring and analysing the contents of the backup. So I opted for tar, which gives me the flexibility of extracting one or two required files, if needed, into my current working system.
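For example, pulling a couple of files out of a tar backup without restoring the whole thing (the archive path and file names are just placeholders):

Code:
# list the archive to find the files of interest
$ tar -tvf /backup/weekly.tar | grep rc.conf
# extract only those files (paths as stored in the archive) into the current directory
$ tar -xvf /backup/weekly.tar etc/rc.conf etc/fstab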

IMHO, while UFS and dump are great utilities for mostly static systems, for dynamic systems like mine I find that the operational ease of ZFS outweighs the UFS/dump combination.
 