All of this is just a taste of the learning curve and complexity I mentioned. If you want redundancy on a single disk, you need to increase the "copies" property, and doing that eats up more disk space. Now you need to monitor your disk space more closely, since every dataset---including snapshots and clones---will take up twice the space (zvols are the only exception). You could of course use compression, but that only helps with certain types of files. (To put it plainly, compression won't do much for the sorts of files that eat up the greatest amount of disk space, since large media files and archives are usually compressed already.)
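For concreteness, here is a minimal sketch of what that looks like on the command line. The pool and dataset names ("tank/home") are placeholders, not anything from a real setup:

    # Keep two copies of every block in this dataset (applies to newly written data only).
    zfs set copies=2 tank/home
    zfs get copies tank/home

    # Compression is cheap to try, but it only pays off for compressible data.
    zfs set compression=lz4 tank/home
    zfs get compressratio tank/home

Note that raising "copies" doesn't rewrite existing blocks; only data written after the change gets the extra copies.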
So now, to begin with, you're filling your disk twice as fast. How much data do you have on the disk now, and how much do you expect to write in the foreseeable future? If you set up automated snapshots, that disk will keep filling on its own as you manipulate your data; the simple act of copying a directory can double or triple the amount of space it consumes. In some cases, you now have three (or four) copies instead of merely two (or three). If your disk is already over 50% full, you might have to pick and choose which data you protect with copies={2,3}. That means deciding which directories you want multiple copies of and which you consider more "expendable," and then setting up separate filesystems for each. Which in turn makes the output of zfs list longer and harder to parse, and increases the number of filesystems you need to configure for automated snapshots, which in turn increases the number of snapshots on your system. That makes the list of snapshots you have to read through when you're hunting for what's eating space even longer, and it complicates backup/restore procedures... And so on.
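A rough sketch of what that splitting looks like in practice; again, the dataset names here are invented for illustration:

    # One dataset for data worth protecting, a cheaper one for "expendable" files.
    zfs create -o copies=2 tank/documents
    zfs create -o copies=1 tank/scratch

    # The lists you now have to keep an eye on.
    zfs list -o name,used,avail,copies
    zfs list -t snapshot -o name,used -s used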
This isn't as horrible as I might make it sound. It is manageable. But if you use ZFS on a one-disk desktop or laptop, or in a low-capacity mirror, and make use of even a small number of its great features, you absolutely will spend at least a little more time managing your storage than you ever have before. With a traditional filesystem and a typical desktop workflow, a 1 TB disk gives you enough space that you don't need to think about it. The amount of data you store and the amount of storage space consumed is 1:1; if your disk is half full now and you add a few gigabytes, so what? ZFS changes that---each of its features eats up more space as time goes on, and predicting how much space a given operation will consume isn't really feasible. And all this for potentially no great benefit---are you "protecting" files that you'll probably end up deleting anyway? Does the data you actually want to protect with ZFS take up a mere 10% of your total disk space? How many times have you permanently lost a significant amount of important data to filesystem or disk failure, how often do you expect that to happen in the future, and how is a more complex ZFS setup a better way to prevent such loss than routine backups?
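If you do go down this road, ZFS's space accounting at least gives you something to query when the numbers stop adding up. A couple of commands I'd reach for (pool and dataset names again hypothetical):

    # Where the space is actually going: datasets vs. snapshots vs. children.
    zfs list -o space -r tank

    # How much space snapshots alone are holding onto for one dataset.
    zfs get usedbysnapshots tank/documents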
It's not a trivial concern, and so the question "Should I use ZFS or UFS?" isn't a simple matter of taste or of one being objectively superior to the other. My personal opinion: if you don't know whether ZFS would be useful in your situation, it probably isn't.