The installer favors the adoption of ZFS instead of UFS...

2020 is not exactly 15 years ago... 😂
ZFS was introduced to FreeBSD in 2008 with FreeBSD 7.0; it was made the "default" on supported systems three years ago. But I believe Alex Seitsinger was referring to ZFS as a technology, not to the length of time it had been the "default"...

So it's not yet hassle-free, is it? That may have to do with the tools and docs too. I noticed that the terms used in ZFS are not always very comprehensible. For instance, the term 'resilver' in case of a new disk. Are we going to plate that thing with silver or is it about (re)initializing it? Same with 'scrub'. With water and a brush or is it just a file system check? Many terms are far from self-explanatory, hence users will have to read all the docs. They could have made things easier.
ZFS was designed as an enterprise filesystem, for people who research their requirements and plan accordingly. The biggest downfall I've seen (across FreeBSD, FreeNAS, and Linux) is that people do stuff (like setting up specific RAID layouts) without really thinking about what they want to do, and are then shocked that ZFS doesn't suddenly perform magic, because they never actually gathered their requirements, planned, and executed.
That being said, some of the biggest of these issues are being worked on, and it's improving all of the time.

Letting ZFS take over the entire disk was the best decision I made...I already forgot everything I knew about disk partitioning, and I'm on cloud 9 from that! 😂
It doesn't sound like you're using RAID of any kind from one of your other posts. But just be aware that disks are often not exactly the same size, so if in the future you try to mirror your ZFS disk and the new disk is slightly smaller, you will be unable to mirror it. Usually it's recommended to make a single partition that's a few megabytes smaller than the disk to make 100% sure that you can add/replace a mirror at a later date.
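Roughly, what that looks like in practice (the device name ada1, the size, and the pool/label names here are placeholders for illustration only, not a recipe):

  gpart create -s gpt ada1
  gpart add -t freebsd-zfs -a 1m -s 931g -l mirror1 ada1    # stop a little short of the disk's full capacity
  zpool attach tank ada0p3 gpt/mirror1                      # attach the new partition as a mirror of the existing device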

Apart from snapshots, do you use other ZFS-features like compression, encryption and raid?
I've not tried encryption yet, but have used snapshots (as well as things that rely on snapshots/snapshotting like Boot Environments, clones, send/receive), compression, raid, deduplication (a long time ago), quotas... The docs are good, when I need to do something I find the man pages pretty good, and the books by Michael Lucas are also great.
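To give a flavour of the snapshot-based features (the pool, dataset and boot environment names below are made up purely for illustration):

  zfs snapshot tank/home@2024-01-01                          # point-in-time snapshot
  zfs send tank/home@2024-01-01 | zfs receive backup/home    # replicate it to another pool
  bectl create pre-upgrade                                   # new boot environment before an upgrade
  bectl activate pre-upgrade                                 # make it the default at next boot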
That all being said, I learnt ZFS from people that worked on it in Solaris while I was at university. I count myself lucky to have learnt it so early from people that knew it intimately - it's just a shame I've used it mostly at home, on account of working mainly in Linux shops that tend to mandate whatever the default file system is on whatever Linux distro we use.

Apart from snapshots I would not be using most of these features. Compression allows for efficient use of disk space, which is a good thing. OTOH, it won't speed things up, and in the long run the additional electricity cost of (de)compression may set you back more than buying a bigger disk just once. It all depends on your needs.
As mer said, typically it is worth it. This article has some nice explanations on how compression works, and it's important to note that ZFS doesn't blindly try to compress everything.
The article sets it up and explains it better than I could, but if ZFS can't get a good enough compression ratio then the data is stored in its original form and incurs no additional CPU overhead.
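If you want to try it, turning compression on and checking what it actually achieves is just (the dataset name is only an example):

  zfs set compression=lz4 tank/data
  zfs get compression,compressratio tank/data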
 
Does decompression of a ZFS-compressed 5 GB dataset take hours on a Threadripper? What's the going rate for a kilowatt-hour in your country? A Threadripper requires at most 280 watts to run. The math is messed up, sorry to say.
There are other reasons to use compression such as being limited by the number of disks you can fit into a chassis.
Other scenarios include application specific machines. I have a few datasets with compression ratios in the neighborhood of 20x.
Another reason can be slow I/O: it's probably not a common scenario, but if you're dealing with a system which has slow I/O (such as on bulk cold storage), compression allows you to reduce the number of IOPS and provides faster data access (if the data is reasonably compressible).
 
Of course, UFS' SU-J offers a solution (considered a "better" one than the separate GEOM journaling offered earlier), and from all I know it gets the job done.

In SU+J the protection against corruption comes from soft-updates. The minimal journalling in the +J part just speeds up the time taken to recover lost storage, reducing a foreground fsck preen from hours to seconds. gjournal is full data journalling with optional journal checksumming. I'd be interested to know why the former is considered better.
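For reference, checking and enabling SU+J on an existing UFS filesystem looks roughly like this (the device name is a placeholder, and the filesystem needs to be unmounted for tunefs):

  tunefs -p /dev/ada0p2          # print the current soft-updates/journaling settings
  tunefs -j enable /dev/ada0p2   # enable soft-updates journaling (SU+J)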

Personally, I wouldn't want to run ZFS without redundancy (or at least frequent full back-ups) because of the way it amplifies corruption - turning a bit-flip into a lost file.

Using gmirror on a new install is a wasted opportunity IMO.
 
Personally, I wouldn't want to run ZFS without redundancy (or at least frequent full back-ups) because of the way it amplifies corruption - turning a bit-flip into a lost file.
One "nice" thing is you can get such redundancy on a single disk - this isn't redundant against disk failure, but for the example you give you can ask ZFS to store multiple copies of the data so it can heal a bad copy later.

But also, while ZFS will tell you that the file is corrupt, most other file systems don't. So if the software you are using to read/manipulate that file doesn't figure out it's corrupt, you could unknowingly be using and relying on corrupted data that you think is fine.
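When ZFS does detect corruption it will name the affected files, e.g. (pool name is just an example):

  zpool status -v tank    # shows checksum error counts and lists files with permanent errors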
 
This thread started because for my second installation I said to myself: «if I use mirror I am going to have less space than before (I had the two disks striped), and besides bectl I hardly use any other features». I am very bad at backups. 😩

So naively I expected to find the same functionality in the UFS menu, and when I didn't find anything and some dude redirected me to the handbook to learn how to achieve that (while in the ZFS installer it is just a couple of Enters), it wasn't very hard to go back and select ZFS instead. 😅

At this rate you could directly remove the entire UFS entry... 😝
 
In SU+J the protection against corruption comes from soft-updates.
Depends on how you put it. Soft updates ensure that the only corruptions which can occur are ones fsck can reliably repair ... that's how I would put it.
The minimal journalling in the +J part just speeds-up the time taken to recover lost storage, reducing a foreground fsck preen from hours to seconds.
Which is because the only thing necessary then is a journal replay, like with any journaling implementation. Some filesystems do this themselves on mount, ufs relies on fsck to do it; in any case it isn't a real file system check/repair any more.
gjournal is full data journalling with optional journal checksumming. I'd be interested to know why the former is considered better.
For a technically detailed answer, some filesystem expert should probably speak up ... anyways, very roughly, the fact that gjournal works on the block layer (with some help from the filesystem, otherwise it couldn't even know what would be an inconsistent intermediate state) leads to some drawbacks. Journaling integrated with the filesystem is the more efficient design.
Personally, I wouldn't want to run ZFS without redundancy (or at least frequent full back-ups) because of the way it amplifies corruption - turning a bit-flip into a lost file.
Uhm, it only does that for errors it can't automatically correct? :-/ A single bit-flip will be corrected, though still reported. In any case, it avoids accidentally working with corrupted data, which could have much worse effects than just losing a file.
 
At this rate you could directly remove the entire UFS entry... 😝

This is what bothers me. This is nothing like choosing a political party or favourite breakfast cereal, where it's all one thing or all the other.

Different people prefer, are used to, or are comfortable with ZFS and/or UFS filesystems, for different types of machines serving different purposes.

There's absolutely no need to consider removing support, or access to the wealth of good documentation or tools for UFS just because more people now choose ZFS from bsdinstall.

There's a more general tendency where some developers and others running the latest kit talk about 5 yo systems - using the deplorable M$ "legacy" put-down - as if they were medieval.

Then there are old farts like me and quite a few others, who may have surfed the bow wave 20y ago and are now content to keep using 10 yo laptops that still work fine, like my 2002 Thinkpad T23 which only died in 2021, its 40GB HDD still working fine in a USB enclosure.

bsdinstall on UFS also still works fine. My refurb T430s has a shrunk Win10, a common msdosfs slice and a freebsd slice with multiple partitions, set up with bsdinstall's guided UFS installation without drama, with boot0cfg as boot manager.
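If it helps anyone, installing boot0 as the boot manager is a single command (ada0 here is a placeholder for the actual disk):

  boot0cfg -B ada0    # write the boot0 interactive boot manager into the MBR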

Same story with the Handbook; please let's not toss out good documentation on many issues where choices like ZFS|UFS, GPT|MBR or which desktop depend on equipment, purpose, experience and preference - not One Size Fits All.

[/rant]
 
It doesn't sound like you're using RAID of any kind from one of your other posts. But just be aware that disks are often not exactly the same size, so if in the future you try to mirror your ZFS disk and the new disk is slightly smaller, you will be unable to mirror it. Usually it's recommended to make a single partition that's a few megabytes smaller than the disk to make 100% sure that you can add/replace a mirror at a later date.
Yep, this is what I've been doing. I think with SSDs, having a little bit of unused space may also help with the wear-levelling algorithms.

I ran across this shortly after it was written and pretty much have followed the principle since:
<http://www.freebsddiary.org/zfs-with-gpart.php>

Edit:
I agree with smithi in post 32.
 
I don't, I think it's full of unfounded fears.

Regarding "extra features" in the installer, it's not like they were there previously. Supporting some "advanced" configurations directly from the menus is nice of course (just making it more comfortable), and sure it would be nice to add some more using UFS. If someone would do some work here, I'm pretty sure nobody would be opposed to merging a patch. But ZFS as the "default" choice still makes sense, and putting more effort in making that more comfortable also makes sense, because more people will benefit from it: While there are well known (more or less?) scenarios where UFS is the preferred choice (like very limited system resources or some specific workload that performs better with UFS), for most users, ZFS would be the better choice.

Regarding "legacy", nobody ever called UFS legacy and I'm pretty sure nobody will any time soon. It's a whole other story than e.g. MBR, which of course is legacy, there's really no other word to describe it. It was designed for classic x86 PCs. It offers a whole 512 bytes(!) to hold both the boot code (which by definition must be x86 real-mode code) and a fixed-size partition table with exactly 4 entries. For decades, no system can boot any more directly from MBR, all it's "good for" nowadays is hold some stub chainloading the "real" boot-code from some partition, so this has been a hackish workaround for ages. It's still supported for old machines that can't do anything else (and incidently, I need it on m old desktop, which offers UEFI, but buggy enough so it doesn't succeed booting FreeBSD). Thankfully, you can at least combine booting from MBR with a GPT for partitioning, the partition table in the MBR then just holds some dummy "protective" data to mark the disk space used.
 
zirias@ I should have been more specific. I don't disagree with what you've written, but I agree with this part:
There's absolutely no need to consider removing support, or access to the wealth of good documentation or tools for UFS just because more people now choose ZFS from bsdinstall.
 
… compare the number of ZFS-related issues to the number of UFS-related. …

Many more eyes on ZFS.

… I ran across this shortly after it was written and pretty much have followed the principle since.
<http://www.freebsddiary.org/zfs-with-gpart.php> …

From <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=261212#c21>:

  • if FreeBSD-provided advice (e.g. partitioning) will vary from the OpenZFS recommendation to use a whole disk (not a partition), then the variation should be explained.

dvl@ not for you to answer, just FYI.
 