The installer favors ZFS over UFS...

Please don't take this too seriously... 🙏

But it would be nice if, when you select the UFS auto installer, you could select encryption, soft mirror, etc., as you can when you select ZFS... 🤷‍♂️

Which is the first option even though Z is the last letter of the alphabet... 🙄

It looks like there is favoritism toward one filesystem over another... 🤔

Well, that's all... 🤪
 
I'd actually appreciate learning why ZFS is preferred over UFS. I can come up with a few reasons on my own, but I would appreciate a technical or statistical explanation even more. ZFS DOES require more of the host system and ZFS is new enough to warrant complaining about syntax unfamiliarity.
 
Looks like it was changed to be the default in early 2020 (https://reviews.freebsd.org/D23173) because of something in this talk:
View: https://www.youtube.com/watch?v=LVf9TGv7e6c

I haven't listened to the talk, just followed the review through the commit message, over to the Linux Conf AU subsite for 2020 and found the talk.
I suspect you'll learn why it was suggested, though perhaps what is missing is discussion around whether it was a good idea.

I suspect that Auto (ZFS) gives more options for two reasons:
  1. ZFS provides some of these features natively (i.e. without other tooling)
  2. Whoever wrote the Auto (ZFS) part of the installer wanted these features and didn't use UFS enough to want/know how to add comparable configuration
ZFS is new enough to warrant complaining about syntax unfamiliarity
15 years (in FreeBSD) is still new enough? I'd love to know what you consider to be not new! 😜
 
15 years (in FreeBSD) is still new enough? I'd love to know what you consider to be not new! 😜
2020 is not exactly 15 years ago... 😂

Admittedly, I am a ZFS fanboi, but hey, ZFS did solve a LOT of things for me. With UFS (or anything other than ZFS, frankly) I had to spend hours making sure I got my ducks in a row; no need to do that any more, I can get FreeBSD from 0 to boot in 5 minutes.
 
With UFS (or anything other than ZFS, frankly) I had to spend hours making sure I got my ducks in a row
Maybe you should keep your ducks somewhere else.
I never really had problems with UFS (with soft-updates, no journaling), apart from a power outage. Installing is hardly an issue, freebsd-update does a great job. It's in fact the sheer number of ZFS-related issues in this forum that keeps me from trying, so I agree with Alex Seitsinger on that point.
 
I never really had problems with UFS (with soft-updates, no journaling), apart from a power outage.
Something that would never be "acceptable" to me. A simple power-outage corrupting a file system is so last century ...
Of course, UFS' SU+J offers a solution (considered a "better" one than the separate GEOM journaling offered earlier), and from all I know it gets the job done. But I remember it was once broken (IIRC around 11-CURRENT), making corruption more likely. I mention it just to point out that the age of UFS is no guarantee it could never have bugs.

UFS is a good "classic" filesystem, doing the minimum expected nowadays. ZFS is a lot more, and there are tons of use cases for its features: efficient "clean" building with poudriere and boot environments, for example, both only make sense with ZFS' cheap and performant clones of snapshots.
It's in fact the sheer number of ZFS-related issues in this forum that keeps me from trying
Do you have examples? I mean, examples that don't turn out to be "user error"? Of course there can be bugs in ZFS as well (see also above for UFS...), but in my experience, they are reasonably rare.
 
the sheer number of ZFS-related issues in this forum that keeps me from trying
Those ZFS related issues are mostly user errors.

ZFS requires reading the documentation to understand it better. A good start is the ZFS chapter in the handbook, followed by the manuals.

No need to install on bare metal to try it out, install in bhyve or VirtualBox, exercise the examples from the handbook. In time, after being familiar with it, one does not want to miss it.
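For a throwaway sandbox, ZFS can even build a pool out of a plain file, so you don't need dedicated disks inside the VM either. A minimal sketch, assuming root and made-up pool/file names:

```shell
# Throwaway practice pool backed by a plain file (hypothetical names; run as root).
truncate -s 256m /tmp/vdev0          # sparse file acting as a fake disk
zpool create scratch /tmp/vdev0      # pool on top of the file
zfs create scratch/test              # a dataset to play with
zfs snapshot scratch/test@before     # checkpoint it
# ...experiment, break things...
zfs rollback scratch/test@before     # and undo
zpool destroy scratch                # clean up (then remove /tmp/vdev0)
```

The handbook's examples work against a pool like this just as they would on real disks.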
 
I'm going to agree with T-Daemon. I've been using FreeBSD for a while; when ZFS came in around FreeBSD 9, I switched to it for at least my home directories and other data directories. I set up a mirror, have migrated it to larger devices, and am now on 13.x.
ZFS DOES require more of the host system
Yes and no. The more physical RAM you have, the happier ZFS can be, but it is highly dependent on the specific workload. An example is pfSense: a firewall distribution based on FreeBSD that has defaulted to ZFS on its appliances. These appliances are "limited" small physical devices; 2G of RAM is typical. The average iPhone has more storage and memory :)

ZFS on system devices gives you the advantage of Boot Environments, which in my opinion are the best and easiest way to safely do system upgrades.
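A sketch of that upgrade workflow with bectl(8) (the boot environment name is my own invention):

```shell
# Checkpoint the current root into a new boot environment before upgrading.
bectl create pre-upgrade            # hypothetical BE name
freebsd-update fetch install        # upgrade the running system as usual
# If the upgraded system misbehaves, fall back by activating the old BE:
bectl activate pre-upgrade
shutdown -r now                     # reboots into the pre-upgrade snapshot
```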

UFS has been around for a long time, the fact that it's still here is a testament to the quality of the code. Encryption, mirroring are all provided by GEOM classes while in ZFS they are native.
Arguments can be made as to which way is better.

Oh as for documentation:
FreeBSD Mastery: ZFS and FreeBSD Mastery: Advanced ZFS by Michael W. Lucas and Allan Jude are some of the best references around.
 
I like both the UFS and ZFS filesystems, but I have to admit that running UFS doesn't require me to learn "how to use it": it's simple and works OTB. ZFS, on the other hand, is a different beast, and even after 2 years, beyond BEs and snapshots, it feels like I don't even use 20% of what it can do, and I probably never will because I don't need everything it has to offer.

freezr 's idea is not bad TBH, but I really doubt it will ever be implemented; FreeBSD is more about ZFS than UFS.
People come to FreeBSD for the stability, jails, and ZFS.
Maybe I am wrong about this, but I suppose there are more efforts and resources behind ZFS than UFS.
I really do appreciate UFS; it's what I use for my laptop and bhyve VMs (because running ZFS virtualized on top of ZFS feels weird, and also because I don't want to spend too much time on administrative tasks for a VM).
 
A simple power-outage corrupting a file system is so last century ...
In this particular case it happened 5-6 seconds after a bigger ports upgrade finished. A matter of uncommitted soft updates, I suppose. I doubt ZFS would do better in such a case.

Do you have examples? I mean, examples that don't turn out to be "user error"?
Just compare the number of ZFS-related issues to the number of UFS-related. I want a file system to be reliable and effortless. I do believe ZFS is reliable by now, but judging by the number of issues it is not quite effortless yet. Whether or not these are user errors doesn't matter in that respect.
 
In this particular case it happened 5-6 seconds after a bigger ports upgrade finished. A matter of uncommitted soft updates, I suppose. I doubt ZFS would do better in such a case.
I'd take any bet it does. As does UFS with SU+J enabled. Journaling is designed to recover from an interruption no matter when it happens; a classic approach is using an "intent log" to implement it.
Just compare the number of ZFS-related issues to the number of UFS-related.
So, again, how many of them are NOT user error?
I want a file system to be reliable and effortless. I do believe ZFS is reliable by now, but judging by the number of issues it is not quite effortless yet.
It certainly is. You can use lots of features ZFS is offering (and some people not properly reading docs first struggle with again and again), but you don't have to.
 
ZFS features. I agree that one doesn't have to use any of them, it works just fine as a filesystem.
The various RAID configurations, Boot Environments and snapshots for backup are about the most common things that people use.
How to use them are well documented.

Back to the OP and UFS: my understanding of the way GEOM stacking works is that you need to do things in the correct order. Say you want to set up a gmirror:
you should set up the mirror device first, then partition it with gpart, and then create the filesystems.
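A rough sketch of that ordering, with hypothetical device names (ada0/ada1): mirror first, partitions second, filesystem last.

```shell
# 1. Create the mirror from the raw disks first (device names are assumptions).
gmirror label -v gm0 /dev/ada0 /dev/ada1
# 2. Partition the resulting mirror device, not the underlying disks.
gpart create -s gpt /dev/mirror/gm0
gpart add -t freebsd-ufs -a 1m /dev/mirror/gm0
# 3. Only then put a filesystem on the mirrored partition.
newfs -U /dev/mirror/gm0p1
```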
 
some people not properly reading docs first struggle with again and again
So it's not yet hassle-free, is it? That may have to do with the tools and docs too. I noticed that the terms used in ZFS are not always very comprehensible. For instance, the term 'resilver' in case of a new disk. Are we going to plate that thing with silver or is it about (re)initializing it? Same with 'scrub'. With water and a brush or is it just a file system check? Many terms are far from self-explanatory, hence users will have to read all the docs. They could have made things easier.
 
Please don't take this too seriously... 🙏

But it would be nice if, when you select the UFS auto installer, you could select encryption, soft mirror, etc., as you can when you select ZFS... 🤷‍♂️

Which is the first option even though Z is the last letter of the alphabet... 🙄

It looks like there is favoritism toward one filesystem over another... 🤔

Well, that's all... 🤪

One thing I miss in the FreeBSD installer is an option to install GELI encrypted install on UFS.
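Until then it has to be done by hand from the installer's shell; a sketch, assuming a spare partition ada0p3 (partition name and sector size are my own choices):

```shell
# GELI-encrypted UFS by hand (hypothetical partition; run as root).
geli init -s 4096 /dev/ada0p3    # set up encryption, prompts for a passphrase
geli attach /dev/ada0p3          # exposes the decrypted /dev/ada0p3.eli
newfs -U /dev/ada0p3.eli         # ordinary UFS on top of the GELI layer
mount /dev/ada0p3.eli /mnt
```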
 
Maybe you should keep your ducks somewhere else.
I never really had problems with UFS (with soft-updates, no journaling), apart from a power outage. Installing is hardly an issue, freebsd-update does a great job. It's in fact the sheer number of ZFS-related issues in this forum that keeps me from trying, so I agree with Alex Seitsinger on that point.
Letting ZFS take over the entire disk was the best decision I made - zero headaches chasing down what FreeBSD will report about my SSDs. All the ZFS-related issues that I have seen ppl vent about - they ALL stem from either not understanding snapshots or treating ZFS as just another drop-in replacement for UFS inside something called "disk partitions". I already forgot everything I knew about disk partitioning, and I'm on cloud 9 from that! 😂
 
I'd actually appreciate learning why ZFS is preferred over UFS. I can come up with a few reasons on my own, but I would appreciate a technical or statistical explanation even more. ZFS DOES require more of the host system and ZFS is new enough to warrant complaining about syntax unfamiliarity.
The preference for ZFS over UFS as the installer's "first choice" may also be because ZFS is a more securely contained
or compartmentalized (tempted to say "containerized") part of the OS, and more suitable for filesystem management from within a jail.

Jails - Michael Lucas; book, p. 137:
Remember, not all filesystems can be managed from within a jail. Identify jailsafe filesystems by running lsvfs(8) and looking for the “jail” flag. Some common filesystems like FAT, cd9660, NFS, and even UFS cannot be safely managed from within a jail. They are tightly integrated into the virtual memory system in such a way that they cannot be easily contained. ZFS was written for virtualizing operating systems, so making it jail-safe was fairly straightforward. You can manage synthetic filesystems like devfs, fdescfs, and tmpfs, as well as FUSE and (sadly) procfs from within a jail. You can also manage Linux synthetic filesystems like linprocfs and linsysfs, but not EXT-based filesystems. While an unsafe filesystem cannot be managed from within a jail, it can provide data storage to a jail. Mount and manage such filesystems from the host.

Likewise on the web:
20 Years of FreeBSD Jails (2019) - Michael Lucas (@ca 29 min)
You can also delegate filesystem management to Jails, lsvfs(8) displays jail-safe filesystems. Now, it doesn't mean that you can't run a jail on a filesystem that is not jail-safe. For example, UFS is not jail-safe; you can run UFS under your jail just fine. What you can't do is let the jail manage UFS because it has fingers all through the virtual memory stack and it is not meant to be subdivided like that. ZFS was designed from the ground up to be manageable.


[...] ZFS is new enough to warrant complaining about syntax unfamiliarity.
If you want or need to combine encryption, volume management, RAID, journaling, comparable data integrity(?), BEs(?), etc., features that were mostly not developed as an integrated whole on FreeBSD, then try comparing those combined syntaxes to ZFS's single syntax. My guess is that someone new to Unix (FreeBSD), new to one of those extended functionalities, or for that matter switching from a similar functionality on another OS (it used to be even from one brand/type of RAID controller to another), is likely to be confronted with unfamiliar syntax that was not developed with one consistent scheme in mind.
 
you can always drop to the installer shell/livecd and play with geom stuff and create raid vols, encrypted vols and whatnot and then install on them
Yep, but I think the "problem" (not really a problem) is that one needs above-average knowledge to do it.
Sometimes people just want a "press this button". It's not right, it's not wrong, it just is.

I honestly have no opinion on the ask in the original post.
 
Apart from snapshots, do you use other ZFS-features like compression, encryption and raid?
Haven't gotten around to playing with those - would be fun to do it at some point.

Edit: snapshots, compression, encryption and RAID - they're not limited to ZFS. Flexibility to grow and shrink datasets on demand (even post-install) - now that is ZFS-specific. Would be fun to see if I can ditch NFS for ZFS :p
 
I'm not astyle or answering for them, but compression is usually enabled by default on ZFS datasets, different algorithms are the default on different ZFS versions but I think all ZFS versions understand all compression algorithms.
RAID? I have some systems with mirrors on both root and data volumes. On the data volumes the mirror was originally created on FreeBSD-9 and has seamlessly been brought forward to FreeBSD-13 so transitioned from FreeBSD native ZFS to OpenZFS.
 
compression is usually enabled by default on ZFS datasets
Apart from snapshots I would not be using most of these features. Compression allows for efficient use of disk space, which is a good thing. OTOH, it won't speed things up and in the long run the additional electricity cost of (de)compression may set you back more than a bigger disk just once. It all depends on your needs.
 
Compression also allows for more efficient use of IO bandwidth. The IO bandwidth to/from the device is fixed; SATA-3 is nominally 600 MB/s. If you take, say, 1 GB of data that compresses to 600 MB, you move only 600 MB over that pipe to read or write 1 GB of logical data. Yes, there is a CPU cost to compress and decompress, but given modern CPUs and RAM speeds, it is typically worth it.

As you say, "it all depends on your needs".
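To put numbers on that (illustrative figures only, mirroring the 1 GB → 600 MB example above):

```shell
# Back-of-envelope math for the post above (illustrative numbers, not a benchmark):
# a SATA-3 link moves ~600 MB/s of *physical* data; if 1 GB of logical data
# compresses to 600 MB, the link effectively carries 1 GB of logical data per second.
awk 'BEGIN {
    link_mbps = 600                # nominal SATA-3 bandwidth, MB/s
    ratio     = 1000 / 600         # logical bytes per physical byte
    printf "effective logical throughput: %.0f MB/s\n", link_mbps * ratio
}'
# -> effective logical throughput: 1000 MB/s
```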
 
Apart from snapshots I would not be using most of these features. Compression allows for efficient use of disk space, which is a good thing. OTOH, it won't speed things up and in the long run the additional electricity cost of (de)compression may set you back more than a bigger disk just once. It all depends on your needs.
Does de-compression of a ZFS-compressed 5 GB dataset take hours on a Threadripper? What's the going rate per kilowatt-hour in your country? A Threadripper draws at most 280 watts. The math is messed up, sorry to say.

SSDs cost around $50 USD per terabyte.

I pay like $0.13 per kilowatt-hour... 😂
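Plugging the thread's own figures in (280 W, $0.13/kWh, $50/TB — all assumptions taken from the posts above):

```shell
# Sanity-checking the cost argument with the thread's own numbers
# ($0.13/kWh, 280 W peak draw, $50/TB SSD); all figures are assumptions.
awk 'BEGIN {
    printf "1 hour of decompression at 280 W: $%.3f\n", 0.280 * 0.13
    printf "1 TB of extra SSD, once:          $%.2f\n", 50
}'
# -> 1 hour of decompression at 280 W: $0.036
```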
 