ZFS: People say that ZFS in FreeBSD is bloated and hacked in; what exactly do they mean?

Reading online I occasionally see people saying that ZFS was hacked in, isn't well integrated, adds excessive bloat to the OS, and other such things.

Usually it's just the statement and no explanation. I wondered if anyone could explain exactly what is meant by these things, and how true or false these assertions are?
 
That comes mostly from our valued "Linux users", right? Just leave the gossip be and collect your own experiences!
 
In my experience, ZFS on FreeBSD seems to work better than ZFS in Linux, and ***MUCH MUCH*** better than using separate software RAID and advanced file systems on any OS. But that's just me. Disclaimer: I have never looked inside the FreeBSD kernel source code, nor inside the ZFS source code.

I would reply that the people who make these statements should give some explanation, show some data, and explain their justification. It's hard to try to explain other people's reasoning.
 
Truth be told, ZFS is a large piece of kit. But I doubt it's "bloat"; bloat implies there's a lot of useless code, and it also implies you're forced to have it, whether you want it or not[*]. ZFS is far from useless or forced upon you. As for being hacked in, isn't everything? ZFS does require the OpenSolaris ABI compatibility layer, so it's not integrated into FreeBSD in the strictest sense. In a way you could call that "hacked in". But loading ZFS as a kernel module on Linux would also be "hacked in" by those same standards.
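For what it's worth, "loading it" on FreeBSD is nothing exotic either; a minimal sketch of the usual knobs (nothing beyond the standard loader.conf/rc.conf settings):

Code:
# load the module on a running system and verify it's there
kldload zfs
kldstat -m zfs

# or have it loaded at boot and pools mounted automatically
echo 'zfs_load="YES"' >> /boot/loader.conf
echo 'zfs_enable="YES"' >> /etc/rc.conf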

So, I'd say, it's partly true but severely overstated by people who don't like ZFS, in order to spread FUD.

[*] Think of the plethora of applications that come pre-installed on your average mobile phone. You don't use any of them and you can't even uninstall them. That's bloat.
 
That comes mostly from our valued "Linux users", right? Just leave the gossip be and collect your own experiences!

ZFS in FreeBSD is very carefully integrated and lots of tools and mechanisms are aware of ZFS and use its capabilities. TBH, I've never really looked behind the curtains, but ZFS integrates perfectly into the everyday workflow and "just works™".
In comparison: ZoL still isn't even supported out-of-the-box - you have to build the kernel modules yourself. Yes, there is DKMS to automate that, but that breaks roughly 1 out of 3 times the modules have to be rebuilt after kernel upgrades (which seem to happen every week nowadays on Linux...). At least this was the state of affairs on Debian up until ~mid-2016 when I stopped updating the last Linux/ZFS system before it was replaced. Upgrading had to be done off-hours, the reboots were scary as hell, and I kept at least 2 different full backups of the OS pool (yes, Linux root-on-ZFS for some extra thrill) and a copy of a known-working GRUB partition around - and both were needed quite often...
After getting it to work you end up with an OS and tools that have absolutely no concept of the underlying filesystem and/or its capabilities - everything has to be done manually or glued together with self-written scripts.
So take a guess which solution is "hacked in"?

As for bloat:
Just to get redundancy, volume management and snapshots you need md-raid, LVM and the actual filesystem on top. LVM's snapshot capabilities compared to ZFS's are like comparing a hand axe to an impact drill. Additionally, scaling out with LVM is just horrible - performance drops more and more with each modification to a group: we had an LVM group which, after ~2 or 3 extensions and drive swaps, was able to manage ~35MB/s from 6 SAS drives before it was finally nuked... Snapshots weren't even possible at that point.
The same hardware now runs a healthy ZFS pool with no bandwidth problems at all...
So yeah, ZFS is definitely bloated...
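For contrast, the whole md-raid + LVM + filesystem stack collapses into a couple of commands on the ZFS side - a minimal sketch, assuming two spare disks da0/da1 and made-up dataset names:

Code:
# one mirrored pool replaces md-raid + LVM in a single step
zpool create tank mirror da0 da1

# datasets instead of logical volumes; snapshots need no pre-reserved space
zfs create tank/data
zfs snapshot tank/data@before-upgrade
zfs list -t snapshot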
 
I work with all of them; actually, I will only really work with Ubuntu on the Linux side. The things I don't like about the FreeBSD fork are that it doesn't have the cool sharenfs and sharesmb features. I'm also kind of missing solid FC support.

To me the problem with the Linux forks is the same problem FreeBSD had like 10 years ago... the train is moving so fast it's hard to be stable. I would say FreeBSD's is the most stable ZFS out there, and I work with Linux, OmniOS and Oracle Solaris.

The FreeBSD one is the lightest and most robust.
 
The things I don't like about the FreeBSD fork are that it doesn't have the cool sharenfs and sharesmb features

Where did this myth come from? I'm using ZFS properties to share various datasets via NFS here and at home...
FC is also working - I have 9 zvols connected to this machine via FC right now, running some illumos and FreeBSD VMs for testing on them. I also have them shared via iSCSI for when I need the fiber uplink for something else. Neither of those actually has anything to do with ZFS, as both are handled by their respective service and config file(s) (/etc/ctl.conf).
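The NFS side really is just a dataset property; a minimal sketch (pool/dataset name and export options are made up for illustration):

Code:
# rc.conf knobs needed once for the NFS server
sysrc nfs_server_enable=YES mountd_enable=YES rpcbind_enable=YES

# sharing is then a per-dataset property; mountd picks it up via /etc/zfs/exports
zfs set sharenfs="-maproot=root -network 192.168.0.0/24" tank/export
zfs get sharenfs tank/export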
 
People always have to whine and moan about something, and this isn't much different I think.

I'm not a programmer so I can't comment on the actual state of the code, but I seriously doubt that "bloat" is a good word for it. I think that some criticism could originate from the rather weird way in which the installer sets up a root ZFS environment. If you follow the installer you end up with a rather bizarre setup in which several filesystems do nothing but act as placeholders (zroot/ROOT/default, so zroot/ROOT does nothing), and worse: the moment you boot from a rescue CD and try to import your pool in the regular way (so: # zpool import -fR /mnt zroot) you will notice that none of your filesystems are accessible.

You'll also need to use # zfs mount zroot and/or # zfs mount zroot/ROOT/default because some 'smart people' thought it made sense to set the canmount property to noauto so that those filesystems don't auto-mount. In my opinion a braindead decision, because the chances that someone will use a rescue CD to fix or adjust their system are much higher than the chances of someone actually using this design for what it was intended: sysutils/beadm, i.e. keeping multiple root filesystems around.
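For completeness, the whole rescue-CD sequence then looks roughly like this (assuming the default pool name zroot):

Code:
# import the pool under /mnt; the root dataset won't mount on its own
zpool import -fR /mnt zroot

# mount the root dataset (canmount=noauto), then everything else
zfs mount zroot/ROOT/default
zfs mount -a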

So yeah, in my opinion plenty of users who ran into this caveat while trying to access their ZFS system using a rescue CD, only to end up with a seemingly inaccessible system, are bound to end up slightly negative, and quite frankly I can't blame them.

But just because the installer uses this (once again: in my opinion) stupid approach, that doesn't mean you're also bound to it. None of my (ZFS) servers suffer from this stupidity. Of course, they were set up manually: either fully by hand, or I simply dropped to the shell and performed the partitioning by hand (these days I tend to ignore the installer completely because it's actually faster to install a system without it).
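To give an idea of what "by hand" means, a rough and deliberately incomplete sketch of such a manual root-on-ZFS setup - assuming a single disk ada0 and legacy gptzfsboot booting; this isn't a full install procedure:

Code:
# partitioning: boot code, swap, one big ZFS partition
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart add -t freebsd-swap -s 4g -l swap0 ada0
gpart add -t freebsd-zfs -l zfs0 ada0

# pool with the root dataset mounted straight on / (via altroot during install)
zpool create -o altroot=/mnt -O compression=lz4 -O atime=off -m none zroot gpt/zfs0
zfs create -o mountpoint=/ zroot/root
zpool set bootfs=zroot/root zroot

# (extract base.txz/kernel.txz into /mnt here, then:)
echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
echo 'zfs_enable="YES"' >> /mnt/etc/rc.conf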

I have only worked with ZFS on 2 kinds of systems: Sun Solaris 10 (when the filesystem had just been released) and of course FreeBSD. Quite frankly, I have only experienced a lot of improvements, not bloat and other nastiness.
 
I guess people who say that are the same people who use Btrfs (the never-finished fs) because they do not want to use an "outdated" one (ZFS). But now there is a new trend, it seems: bcachefs ("The COW filesystem for Linux that won't eat your data").

Let's see if that one gets finished before they create another one.
 
You'll also need to use # zfs mount zroot and/or # zfs mount zroot/ROOT/default because some 'smart people' thought it made sense to set the canmount property to noauto so that those filesystems don't auto-mount. In my opinion a braindead decision, because the chances that someone will use a rescue CD to fix or adjust their system are much higher than the chances of someone actually using this design for what it was intended: sysutils/beadm, i.e. keeping multiple root filesystems around.

I keep seeing this complaint (from you), and I'm always curious what you have against beadm. It truly is a wonderful tool. Here's my workflow (mostly scripted these days) for upgrading a system via a boot environment:
  1. Build world/kernel in /usr/src for new system version / kernel config / what have you. (using WITH_META_MODE=1 in /etc/src-env.conf makes this fast for all but the most significant of updates.)
  2. beadm create new-system-tag
  3. beadm mount new-system-tag /tmp/newbe (or similar)
  4. start a jail newbe (configured to mount /tmp/newbe as /, and nullfs /usr/obj and /usr/src into /tmp/newbe/usr/{obj,src}) -- yes, there is some jail.conf / fstab.jailname setup to be done the first time to get this working; see the sketch after this list.
  5. jexec csh in the newbe jail
  6. Do the install inside the jail. (cd /usr/src; mergemaster -pU; the installs (make installkernel, make installworld); mergemaster -iFU; make delete-old{-libs}; anything else desired to tweak in /etc/*)
  7. exit csh, tear down the jail, and beadm umount newbe
  8. beadm activate newbe -- or reboot the system and select it from the loader menu. If something has gone off the rails, just select the old BE again.
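The one-time jail configuration mentioned in step 4 boils down to something like this (file names and paths are just examples, not a prescription):

Code:
# /etc/jail.conf
newbe {
    path = "/tmp/newbe";
    host.hostname = "newbe";
    mount.fstab = "/etc/fstab.newbe";
    persist;
}

# /etc/fstab.newbe -- nullfs-mount src and obj into the new BE
/usr/src    /tmp/newbe/usr/src    nullfs    rw    0    0
/usr/obj    /tmp/newbe/usr/obj    nullfs    rw    0    0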
Most of this is scripted (steps 2-5, step 7) into a ./do_install newbename script, plus the aforementioned config files, so this (while it looks long) is very fast to actually do. And if something isn't right with the new kernel or user space, I can easily recover to the previously working state with just a reboot - much faster than any possible fix via a live CD (not to mention the process of getting a virtual live CD mounted on a remotely managed system).
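For reference, a rough sketch of what the scripted part (steps 2-5) amounts to - script and jail names as above, details omitted:

Code:
#!/bin/sh
# usage: ./do_install new-system-tag
set -e
be="$1"

beadm create "$be"             # step 2: clone the running boot environment
beadm mount "$be" /tmp/newbe   # step 3: mount it where the jail expects it
service jail onestart newbe    # step 4: bring the jail up on top of it
jexec newbe csh                # step 5: interactive shell for the install steps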

There's even the ability, via zfsbootcfg(8), to set a one-time boot environment now, so you can do this on headless systems -- if the new config doesn't come up, just reboot and you're back to a functional system!
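Going by the zfsbootcfg(8) man page, that amounts to something like this (BE name made up):

Code:
# boot the new BE exactly once on the next reboot; the loader clears the
# setting after use, so a failed boot falls back to the active BE afterwards
zfsbootcfg "zfs:zroot/ROOT/newbe:"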

So yes, there is a little bit of a learning curve to boot environments, but they really are wonderful for managing system upgrades in a minimal-possible-downtime way.
 
A sysutils/beadm-like tool will actually be integrated into the boot manager, as you can see HERE.

I know. I'm looking forward to that; I'm hopeful its 'be jail' command will remove some of the configuration steps above. (Won't help me much at this point, but will be easier for others.)

BEs aren't going anywhere. They're just too useful, and a compelling differentiation vs. other OSes out there.
 
Have any of you tried setting up ZFS on Linux lately?

The particular steps depend on the specific distro family, but they entail re-packaging the initrd.img file. That in itself is not so bad, but I found the systemd-enabled RHEL/Fedora class to require an extra layer named Dracut (described as "event driven initramfs infrastructure"). Talk about bloatware! The Dracut layer boots first, then the initrd with ZFS; however, Dracut does not terminate and stays resident to sort of referee or translate systemd calls.

If anything is going to make me hesitant of FS reliability, the Dracut middle-man meddling would be it.
 
Have any of you tried setting up ZFS on Linux lately?

The particular steps depend on the specific distro family, but they entail re-packaging the initrd.img file. That in itself is not so bad, but I found the systemd-enabled RHEL/Fedora class to require an extra layer named Dracut (described as "event driven initramfs infrastructure"). Talk about bloatware! The Dracut layer boots first, then the initrd with ZFS; however, Dracut does not terminate and stays resident to sort of referee or translate systemd calls.

If anything is going to make me hesitant of FS reliability, the Dracut middle-man meddling would be it.
Alpine, the simplest and easiest of all Linux distros, supports installation on a ZFS root, not just ZFS storage space. It uses OpenRC for its init system. It also has a Xen Dom0 second to none. Alpine Linux is a far simpler OS than FreeBSD. People are fooling themselves if they think that ZFS has anything to do with FreeBSD. ZFS is the native file system of Solaris. A few FreeBSD guys had enough economic incentive to port it to FreeBSD. It would be very easy for Oracle to bring ZFS to the status of a tier-1 file system for its "Oracle" (read Red Hat) Linux.

Code:
xen1:~# uname -a
Linux xen1.int.autonsys.com 4.9.32-0-hardened #1-Alpine SMP Fri Jun 16 12:20:58 GMT 2017 x86_64 Linux
 
I keep seeing this complaint (from you), and I'm always curious what you have against beadm.
Absolutely nothing.

My criticism isn't aimed at beadm but rather at the way in which a default ZFS pool gets set up. Specifically, the choice to set canmount to noauto so that the root filesystem doesn't automatically mount when you're importing your ZFS pool.

There is really little need for any of that.

Another concern I have is the rather arcane location of the root filesystem and the confusion that generates. It would be perfectly doable to simply mount zroot on / (which would also make it much more accessible whenever you need to use a rescue CD), then add other root (boot) environments whenever you need them. I could imagine zroot/beadm/<root systems>, maybe mounted on /beadm/root/newkernel? I could even imagine beadm setting up this hierarchy as soon as it is installed.

So... it's not beadm I'm criticizing but rather the way in which ZFS gets set up by default.
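Purely hypothetically, what I have in mind would look something like this (names taken from the examples above, nothing official):

Code:
# during installation: the pool's root dataset is the root filesystem itself
zpool create -o altroot=/mnt -O mountpoint=/ zroot gpt/zfs0

# extra root environments live in their own, clearly named hierarchy
zfs create zroot/beadm
zfs create -o mountpoint=/beadm/root/newkernel zroot/beadm/newkernel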
 
Alpine, the simplest and easiest of all Linux distros, supports installation on a ZFS root, not just ZFS storage space. It uses OpenRC for its init system. It also has a Xen Dom0 second to none. Alpine Linux is a far simpler OS than FreeBSD. People are fooling themselves if they think that ZFS has anything to do with FreeBSD. ZFS is the native file system of Solaris. A few FreeBSD guys had enough economic incentive to port it to FreeBSD. It would be very easy for Oracle to bring ZFS to the status of a tier-1 file system for its "Oracle" (read Red Hat) Linux.

Code:
xen1:~# uname -a
Linux xen1.int.autonsys.com 4.9.32-0-hardened #1-Alpine SMP Fri Jun 16 12:20:58 GMT 2017 x86_64 Linux

Everything has its pluses and minuses. Alpine sux with LDAP and NSS.
 
Everything has its pluses and minuses. Alpine sux with LDAP and NSS.
Alpine Linux uses uclibc, which doesn't support NSS modules at all, so you are quite right. I was not saying that people should rush to switch their FreeBSD-based file servers to Alpine Linux. I was trying to make the point that Linux can be super simple (much simpler than FreeBSD) and that ZFS can be installed and run on such a simple system (obviously once ZFS is installed on Alpine it is no longer simple). I personally use it for Xen Dom0. I always thought that porting Xen to FreeBSD would have been a far better idea than starting bhyve from scratch.
 
Alpine Linux uses uclibc, which doesn't support NSS modules at all, so you are quite right. I was not saying that people should rush to switch their FreeBSD-based file servers to Alpine Linux. I was trying to make the point that Linux can be super simple (much simpler than FreeBSD) and that ZFS can be installed and run on such a simple system (obviously once ZFS is installed on Alpine it is no longer simple). I personally use it for Xen Dom0. I always thought that porting Xen to FreeBSD would have been a far better idea than starting bhyve from scratch.

Similar to other (non-systemd) Linux distributions, Alpine is based on musl libc (not uclibc), and they intentionally avoid providing support for automatic loading of modules. For example, one can't even load X without specifying the required modules (vesa, glamoregl, etc.) in a proper config file. Now, in the case of user databases they (the musl developers) state that:
traditional implementations have NIS support hard-coded in along with flat files. musl does not support anything but flat files. The direction planned in musl is not to add in the bloat of additional backends or dynamically loading backends, but to offer a single protocol for communicating with a daemon that would serve as the backend
 