ZFS on Linux

Hello,
Some colleagues at work are concerned about the stability and reliability of ZFS on Linux.
What's your take, my dear FreeBSD gurus?
 
I dual boot FreeBSD-on-ZFS and Gentoo-Linux-on-ext4.
When I boot Gentoo Linux I import and mount the FreeBSD zpools, and this works perfectly.
[Personally I would not put the Linux kernel on ZFS. Booting can be painful.]
But ZFS works fine as an out-of-tree kernel module.
OpenZFS shares a source tree between FreeBSD and Linux, so this is guaranteed to work.
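
For anyone wanting to try the same, a minimal sketch of the import step on the Linux side (the pool name "zroot" is just an example, not necessarily yours):

# scan for pools that are visible but not yet imported
zpool import
# import a FreeBSD pool read-only first, to be on the safe side
zpool import -o readonly=on zroot
# once confident, re-import it read-write
zpool export zroot
zpool import zroot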
 
I would be more concerned with the stability of the userland than ZFS here. How long is your installation going to run?
 
I would be more concerned with the stability of the userland than ZFS here. How long is your installation going to run?

This is about an installation on vessels (ships), traditionally expected to run for about 10 years. Now we are moving to Proxmox virtualization, and those guys fully support and endorse ZFS.
However, some people in the company seem concerned, since their expertise lies in ext4, mdadm, etc.

I use ZFS on my current FreeBSD machine (13.1-RELEASE) and it's many levels superior to anything I have worked with in the past.
 
Example:
With ZFS I take an incremental snapshot of my user home directory every 15 minutes.
So I never lose more than 15 minutes of data, and I can restore any of those 15-minute states.
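
As a rough illustration of how such a scheme can be wired up with plain cron (the dataset name and snapshot prefix are made-up examples; tools like zfs-auto-snapshot do the same with expiry handling):

# crontab entry: snapshot the home dataset every 15 minutes
# (% must be escaped as \% inside a crontab)
*/15 * * * * /sbin/zfs snapshot tank/home@auto-$(date +\%Y\%m\%d-\%H\%M)
# list the accumulated snapshots
zfs list -t snapshot tank/home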
Note: JFS, XFS and ext4 are also not bad filesystems. Personally I find ZFS superior.
 
One fundamental difference between ZFS on Linux and ZFS as part of FreeBSD is that FreeBSD's ZFS development is in sync with its kernel development. That is one consequence of FreeBSD being developed as a whole, where a base install is a complete OS and the base userland is produced by one team. Linux is just a kernel, and ZFS is denied access to a shared kernel development process (ask Linus for the reasons, among others the stated differences in license models). Because ZFS on Linux is a kernel module, it depends on the kernel ABI, and the OpenZFS team therefore has to play catch-up whenever the Linux kernel ABI changes. Linux distros have to combine a chosen Linux kernel with an accompanying ZFS version.
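
One practical consequence: on Linux it is worth checking, before rebooting into a freshly updated kernel, that the installed zfs module was actually built against it. A quick sketch:

# version of the currently running kernel
uname -r
# kernel the installed zfs module was built for
modinfo -F vermagic zfs
# if the two disagree, hold off the reboot until the module catches up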
 
The same thing can be seen with compilers:
Linux goes well with GCC,
while FreeBSD goes well with Clang.
This is related to the licenses.
 
I've been using ZFS on my Arch desktop for a little over two years, I think.
Generally everything has been fine. The current setup is XFS on an NVMe drive for boot, one zpool made up of two SSDs (mirrored) for home directories, and a second zpool made up of two larger HDDs (also mirrored) for media files.
I've been using this setup for about a year and a half.
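
For reference, a layout like that can be created along these lines (pool and device names here are examples, not my actual ones; /dev/disk/by-id/ paths are usually preferred over sdX):

# mirrored SSD pool for home directories
zpool create -o ashift=12 homes mirror /dev/sda /dev/sdb
# mirrored HDD pool for media files
zpool create -o ashift=12 media mirror /dev/sdc /dev/sdd
# a dataset per user keeps snapshots fine-grained
zfs create homes/alice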

Before that I was using the two SSDs as a boot disk as well as home directories, the OS partition on each was part of a zpool which mirrored both partitions. I'm pretty sure back then I used systemd-boot to boot into Arch installed on ZFS.

What issues have I had? Maybe twice the ZFS packages have changed names and caused some weirdness - ZFS is not "built into" Arch in the same way as it is in something like Ubuntu, so that may not happen there.
Oh, and once I went to upgrade the kernel and the ZFS kernel modules weren't ready yet - I've had this problem way more with my graphics driver. Again, probably not so much of an issue on the more "managed" distros.
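
If anyone hits the same race on Arch, one workaround is to hold the kernel back until the module packages catch up, e.g. in /etc/pacman.conf (a sketch; exact package names depend on which ZFS packages you use):

# /etc/pacman.conf
[options]
# skip kernel updates until the matching zfs module is available
IgnorePkg = linux linux-headers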

Why did I stop using ZFS on root? Mainly because it didn't have the same advantages as ZFS on root on FreeBSD (e.g. no boot environments), I was afraid it might break (not that anything ever cropped up to suggest it would), and I was looking to make my setup a little more streamlined: no backups/snapshots of the root disk; if it becomes borked, just reinstall and run Ansible to set everything back up.

Overall, ZFS on Linux has been fine. I do something similar to Alain De Vos: my home directory gets snapshotted every 5 minutes and other filesystems get hourly snapshots.
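
Getting a file back out of such a snapshot is trivial; a sketch, assuming a dataset tank/home and a snapshot name like the ones above:

# snapshots are browsable read-only under the hidden .zfs directory
ls /home/.zfs/snapshot/
cp /home/.zfs/snapshot/auto-20240101-1200/alice/notes.txt /home/alice/
# or roll the whole dataset back (plain rollback only reaches the newest
# snapshot; -r rolls back further but destroys the snapshots in between)
zfs rollback tank/home@auto-20240101-1200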
 
I found this, but the argument is not technical.

Torvalds refers explicitly to Oracle, which is his right.
But even without Oracle, ZFS will continue to live, because it contains some good ideas.
 
My opinions, take them for what they're worth.

The historical "Linux" stance on ZFS has revolved around the license and quite a bit of "not invented here".
The license aspect means it will never be a "proper" in-kernel filesystem on Linux, so ZFS has to use the publicly available module interfaces.
OpenZFS can't be claimed by Oracle (I'm not a lawyer, so don't take this as gospel), so any arguments about Oracle don't really hold weight.

From a technical standpoint, OpenZFS on Linux is the same as what FreeBSD 13.x and up uses. I have not had any issues in daily use with any version of ZFS on FreeBSD.
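
That is easy to verify on both systems (the version line below is just an example of the output shape):

# on FreeBSD 13.x and on Linux with OpenZFS this reports the same release line
zfs version
# e.g. zfs-2.1.x plus the matching zfs-kmod version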

In your specific use case there's not enough information to say which filesystem would be better.
 
Additionally, when reading about ZFS and Linux, please be aware of the historic development and where ZFS (as part of the OpenZFS initiative) is now: OpenZFS & its code base history. Perhaps somewhat confusingly, ZoL (as an abbreviation for ZFS on Linux) is no more. As of FreeBSD 13.0 (I'm unfamiliar with any specific Linux distribution), ZFS means OpenZFS. Linux and FreeBSD now indeed rely on the same unified OpenZFS code repository, and within the OpenZFS initiative both are supported as equal citizens.
 
This is about an installation on vessels (ships), traditionally expected to run for about 10 years. Now we are moving to Proxmox virtualization, and those guys fully support and endorse ZFS.
One argument I have is that keeping a system running, updated, and patched for that time is critical. Boot environments are a selling point here. Snapshots as well. Minimal downtime for botched updates. I mean, we should all have heard about the "success" the US Navy had with NT, yes?
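
For the FreeBSD side of that argument, the boot-environment workflow around a risky update is roughly this (a sketch using bectl(8); the environment name is an example):

# preserve the current root as a bootable fallback
bectl create pre-upgrade
# perform the update, then list environments (the active one is marked)
bectl list
# if the update is botched, reactivate the old environment and reboot
bectl activate pre-upgrade
shutdown -r now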
 
My 2c: this is far too specific a question and requires background information about the $job and what task it's going to do. Is it actually being shipped (pun intended) on ships at sea? If so, I'd say the simpler the better. I have no idea what the upgrade strategy on those devices is. From personal experience, many such devices are set and forget.

Are those only VMs or physical hardware? I'd probably stick to ext4 in either case, given you have a robust recovery solution in place (such as ReaR, etc.).
 
ZFS works well on Linux; sometimes the packaging (getting kmods updated for the latest kernel) trips up smooth upgrades, but that's the worst I've encountered in many years of use.

If you need to run Linux, and you’re comfortable with (and would like the benefits of) ZFS, there’s no reason to avoid it.
 
One should make a clear distinction between boot-on-ZFS and root-on-ZFS.
I don't consider Linux stable enough for boot-on-ZFS, i.e. the kernel itself on ZFS.
I prefer ext4 or ext2 for /boot.
For root, old JFS, ext4, XFS and ZFS are all good options.

That is for Linux, but even on this FreeBSD desktop on which I write this message, the bootloader is on UFS and the root filesystem is on ZFS. Having the bootloader on a simple filesystem (UFS) makes it more robust, I think. This is my personal take, but mileage may vary.
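
For the curious, such a layout can be carved out roughly like this (a sketch; the disk name and sizes are examples, not my actual ones):

# GPT scheme with boot code, a small UFS /boot, and the rest for ZFS
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k ada0
gpart add -t freebsd-ufs -s 2g -l bootfs ada0
gpart add -t freebsd-zfs -l zroot ada0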
 
My opinions, take them for what they're worth.

The historical "Linux" stance on ZFS has revolved around the license and quite a bit of "not invented here".
The license aspect means it will never be a "proper" in-kernel filesystem on Linux, so ZFS has to use the publicly available module interfaces.
OpenZFS can't be claimed by Oracle (I'm not a lawyer, so don't take this as gospel), so any arguments about Oracle don't really hold weight.

From a technical standpoint, OpenZFS on Linux is the same as what FreeBSD 13.x and up uses. I have not had any issues in daily use with any version of ZFS on FreeBSD.

In your specific use case there's not enough information to say which filesystem would be better.

OK, given the data:
a) Linus said in 2020 that ZFS is bad, don't use it; the average Linux person will do as advised.
b) Proxmox officially provides/supports only one FS: ZFS.
c) We will have lousy net connectivity to the server.

So, do we placate our Linux people and try a custom installation on ext4, or do we placate the Proxmox people and try the default ZFS?
 
One argument I have is that keeping a system running, updated, and patched for that time is critical. Boot environments are a selling point here. Snapshots as well. Minimal downtime for botched updates. I mean, we should all have heard about the "success" the US Navy had with NT, yes?
Our current images have been running since 2005... unpatched, PostgreSQL 8.4; I don't even remember the Debian versions or kernels. Those are not connected to the net, so the number one aim is to just work. No corruption. Recovery is not so cheap in terms of time. Anyway, all those aspects are subject to redesign. We are in the middle of this huge upgrade.
 