Questions about ZFS on FreeBSD and Linux

Apart from all the technical questions and quality of engineering: btrfs is just awkward when you're coming from ZFS. The snapshot handling, the property inheritance, the information and debugging tools, the navigational concepts. Trying to accomplish anything with btrfs, it just feels primitive and unwieldy to me—like trying to repair a bicycle, but all you get is a hammer.
 
Linus doesn't want ZFS code inside the Linux kernel tree itself, because they can't claim ownership of the code. This is why they are enhancing BTRFS. However, you can use it as a module, like the Nvidia proprietary drivers. Whether that's easy to do, given the syscalls and APIs involved, is a different story.
The two parts I bolded are basically what I was saying. The Linux kernel has long had the concept of a "tainted" kernel module.
At its core, loading a module that does not declare a GPL-compatible license results in a tainted kernel.
A tainted module is only allowed to use a subset of the kernel module API; that has been the rule for a long time.
The ZFS-on-Linux issue was that a kernel API was moved out of general use: if a module was not GPL, it could no longer use it, or even see it. The LKML threads at the time had kernel people effectively saying "Eff 'em," and the ZoL team couldn't even get an honest discussion about the problem.

I have no opinion on BTRFS/XFS/anything but extNfs on Linux, other than:
A reliable, stable, performant filesystem is non-trivial to create and implement, and it takes a lot of "air time" to prove itself out.
 
Actually, no. BTRFS has OK performance and a good CPU/RAM footprint. The problem is that it isn't reliable at all, and I'm not just talking about RAID5/6; I've seen scenarios that were unacceptable for production, at least for me (e.g. mirrors that weren't actually mirrored properly, borked metadata mirrors on an idle machine, and the list goes on).
"Not reliable at all" is equal to garbage. It is garbage, period.

Furthermore, there are well-known cases where you specifically do not want to use Btrfs at all, because it would greatly decrease your performance!

One popular example is putting the data storage of a modern RDBMS, for instance Postgres, on it. According to the well-known Postgres consultancy 2ndQuadrant, Postgres and ZFS perform really well together with ZFS' COW features turned on. So snapshots and all the advanced stuff.

Postgres on Btrfs, however, is a different matter. Btrfs will make your database slow as molasses, and there are benchmarks around that clearly show it. That is still true today. The first recommendation is always: don't put your database on Btrfs. The second, if you still want to: disable the COW features on those files. Which basically contradicts the whole reason you use a COW file system in the first place, and means nothing more and nothing less than that putting it on Ext4 or Xfs suddenly becomes an option again. Because if you have to disable COW entirely to get decent performance out of it, you can just as well use the better-performing file systems instead.

The other area is VM images, and running your VMs from a Btrfs file system. Again, this will be slow as a snail on Btrfs, and the official recommendation is, again, to disable COW entirely on these images.
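The usual way to do that, per the Btrfs documentation, is the No_COW file attribute. A sketch, assuming a Btrfs filesystem mounted at the hypothetical path /srv/vm; note that `chattr +C` must be applied to the directory before the image files are created, since it has no effect on existing non-empty files:

```shell
# Assumes a Btrfs mount at /srv/vm (hypothetical path, adjust to taste).
mkdir -p /srv/vm/images
chattr +C /srv/vm/images    # new files created here inherit the No_COW attribute
lsattr -d /srv/vm/images    # the 'C' flag should now be listed
# Any image created from now on is nodatacow, e.g.:
#   qemu-img create -f raw /srv/vm/images/guest.img 20G
```

The trade-off, as the post above notes: with COW disabled you lose checksumming and snapshot-friendly behavior for exactly those files.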

Both use cases suffer from heavy fragmentation of big files, which in turn leads to read-performance degradation. Btrfs simply does not work well with any files that see a lot of random writes. Source: https://btrfs.wiki.kernel.org/index.php/Gotchas#Fragmentation

Heck, according to the official FAQ it even struggles with Firefox and Chromium profiles.
 
Well, you told an interesting story, but where did you get that from? If there's a Wikipedia article about it, link to it... If there was an announcement on the project page, link to that. Links give credibility to the stories being shared.
Well, you want examples? OK. CoreOS, which was a lightweight Linux distribution for containerization until its discontinuation in 2019, moved to Btrfs as its default file system in 2014.

That went so well that the distribution's support forums became riddled with complaints about Btrfs problems, culminating in a petition to drop it entirely. Which they did in 2015. To put that into perspective: Chris Mason, the author of Btrfs, declared it stable (aside from RAID5/6) back in 2014, the same year CoreOS made that move.

You can read the original discussion about it here, also indicating the vast amount of problems and errors: https://groups.google.com/g/coreos-dev/c/NDEOXchAbuU

The thing is that most major Linux distributions have moved away from Btrfs. Red Hat, so IBM, deprecated it. It makes sense, since they have Dave Chinner, and therefore a lot of Xfs knowledge, in house.

So far, a few major distributions have considered making Btrfs the default file system for new installations. Fedora delayed that move for years (it should be done by now), and SuSE just ships it. But these are the exceptions; all the other distributions mostly flock to ext4, some to Xfs, as the default file system.

Maybe you've also heard of QNAP, who mostly produce SME NAS devices? They were so fed up with Btrfs that they put together a dedicated page titled "Why doesn't QNAP NAS use the Btrfs file system?". I am pretty sure they know well enough what they are doing, and why they refuse to use or ship it. Of course there's also a lot of marketing blabla in it, but they too now have a COW file system: ZFS. Like most Linux-based NAS vendors nowadays.

For a rather more balanced take, just have a look at the Debian wiki page about Btrfs. It gives a very long, and quite up-to-date, list of reasons not to use Btrfs.
 
When it comes to filesystems, reliability is vastly more important than performance; indeed, the latter comes even after functionality.

Short version: ZFS on BSD, at least up to 11, works very well.
Btrfs, DragonFly, and ZFS-on-Linux, not so much => discarded
 
It depends on how important the data is to keep, and how large the damages to pay would be in the event of a problem.
Anything is fine for home use, even FAT16.
With a bank's data, the matter is much more delicate.
 
What are the RAM requirements for ZFS? Does it depend on …

Food for thought (not a recommendation), here's ZFS enabled in single user mode on a machine with 1 GB memory:

[screenshot: ZFS enabled in single user mode, showing ~50 M used]

– between 46 and 50 M used by the OS :cool:

After using a find command to walk /, the amount grew to 132 M.

… heard that ZFS needs at least 1GB of RAM to function.
Is that still the case for 13.0 …

Knocking about somewhere I have a shot or two of KDE Plasma running Firefox, LibreOffice, GIMP and a few other applications in a virtual machine with around 1 GB memory.

I might never rediscover those shots (limitations of XenForo). Instead, here's a shot of Firefox ESR and LibreOffice with x11-wm/twm on a different machine with ZFS and just 1 GB memory:

[screenshot: buba and kiki.png]


Playback of <https://www.ted.com/talks/james_geary_metaphorically_speaking?language=nl>: faultless.

ARC

… I hear from other people that the situation is much improved with 13.0. Still a good idea to limit ARC to remove any potential contention. …

<https://old.reddit.com/comments/pvsu2w/-/hecksww/?context=1> re:
  • vfs.zfs.arc.sys_free
  • vfs.zfs.arc_free_target
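Beyond those sysctls, the blunt way to limit the ARC on FreeBSD 13 is to pin its ceiling with a loader tunable. A sketch for a 1 GB machine; the 512 MiB value is an illustrative guess for this thread's scenario, not a recommendation, and the legacy alias vfs.zfs.arc_max should also still be accepted on 13.x:

```shell
# /boot/loader.conf -- cap the OpenZFS ARC (value in bytes)
vfs.zfs.arc.max="536870912"    # 512 MiB; takes effect at next boot
```

After rebooting, `sysctl vfs.zfs.arc.max` should report the new ceiling.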

L2ARC


<https://forums.FreeBSD.org/threads/zfs-faq.74965/post-532161>
 
a shot or two of KDE Plasma running Firefox, LibreOffice, GIMP and a few other applications in a virtual machine with around 1 GB memory.

I might never rediscover those shots

Found. My late-August screenshot was of CultBSD, which used UFS at the time. (Not ZFS, sorry. My memory was mistaken.)

The October shots above are truly ZFS (OpenZFS in FreeBSD 13.0-RELEASE-p4) on a machine with just 1 GB memory.
 