Setup with root on ZFS, booting considerations

Hello to all,

this is my first post here. I hope the following question isn't in an inappropriate forum section.

I'm thinking about trying FreeBSD for a small four-drive NAS to take advantage of the ZFS filesystem. Previously I've built a number of similar NASes with Linux and dmraid. There I typically sliced all the hard drives so that the start (e.g. 10 GB) of each drive in the array was dedicated to system partitions and the rest was for user data. The /boot partition, the loader (and possibly also an EFI FAT32 partition on some systems) were then located on a permanently connected USB flash drive, which was set as bootable in the BIOS/EFI.
Such a setup had two advantages: first, it wasn't prone to hardware/BIOS-related boot issues when a drive failed (e.g. a boot stalling on a half-dead drive that happens to have the highest boot priority), and second, it wasn't necessary to manually sync the loader or EFI partitions after each update of the involved packages or the kernel. Of course the flash drive is then also a single point of failure, but I haven't had any issues with that over the years (writes to it are rare, so wear is very low), plus you can prepare a backup image if necessary.
I'm aware the first advantage is rather unrelated to the OS and depends only on the hardware, but what about the second one? How does that work on FreeBSD with a ZFS root?
For example, suppose you build a four-drive NAS without an additional USB drive and use the guided root-on-ZFS installation in bsdinstall, say with RAID-10 (two mirror vdevs). Will it install the boot loader so that each of those four drives is bootable? And does that survive system updates and release upgrades, so that everything stays in sync?
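(Once I have a test install, I assume something like the following would show whether each pool member actually received boot partitions. The disk names ada0..ada3 and the pool name zroot are just placeholders for whatever the installer ends up using.)

    # Show the partition table of every disk; a per-drive-bootable layout
    # should have an 'efi' and/or 'freebsd-boot' partition on each member.
    gpart show ada0 ada1 ada2 ada3

    # Cross-check which disks actually belong to the mirror vdevs.
    zpool status zroot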

Of course, I plan to do some further testing, just that topic came to my mind. I will be happy for comments with your experience.

Thanks,

Michal
 
Hi Graham, thank you for the welcome and the links you provided; those are very useful.

I definitely need to read through the manual pages mentioned in the first linked bug. Funny, coming from the Linux world I'm not so used to reading manpages for generic concepts and their descriptions. I used FreeBSD as my first UNIX-like system many years ago, right after high school, but back then there was no EFI or ZFS. The big topics then were the forthcoming move from the 4.x to the 5.x release and SMP; I recall hacking on some GEOM stuff for software RAID, and then also my first look at the boot loader (ahh Forth, WTH :)).

Anyway, back to the topic: a USB drive for the loader and EFI still looks like a sane idea. I need to study how to manually install FreeBSD on a ZFS root with the loader on a USB drive, because I assume bsdinstall doesn't provide such an option.
Then I will prepare a suitable virtual test environment and try it out.

Thanks again,

Michal
 
If you choose EFI + CSM boot at setup, the installer will create an EFI partition and put the MBR boot loader on every disk.
You just have to make sure to update both, at least on major release upgrades.
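A rough sketch of what that refresh can look like after an upgrade (the disk names and partition indices are assumptions; check your actual layout with gpart show first, and note that some installs keep the UEFI loader at /EFI/freebsd/loader.efi instead of the default BOOTX64.EFI path):

    # Assumed layout: p1 = efi, p2 = freebsd-boot on each of ada0..ada3.
    for disk in ada0 ada1 ada2 ada3; do
        # refresh the legacy (CSM/MBR) boot code on the freebsd-boot partition
        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ${disk}
        # refresh the UEFI loader on each ESP
        mount -t msdosfs /dev/${disk}p1 /mnt
        cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.EFI
        umount /mnt
    done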

As for the bug linked by grahamperrin:
TBH - for production/critical systems I'd still stay with the 12.x-RELEASE branch. There are also still some open problems that seemingly were introduced with the switch to the ZoL codebase on 13.0. So if you just want/have to have a system that 'just works'™ in that regard, stay with 12.x (which still uses the FreeBSD/illumos ZFS fork/codebase).
 
Thanks, sko. You've been posting about ZFS for longer than me; I trust your advice.

… still some open problems that seemingly were introduced with the switch to the ZoL codebase on 13.0. …

FWIW, ZFS fs@ FreeBSD bug reports that relate to 12.x: <https://bugs.freebsd.org/bugzilla/b...ABLE&version=12.3-RELEASE&version=12.3-STABLE>



I'm knowingly bugged by just two things with OpenZFS in FreeBSD 14.0-CURRENT. Both are close to negligible. I assume that both are reproducible with:
  • FreeBSD 12.3-RELEASE
  • FreeBSD 13.0-RELEASE
  • FreeBSD 13.1-PRERELEASE
zpool-iostat(8): unreasonably/impossibly high alloc and free measurements for two cache devices (simple USB thumb drives) · Issue #12779 · openzfs/zfs

FreeBSD: zfskeys_enable: encryption key not loaded for a file system within a pool that imports automatically at startup · Issue #13038 · openzfs/zfs

Issue 12779 is rare (it hits maybe once in two months). Both issues are easily worked around.
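(For the key-loading issue, one obvious stopgap is simply loading the keys by hand after boot; that's my own assumption of a workaround, the issue report has the actual details:)

    # load all encryption keys that are available (prompts for passphrases,
    # or reads the configured keylocation files), then mount the datasets
    zfs load-key -a
    zfs mount -a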
 
For this and other reasons, I made the Skunk Cloner.

It does not have the issue grahamperrin mentioned. If you choose to clone to a multi-disk mirror, each drive gets its boot code (both CSM and UEFI). So this issue alone is imho not a reason not to use it with 13. It does not add the EFI partition to the fstab (why should this be necessary at all, as the bootstrap image is loaded just once by the UEFI firmware?).
Ofc, there are other things that could make it sensible to wait until 13.1 has settled for a while before using it on a server.

You can clone from and to drives, USB sticks, and memory cards. So you can even take an SD card, clone your fully configured desktop system onto it, boot the card on your laptop, and clone the system from that onto the laptop's SSD, saving a lot of installation and configuration time. This is very convenient, for example, if you need to quickly set up a number of office computers without using monsters like Puppet, or to stash away a ready-to-use backup clone of your server.
 
It does not add the EFI partition in the fstab (why should this be necessary at all, as the bootstrap image is loaded just once by the UEFI BIOS).
The only reason I can think of is to update it or tweak settings; I'd put it in with "noauto" so that you need to mount it manually, which gives it a bit of separation.
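For example, a single line of that sort in /etc/fstab (device name and mount point are placeholders; adjust to your layout):

    # keep the ESP out of the automatic mounts; mount it by hand when updating the loader
    /dev/ada0p1    /boot/efi    msdosfs    rw,noauto    0    0

Updating the loader is then just a matter of mounting /boot/efi, copying the new /boot/loader.efi over, and unmounting it again.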
 
IIRC there are still unresolved performance issues that might occur on 13.0 (I think they were mainly RAIDZ-related). This was also discussed here on the forum, but I couldn't dig up the specific thread from a quick search...

TBH - given the fact alone that with the switch to ZoL, OpenZFS is now a "Linux-centric" codebase (or at least one with lots of commits from the Linux folks), and given the (compared to BSD/illumos) rather poor code-quality standards and the usual complete disregard for other OSes ("works for me, don't care about your machine") in Linux-centric projects, I'm VERY cautious about letting ZoL onto production machines until the dust has completely settled and it has proven to work without breaking things.

I'm running 13.0-RELEASE on 3 of my client machines and stumbled over a few issues at the beginning (some directly or indirectly ZFS-related, e.g. changed syntax/behaviour of the zfs command), most of which have since been addressed (except for this bug), but I'll still hold off until at least 13.1-RELEASE (+ some weeks...) before I upgrade our critical hosts. It may be just a gut feeling, but over the last ~15 years as a sysadmin it has usually proven correct to follow that feeling.
 
[...] with the switch to the ZoL codebase on 13.0. So if you just want/have to have a system that 'just works'™ in that regard, stay with 12.x (which still uses the FreeBSD/illumos ZFS fork/codebase).


The OpenZFS initiative functions as a central hub for all OpenZFS[2] development:
In the open source space, ZFS has gone through several transitions in its Code Flow. In the beginning illumos, forked from OpenSolaris, was central; now both Linux and FreeBSD are supported from the same "main level" repository. Matt Ahrens describes the Code Flow of OpenZFS from 2013-2020 on slides 11-13 of Code Flow:

GitHub - OpenZFS 2.0.0 - Brian Behlendorf - 30 Nov 2020:
• Unified code base and documentation - The ZFS on Linux project has been renamed OpenZFS! Both Linux and FreeBSD are now supported from the same repository making all of the OpenZFS features available on both platforms. #8987
  • Linux: compatible with 3.10 - 5.9 kernels
  • FreeBSD: Release 12.2, stable/12, 13.0 (HEAD)
OpenZFS intends to include OS X in the "main level" repository as of OpenZFS 3.0 (State of OpenZFS - OpenZFS Developer Summit 2021 (slides - video)).


ZFS is included in FreeBSD out of the box: naturally :). FreeBSD can use a different OpenZFS kernel module by specifying that in /boot/loader.conf (see: OpenZFS-Home » Getting Started » FreeBSD). As for OpenZFS development versions, you can also use sysutils/openzfs-kmod (kernel module) & sysutils/openzfs (userland).
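For illustration, switching to the port's module boils down to a couple of lines in /boot/loader.conf (this is my reading of the OpenZFS "Getting Started » FreeBSD" page; double-check against it before relying on this):

    # load the OpenZFS module from sysutils/openzfs-kmod instead of the
    # base system's zfs.ko
    openzfs_load="YES"
    # remove or comment out any existing zfs_load="YES" line so the base
    # module is not loaded as well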

In addition to the ZFS information in the FreeBSD Handbook, you will find extensive reference information and documentation on the OpenZFS website:
___
[1] not disputing any particular considerations about waiting for 13.1-RELEASE
[2] Note: (Open)ZFS in FreeBSD is not Oracle ZFS; from Wikipedia-ZFS:
According to Matt Ahrens, one of the main architects of ZFS, over 50% of the original OpenSolaris ZFS code has been replaced in OpenZFS with community contributions as of 2019, making “Oracle ZFS” and “OpenZFS” politically and technologically incompatible.

___
edit [dec 19]: added OpenZFS article; [dec 11]: minor text change
 