ZFS zfsboot - Is it possible to have zfsboot import/mount filesystems under the boot pool?

I've installed FreeBSD into a BSD disklabel partition within an MBR slice, on an SSD in a laptop that uses BIOS/MBR firmware. After some consideration, I've installed the main FreeBSD boot loader to the start of the SSD - overwriting anything Windows 10 had installed there.

I've installed zfsboot to the FreeBSD slice. The first stage of the boot loader runs successfully, then starts zfsboot.

Due to what seems to be a side effect of the filesystem layout I'd created under the root ZFS filesystem for this installation, zfsboot then fails, unable to continue the boot process.

In the zfsboot console, I'm presented with what looks like a prompt for selecting a boot filesystem. If I enter a question mark as the first character at the prompt, followed by a filesystem path, I can browse the filesystems from zfsboot's perspective. Browsing this way, I see that a 'boot' directory exists under the root filesystem, but what was installed to the boot dataset is not visible there. It looks like the boot filesystem has not been mounted by zfsboot.

When I created the root filesystem for this installation, I assumed that zfsboot would do something like a normal 'zpool import' on the pool, automatically mounting any mountable datasets under the root dataset. After looking at this boot failure, however, I'm not certain that zfsboot does so.

Could it be that I've missed a parameter somewhere that would cause zfsboot to run something like 'zpool import'?

As a workaround to this issue, I may reinstall the base system, kernel, and the packages installed so far, using a substantially different filesystem layout in the ZFS pool. I'm not entirely certain how to put that layout together, however, compared to the layout I'd tried. I believe I should at least keep /etc and /boot on the same root dataset, together with anything needed by the host's init process, at least up to the point where fstab is processed and any automatic mounts are run.
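
For reference, here is a minimal sketch of something closer to the stock bsdinstall root-on-ZFS layout, where /boot and /etc simply live on the boot environment dataset - the names here are illustrative, not the installer's exact defaults:

Code:
zfs create -o mountpoint=none tpool/ROOT
# the boot environment dataset holds /, /boot, and /etc together
zfs create -o mountpoint=/ tpool/ROOT/freebsd
# intermediate datasets exist for property inheritance, but stay unmounted
zfs create -o mountpoint=/usr -o canmount=off tpool/usr
zfs create tpool/usr/local
zfs create -o mountpoint=/var -o canmount=off tpool/var
zfs create tpool/var/log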

The following is the present layout of this ZFS pool; it was created with some hacks in BASH. The pool is mounted under an altroot of '/mnt' here.

Code:
[gimbal@tblk ~ ]$ zfs list -t filesystem -o name,canmount,mountpoint -r tpool/ROOT/freebsd

NAME                                      CANMOUNT  MOUNTPOINT
tpool/ROOT/freebsd                              on  /mnt
tpool/ROOT/freebsd/boot.orig                    on  /mnt/boot.orig
tpool/ROOT/freebsd/compat                       on  /mnt/compat
tpool/ROOT/freebsd/compat/linux                 on  /mnt/compat/linux
tpool/ROOT/freebsd/dev                          on  /mnt/dev
tpool/ROOT/freebsd/etc                          on  /mnt/etc
tpool/ROOT/freebsd/media                        on  /mnt/media
tpool/ROOT/freebsd/mnt                          on  /mnt/mnt
tpool/ROOT/freebsd/opt                          on  /mnt/opt
tpool/ROOT/freebsd/opt/local                    on  /mnt/opt/local
tpool/ROOT/freebsd/proc                         on  /mnt/proc
tpool/ROOT/freebsd/tmp                          on  /mnt/tmp
tpool/ROOT/freebsd/usr                          on  /mnt/usr
tpool/ROOT/freebsd/usr/local                    on  /mnt/usr/local
tpool/ROOT/freebsd/usr/local/etc                on  /mnt/usr/local/etc
tpool/ROOT/freebsd/var                          on  /mnt/var
tpool/ROOT/freebsd/var/audit                    on  /mnt/var/audit
tpool/ROOT/freebsd/var/cache                    on  /mnt/var/cache
tpool/ROOT/freebsd/var/cache/ccache             on  /mnt/var/cache/ccache
tpool/ROOT/freebsd/var/cache/squid              on  /mnt/var/cache/squid
tpool/ROOT/freebsd/var/crash                    on  /mnt/var/crash
tpool/ROOT/freebsd/var/db                       on  /mnt/var/db
tpool/ROOT/freebsd/var/db/entropy               on  /mnt/var/db/entropy
tpool/ROOT/freebsd/var/db/etcupdate             on  /mnt/var/db/etcupdate
tpool/ROOT/freebsd/var/db/freebsd-update        on  /mnt/var/db/freebsd-update
tpool/ROOT/freebsd/var/db/mysql                 on  /mnt/var/db/mysql
tpool/ROOT/freebsd/var/db/pkg                   on  /mnt/var/db/pkg
tpool/ROOT/freebsd/var/db/ports                 on  /mnt/var/db/ports
tpool/ROOT/freebsd/var/db/samba4                on  /mnt/var/db/samba4
tpool/ROOT/freebsd/var/empty                    on  /mnt/var/empty
tpool/ROOT/freebsd/var/log                      on  /mnt/var/log
tpool/ROOT/freebsd/var/mail                     on  /mnt/var/mail
tpool/ROOT/freebsd/var/run                      on  /mnt/var/run
tpool/ROOT/freebsd/var/run/user                 on  /mnt/var/run/user
tpool/ROOT/freebsd/var/service                  on  /mnt/var/service
tpool/ROOT/freebsd/var/spool                    on  /mnt/var/spool
tpool/ROOT/freebsd/var/tmp                      on  /mnt/var/tmp
After seeing how zfsboot handles this filesystem layout, I'm afraid it would not be a recommended layout, however.

I believe I've discovered at least two possible workarounds for this behavior in zfsboot. The first, sketched in shell below the list:

1) Move the affected dataset out of the way, e.g. 'zfs rename tpool/ROOT/freebsd/boot tpool/ROOT/freebsd/boot.orig'
2) Transfer the files from the renamed dataset into a directory on the root filesystem having the same name as the dataset's original mountpoint
3) Run 'ls -lod' on the old filesystem's mountpoint, to check for anything that may need to be set with chflags(1) on the new directory
4) Reboot, and repeat until figuring out how many datasets need to be folded into the root filesystem here
5) Optionally, destroy the dataset for the original filesystem
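
A rough sketch of that procedure in shell, assuming the dataset names from the listing above and the '/mnt' altroot; the tar options are one way to preserve permissions and file flags:

Code:
# move the dataset out of the loader's way
zfs rename tpool/ROOT/freebsd/boot tpool/ROOT/freebsd/boot.orig
# copy its contents into a plain directory on the root dataset
mkdir /mnt/boot
tar -C /mnt/boot.orig -cf - . | tar -C /mnt/boot -xpf -
# check for file flags that may need replicating with chflags(1)
ls -lod /mnt/boot.orig /mnt/boot
# after verifying a successful boot, optionally:
# zfs destroy tpool/ROOT/freebsd/boot.orig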

Alternately: reinstall, using a substantially different ZFS filesystem configuration.

If it's possible to retain the root filesystem as installed so far: is there any knob to toggle that would cause zfsboot to mount more filesystems before it tries to find the kernel?

On another laptop, I've installed FreeBSD as a dedicated OS in a GPT partition. That laptop has EFI firmware, and boots with gptzfsboot.

I plan on eventually using this laptop as a dedicated FreeBSD machine too, once the installation can boot. After that, I can work on moving all my user files from the Windows 10 installation into something under a virtual machine system.

Personally, I'm at least glad to see that it's possible to boot as far as zfsboot on this machine. Concerned about whether it would be possible to boot ZFS from something other than a primary MBR partition on this BIOS machine, I'd been considering installing Arch Linux alongside FreeBSD and Windows 10, in order to have GRUB2 available. That would entail a whole other lot of complexity, however - certain ZFS pool features aren't compatible with GRUB, so it would need separate boot and root ZFS pools, as well as some new configuration tooling for keeping the multiboot system manageable under GRUB. Then there are the particular quirks of ZFS on Linux and systemd.

This issue with dataset mounting aside, zfsboot works out of the box. I'm certain it'll work out even nicer once it boots the next stage of the boot loader process on this machine.
 
Simply don't try to be "too creative".
There is no reason to have /boot in a separate filesystem.

If you want to do that anyway, then you will need to get a much better understanding of how ZFS boot works (I can see that currently yours is quite poor, no offence).
 
Simply don't try to be "too creative".
There is no reason to have /boot in a separate filesystem.

I'm afraid this may have been a holdover from an earlier idea for booting this system with GRUB. I'd also thought it might be useful for keeping a separate backup of the boot filesystem, with zfs send/zfs receive.
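
For instance, the backup idea would have been something like the following, with 'backuppool' and the snapshot name as placeholders:

Code:
zfs snapshot tpool/ROOT/freebsd/boot@backup
zfs send tpool/ROOT/freebsd/boot@backup | zfs receive backuppool/boot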

After seeing at least some of how ZFS on Linux handles booting with ZFS on root, I'd thought it would not be a concern under FreeBSD - hence the question of whether it's possible to have zfsboot actually import the root pool.

I've not had an opportunity to look at bectl yet, as to how it manages boot environments. Perhaps it may serve to simplify some things.

If I were to try to boot from ZFS with GRUB2, it appears that it would need either separate boot and root pools, or a root pool limited to the zpool feature set supported by GRUB. Thus, in order to use the full feature set in the root pool, a separate boot pool would be needed. This could be a concern when installing a multiboot ZFS system, such as with any two or more of Linux, FreeBSD, and illumos on the same set of ZFS pools.

The following is an excerpt of /usr/share/zfs/compatibility.d/grub2 from an Arch Linux system (grub-libzfs 2.06 from the Arch AUR):

Code:
# Features which are supported by GRUB2
async_destroy
bookmarks
embedded_data
empty_bpobj
enabled_txg
extensible_dataset
filesystem_limits
hole_birth
large_blocks
lz4_compress
spacemap_histogram


To my understanding, any ZFS pool that GRUB must access has to be created with at most those supported features enabled, such as with 'zpool create -d'. Otherwise, GRUB may not be able to locate or load files on the boot filesystem. Once control is handed over to the OS kernel, however, any zpool features the kernel supports can be used on the pool holding the root filesystems. Presumably, the GRUB configuration itself would need to be located on a similarly GRUB-compatible ZFS pool, or on some non-ZFS filesystem.
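
As a sketch, a GRUB-readable pool might be created by disabling all features with -d and then enabling only those from the list above; 'bpool' and the device name are placeholders (on OpenZFS 2.x, 'zpool create -o compatibility=grub2' may serve the same purpose):

Code:
zpool create -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    bpool /dev/ada0p2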

I believe GRUB supports checksum={sha256,sha512}. It does not appear to support the skein checksum, as a feature or an option.

Using the FreeBSD boot loader throughout, this may not be a concern. Sure, it would be without the simple nicety of GRUB's user interface. But it would also be without the additional complexity of boot loader configuration and maintenance (in Linux), when booting FreeBSD.
 
  1. lsblk
  2. freebsd-version -kru ; uname -aKU
  3. zfs version
Sure, but I don't have lsblk installed on this machine.

Code:
#---- gpart show ada0
=>        63  1953525105  ada0  MBR  (932G)
          63        1985        - free -  (993K)
        2048      102400     1  ntfs  (50M)
      104448   902215680     2  ntfs  (430G)
   902320128        4096        - free -  (2.0M)
   902324224  1050165248     3  freebsd  [active]  (501G)
  1952489472        1024        - free -  (512K)
  1952490496     1034240     4  !39  (505M)
  1953524736         432        - free -  (216K)

#---- gpart show ada0s3
=>         0  1050165248  ada0s3  BSD  (501G)
           0        8192          - free -  (4.0M)
        8192    25157632       1  freebsd-swap  (12G)
    25165824        1032          - free -  (516K)
    25166856  1024998392       2  freebsd-zfs  (489G)

#---- freebsd-version -kru
12.3-STABLE
12.3-STABLE
12.3-STABLE
#---- uname -aKU
FreeBSD tblk.cloud.thinkum.space 12.3-STABLE FreeBSD 12.3-STABLE stable/12-n1855-ce99de0241e RIPARIAN  amd64 1203505 1203505

zfsboot was installed to ada0s3. The initial FreeBSD boot loader, boot0, was installed to ada0.
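
For reference, the installation was along the lines of the examples in gpart(8) and zfsboot(8), adapted to the slice - an approximation, not a literal transcript:

Code:
# boot0 into the MBR of the disk
gpart bootcode -b /boot/boot0 ada0
# zfsboot into the FreeBSD slice, per the zfsboot(8) example
dd if=/boot/zfsboot of=/dev/ada0s3 count=1
dd if=/boot/zfsboot of=/dev/ada0s3 iseek=1 oseek=1024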

I'm not sure where a 'zfs version' command would come from - perhaps from OpenZFS, in FreeBSD 13? The nearest equivalent here:
Code:
$ zpool get version tpool
NAME   PROPERTY  VALUE    SOURCE
tpool  version   -        default

The ZFS pool was created with ZFS from the base system in this version of FreeBSD. It has all features enabled for that ZFS version, under this build of FreeBSD 12.3 from stable/12.
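
If it helps, the full set of feature flags on the pool can be listed with something like:

Code:
zpool get all tpool | grep feature@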

For the zpool itself:

Code:
$ zdb -Ce tpool

MOS Configuration:
        version: 5000
        name: 'tpool'
        state: 0
        txg: 14093
        pool_guid: 17267855490493259123
        hostid: 3225233781
        hostname: 'tblk.cloud.thinkum.space'
        com.delphix:has_per_vdev_zaps
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 17267855490493259123
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 178890040050964708
                path: '/dev/label/TPOOL'
                phys_path: 'id1,enc@n3061686369656d30/type@0/slot@1/elmdesc@Slot_00/s3/b'
                whole_disk: 1
                metaslab_array: 131
                metaslab_shift: 32
                ashift: 12
                asize: 524794200064
                is_log: 0
                create_txg: 4
                com.delphix:vdev_zap_leaf: 129
                com.delphix:vdev_zap_top: 130
        features_for_read:
            com.delphix:embedded_data
            com.delphix:hole_birth

Version codes from sysctl, on the machine:

Code:
$ sysctl -a vfs.zfs.version
vfs.zfs.version.zpl: 5
vfs.zfs.version.spa: 5000
vfs.zfs.version.acl: 1
vfs.zfs.version.ioctl: 7


I've booted the machine from an external disk with this version of FreeBSD installed. The zpool was created under that installation. The same build has been installed to the ZFS pool illustrated above. zfsboot and boot0 were installed onto ada0s3 and ada0 from this same version.

If it would be of interest, I can share the KERNCONF and src.conf etc. It's not an entirely succinct configuration insofar as the toolchain for the build goes, though nothing too extraordinary otherwise. The build was produced with a version of LLVM from ports, under cross-compiler bindings (XCC and others), although for the same host and target architecture (amd64). It was not a meta-mode/dirdeps build - just the 'buildworld' and 'buildkernel' targets under the local build config.

This build was produced with WITH_CTF, though not a lot of debug info otherwise. If the debug info could affect how the root pool is loaded, I can set up a new build.
 
As an update, I've finally worked out some issues with the GLX configuration on the machine, having figured out a way to make sure the Mesa GLX libraries are used - with 'include /usr/local/etc/libmap.d/mesa.conf' as the single entry in /etc/libmap.conf. I'll try working out this ZFS root filesystem config shortly.
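
That is, the entire /etc/libmap.conf here is just:

Code:
include /usr/local/etc/libmap.d/mesa.conf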

As for how zfsboot handles the root pool, I plan on reinstalling after rolling those root filesystem datasets back to a snapshot made just after they were created. Subsequently, I'll remove some of the datasets - e.g. /etc, and maybe some datasets mounted under /var - such that their contents will be available to the boot loader and the kernel without any additional mount calls, then set the datasets for intermediate directories, e.g. /var and /usr, to canmount=off.
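
Sketched in shell, with a hypothetical snapshot name '@created', the plan would be roughly:

Code:
# roll each dataset back to the snapshot taken just after creation
for ds in $(zfs list -H -o name -r tpool/ROOT/freebsd); do
    zfs rollback "${ds}@created"
done
# fold selected datasets back into the root dataset
zfs destroy tpool/ROOT/freebsd/etc
# keep intermediate datasets for property inheritance, unmounted
zfs set canmount=off tpool/ROOT/freebsd/usr
zfs set canmount=off tpool/ROOT/freebsd/var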

Once the filesystem is sufficiently overoptimized lol, the reinstallation could be managed with some tooling using /usr/src/release/Makefile, thus obviating the need for an NFS mount for it. Hopefully it'll work out then, once the kernel and base system parts are installed on the root filesystem.

I've managed to cobble together some BASH scripting for handling the creation of ZFS datasets and dataset properties; it's made that much of the process fairly simple, up to the point of debugging the config lol. Though it's been useful locally and it does not require anything from ports other than BASH or ZSH - it uses arrays - it's not something I'm certain about sharing as a source project. Its source code is really not succinct, tbh. I hope to work out a more succinct configuration syntax and something like an API, with something in Ruby, once I've figured out the FreeBSD boot configuration for this machine.
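
As an outline of the approach - not the actual script - the array-driven part might look like this, with dataset names and properties purely illustrative:

Code:
#!/usr/bin/env bash
# each entry: a dataset name, then zero or more -o property settings
datasets=(
    "tpool/ROOT/freebsd/var/log -o canmount=on"
    "tpool/ROOT/freebsd/var/tmp -o canmount=on -o exec=on"
)
for entry in "${datasets[@]}"; do
    # deliberate word-splitting of the entry into name and options
    set -- ${entry}
    name="$1"; shift
    zfs create -p "$@" "${name}"
done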

For as complex a thing as it can be to manage a multiboot configuration with GRUB and any second or third OS, hopefully the maintenance scripting won't add to that complexity LoL. If I've understood any of the issue up to this point, perhaps it would not be a lot different using GRUB instead of boot0.

I suppose it's not as simple as just installing /boot on the boot pool, at least here.
 
Simply don't try to be "too creative".
There is no reason to have /boot in a separate filesystem.

If you want to do that anyway, then you will need to get a much better understanding of how ZFS boot works (I can see that currently yours is quite poor, no offence).
I have root on ZFS and /boot in a separate UFS filesystem in order to use Linux GRUB for multibooting Linux/FreeBSD. But there was a small learning curve to overcome.
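
Roughly, the menu entry looks like the following - the slice/partition numbers, the module list, and the dataset name here are illustrative, not my exact config:

Code:
menuentry "FreeBSD" {
    insmod ufs2
    # the UFS filesystem holding a boot/ directory tree
    set root=(hd0,msdos3,bsd1)
    kfreebsd /boot/kernel/kernel
    kfreebsd_module_elf /boot/kernel/opensolaris.ko
    kfreebsd_module_elf /boot/kernel/zfs.ko
    set kFreeBSD.vfs.root.mountfrom=zfs:zroot/ROOT/default
}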
 