Does loader.efi support booting ZFS on a different whole disk?

Without trying to figure out "why would I do that?" (it's just my personal taste), I'm looking for a way to tweak Installing FreeBSD Root on ZFS using GPT by giving the ZFS root a whole disk instead of just a partition. ¹

I had no problems putting the swap partition on another disk. But the EFI partition doesn't look so straightforward, at least under VirtualBox 6.1.50 (I know, I know, it's old!).

So:
  1. Could I be facing a VirtualBox 6.1.50 limitation, since its UEFI/EFI support is "experimental"?
  2. Would there be some other sort of limitation regarding the controller (IDE, SCSI, …)? ²
  3. Could it be that this desired scenario of mine isn't a supported FreeBSD option?
  4. If none of the above is an issue, can someone please shed some light on the steps to achieve it?
Thank you!


¹ I'm on FreeBSD 14.2-RELEASE.
² If I choose EFI for a VirtualBox 6.1.50 FreeBSD guest, I can only boot successfully if the ISO is on an IDE controller.
 
On a boot disk you have to have a partition into which the FreeBSD bootloader can be stored. So you can't give the whole disk to ZFS.
 

So it looks like a "boot disk" can't contain only the ESP (EFI) partition; it must also contain a freebsd-zfs partition.

Meanwhile, before your reply (thanks), I was browsing loader.efi(8), hoping there was a way to do this by setting some sort of "environment variable" such as "rootdev=...".
 
Your assertions ignore UFS, but that's just a quibble.

The behaviour of the UEFI Boot Manager, which is implemented in firmware, can be modified by changing some NVRAM parameters, like boot order.

What is a UEFI boot disk? I think it's a "disk" with a GPT partition table whose first partition is an EFI system partition, and that partition contains the boot loader, loader.efi(8). The EFI partition must exist in order for loader.efi to be present. That takes us back to my original post.
 

Sure, I'm focusing on ZFS only, and I'm aligned with your thinking on what a "UEFI boot disk" is.

Of course, "up to this point" one could only create another (freebsd-zfs) partition on that same disk for the ZFS root.

But suppose I leave the "UEFI boot disk" (let's call it disk0) with just the EFI partition. Now also suppose I use another disk (let's call it disk1) and create the ZFS root pool on the whole disk1 device (not just a 'freebsd-zfs' partition). The intended question is: is there a way of telling loader.efi(8) to continue the next phase of loading from disk1?
 
boot1.efi(8) contains a few interesting statements (in reverse order):

boot1.efi uses the following sequence to determine the root file system
for booting:

• If ZFS is configured, boot1.efi will search for zpools that are
bootable, preferring the zpool on the boot device over the others.

Before looking for the boot device, boot1.efi does the following
initialization
[...]
• Discovers all possible block devices on the system.

• Initializes all file system modules to read files from those devices

boot1.efi has been deprecated and will be removed from a future release.
loader.efi(8) handles all its former use cases with more flexibility.

There might be a path that could lead to something... ;)
 
As can be seen in the following screenshot, this is what happens when the VirtualBox 6.1.50 EFI firmware calls the FreeBSD EFI loader (loader.efi), where the "UEFI boot disk" contains only an EFI partition on disk nda0 and where I have set an environment variable rootdev=nda1, similarly to what is prescribed in loader.efi(8):

EFI firmware -> EFI loader


Of course, my "rootdev=" setting wasn't acknowledged, and I'll have to figure out why. But the screenshot above shows that loader.efi was successfully started by the firmware and could read my environment variable (although its value was "meaningless" at this point, as I'm still attempting to learn how all this works).

Of course, nda1 was an initial guess, attempting to tell loader.efi to look for the ZFS pool on that whole nda1 device. I haven't yet tested whether creating a freebsd-zfs partition for the ZFS pool on nda1 would let the loader finally discover the pool; but then I would have failed in my attempt to get rid of partitions on the ZFS disk.

As per the image above, the loader gave up after failing to find /boot/lua/loader.lua, and I'm now wondering whether this diagnostic means that this file should be added to the EFI partition on nda0, or whether it is precisely what the loader was trying to find on rootdev (which it could not use, given my wrong or unsupported setting) in order to continue the process.
 
Indeed, boot1.efi(8) presents one statement that catches my attention:
• If ZFS is configured, boot1.efi will search for zpools that are
bootable, preferring the zpool on the boot device over the others.
whatever that precisely means 🤔
  1. How does it recognize that "ZFS is configured"?
  2. What are the search criteria for the zpools? Does it just look for freebsd-zfs partitions, or something else?
So far I can only guess that it gathers all the devices and then attempts to scan partitions on each, unless it has more knowledge about detecting zpools on whole devices.
 
"rootdev=nda1" (rootdev=diskXpY) has the wrong syntax; that syntax is valid for UFS. For ZFS, "rootdev" requires the following format: "zfs:pool/filesystem" (i.e. rootdev=zfs:zroot/ROOT/default:; mind the colon at the end of the line).

See loader_simp(8), "BUILTIN ENVIRONMENT VARIABLES", "currdev", and "ZFS FEATURES".
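For reference, the whole loader.env on the ESP could be as small as this single line (the pool name zroot and the boot environment ROOT/default are illustrative assumptions; note the trailing colon):

```shell
# Contents of /boot/efi/efi/freebsd/loader.env (ESP mounted on /boot/efi).
# The value uses the loader's zfs:pool/dataset: device syntax; the
# trailing colon is required. Pool/dataset names here are illustrative.
rootdev=zfs:zroot/ROOT/default:
```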

To change boot disks in a nextboot(8) style, you could create aliases on both systems that rewrite /efi/freebsd/loader.env, i.e.:

With zroot on disk0 and zroot1 on disk1, assuming the ESP is mounted on /boot/efi on both disks:
Code:
alias zdisk0="echo rootdev=zfs:zroot/ROOT/default: > /boot/efi/efi/freebsd/loader.env"
alias zdisk1="echo rootdev=zfs:zroot1/ROOT/default: > /boot/efi/efi/freebsd/loader.env"
On csh/tcsh skip the equal sign.
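As a quick sanity check of what such an alias writes, the following sketch uses a scratch file under /tmp in place of the mounted ESP path, with the zroot1 pool name taken from the aliases above:

```shell
# Write the rootdev line the way the zdisk1 alias would, into a scratch
# file standing in for /boot/efi/efi/freebsd/loader.env, then check that
# the value has the zfs:pool/dataset: form with the required trailing colon.
env_file=/tmp/loader.env
echo 'rootdev=zfs:zroot1/ROOT/default:' > "$env_file"
if grep -q '^rootdev=zfs:.*:$' "$env_file"; then
    echo "loader.env OK"
fi
# prints "loader.env OK"
```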
 
Unless VirtualBox has limitations in its UEFI firmware that cause problems, you should be able to achieve the desired effect by specifying the -k flag to efibootmgr(8) when creating a boot entry.

The loader code for selecting the rootdev is in the function find_currdev() in the file
/usr/src/stand/efi/loader/main.c
There is an interesting comment there that may or may not be relevant: "We do not search other disks because it's a violation of the UEFI boot protocol to do so."
 
you should be able to achieve the desired effect by specifying the -k flag to efibootmgr(8) when creating a boot entry.
Nice!

Indeed, creating UEFI boot menu entries is a much better solution than my alias suggestion from post #11.

But I believe that, in this case, the efibootmgr(8) [-e env] option is better suited.

This requires naming the pools differently. In this example: zrootd0 and zrootd1.

Disk 0 (ada0), pool name zrootd0:
Rich (BB code):
# efibootmgr -c -a -L FreeBSDzrootd0 -e rootdev=zfs:zrootd0/ROOT/default: -l ada0p1:/efi/freebsd/loader.efi

Disk 1, pool name zrootd1:
Rich (BB code):
# efibootmgr -c -a -L FreeBSDzrootd1 -e rootdev=zfs:zrootd1/ROOT/default: -l ada0p1:/efi/freebsd/loader.efi
Mind the colon at the end of the "rootdev" value; that's important.

To choose from the VBox UEFI boot manager, hit "Esc" repeatedly when the VM starts. In case you miss it, restart at the FreeBSD boot menu.

This was tested in a VirtualBox VM, so no UEFI firmware limitations.

Test setup:
- two disk VM
- install menu guided Root-on-ZFS with the installer .iso on disk0
- after installation finishes on disk0, create a GPT and a freebsd-zfs partition on disk1
- create the "zrootd1" pool on disk1p1
- zfs snapshot -r zrootd0/ROOT/default@sn1
- zfs send -R zrootd0/ROOT/default@sn1 | zfs receive -F zrootd1
- zpool set bootfs=zrootd1/ROOT/default zrootd1
- on zrootd1, change the hostname
- create UEFI boot menus
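The snapshot/send/receive steps above can be sketched as a dry run that only prints the commands for review before touching a real pool (pool names zrootd0/zrootd1 and snapshot name sn1 taken from the list):

```shell
# Dry run: compose and print the replication commands from the test
# setup instead of executing them; nothing here modifies a pool.
snap="sn1"
src="zrootd0/ROOT/default"
dstpool="zrootd1"

echo "zfs snapshot -r ${src}@${snap}"
echo "zfs send -R ${src}@${snap} | zfs receive -F ${dstpool}"
echo "zpool set bootfs=${dstpool}/ROOT/default ${dstpool}"
```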
 