ZFS Boot Issue

When I import to a fresh directory on the live USB disk, the following directory structure exists, so it looks like the import isn't getting `zroot/DEFAULT/root` - could this be it? If so, any thoughts on how to correct it?
znew.png
 
The zroot/DEFAULT/root FS should be (and looks to be) set noauto, so it doesn’t mount unless explicitly mounted by itself; did you check the zpool bootfs setting? (The reason for the noauto setting is to support boot environments.)

You ran the installations of the boot partitions from the 13.0 bootcd?
 
How do I check the bootfs setting? The installation was originally run via the FreeBSD 13 boot image (USB).
 
Also, if /boot is in zroot/DEFAULT/root as it appears to be, how is it supposed to find the boot zfsloader if zroot/DEFAULT/root is set to noauto?
 
Because the bootfs property (use zpool get bootfs zroot to check it) should be pointing to it, and the loader knows to mount the bootfs filesystem at /.
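For reference, checking that from the live USB looks like the lines below; the second command is only needed if the property turns out to be empty or wrong, and zroot/ROOT/default is an assumption based on a default install - substitute whatever your boot environment dataset is actually called.

Code:
zpool get bootfs zroot
zpool set bootfs=zroot/ROOT/default zroot   # only if it is not already pointing at the boot environment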
 
I think I misread that, as there's no freebsd-boot zpool when trying to import it. I'll give the UEFI boot a try - I know the boot sectors are written with GPT and zfs:
View attachment 10594
So after talking with the #zfs guys on irc.libera.chat it seems like zfs has issues on MBR bootloaders where the info on an individual drive exceeds 2TB, and this needs to be converted to EFI. They suggested expanding the freebsd-boot partition to 256MB to facilitate this, taking a bit out of the freebsd-swap to accommodate it, then filling the freebsd-swap space back in with the remainder. I used double the swap space of the default so that shouldn't be an issue, but I don't know the commands for this. Does anyone know how to go from where I am (image in the reply) to an EFI-based zfs bootloader?
 
Can't find a bug report about gptzfsboot and disks > 2TiB.
The exact term is "BIOS" or "CSM" bootloaders, not MBR.

Doing that on 12 disks will be laborious (especially if it doesn't fix the problem).
You should try on the first disk only to begin with, and try to boot with efi.
Code:
gpart delete -i1 mfid0
gpart delete -i1 mfid0
gpart add -t efi -s 256M mfid0
gpart add -t swap -a 1M mfid0
mount -t msdosfs /dev/mfid0p1 /mnt
mkdir -p /mnt/efi/boot
cp /boot/loader.efi /mnt/efi/boot/bootx64.efi

Verify that you have a loader.efi in /boot. If not, find where it lives. I don't quite understand where you're running this from.

I advise you, before issuing these commands, to wait until someone else here validates them, because I only put them together from the gpart(8) manual page. These commands are destructive.
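Before touching the partition table at all, a couple of read-only checks are worth running (mfid0 is just the device name used above); they confirm the current layout and indexes, and that the EFI loader actually exists on the system you're copying it from:

Code:
gpart show mfid0          # current partition layout and indexes
ls -l /boot/loader.efi    # the loader that will be copied onto the ESP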
 

Commands are being run from a live FreeBSD 13 USB dongle attached to the machine with all the drives in it (just dropping to a shell instead of going to the installer at the start).
Thank you for the warning, will await confirmation from someone on the commands.
 

It looks like that would completely obliterate mfid0? Or just the first two (freebsd-boot and freebsd-swap) partitions? Should gpart delete take -i1 on both counts?
 
This just deletes the first two partitions of the disk. Once you've deleted the freebsd-boot one, freebsd-swap changes to index 1, hence the delete twice on index 1. You can verify that all is going well by typing gpart show between each delete and add command.

Just a thought... maybe the bugs these people were speaking of are the ones I found in pmbr. The patch was committed in 14-CURRENT some days ago: PR 233180.
If it's that, this is absolutely not related to your current problem.
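As a minimal sketch of that check (same mfid0 device as above), run this between every delete and add so the remaining partition indexes and free space are visible before the next step:

Code:
gpart show mfid0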
 
Trying this now, looks like it is supposed to be:

Code:
gpart delete -i1 mfid0
gpart delete -i2 mfid0
gpart add -t efi -s 256M mfid0
gpart add -t freebsd-swap -a 1M mfid0

Failing at
Code:
mount -t msdosfs /dev/mfid0p1 /mnt

Are you sure msdosfs is the right type for efi? How do I check if msdosfs is available, and if it's missing, where is it found?

zmount.png
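On the msdosfs question: the ESP is a FAT filesystem, so msdosfs is the right type, but a freshly added partition has no filesystem on it yet, so the mount will fail regardless until newfs_msdos has been run on it (as happens below). If you ever do need to confirm msdosfs support in the live kernel, something like this works:

Code:
lsvfs              # list the filesystem types the running kernel supports
kldload msdosfs    # load the module if msdosfs is not in the list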
 
Got it with:

Code:
gpart delete -i1 mfid0
gpart delete -i2 mfid0
gpart add -t efi -s 256M mfid0
gpart add -t freebsd-swap -a 1M mfid0
newfs_msdos -F 32 -c 1 /dev/mfid0p1
mount -t msdosfs /dev/mfid0p1 /mnt
mkdir -p /mnt/efi/boot
cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
umount /dev/mfid0p1

at least to the level of appearing to work, rebooting and trying now.
 
What controller card are you using? What does zpool status -v look like when you zpool import -R /altroot via the live-usb? How about zfs list -ro space zroot?

If (while the pool is imported with altroot) you zfs mount zroot/ROOT/default (which should mount the filesystem to /altroot/), do you see /altroot/bin/date (just as an example) present? (You can verify the mount points and state with zfs list -ro mounted,canmount,mountpoint,name zroot.)

Depending on what card you are using, you may be able to have the newer mrsas(4) driver attach rather than mfi(4) by putting hw.mfi.mrsas_enable="1" into /altroot/boot/device.hints while you have zroot/ROOT/default mounted (at /altroot) via the live-usb.
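Collected in one place, with /altroot and zroot/ROOT/default as already assumed in this thread, those inspection steps look roughly like this from the live USB:

Code:
zpool import -R /altroot zroot
zpool status -v
zfs list -ro space zroot
zfs list -ro mounted,canmount,mountpoint,name zroot
zfs mount zroot/ROOT/default
ls /altroot/bin/date                                           # sanity check that the root dataset has content
echo 'hw.mfi.mrsas_enable="1"' >> /altroot/boot/device.hints   # optional: try the mrsas(4) driver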
 
Sorry, I forgot the newfs of the efi partition. I thought the gpart indexes would readjust after a delete, but no.

The kernel has started, which is a good improvement. The kernel is located on the zfs partition, so it's readable. You should remove the USB stick.
 
It seems like everything is correct with those zpool/zfs commands, at least as far as I can tell:

z1.png
z2.png
z3.png
z4.png

Trying this (with :wq! and not :wq, because the file was read-only):
z5.png

Yields the same error, but after some differing messages:
 
Would the USB stick actually interfere? I'm working on it via a network KVM inside a remote desktop session, the actual server is about an hour away.
 
Your zroot filesystem should be canmount=off and mountpoint=none; right now, both zroot and zroot/ROOT/default appear to have the same mountpoint (/) set, which is never a good plan.

Additionally, there's 488k worth of stuff in the zroot filesystem, which should be empty; before updating the mountpoints/canmount settings, what do you see (excluding /tmp, /usr, and /var) in /altroot/ when zroot/ROOT/default is not mounted?

After checking that, and potentially copying things that you want into a safe place to copy back into the zroot/ROOT/default dataset, make sure the zroot/usr, zroot/var and zroot/tmp still have the appropriate mountpoints set. So long as they aren't inheriting from zroot (equiv. mountpoint is set locally) they should be fine.

It looks like the device hint took, as your swap devices have moved on you; but it hasn't fixed anything (I'm hoping the mountpoint issue above does the trick.) You can likely remove the device hint once you get things booting up successfully, but the mrsas driver is newer and might be a better option in the long run if it is working with your card... (You still haven't mentioned what the card is?)
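A sketch of the property changes described above, run from the live USB with the pool imported; the dataset names are the defaults already used in this thread, and the first command is the check that the children keep their own, locally set mountpoints before zroot's is changed:

Code:
zfs get -r -s local mountpoint zroot   # zroot/usr, zroot/var, zroot/tmp should appear here
zfs set canmount=off zroot             # stop the pool-level dataset from mounting over /
zfs set mountpoint=none zroot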
 
Contents of /altroot look like junk:
zcontents.png

Pool info:
zf.png

And it works:
zworking.png

Still has the swap issues, so I'm going to try removing the /boot/device.hints entry. Guessing I should also apply the boot sector changes for EFI to the other 11 drives?
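If the same ESP layout is wanted on the remaining drives, a loop along these lines would mirror the steps that worked on mfid0; this is only a sketch, and it assumes every disk appears as mfid1 through mfid11 with the identical freebsd-boot/freebsd-swap layout in partitions 1 and 2 - verify each one with gpart show before running anything destructive against it:

Code:
for d in mfid1 mfid2 mfid3 mfid4 mfid5 mfid6 mfid7 mfid8 mfid9 mfid10 mfid11; do
    gpart delete -i1 $d
    gpart delete -i2 $d
    gpart add -t efi -s 256M $d
    gpart add -t freebsd-swap -a 1M $d
    newfs_msdos -F 32 -c 1 /dev/${d}p1
    mount -t msdosfs /dev/${d}p1 /mnt
    mkdir -p /mnt/efi/boot
    cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
    umount /dev/${d}p1
done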
 
Just for posterity, the “root” zfs filesystem (zroot) was set to be mountable with a mountpoint of ‘/‘.

This caused issues when it was mounted over the “actual” root filesystem at zroot/ROOT/default (which was mounted at ‘/‘ by the loader.)

There were separate issues that were resolved by moving to the EFI boot loader.

Sound right?
 
I think the mountpoint was something I screwed up while trying to fix the original issue, which is that the zfs bootloader (or zfs itself) has an issue with drives exceeding 2TB. It doesn't present itself until a drive actually exceeds 2TB of stored data, and it's specific to the BIOS/CSM bootloader, not the EFI one.
 