freebsd-update upgrade: cannot boot from root on ZFS with new kernel

Hei,

I'm in the middle of upgrading from FreeBSD 11.0-p12 to 11.1-RELEASE via freebsd-update -r 11.1-RELEASE upgrade.

I have merged the files so far and rebooted once into the new kernel, but it won't mount my zpool, so it stops here:
Code:
Mounting from zfs:zmicro/ROOT/default failed with error 6.

Loader variables:
 vfs.root.mountfrom=zfs:zmicro/ROOT/default

It is root on zfs.
Error 6 (ENXIO) means the device is not configured, but I can see all 4 disks get attached before the mount fails.

zpool status when booting with the old 11.0-RELEASE-p12 GENERIC kernel:
Code:
  pool: zmicro
 state: ONLINE
  scan: scrub repaired 0 in 1h15m with 0 errors on Fri Jul 21 12:25:08 2017
config:

	NAME                         STATE     READ WRITE CKSUM
	zmicro                       ONLINE       0     0     0
	  mirror-0                   ONLINE       0     0     0
	    gpt/zroot3.top           ONLINE       0     0     0
	    gpt/zroot2.2nd-from-top  ONLINE       0     0     0
	  mirror-1                   ONLINE       0     0     0
	    gpt/zroot1.3rd-from-top  ONLINE       0     0     0
	    gpt/zroot0.bottom        ONLINE       0     0     0

errors: No known data errors

Any ideas why the zpool is not mounted with the new kernel?
 
It's been a while since I've done this but usually you also need to re-apply the boot code in order to match a newer ZFS version. Check the bootcode option in gpart(8) for that.
 
Hei, thank you.

I thought it was something else, since I can boot without problems with the old 11.0 kernel, and even with the new kernel it gets past the Beastie loader.
Could it be the bootcode anyway?

More errors came up though, and it became impossible to use the zpool and zfs tools because they crashed right away.
In the end I created a bootable 11.1-RELEASE USB device with plans to upgrade the zpool or write new bootcode.

It then turned out that there were two pools with the same name: the one I had in use (it was ONLINE and I could import it) and one with only two disks that was FAULTED
with corrupted data. This link appeared in the description of the FAULTED pool when I called zpool import:
http://illumos.org/msg/ZFS-8000-EY

I followed the instructions from the link, but the pool was completely broken and I couldn't import or destroy it by either its name or its ID.

That must have been a relic from when I first started playing with ZFS, and now it has fallen on my head :(

I have backups of everything, so it's no catastrophe, just annoying...

I'm still open to good tips, as I still have the broken pool on the two remaining disks (the others were used for the clean install).
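In case it helps anyone who runs into the same thing: a stale duplicate pool can often be imported under its numeric ID with a new name, or its leftover labels can be wiped from the disks. A rough sketch, with a placeholder pool ID and made-up GPT label names (zpool labelclear is destructive, so double-check the device names against gpart show first):
Code:
# List importable pools; the stale pool shows up with a numeric ID.
zpool import

# Try importing the stale pool under a new name using its numeric ID
# (1234567890 is only a placeholder for the real ID shown above).
zpool import -f 1234567890 zmicro_old

# If the import still fails, wipe the leftover ZFS labels from the
# old partitions (destructive! verify the device names first).
zpool labelclear -f /dev/gpt/old-zroot-a
zpool labelclear -f /dev/gpt/old-zroot-b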
 
Dear k.jacker,
I am by far no expert on ZFS, and I have also managed to end up with two pools with the same name o_O. What I have in my notes is below. My experience is limited to a simple mirror of two disks and ZFS on a single disk.
First, it is necessary to write the boot code to each disk of the mirror:
Code:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 EACH_DISK
Additionally, the boot dataset needs to be configured; assuming zmicro is the name of the pool:
Code:
zpool set bootfs=zmicro/ROOT/default zmicro
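Put together, it could look like the sketch below. The disk names ada0 through ada3 and the partition index 1 are only examples here; check the real layout with gpart show before running anything:
Code:
# Re-apply the protective MBR and the ZFS-aware gptzfsboot to every
# disk in the pool (disk names and partition index are examples).
for disk in ada0 ada1 ada2 ada3; do
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 "$disk"
done

# Tell the loader which dataset to boot from.
zpool set bootfs=zmicro/ROOT/default zmicro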
Paranoid as I am, I now have two pools I can boot from. One is a pool mroot, which is a mirror of two disks with root on ZFS. In parallel I have a third disk with a pool zroot, also with root on ZFS. They do not interfere. If I boot from one, I can import the other using altroot. The content of the single-disk pool came from the mirror via zfs send and receive, not from a dedicated installation. I hope it helps :beer::).
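The altroot trick mentioned above looks roughly like this (the mount point /mnt/alt is just an example):
Code:
# Import the other bootable pool under an alternate root so its
# datasets mount below /mnt/alt instead of over the live system.
zpool import -o altroot=/mnt/alt zroot

# ... inspect or sync files under /mnt/alt ...

# Export it again before rebooting into it.
zpool export zroot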
 
Hei chrbr,
thanks for your good explanation.
I first had the idea of keeping a parallel FreeBSD on UFS and using rsync to keep it in sync, but your idea is much better!

I really appreciate your good tips. Now upgrading root on zfs isn't so scary anymore :beer: cheers :)
 