Solved beadm - system will not boot

I created a FreeBSD 12.1 boot environment using beadm and activated it without making any changes. Now the system halts during the boot process. How do I switch back to the working environment?

From the live CD I have already attached the geli devices, entered the passphrase, and imported the pool (zroot) under /tmp/mounted.
 
The boot loader menu should offer an option to boot into a different boot environment. Failing that, if you need to work from the live CD, the base system includes bectl(8), which is similar to beadm(8). If you want to do it manually, you would need to alter the canmount properties of the datasets that make up your boot environments: datasets belonging to the active boot environment should have canmount=noauto, whereas datasets belonging to inactive boot environments should have canmount=off. Then set the correct boot file system on the pool (e.g. zpool set bootfs=zroot/ROOT/default zroot).
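
For reference, the manual route from the live CD would look roughly like this. This is only a sketch: the pool and dataset names follow the usual zroot layout seen in this thread, and <broken-BE> stands for whatever the bad boot environment is called on your system.

Code:
# after geli attach and zpool import -N ...
zfs set canmount=off zroot/ROOT/<broken-BE>
zfs set canmount=noauto zroot/ROOT/default
zpool set bootfs=zroot/ROOT/default zroot

Or, with bectl(8) from the live CD (the -r flag points it at the boot environment root dataset):

Code:
bectl -r zroot/ROOT list
bectl -r zroot/ROOT activate default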
 
Here is what I did:
Boot to the live CD:

Code:
geli attach ada0p3 <enter passwd>
geli attach ada1p3 <enter passwd>
mkdir -p /tmp/mounted/v4root
zpool import -o altroot=/tmp/mounted/v4root -N -a
zfs mount zroot/ROOT/default
zfs mount -a
zfs snapshot -r zroot@vhost04_RCVY2
zfs send -R zroot@vhost04_RCVY2 | ssh 192.168.216.45 zfs receive zroot/vhost04_RCVY2
. . .

Some time later:
Code:
zfs list zroot
NAME    USED  AVAIL  REFER  MOUNTPOINT
zroot   424G   467G    96K  /tmp/mounted/v4root/zroot
root@vhost04:~ # zfs list
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
zroot                                     424G   467G    96K  /tmp/mounted/v4root/zroot
zroot/ROOT                               94.2G   467G    96K  none
zroot/ROOT/12.1-RELEASE-pkgupgrade       1.14M   467G  40.6G  /tmp/mounted/v4root
zroot/ROOT/default                       94.2G   467G  40.6G  /tmp/mounted/v4root
. . .

 zfs get all zroot/ROOT/12.1-RELEASE-pkgupgrade | grep 'mount\|boot'
zroot/ROOT/12.1-RELEASE-pkgupgrade  mounted                 no                                      -
zroot/ROOT/12.1-RELEASE-pkgupgrade  mountpoint              /tmp/mounted/v4root                     local
zroot/ROOT/12.1-RELEASE-pkgupgrade  canmount                noauto                                  local

zfs get all zroot/ROOT/default | grep 'mount\|boot'
zroot/ROOT/default  mounted                 yes                     -
zroot/ROOT/default  mountpoint              /tmp/mounted/v4root     local
zroot/ROOT/default  canmount                noauto                  local

root@vhost04:~ # zfs set canmount=off zroot/ROOT/12.1-RELEASE-pkgupgrade

zfs get all zroot/ROOT/12.1-RELEASE-pkgupgrade | grep 'mount\|boot'
zroot/ROOT/12.1-RELEASE-pkgupgrade  mounted                 no                                      -
zroot/ROOT/12.1-RELEASE-pkgupgrade  mountpoint              /tmp/mounted/v4root                     local
zroot/ROOT/12.1-RELEASE-pkgupgrade  canmount                off                                     local

zpool set bootfs=zroot/ROOT/default zroot

zpool get all | grep boot
zroot  bootfs                         zroot/ROOT/default             local
zroot  bootsize                       -                              default

zpool export zroot
cannot unmount '/tmp/mounted/v4root': Device busy
root@vhost04:~ # shutdown -r now
Shutdown NOW!

System restarted normally.

There was one caveat specific to the hardware: to boot from the live CD USB key I had to disable EFI in the BIOS, and to boot from the HDDs after cleaning up the mess from beadm I had to re-enable EFI in the BIOS.
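
Once the system is back up, the state can be sanity-checked with something like the following (a sketch; dataset names as used in this thread):

Code:
zpool get bootfs zroot
zfs get -r canmount zroot/ROOT
bectl list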
 
Code:
zroot/ROOT/12.1-RELEASE-pkgupgrade  canmount                noauto                                  local
zroot/ROOT/default  canmount                noauto                  local

System restarted normally.

Glad you got your system booting again. It looks like both boot environments had canmount=noauto set, which normally shouldn't happen (at least, that's my understanding). Did you try creating and activating a new boot environment to see whether it fails again?
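
For example, a quick test along those lines might be (a sketch; the BE name testbe is arbitrary):

Code:
bectl create testbe
bectl activate testbe
bectl list          # testbe should show the R (active on reboot) flag
# roll back before rebooting if you don't want to keep it:
bectl activate default
bectl destroy testbe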
 