13.2p5 -> 14.0R - freebsd-update upgrade - fails while creating boot environment

Hi all!
This week I tried to upgrade from 13.2-RELEASE-p5 to 14.0-RELEASE but failed: I ran into a ZFS error while running freebsd-update upgrade and merging the config changes.
I have many datasets that are used as disks for VMs.

The upgrade fails with the following error:
Code:
.............
# RISC-V HTIF console                                                                                                                                                            [451/1499]
-rcons  "/usr/libexec/getty std.9600"   vt100   onifconsole secure
+rcons  "/usr/libexec/getty std.115200" vt100   onifconsole secure
Does this look reasonable (y/n)? y
To install the downloaded upgrades, run "/usr/sbin/freebsd-update install".
[freebsd:~ $]> sudo freebsd-update install
Password:
src component not installed, skipped
Creating snapshot of existing boot environment... cannot create 'zroot/ROOT/13.2-RELEASE-p5_2023-11-28_090222/cbsd-cloud-openbsd-70.raw': 'canmount' does not apply to datasets of this type
error when calling zfs_clone() to create boot env
error when calling zfs_clone() to create boot env
error when calling zfs_clone() to create boot env
Failed to create bootenv 13.2-RELEASE-p5_2023-11-28_090222
failed.
Then I tried to destroy that old dataset and retried:
Code:
 sudo zfs destroy -r zroot/ROOT/default/cbsd-cloud-openbsd-70.raw
[freebsd:~ $]> sudo freebsd-update install
src component not installed, skipped
Creating snapshot of existing boot environment... cannot create 'zroot/ROOT/13.2-RELEASE-p5_2023-11-28_090341/bcbsd-ubuntusrv1-dsk1.vhd': 'canmount' does not apply to datasets of this type
error when calling zfs_clone() to create boot env
error when calling zfs_clone() to create boot env
error when calling zfs_clone() to create boot env
Failed to create bootenv 13.2-RELEASE-p5_2023-11-28_090341
failed.

Here's a list of some datasets:
Code:
zfs list -r -o name,canmount,mountpoint
NAME                                                        CANMOUNT  MOUNTPOINT
zroot                                                       on        /zroot
zroot/ROOT                                                  on        none
zroot/ROOT/13.2-RELEASE-p1_2023-08-06_180926                noauto    /
zroot/ROOT/13.2-RELEASE-p2_2023-09-10_235747                noauto    /
zroot/ROOT/13.2-RELEASE-p3_2023-10-05_123918                noauto    /
zroot/ROOT/13.2-RELEASE-p4_2023-11-08_124951                noauto    /
zroot/ROOT/13.2-RELEASE-p5_2023-11-28_090222                noauto    /
zroot/ROOT/13.2-RELEASE-p5_2023-11-28_090341                noauto    /
zroot/ROOT/13.2-RELEASE-p5_2023-11-28_090341/adc1           noauto    /usr/jails/jails-data/adc1-data
zroot/ROOT/13.2-RELEASE-p5_2023-11-28_094647                noauto    /
zroot/ROOT/13.2-RELEASE-p5_2023-11-28_094647/adc1           noauto    /usr/jails/jails-data/adc1-data
zroot/ROOT/13.2-RELEASE_2023-04-19_142632                   noauto    /
zroot/ROOT/13.2-RELEASE_2023-06-22_164022                   noauto    /
zroot/ROOT/default                                          noauto    /
zroot/ROOT/default/adc1                                     on        /usr/jails/jails-data/adc1-data
zroot/ROOT/default/bcbsd-ubuntusrv1-dsk1.vhd                -         -
zroot/ROOT/default/cbsd-cloud-cloud-Ubuntu-x86-20.04.2.raw  -         -
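For reference, the volumes that trip up the boot environment snapshot can be enumerated directly; `canmount` shows as `-` for them because they are zvols, not filesystems. A sketch (dataset names taken from the listing above):
Code:
# List only zvols living under the boot environment root;
# any hit here is a candidate for breaking "freebsd-update install"
zfs list -t volume -r -o name,volsize zroot/ROOT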

Any ideas why beadm might be failing to create the new boot environment?
 
Why are these created under zroot/ROOT/default?

This behavior predates `beadm` and the appearance of ZFS in the FreeBSD installer (ZFS support was implemented in CBSD around 2013-2014), and until now it has caused no problems. The logic is simple: CBSD uses the pool that the working directory belongs to (the /usr/jails directory by default).
For example, if the user creates a dedicated dataset for the CBSD working directory, all working data resources will belong to that dataset:
Code:
# zfs create zroot/jails
# zfs set mountpoint=/usr/jails zroot/jails
# env workdir=/usr/jails /usr/local/cbsd/sudoexec/initenv
..
# cbsd jcreate jname=adc1

In this example we will see something like:
Code:
..
zroot/jails/adc1    70.4M  34.6G  70.4M  /usr/jails/jails-data/adc1-data

// .. or for bhyve cloud/image:
zroot/jails/cbsd-cloud-FreeBSD-ufs-13.2.0-RELEASE-amd64.raw  3.85G  33.6G  1.03G  -


Maybe it is now time to force users to always create a dedicated dataset, if they want to avoid this behavior.

However, I did not find anything in the official Handbook (or anywhere else) that prohibits using 'zroot/ROOT/' for custom/user datasets (or volumes). So far I have not studied how `beadm` works; perhaps it has a configuration option that prevents snapshotting unknown datasets (for example, some kind of 'include' or 'exclude' list).
 
Seems like a bug. I suggest filing a bug report and/or discussing this on a mailing list (maybe freebsd-fs).
 
I have not studied how `beadm` works,

Check the manual page for beadm(8) in ports, <https://man.freebsd.org/cgi/man.cgi?query=beadm&sektion=8&manpath=freebsd-release-ports>. cc vermaden

Is the issue reproducible with bectl(8), integral to FreeBSD?


Particular attention to mount points.
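To narrow it down, one quick test (a sketch; the BE name here is arbitrary) is to create and discard a throwaway boot environment with bectl(8). If the zvols under zroot/ROOT/default are the problem, the create step should fail with the same 'canmount' / zfs_clone() error:
Code:
# Attempt a test boot environment with the in-base tool
bectl create testbe
# Inspect the result (and the mount points) before cleaning up
bectl list
bectl destroy testbe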

Please read this comment alongside <https://forums.freebsd.org/posts/631446> if/when it appears.
 
So systems that have VMs created with CBSD can't upgrade to 14.0 right now, is that correct?

Or could the creation of boot environments be disabled in freebsd-update?
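Another possible workaround (untested here; the volume and target names are taken from the listing above, and zroot/vm is an arbitrary choice) is to move the zvols out of zroot/ROOT entirely, so the BE clone no longer recurses over them. zfs rename keeps the volume's contents intact, but CBSD would presumably need to be pointed at the new location:
Code:
# Create a dataset outside the boot environment hierarchy for VM disks
zfs create zroot/vm
# Move each offending volume out of zroot/ROOT/default;
# only the name changes, the data is preserved
zfs rename zroot/ROOT/default/bcbsd-ubuntusrv1-dsk1.vhd zroot/vm/bcbsd-ubuntusrv1-dsk1.vhd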

 