Solved: Boot environments, beadm

I have been using beadm for managing boot environments. I expect that when I call beadm create, it is creating a new boot environment from the active boot environment.

I am trying to understand how another piece of software keeps coming back seemingly after rebooting into a new BE. My process is:

1. create new boot environment
2. activate new boot environment
3. reboot into new boot environment
4. make changes


Is this correct?
 
Boot Environments, like a lot of things in ZFS, are "Copy On Write". So at your step 3, before your step 4, the new BE is identical to the old BE (the previously active one).
Until you make changes, new BE == old BE.
Make changes to the new BE, and the new BE becomes "old BE with those changes applied". If a change is "delete pkg A", then booting into "new BE" you don't have pkg A. If you boot into "old BE" you will have pkg A.
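A minimal sketch of the four steps (BE and package names are made up; bectl shown, beadm is analogous):

```sh
bectl create newbe      # step 1: at this moment newbe == the active BE
bectl activate newbe    # step 2: boot newbe on the next reboot
shutdown -r now         # step 3: reboot into newbe
pkg delete -y somepkg   # step 4: this change exists only in newbe;
                        # booting the old BE again, somepkg is still installed
```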

Your specific example:
"I am trying to understand how another piece of software keeps coming back seemingly after rebooting into a new BE."
If you never delete "piece of software" from "new BE", then "new BE" has "piece of software".

It takes a while to get all the associations lined up in your head (or at least did for me), but basic principle is "a new BE is really changes applied to old BE"
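Under the hood this is a clone — a hedged sketch (BE name "mybe" is made up, dataset names assume the default zroot layout) showing that bectl create snapshots the active BE and clones that snapshot:

```sh
bectl create mybe                            # snapshot the active BE, clone it as "mybe"
zfs get -H -o value origin zroot/ROOT/mybe   # the clone's origin is that snapshot,
                                             # e.g. zroot/ROOT/default@<timestamp>
zfs list -t snapshot -r zroot/ROOT           # the snapshot itself shows up here
```

Because it's a clone, creation is instant and the new BE consumes almost no space until the two environments diverge.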
 
Ok, that works like I think it should.

I believe, the order of operations for me is that I had already removed the piece of software a while back prior to creating the new BE. Once I create the new BE, at that point in time, it should be a replica of the current state (where the software doesn't exist). I need to watch this more closely, maybe I'm confusing myself.
 
I need to watch this more closely, maybe I'm confusing myself.
I'm also confused about it, as there are different instructions here on the forums.

You describe: create the BE, boot into it, do your changes. If it all works well, just keep going with the BE.

Other threads say: before an upgrade, create a new BE but stay in the old (default) one and do the upgrade there.
If all is good, keep going; if not, fall back to the new BE that was created before the upgrade.

From a logic point of view I would say that's the same, but I don't have enough knowledge about ZFS to know if
there is maybe more to it.

How do the experts here on the forums deal with it?
 
Ok, that works like I think it should.

I believe, the order of operations for me is that I had already removed the piece of software a while back prior to creating the new BE. Once I create the new BE, at that point in time, it should be a replica of the current state (where the software doesn't exist). I need to watch this more closely, maybe I'm confusing myself.

Manually installed ports/packages go into /usr, which isn't part of a BE - so there seems to be something else off. EDIT: that was wrong - see my next post

You can use BEs either as a snapshot of the state before any changes - in that case you don't change the active BE and always stay on the same one (usually named "default"). If something goes wrong, you change back to the BE that was created before the upgrade. This would be the same logic as normal snapshots.
OR you can name the new BE for the version you are about to upgrade to, which is the procedure you usually choose if you perform the first 'freebsd-update fetch && install' inside a jail to minimize the number of reboots and downtime: create the BE, run a jail with that BE, perform the upgrades, activate the BE, reboot into the upgraded system. The caveat with that approach is that all changes TO THE BASE SYSTEM (i.e. the datasets that are part of a BE) you made between creating the new BE and rebooting into it are of course not part of that upgraded BE.
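The first approach (stay on the active BE, keep the new one as a fallback) can be sketched like this - a hedged example, BE name made up:

```sh
bectl create pre-upgrade        # checkpoint of the current state; you stay on "default"
freebsd-update fetch install    # upgrade the running system as usual
# if anything breaks, roll back by booting the checkpoint:
bectl activate pre-upgrade && shutdown -r now
```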

But for both approaches the fact still stands, that /usr isn't part of a BE, so nothing changes here when switching between BEs. That's what usually bites you if you try to change back to an older BE after major release upgrades... EDIT: it bit me because a host had messed up canmount properties
 
Manually installed ports/packages go into /usr, which isn't part of a BE
I don't think that's correct, bectl(8) states the following:
... Instead, they're organized off in zroot/ROOT, and they rely on
datasets elsewhere in the pool having canmount set to off. For instance,
a simplified pool may be laid out as such:

% zfs list -o name,canmount,mountpoint
NAME                CANMOUNT  MOUNTPOINT
zroot
zroot/ROOT          noauto    none
zroot/ROOT/default  noauto    none
zroot/usr           off       /usr
zroot/usr/home      on        /usr/home
zroot/var           on        /var

In that example, zroot/usr has canmount set to off, thus files in /usr
typically fall into the boot environment because this dataset is not
mounted. zroot/usr/home is mounted, thus files in /usr/home are not in
the boot environment.

So in a default install /usr *IS* part of the BE, and that's decided by its canmount property.
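You can check this directly on a default install (standard zroot layout assumed):

```sh
zfs get -H -o value canmount zroot/usr   # "off" (or "noauto" on some installs):
                                         # zroot/usr is never mounted, so /usr's
                                         # files live in the BE dataset zroot/ROOT/<name>
```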
 
Wow, I just realized I really had this wrong for several years - but only because what I described bit me *really* hard once on a host that I upgraded from 11.3 to 12.0 (like 'sitting on the floor with a keyboard on my lap in front of a server rack until late at night'-hard).
I just checked a few hosts and yes, /usr has 'canmount=noauto' set on all of them; but it turns out that on that one host (still runs with 12.4-RELEASE) there *is* something off:
Code:
# zfs list -ro name,canmount,mountpoint zroot
NAME                                     CANMOUNT  MOUNTPOINT
zroot                                          on  /zroot
zroot/ROOT                                     on  none
zroot/ROOT/12.2-RELEASE                    noauto  /
zroot/ROOT/12.3-RELEASE                    noauto  /
zroot/ROOT/12.4-RELEASE                    noauto  /
zroot/tmp                                      on  /tmp
zroot/usr                                     on  /usr
zroot/usr/ports                                on  /usr/ports
zroot/usr/src                                  on  /usr/src
zroot/var                                     on  /var
zroot/var/audit                                on  /var/audit
zroot/var/crash                                on  /var/crash
zroot/var/log                                  on  /var/log
zroot/var/mail                                 on  /var/mail
zroot/var/tmp                                  on  /var/tmp

Well, that's somewhat embarrassing. But at least I can now omit those extra snapshots I've been taking of the /usr dataset... (though TBH, I still feel safer when I have snapshots of the *full* zroot pool before major upgrades...)

I now wonder where/when those canmount properties got messed up. IIRC that host was set up back in the 10.X days - does anyone remember if BEs were "only the root dataset" back then?

Anyhow - sorry for the confusion I caused.
 
So in default install /usr *IS* part of the BE and it's decided by its canmount property.
Almost.
My understanding, which may or may not be 100% correct or agreed with by the experts:

Last install I did, zroot/usr has canmount off (maybe it's noauto now) with a mountpoint of /usr.
What happens to something like /usr/local, where pkg typically installs to? Well, as long as you don't have a dataset, say zroot/mylocal, with a mountpoint of /usr/local, then anything in /usr/local is part of a BE.
Example on my system. Notice how zroot/usr is canmount off (I never changed it, this is from the installer), then zroot/usr/home has mountpoint /usr/home? Well, the directory /usr/home has its own dataset, so it's not part of a BE.
There is no dataset with a mountpoint of /usr/local, so /usr/local is part of the BE. Same thing with zroot/var: mountpoint /var, canmount off. A lot of things under /var are part of a BE, but /var/log, /var/mail, etc. are not because they have their own datasets.

Code:
zfs list -ro name,canmount,mountpoint zroot
NAME                        CANMOUNT  MOUNTPOINT
zroot                       on        /zroot
zroot/ROOT                  on        none
zroot/ROOT/13.1-RELEASE-p2  noauto    /
zroot/ROOT/13.1-RELEASE-p4  noauto    /
zroot/ROOT/13.1-RELEASE-p5  noauto    /
zroot/tmp                   on        /tmp
zroot/usr                   off       /usr
zroot/usr/home              on        /usr/home
zroot/usr/ports             on        /usr/ports
zroot/usr/src               on        /usr/src
zroot/var                   off       /var
zroot/var/audit             on        /var/audit
zroot/var/crash             on        /var/crash
zroot/var/log               on        /var/log
zroot/var/mail              on        /var/mail
zroot/var/tmp               on        /var/tmp
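A quick way to check whether a given path is covered by the BE is to see which dataset actually backs it, e.g. with df(1):

```sh
df /usr/local   # Filesystem column shows zroot/ROOT/<active BE>: part of the BE
df /usr/home    # Filesystem column shows zroot/usr/home: own dataset, NOT in the BE
```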


A cool feature of bectl and beadm is the ability to mount a BE temporarily; this lets you go and poke around at the one you are not booted into. I've done this when I've mucked up a config file and it's handy to see what is actually part of the BE.
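For example (BE name taken from the listing above; the compared file is just an illustration):

```sh
bectl mount 13.1-RELEASE-p4 /mnt     # mount the inactive BE
diff /mnt/etc/rc.conf /etc/rc.conf   # compare a config file between the two BEs
bectl umount 13.1-RELEASE-p4         # unmount when done
```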
 
The temporary mount thing is great because you can upgrade offline and then reboot into the fully upgraded BE and only reboot once. I do something like this, notice the -b switch to freebsd-update(8) and -c on pkg(8).

Code:
bectl create 13.1-RELEASE                        # new BE for the target release
bectl mount 13.1-RELEASE /mnt                    # mount it for an offline upgrade
freebsd-update upgrade -b /mnt -r 13.1-RELEASE   # fetch the upgrade into the mounted BE
freebsd-update -b /mnt install                   # first pass: kernel
freebsd-update -b /mnt install                   # second pass: userland
freebsd-update -b /mnt install                   # third pass: remove old files/libraries
gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 1 ada0
mount -t devfs devfs /mnt/dev                    # pkg needs a devfs inside the BE
pkg -c /mnt upgrade -f                           # reinstall all packages inside the BE
umount /mnt/dev
bectl umount 13.1-RELEASE
bectl activate 13.1-RELEASE                      # boot the upgraded BE next time
shutdown -r now
zpool upgrade -a                                 # after reboot, once all is well
bectl destroy 13.0-RELEASE                       # drop the old BE
 
I have been using beadm for managing boot environments. I expect that when I call beadm create, it is creating a new boot environment from the active boot environment.

I am trying to understand how another piece of software keeps coming back seemingly after rebooting into a new BE. My process is:

1. create new boot environment
2. activate new boot environment
3. reboot into new boot environment
4. make changes


Is this correct?

Check this:

- https://vermaden.wordpress.com/2021/02/23/upgrade-freebsd-with-zfs-boot-environments/

You may also add other new/fresh BEs aside if needed - with different FreeBSD version for example:

- https://vermaden.wordpress.com/2021/10/19/other-freebsd-version-in-zfs-boot-environment/

Regards.
 
The temporary mount thing is great because you can upgrade offline and then reboot into the fully upgraded BE and only reboot once. I do something like this, notice the -b switch to freebsd-update(8) and -c on pkg(8).

Code:
bectl create 13.1-RELEASE
bectl mount 13.1-RELEASE /mnt
freebsd-update upgrade -b /mnt -r 13.1-RELEASE
freebsd-update -b /mnt install
freebsd-update -b /mnt install
freebsd-update -b /mnt install
gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 1 ada0
mount -t devfs devfs /mnt/dev
pkg -c /mnt upgrade -f
umount /mnt/dev
bectl umount 13.1-RELEASE
bectl activate 13.1-RELEASE
shutdown -r now
zpool upgrade -a
bectl destroy 13.0-RELEASE

If you do it from within chroot(8) it's less typing :)

Code:
(host) # beadm create 13                        # create new '13' ZFS Boot Environment
       Created successfully
(host) # beadm mount 13 /var/tmp/BE-13          # mount new '13' BE somewhere
       Mounted successfully on '/var/tmp/BE-13'
(host) # chroot /var/tmp/BE-13                  # make chroot(8) into that place
  (BE) # mount -t devfs devfs /dev              # mount the devfs(8) in that BE
  (BE) # rm -rf /var/db/freebsd-update          # remove any old patches
  (BE) # mkdir /var/db/freebsd-update           # create fresh dir for patches
  (BE) # freebsd-update upgrade -r 13.0-BETA3   # fetch the patches needed for upgrade
  (BE) # freebsd-update install                 # install kernel and kernel modules
  (BE) # freebsd-update install                 # install userspace/binaries/libraries
  (BE) # pkg upgrade                            # upgrade all packages with pkg(8)
  (BE) # freebsd-update install                 # remove old libraries and files
  (BE) # exit                                   # leave chroot(8) environment
(host) # umount /var/tmp/BE-13/dev              # umount the devfs(8) in that BE
(host) # beadm activate 13                      # activate new '13' BE
       Activated successfully
 
I think I have it sorted out. I haven't had the issue since. I am more careful about rebooting into the new BE as soon as I create it so that I don't inadvertently exclude changes.
 