Solved: Just upgraded to 14.1, I see weird ZFS snapshots I didn't create

Hello All
After a (kinda successful) upgrade from 13.1 --> 14.1, today I see these:
Code:
achill@smadevnu:~ % zfs list -rt snapshot
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
zroot/ROOT/default@2024-09-20-16:10:32-0   108M      -  69.9G  -
zroot/ROOT/default@2024-09-23-09:14:39-0  7.20M      -  70.6G  -
zroot/ROOT/default@2024-09-23-09:59:01-0  3.85M      -  70.8G  -
zroot/ROOT/default@2024-09-23-15:30:50-0  27.7M      -  74.4G  -
zroot/ROOT/default@2024-09-23-18:22:43-0  5.46M      -  74.8G  -
The point is I never created them by hand; they coincide with some freebsd-update fetch / install runs I did in order to bring /usr/src (and sys) up to date. Unfortunately, while pkg seems to write to the log (messages), freebsd-update does not log, so I can only guess, e.g., that the last snapshot zroot/ROOT/default@2024-09-23-18:22:43-0 coincides with the last freebsd-update run that actually touched the system.

I don't want to leave random snapshots hanging around. What's the catch here?
 
Thank you SirDice, it seems to be this:

sh:
root@smadevnu:/usr/home/achill # bectl list
BE                                Active Mountpoint Space Created
13.1-RELEASE-p9_2024-09-23_091439 -      -          7.20M 2024-09-23 09:14
13.1-RELEASE_2024-09-20_161032    -      -          108M  2024-09-20 16:10
14.1-RELEASE-p5_2024-09-23_095901 -      -          3.86M 2024-09-23 09:59
14.1-RELEASE-p5_2024-09-23_153050 -      -          27.7M 2024-09-23 15:30
14.1-RELEASE-p5_2024-09-23_182243 -      -          5.58M 2024-09-23 18:22
default                           NR     /          81.4G 2022-07-09 02:03
root@smadevnu:/usr/home/achill #

What do I do now? This is the first time I've encountered BEs, and I'm relatively new to ZFS (albeit having used it successfully for 2 years with great pleasure and satisfaction). BEs are definitely cool, if I understand the concept correctly, and good to have in mind, but for the moment I'd rather not have this overhead; I know snapshots are not free.
 
freebsd-update creates new BEs that represent the state from before the update was applied. Once you are satisfied that the latest update is working, you can safely delete them. Don't use plain zfs commands to remove snapshots that represent boot environments; you can shoot yourself in the foot. Get in the habit of running bectl list before and after freebsd-update.
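
A hypothetical sketch of that habit: capture the BE names before and after the update, then compare the two captures to spot the BE the update created. The two captures are simulated with printf here (using names from the listing above); on a real system each would come from bectl list -H | cut -f1 -w.

```sh
#!/bin/sh
# Sketch: identify which BE an update created by capturing BE names
# before and after. Simulated with printf; on a real FreeBSD system
# each capture would be:  bectl list -H | cut -f1 -w
printf 'default\n' | sort > /tmp/bes.before
# ... run freebsd-update fetch install here, then capture again ...
printf '14.1-RELEASE-p5_2024-09-23_095901\ndefault\n' | sort > /tmp/bes.after
# comm -13 prints lines present only in the second file: the new BE
comm -13 /tmp/bes.before /tmp/bes.after
```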

As root:
bectl destroy -o <BEname>

I typically start with the oldest one, in your case 13.1-RELEASE-p9_2024-09-23_091439, then repeat.
After you remove one, run bectl list again and you can get an idea of how ZFS shares blocks between snapshots and BEs.

Edit:
I use the bectl rename command to rename each BE to reflect "what" it is. I have no "default"; I have BEs named like 14.1-RELEASE-p4, etc.
 
One quick way (assuming the 'default' BE is working and active) to remove all those "RELEASE" BEs:

sh:
bectl list -H | cut -f1 -w | grep RELEASE | xargs -n1 bectl destroy
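
To see what the pipeline selects before letting it loose, here is a sketch fed with simulated bectl list -H output (tab-separated fields, taken from the listing above). cut -f1 keeps the BE name (tab is cut's default delimiter, so the BSD-specific -w flag is dropped here), grep keeps only the auto-created "RELEASE" ones, and the final bectl destroy is replaced by echo so the sketch is harmless to run anywhere:

```sh
#!/bin/sh
# Dry-run sketch of the one-liner; printf simulates `bectl list -H`
# and echo replaces the destructive `bectl destroy`.
printf '13.1-RELEASE_2024-09-20_161032\t-\t-\t108M\t2024-09-20 16:10\ndefault\tNR\t/\t81.4G\t2022-07-09 02:03\n' |
  cut -f1 | grep RELEASE | xargs -n1 echo bectl destroy
```

Once the printed commands look right, drop the echo and use the real pipeline.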
 
BE are definitely cool, if I understand correctly the concept, good to have in mind, but for the moment I would not like this overhead, I know snapshots are not for free.
The automatic creation is controlled by the CreateBootEnv parameter. BEs (actually their related snapshots) created in short succession have a relatively minor impact on storage (it's the diff that determines the storage impact); a lot of BEs can clutter your BE overview and management, though. Further on that, and some more on BEs, I wrote here
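
If I understand it correctly, that parameter lives in /etc/freebsd-update.conf; a sketch of turning the automatic BE creation off (check freebsd-update.conf(5) on your system before relying on this):

```
# /etc/freebsd-update.conf (excerpt)
# Create a new boot environment when installing patches
CreateBootEnv no
```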
 