ZFS: Cannot destroy a boot environment

Hi,

I have a problem with bectl:
Code:
$ bectl list
BE          Active Mountpoint Space Created
avm-libidn2 -      -          8.05G 2019-11-23 12:26
avupg2      -      -          33.1M 2019-12-07 18:44
default     NR     /          818M  2019-08-08 12:38
Code:
$ sudo bectl destroy -o avm-libidn2
could not open snapshot's origin
$ sudo bectl destroy avm-libidn2
cannot destroy 'zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0': dataset already exists
unknown error

So I can destroy neither this BE nor the associated snapshot. Maybe I did something wrong when renaming some BEs and/or deleting snapshots. This output seems strange:
Code:
$ bectl list -s
BE/Dataset/Snapshot                              Active Mountpoint Space Created

avm-libidn2
  zroot/ROOT/avm-libidn2                         -      -          8.05G 2019-11-23 12:26
  avm-libidn2@2019-11-23-12:26:02-0              -      -          19.9M 2019-11-23 12:26

avupg2
  zroot/ROOT/avupg2                              -      -          8K    2019-12-07 18:44
    zroot/ROOT/default@2019-12-07-18:44:33-0     -      -          33.1M 2019-12-07 18:44

default
  zroot/ROOT/default                             NR     /          798M  2019-08-08 12:38
    zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0 -      -          19.9M 2019-11-23 12:26
  default@2019-12-07-18:44:33-0                  -      -          33.1M 2019-12-07 18:44

As I understand it, avm-libidn2 has two datasets but no snapshot of its own; the corresponding snapshot is listed under the default BE...

Code:
$ zfs list -t snap
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0  19,9M      -  8,03G  -
zroot/ROOT/default@2019-12-07-18:44:33-0      33,1M      -  8,24G  -
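
I suppose the origin property would make the clone relationships explicit; something like this (read-only, nothing is destroyed) should show which dataset was cloned from which snapshot:
Code:
$ zfs list -r -o name,origin zroot/ROOT
# a BE created from another one is normally a clone whose origin is a snapshot
# of the BE it was created from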

What can I do to mend that and safely destroy the BE avm-libidn2 along with the snapshot zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0? (Hope vermaden isn't too far from here...)

PS: this system runs 12.1-RELEASE-p1 and all packages have been upgraded.
 
Hi mate,

I'm in the same boat: I haven't managed to work out how to delete boot environments either.
I've probably missed something obvious, but I'm not sure what it is.
 
The problem only concerns avm-libidn2; I've destroyed many BEs without trouble.
I think there may be a bug in bectl...
Code:
$ zfs destroy -nv zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0
cannot destroy 'zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0': snapshot has dependent clones
use '-R' to destroy the following datasets:
zroot/ROOT/avupg2
zroot/ROOT/default@2019-12-07-18:44:33-0
zroot/ROOT/default
would destroy zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0
I don't think destroying zroot/ROOT/default would be a good idea...

Help!
 
Thanks for answering me. I did not try beadm, thinking the result would be the same. I will try it this evening.

What do you think of the output of zfs destroy -nv zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0 in my post just above?
 
Code:
$ zfs list -t all
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
zroot                                          678G  1,09T    88K  /zroot
zroot/ROOT                                    8,84G  1,09T    88K  none
zroot/ROOT/avm-libidn2                        8,05G  1,09T  8,03G  /
zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0  19,9M      -  8,03G  -
zroot/ROOT/avupg2                                8K  1,09T  8,24G  /
zroot/ROOT/default                             803M  1,09T  8,28G  /
zroot/ROOT/default@2019-12-07-18:44:33-0      34,1M      -  8,24G  -
zroot/partage                                  502G  1,09T   502G  /zroot/partage
zroot/prive                                    166G  1,09T   166G  /zroot/prive
zroot/tmp                                       96K  1,09T    96K  /tmp
zroot/usr                                     1,36G  1,09T    88K  /usr
zroot/usr/home                                 252K  1,09T   252K  /usr/home
zroot/usr/ports                                686M  1,09T   686M  /usr/ports
zroot/usr/src                                  709M  1,09T   709M  /usr/src
zroot/var                                     3,16M  1,09T    88K  /var
zroot/var/audit                                 88K  1,09T    88K  /var/audit
zroot/var/crash                                 88K  1,09T    88K  /var/crash
zroot/var/log                                 2,71M  1,09T  2,71M  /var/log
zroot/var/mail                                 112K  1,09T   112K  /var/mail
zroot/var/tmp                                   88K  1,09T    88K  /var/tmp

And concerning beadm:
Code:
$ sudo beadm destroy avm-libidn2
Are you sure you want to destroy 'avm-libidn2'?
This action cannot be undone (y/[n]): y
Boot environment 'avm-libidn2' was created from existing snapshot
Destroy '-' snapshot? (y/[n]): n                                             
Origin snapshot '-' will be preserved                                       
cannot destroy 'zroot/ROOT/avm-libidn2': filesystem has dependent clones
use '-R' to destroy the following datasets:
zroot/ROOT/avupg2
zroot/ROOT/default@2019-12-07-18:44:33-0
zroot/ROOT/default

I guess the problem lies in the naming mix-up between snapshots and clones (partial output):
Code:
$ zfs list -o name,clones -t all
NAME                                          CLONES
zroot                                         -
zroot/ROOT                                    -
zroot/ROOT/avm-libidn2                        -
zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0  zroot/ROOT/default
zroot/ROOT/avupg2                             -
zroot/ROOT/default                            -
zroot/ROOT/default@2019-12-07-18:44:33-0      zroot/ROOT/avupg2
(...)

And zroot/ROOT/avm-libidn2 itself cannot be destroyed as-is, because its snapshot zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0 is a child of it:
Code:
$ zfs destroy -nv zroot/ROOT/avm-libidn2
cannot destroy 'zroot/ROOT/avm-libidn2': filesystem has children
use '-r' to destroy the following datasets:
zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0

May I use zfs promote zroot/ROOT/default and zfs promote zroot/ROOT/avm-libidn2 to clear the avm-libidn2@2019-11-23-12:26:02-0 dependency?

I'm not familiar at all with these notions.
 
Try this:

Code:
zfs promote zroot/ROOT/default

zfs destroy -r zroot/ROOT/avm-libidn2
zfs destroy -r zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0
zfs destroy -r zroot/ROOT/avupg2
zfs destroy -r zroot/ROOT/default@2019-12-07-18:44:33-0

zfs destroy -R zroot/ROOT/avm-libidn2
zfs destroy -R zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0
zfs destroy -R zroot/ROOT/avupg2
zfs destroy -R zroot/ROOT/default@2019-12-07-18:44:33-0
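
For context, a minimal sketch of what the promote step changes, based on the clone listing in your post (only the zfs get calls are mine; they are read-only):
Code:
# before: default is a clone whose origin is avm-libidn2's snapshot
$ zfs get -H -o value origin zroot/ROOT/default
# promote reverses the roles: the snapshot moves under default (it becomes
# zroot/ROOT/default@2019-11-23-12:26:02-0) and avm-libidn2 becomes the clone,
# so it can then be destroyed without touching default
$ sudo zfs promote zroot/ROOT/default
$ zfs get -H -o value origin zroot/ROOT/default   # should now print "-"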
 
The third command gives an error:
Code:
$ sudo zfs promote zroot/ROOT/default                           
$ sudo zfs destroy -r zroot/ROOT/avm-libidn2                   
$ sudo zfs destroy -r zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0
cannot open 'zroot/ROOT/avm-libidn2': dataset does not exist
It seems zroot/ROOT/avm-libidn2@2019-11-23-12:26:02-0 has been renamed to zroot/ROOT/default@2019-11-23-12:26:02-0:
Code:
$ zfs list -t all                                               
NAME                                       USED  AVAIL  REFER  MOUNTPOINT       
zroot                                      678G  1,09T    88K  /zroot           
zroot/ROOT                                8,82G  1,09T    88K  none             
zroot/ROOT/avupg2                            8K  1,09T  8,24G  /                 
zroot/ROOT/default                        8,82G  1,09T  8,28G  /                 
zroot/ROOT/default@2019-11-23-12:26:02-0   397M      -  8,03G  -                 
zroot/ROOT/default@2019-12-07-18:44:33-0  34,5M      -  8,24G  -       
(partial listing)
Now I have my default BE, which seems OK, and avupg2. An unused snapshot, zroot/ROOT/default@2019-11-23-12:26:02-0, remains. Can I destroy it?
Code:
$ zfs destroy -nv zroot/ROOT/default@2019-11-23-12:26:02-0
would destroy zroot/ROOT/default@2019-11-23-12:26:02-0
would reclaim 397M

Is the BE setup already in a stable state, or must I continue executing the remaining commands you gave me? I mean, I see no problem keeping avupg2 if I can still safely destroy it later.
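
In case it is useful, here is the read-only check I have in mind to convince myself, assuming I've understood the clone relationships correctly:
Code:
$ bectl list
$ zfs list -r -t all -o name,origin,clones zroot/ROOT
# every remaining snapshot should either have no clones or only clones I intend to keep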
 
I thought you would just paste these commands without checking which ones would fail; that was the intention :)

In other words, run zfs destroy -r and zfs destroy -R on everything in zroot/ROOT except, of course, the zroot/ROOT/default dataset.
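
(A hypothetical one-liner in the same spirit, dry-run only so nothing is actually removed; the grep patterns are my assumption about keeping zroot/ROOT itself and default:)
Code:
# list the direct children of zroot/ROOT, drop default, and dry-run (-n) a
# recursive destroy of each remaining BE dataset
$ zfs list -H -o name -d 1 zroot/ROOT | grep -v -e '^zroot/ROOT$' -e '/default$' \
      | xargs -n 1 sudo zfs destroy -nrv
# default's own snapshots (zroot/ROOT/default@...) still need their own zfs destroy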
 
I blindly executed your commands. Most of them failed, but in the end only the default BE remains. I also had to destroy the renamed snapshot I mentioned: zfs destroy zroot/ROOT/default@2019-11-23-12:26:02-0
All seems to be ok now.

I investigated what happened with the help of a 12.0-RELEASE VM.

I created a boot environment be1, then added a file somewhere. After that, I created a new BE, be2, and added another file (just to make the BEs different). Then I activated be1, rebooted, and tried to destroy the default BE, with the idea of renaming be1 to default.
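
(Roughly, the steps looked like this; the marker file paths are just placeholders:)
Code:
$ sudo bectl create be1
$ touch /root/marker1        # placeholder file, just to make the BEs differ
$ sudo bectl create be2
$ touch /root/marker2        # placeholder file
$ sudo bectl activate be1
$ sudo shutdown -r now
# after rebooting into be1:
$ sudo bectl destroy default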

With bectl, I got the same error as before:
Code:
cannot destroy 'zroot/ROOT/***': dataset already exists
unknown error
BUT there is no problem if I do the same with be2. I think it's just because be1 was created before be2.

With beadm, there is no trouble, no matter whether I select be1 or be2 as my new default BE.

So... bectl is seriously bugged. I will use beadm from now on.
 
The problem is actually fixed in 12.1-STABLE r356602 (since 10 January 2020).
What       | Removed                      | Added
-----------+------------------------------+------------------------------
Status     | In Progress                  | Closed
Flags      | mfc-stable12?, mfc-stable11? | mfc-stable12+, mfc-stable11+
Resolution | ---                          | FIXED
 
I had the same issue in August 2020. There were two snapshots that bectl didn't destroy. Trying beadm took care of one. The remaining snapshot was about 5 GB, while the one I use was only about 600 MB. So I then used vermaden's other suggestion and ran zfs promote on the one I wanted to keep. After that the sizes changed: the one I wanted to destroy was about 600 MB and the one I am using was about 5 GB. At that point, beadm destroy worked. So thanks, vermaden; several months later your suggestions were quite helpful.
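
(In other words, something along these lines; keep-me and old-be are placeholder BE names, not the real ones:)
Code:
$ zfs list -t snapshot -o name,used       # the origin snapshot carries the large USED figure
$ sudo zfs promote zroot/ROOT/keep-me     # keep-me: placeholder for the BE being kept
$ zfs list -t snapshot -o name,used       # the space accounting swaps sides after the promote
$ sudo beadm destroy old-be               # old-be: placeholder for the BE that can now go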
 