ZFS: Cannot remove/destroy unavailable zpool

If you are absolutely sure this disk/partition is really not in use and does not contain any valuable data whatsoever, you can use "brute force" to kill the disk's contents:

1. clean it with nvmecontrol(8) if it's an NVMe:
nvmecontrol sanitize -a block /dev/nda0
(ada0 is no NVMe, but I want to give a complete answer :cool:)
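
A sanitize runs in the background on the drive itself; assuming your nvmecontrol(8) supports the -r (report-only) flag, you can start it and then poll its progress, roughly like this:
Code:
# start a block-erase sanitize of the whole drive (destroys everything!)
nvmecontrol sanitize -a block /dev/nda0
# poll the status of the running sanitize (-r = report only, starts nothing)
nvmecontrol sanitize -r /dev/nda0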

2. clean it with camcontrol(8) if it's an SSD (combined example below):
camcontrol security ada0 to show the disk's current security settings
camcontrol security ada0 -U user -s pwd to set the user password and activate security
camcontrol security ada0 -U user -e pwd to erase the disk, or
camcontrol security ada0 -U user -h pwd for an enhanced erase
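
Put together, a one-shot erase could look like this sketch (MyPass is just a placeholder, and -y, which skips the interactive confirmation, may not exist in every camcontrol(8) version):
Code:
# show the drive's current security state first
camcontrol security ada0
# set the user password and immediately issue the security erase
camcontrol security ada0 -U user -s MyPass -e MyPass -y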

3. simply overwrite it with dd if=/dev/zero of=/dev/ada0 bs=10M status=progress if it's an HDD, or if the other ways don't work. Cleaning an SSD/NVMe this way is not a good idea, though: it produces a lot of write wear on the disk, and if the disk is large, a complete wipe with dd can take very long. Overwriting just the first and the last couple of blocks with dd would "kill enough" that the rest can be handled more comfortably with gpart (sketch below).
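
A minimal sketch of that partial wipe, using diskinfo(8) to find the media size (adjust the device name to yours):
Code:
# wipe the first 10 MB
dd if=/dev/zero of=/dev/ada0 bs=1M count=10
# the media size in bytes is the third field of diskinfo(8) output
size=$(diskinfo /dev/ada0 | awk '{print $3}')
# wipe from 10 MB before the end; dd stopping with an error at the device end is expected
dd if=/dev/zero of=/dev/ada0 bs=1M seek=$(( size / 1048576 - 10 ))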

However, before you actually nuke this disk, make sure it can really be wiped (maybe it is part of another pool?); a quick check is sketched below.
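
Something like this (adjust the device name):
Code:
# is ada0 referenced by any imported pool? (-P prints full vdev paths)
zpool status -P | grep ada0
# does any exported, still importable pool reference it?
zpool import
# what partitions does it carry?
gpart show ada0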
According to the messages you posted, it could also be that you changed your hardware configuration (added, removed, replaced, or reordered drives) and that you are not using partition labels 🧐 (☝️ be advised to do so in the future 🤓; see the sketch below). One pool may then try to access a disk that used to be ada0 but now has another number, while the current ada0 is a different drive. (Could be; I don't know. But according to the messages you posted, such things need to be kept in mind when trying to give advice from a distance 🥸)
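
For the record, labelling is quick; a sketch with made-up names (partition index 2, label zdata0, pool tank):
Code:
# attach a GPT label to the ZFS partition
gpart modify -i 2 -l zdata0 ada0
# the label appears under /dev/gpt/ and survives any renumbering
zpool create tank gpt/zdata0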
 
I guess something is corrupt: it might be a BIOS renumbering of the drives, or the zpool cache file. Or something lingering.

gpart show,
Code:
root@:~ # gpart show
=>        34  1000215149  nda0  GPT  (477G)
          34   805310430        - free -  (384G)
   805310464    67108864    11  linux-swap  (32G)
   872419328    51200000     2  ms-basic-data  (24G)
   923619328    76595855    12  freebsd-zfs  (37G)

=>       34  976773101  ada1  GPT  (466G)
         34       2014        - free -  (1.0M)
       2048     204800     1  efi  (100M)
     206848      32768     2  ms-reserved  (16M)
     239616  975060992     3  ms-basic-data  (465G)
  975300608    1470464     4  ms-recovery  (718M)
  976771072       2063        - free -  (1.0M)

=>        40  1953525095  ada2  GPT  (932G)
          40  1953525088     1  freebsd-zfs  (932G)
  1953525128           7        - free -  (3.5K)

=>       34  976773101  diskid/DISK-S3R3NF1JB22028H  GPT  (466G)
         34       2014                               - free -  (1.0M)
       2048     204800                            1  efi  (100M)
     206848      32768                            2  ms-reserved  (16M)
     239616  975060992                            3  ms-basic-data  (465G)
  975300608    1470464                            4  ms-recovery  (718M)
  976771072       2063                               - free -  (1.0M)

=>        34  1953525101  ada0  GPT  (932G)
          34           6        - free -  (3.0K)
          40   419430400     1  freebsd-ufs  (200G)
   419430440   419430400     2  freebsd-zfs  (200G)
   838860840    67108864     3  freebsd-swap  (32G)
   905969704   384497624        - free -  (183G)
  1290467328   225902592     6  linux-data  (108G)
  1516369920   214861824     5  linux-data  (102G)
  1731231744   222291968     4  linux-data  (106G)
  1953523712        1423        - free -  (712K)

=>         6  1220934389  da0  GPT  (4.5T)
           6         250       - free -  (1.0M)
         256  1220933888    1  freebsd-zfs  (4.5T)
  1220934144         251       - free -  (1.0M)

=>         40  11720978352  da1  GPT  (5.5T)
           40   7168002008       - free -  (3.3T)
   7168002048   4552976344    2  freebsd-zfs  (2.1T)
 
zpool status,
Code:
zpool status
  pool: MYFREEBSD
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    MYFREEBSD   ONLINE       0     0     0
      ada0p2    ONLINE       0     0     0

errors: No known data errors

  pool: OLDDISK
 state: ONLINE
config:

    NAME                                          STATE     READ WRITE CKSUM
    OLDDISK                                       ONLINE       0     0     0
      gptid/f7694f18-03db-11ee-afaa-047c16075696  ONLINE       0     0     0

errors: No known data errors

  pool: ZUSB3
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
    The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
config:

    NAME                                          STATE     READ WRITE CKSUM
    ZUSB3                                         ONLINE       0     0     0
      gptid/f1b99fbe-06fc-11ee-8be0-047c16075696  ONLINE       0     0     0

errors: No known data errors

  pool: herman
 state: ONLINE
config:

    NAME                                          STATE     READ WRITE CKSUM
    herman                                        ONLINE       0     0     0
      gptid/c5ba7b87-025b-11ef-b2dd-047c16075696  ONLINE       0     0     0

errors: No known data errors

  pool: xxx
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    xxx         ONLINE       0     0     0
      nda0p12   ONLINE       0     0     0

errors: No known data errors
 
zpool list -v,
Code:
NAME                                           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
MYFREEBSD                                      199G  24.3G   175G        -         -    11%    12%  1.00x    ONLINE  -
  ada0p2                                       200G  24.3G   175G        -         -    11%  12.2%      -    ONLINE
OLDDISK                                        928G  2.74M   928G        -         -     0%     0%  1.00x    ONLINE  -
  gptid/f7694f18-03db-11ee-afaa-047c16075696   932G  2.74M   928G        -         -     0%  0.00%      -    ONLINE
ZUSB3                                         4.55T  4.42T   130G        -         -     1%    97%  1.00x    ONLINE  -
  gptid/f1b99fbe-06fc-11ee-8be0-047c16075696  4.55T  4.42T   130G        -         -     1%  97.2%      -    ONLINE
herman                                        2.11T  1.83T   288G        -         -     0%    86%  1.00x    ONLINE  -
  gptid/c5ba7b87-025b-11ef-b2dd-047c16075696  2.12T  1.83T   288G        -         -     0%  86.7%      -    ONLINE
xxx                                           36.5G  15.1G  21.4G        -         -     8%    41%  1.00x    ONLINE  -
  nda0p12                                     36.5G  15.1G  21.4G        -         -     8%  41.3%      -    ONLINE
 
zpool import,
Code:
   pool: ZT2
     id: 17034093544277358579
  state: UNAVAIL
status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
    devices and try again.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
 config:

    ZT2          UNAVAIL  insufficient replicas
      ada0p1     UNAVAIL  cannot open
root@:/home/x #
 
ada0p1 seems to be a UFS partition; the second one is ZFS. 🤔 BIOS disk numbering is not reliable, so I always recommend creating zpools with GPT-labelled partitions instead of numbered disk names. Did anything change, like physically reattaching the disk drives?
 
When I created the ZT2 zpool, ada0p1 was a ZFS partition, but then I removed it and created a UFS partition.
Somewhere, though, zpool import still thinks otherwise.
 
I don't think zpool import is showing anything wrong here. You had ada0p1 as ZFS in the pool, you formatted it as UFS, and now the pool can't function properly; that's expected, as far as I'm concerned. Was ada0p2 a ZFS partition the whole time?
 
Alain doesn't say clearly what problem he is trying to solve, but I think the desired outcome is for zpool import to no longer show the pool ZT2.

If ada0p1 was previously part of that ZFS pool but the partition is now used for a UFS filesystem, what may explain the observed behaviour is that some ZFS labels written to the partition are still there and readable, because UFS hasn't overwritten them.
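
Assuming zdb(8) is at hand, it can read those labels straight off the partition:
Code:
# prints any ZFS label(s) still present on the partition
zdb -l /dev/ada0p1
If that prints a config naming the pool ZT2, the stale labels are indeed there.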

To confirm that the contents of ada0p1 are the cause of the problem, you could try zpool import -d /dev/ada0p1. If it finds the pool when you tell it to look only at that partition, then that's where the stale ZFS labels are.

The normal way to remove the ZFS labels from the partition is to run zpool labelclear /dev/ada0p1. However, because there is a UFS filesystem now in that partition, running that command may damage the filesystem.

My recommendation is to follow these steps (a command sketch follows the list):
  1. Make a backup of the UFS filesystem in ada0p1.
  2. Use zpool labelclear /dev/ada0p1 to remove the old ZFS labels from the partition.
  3. Verify with zpool import -d /dev/ada0p1 that no ZFS labels are found in the partition.
  4. Restore the UFS filesystem in ada0p1 from backup.
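
A minimal sketch of that sequence, assuming the filesystem is unmounted, /backup has enough space, and the dump file name is made up:
Code:
# 1. back up the UFS filesystem with dump(8)
dump -0 -a -f /backup/ada0p1.dump /dev/ada0p1
# 2. clear the stale ZFS labels (-f treats exported/foreign devices as inactive)
zpool labelclear -f /dev/ada0p1
# 3. verify that no pool is found anymore
zpool import -d /dev/ada0p1
# 4. recreate the filesystem and restore from the dump
newfs /dev/ada0p1
mount /dev/ada0p1 /mnt
cd /mnt && restore -r -f /backup/ada0p1.dump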
 