Removing a ZFS pool whose parts have been reallocated

So, I've gotten myself into an odd situation. I'm doing a fresh install of 15.0.

I was migrating a root-on-ZFS pool to a smaller partition. At the end, the new pool seems okay, but "zpool import" is showing the original pool as UNAVAIL, and the devices it thinks are in that mirror are both devices/partitions that have since been put into the new mirror.

I don't seem to be able to "zpool destroy" it (cannot open), and I can't import it (vdev problem; no such pool or dataset).

And labelclear isn't an option now, since both drives are in the working mirror. I did try to labelclear /dev/nda1p4 before re-adding it to the new mirror (via a replace of a bogus element), but even then it failed to do anything.
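For the record, the attempt looked roughly like this ("zpool labelclear" refuses to touch a vdev that belongs to an imported pool, which is exactly the bind I'm in now). Wrapped in an echo so it prints rather than runs:

```shell
# The labelclear attempt, shown as a dry run: echo prints the command
# instead of executing it; drop the echo to really run it.
# -f forces clearing even when zpool thinks the device may be active.
# Only safe on a partition that is NOT part of an imported pool.
echo "zpool labelclear -f /dev/nda1p4"
```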

I just need to tell the system that that pool doesn't exist. How do I find out why it thinks it does?
Info below, minus a few kernel messages that made it to the console but not the script(1) capture.
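If it helps to see what I'd be poking at: zdb can dump the on-disk vdev labels read-only, which should show which pool name/guid is still recorded where. A sketch (the helper just prints the commands; the real invocations obviously need the devices present):

```shell
# Build read-only label-dump commands for a list of partitions.
# "zdb -l <device>" prints the ZFS vdev labels found on that device;
# a stale label would still carry the old pool's name and guid.
label_dump_cmds() {
    for dev in "$@"; do
        echo "zdb -l /dev/$dev"
    done
}
label_dump_cmds nda0p4 nda1p4
```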

Thanks.

Code:
Script started on Wed Jan 28 21:30:28 2026
root@:~ # zpool status                                             
  pool: zroot2                 
 state: ONLINE                                                     
  scan: resilvered 1.75G in 00:00:01 with 0 errors on Wed Jan 28 21:04:44 2026
config:                                                                                                                                 
                                                                                                                                        
        NAME        STATE     READ WRITE CKSUM                                                                                         
        zroot2      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            nda0p4  ONLINE       0     0     0
            nda1p4  ONLINE       0     0     0

errors: No known data errors
root@:~ # zpool import
  pool: zroot
    id: 12777438546136241594
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
config:

        zroot       UNAVAIL  insufficient replicas
          mirror-0  UNAVAIL  insufficient replicas
            nda1p4  UNAVAIL  invalid label
            nda0    UNAVAIL  cannot open
root@:~ # zpool destroy -f zroot
cannot open 'zroot': no such pool
root@:~ # zpool import -N -f zroot
cannot import 'zroot': no such pool or dataset
        Destroy and re-create the pool from
        a backup source.
root@:~ #
 
Okay. So, after a few bits of trial and error, and mostly thinking, I came up with "wait, where could it be loading that state from?" The ZFS partition _used_ to be the majority of the disk, and I've since made it smaller. So I added a large multi-TB partition after the shrunk ZFS partition and zeroed it out. Both disks. It took a few hours, but after that (and deleting the emptied partitions), I was able to reboot and have only the one pool visible.
A little more finagling to get the boot options right for the pool, and I think I'm good. Lesson learned? I'm not sure. Clearly: zero out everything that was ever a ZFS pool member. There may have been a better/easier way to accomplish this, but at least I figured out a way out of the mess I'd created.
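For anyone who lands here with the same problem, the shape of what I did is roughly the following (partition index 5 is an assumption; check your own gpart layout first). ZFS keeps two vdev labels at the start and two at the end of each pool member, so shrinking the partition left the old end-of-partition labels sitting in the freed space, and I believe that's what kept resurrecting the phantom pool. The helper just prints the commands:

```shell
# Sketch of the zero-out dance described above, as a dry run:
# commands are printed, not executed; drop the echoes to run them.
# Index 5 is assumed free in the partition table.
zero_freed_tail() {
    disk="$1"   # e.g. nda0
    idx="$2"    # a free partition index covering the freed space
    echo "gpart add -t freebsd-zfs -i $idx $disk"          # scratch partition over the freed tail
    echo "dd if=/dev/zero of=/dev/${disk}p${idx} bs=1m"    # wipe any stale ZFS labels there
    echo "gpart delete -i $idx $disk"                      # remove the scratch partition again
}
zero_freed_tail nda0 5
zero_freed_tail nda1 5
```

After the wipe, delete the scratch partitions and reboot; "zpool import" should then show only the live pool.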

For posterity....
 
Yeah. It would've needed to be done twice while moving the mirror between just the two devices, but applied at the right points, that would've worked.
A better option would be for the FreeBSD installer to let me adjust the size of the large ZFS partition when choosing the "root on ZFS" install mode. But I decided that doing what I did was likely easier than hand-building an essentially identical configuration from scratch. I still think I was right about that.
 