ZFS FreeBSD-13 release: how do I remove an orphaned pool?

As above.
  1. The pool was originally created on Ubuntu focal using raidz1.
  2. I copied the data I wanted off the pool.
  3. I zeroed out the first 1 GB of each disk (dd if=/dev/zero of=/dev/my-disk-here bs=1M count=1000).
  4. I recreated the pool (same name as the old pool) on Ubuntu focal, but this time using raidz2 instead of raidz1.
  5. I copied the data back into the new pool.
  6. I imported the pool into my new FreeBSD 13 installation with no problems. I can browse the folders/files in the new pool, and I can also create new folders/files. The new pool uses the same disk devices (da1-da8) as the orphaned pool.
  7. Both the old and new pools were created using whole disks.
Thanks

Before importing the new pool
Code:
zpool import
   pool: mypool
     id: 11273196660544901619
  state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
config:

        mypool      FAULTED  corrupted data
          raidz1-0  FAULTED  corrupted data
            da4     ONLINE
            da3     ONLINE
            da2     ONLINE
            da1     ONLINE
            da8     ONLINE
            da7     ONLINE
            da6     ONLINE
            da5     ONLINE

   pool: mypool
     id: 59471106146611490
  state: ONLINE
status: Some supported features are not enabled on the pool.
action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
config:

        mypool                        ONLINE
          raidz2-0                    ONLINE
            gpt/zfs-9603a8d5a07ea754  ONLINE
            gpt/zfs-a466a81dbf6f549d  ONLINE
            gpt/zfs-208ba3a392b7dbd3  ONLINE
            gpt/zfs-a4763ffcc6464bf9  ONLINE
            gpt/zfs-3357518e86495a9c  ONLINE
            gpt/zfs-b97194d6457b48e3  ONLINE
            gpt/zfs-d2e78371729992b6  ONLINE
            gpt/zfs-cfeb969a252e853b  ONLINE

After importing the new pool
Code:
# zpool import
   pool: mypool
     id: 11273196660544901619
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
config:

        mypool      UNAVAIL  insufficient replicas
          raidz1-0  UNAVAIL  insufficient replicas
            da4     UNAVAIL  cannot open
            da3     UNAVAIL  cannot open
            da2     UNAVAIL  cannot open
            da1     UNAVAIL  cannot open
            da8     UNAVAIL  cannot open
            da7     UNAVAIL  cannot open
            da6     UNAVAIL  cannot open
            da5     UNAVAIL  cannot open

The results of zdb -l /dev/my-device-here are the same for da1 through da8 except for the GUID value. The output is also the same whether I run it before or after importing the new pool.
Code:
zdb -l /dev/da1
failed to unpack label 0
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
    version: 5000
    name: 'mypool'
    state: 0
    txg: 78482
    pool_guid: 11273196660544901619
    hostid: 246261765
    hostname: 'mynas'
    top_guid: 2160549355363473968
    guid: 17763965900102685915
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 2160549355363473968
        nparity: 1
        metaslab_array: 138
        metaslab_shift: 34
        ashift: 13
        asize: 48009362538496
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 15878986555622070549
            path: '/dev/da1'
            whole_disk: 1
            DTL: 1264
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 3539004344246636682
            path: '/dev/da2'
            whole_disk: 1
            DTL: 1262
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 8652397971955596157
            path: '/dev/da3'
            whole_disk: 1
            DTL: 1260
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 17763965900102685915
            path: '/dev/da4'
            whole_disk: 1
            DTL: 1258
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 18398748142412345969
            path: '/dev/da5'
            whole_disk: 1
            DTL: 1256
            create_txg: 4
        children[5]:
            type: 'disk'
            id: 5
            guid: 10378637980108753050
            path: '/dev/da6'
            whole_disk: 1
            DTL: 1254
            create_txg: 4
        children[6]:
            type: 'disk'
            id: 6
            guid: 17641754961935991446
            path: '/dev/da7'
            whole_disk: 1
            DTL: 1248
            create_txg: 4
        children[7]:
            type: 'disk'
            id: 7
            guid: 12458471051902098756
            path: '/dev/da8'
            whole_disk: 1
            DTL: 1246
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 2 3

Results of gpart show da1:
Code:
gpart show da1
=>         34  11721045101  da1  GPT  (5.5T)
           34         2014       - free -  (1.0M)
         2048  11721025536    1  apple-zfs  (5.5T)
  11721027584        16384    9  solaris-reserved  (8.0M)
  11721043968         1167       - free -  (584K)
 
I zeroed out the first 1 GB of each disk (dd if=/dev/zero of=/dev/my-disk-here bs=1M count=1000).
ZFS stores its metadata at the end of the disk too. You should have used zpool destroy to remove the pool information from the disks.
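
A minimal sketch of that path (in hindsight), using the pool and device names from this thread; zpool destroy marks the on-disk labels as destroyed so the pool no longer shows up in zpool import:

Code:
# Destroy the pool through ZFS instead of dd'ing the disks; this updates
# the labels at both the front and the back of every member device so
# the pool no longer appears as importable:
zpool destroy mypool

# Afterwards, check whether anything is still visible on a member disk:
zdb -l /dev/da1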

I recreated the pool (same name as the old pool) on Ubuntu focal, but this time using raidz2 instead of raidz1.
It's not entirely the same: the old pool used the whole disks, while your new pool uses partitions on those disks. This probably means the old metadata hasn't been overwritten, because most partitions don't go all the way to the end of the disk.

You could have a look at the partitions on the disks; there's probably a bit of free space at the end of each disk, and you can try zeroing that free space all the way to the end of the disk.
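
A minimal sketch of that approach, assuming 512-byte sectors and the layout from the gpart show da1 output above (1167 free sectors starting at sector 11721043968); verify your own numbers with gpart show before writing to a disk:

Code:
# Zero only the trailing free space, which sits outside the new pool's
# partitions and holds the old end-of-disk labels. Sector numbers are
# taken from the 'gpart show da1' output above:
dd if=/dev/zero of=/dev/da1 bs=512 seek=11721043968 count=1167
# (Export the pool first if FreeBSD refuses to write to a disk that is
#  currently in use.)

# Repeat for da2 through da8, then re-check that the old labels are gone:
zdb -l /dev/da1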
 
I checked (using fdisk in FreeBSD) da1 yesterday and there's only 1 partition. I'll check the rest later.
------------
So what are my options if all the disks only have 1 partition? Zero out the whole disk for all 8 disks?
 
(using fdisk in FreeBSD)
Please stop using fdisk(8); it can only deal with MBR partitioned disks (your disks are GPT). Use gpart show instead.

Zero out the whole disk for all 8 disks?
Only the bit from the end of the last partition to the end of the disk. That's where the 'old' zpool metadata was stored. The metadata is stored at the beginning of the disk (or partition), and a copy is stored at the end of the disk (or partition).

Because your new pool uses partitions, the bit at the end of the disk never got overwritten with the new zpool metadata (that's stored at the end of the partition instead).
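
A quick way to see that distinction with zdb, using device names from the outputs above (the gpt/zfs-... name is one of the partition labels from the second pool listing):

Code:
# Labels read relative to the whole disk: only the two old end-of-disk
# labels survived the dd of the first 1 GB, hence 'labels = 2 3' in the
# zdb output above:
zdb -l /dev/da1

# Labels read relative to the new pool's partition on the same disk:
# the labels of the new raidz2 pool live here:
zdb -l /dev/gpt/zfs-9603a8d5a07ea754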
 
Can you please tell me how to zero out the free space at the end of the disk? Thank you.

-----------------

Here are the results of gpart show da1 (the results are the same for the other disks):

Code:
gpart show da1
=>         34  11721045101  da1  GPT  (5.5T)
           34         2014       - free -  (1.0M)
         2048  11721025536    1  apple-zfs  (5.5T)
  11721027584        16384    9  solaris-reserved  (8.0M)
  11721043968         1167       - free -  (584K)
 
Back up your pool and use zpool labelclear to get rid of all identifiers (at both the beginning and the end) of the old zpools. Recreate the pool and restore the backup.

And yes, next time, use zpool destroy, not dd, to destroy the old pool. ZFS has redundancy on both sides of the disk specifically so that an accidental dd run doesn't destroy all the metadata.
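
A minimal sketch of that sequence, assuming the data is already backed up and using the pool/device names from this thread:

Code:
# Take the current pool down so its member devices are free:
zpool destroy mypool

# Clear the ZFS label areas at the front and back of each whole disk;
# -f forces the operation on devices that still look like pool members:
zpool labelclear -f /dev/da1
# ...repeat for da2 through da8...

# Then recreate the pool and restore the data from the backup.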
 