Phantom zpool / zfs label

I seem to have a phantom zfs label from an old zpool on my 8.3-RELEASE-p4 system that just won't go away. `zpool import` shows the following:
Code:
   pool: Big10
     id: 8508160011089780154
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
 config:

        Big10                     UNAVAIL  insufficient replicas
          mirror-1                UNAVAIL  insufficient replicas
            13749973325603254895  UNAVAIL  cannot open
            14233028929160807345  UNAVAIL  cannot open

After a bit of digging, I tried `zdb -e 8508160011089780154` and got this:
Code:
Configuration for import:
        version: 14
        pool_guid: 8508160011089780154
        name: 'Big10'
        state: 0
        hostid: 1141574414
        hostname: 'freenas'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 8508160011089780154
            children[0]:
                type: 'missing'
                id: 0
                guid: 0
            children[1]:
                type: 'mirror'
                id: 1
                guid: 10191749896870773769
                metaslab_array: 203
                metaslab_shift: 31
                ashift: 9
                asize: 2000394125312
                is_log: 0
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 13749973325603254895
                    path: '/dev/ada2'
                    whole_disk: 0
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 14233028929160807345
                    whole_disk: 0
                    path: '/dev/dsk/da3'
zdb: can't open 'Big10': No such file or directory

Since I don't have an ada2, I'm thinking the culprit is da3. The drives are partitioned to allow for swap:
Code:
# gpart show /dev/da3    
=>        34  3907029101  da3  GPT  (1.8T)
          34          94       - free -  (47k)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  3902834703    2  freebsd-zfs  (1.8T)

`zdb -l /dev/da3p2` shows 4 proper labels for an existing, mounted zpool. `zdb -l /dev/da3` shows a single label (LABEL 2) with the old pool:
Code:
# zdb -l /dev/da3                    
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 14
    name: 'Big10'
    state: 0
    txg: 968139
    pool_guid: 8508160011089780154
    hostid: 1141574414
    hostname: 'freenas.wcubed.net'
    top_guid: 10191749896870773769
    guid: 14233028929160807345
    vdev_tree:
        type: 'mirror'
        id: 1
        guid: 10191749896870773769
        metaslab_array: 203
        metaslab_shift: 31
        ashift: 9
        asize: 2000394125312
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 13749973325603254895
            path: '/dev/ada2'
            whole_disk: 0
        children[1]:
            type: 'disk'
            id: 1
            guid: 14233028929160807345
            path: '/dev/ada3'
            whole_disk: 0
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
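
For what it's worth, my understanding is that ZFS keeps four 256 KB copies of the label per vdev: two at the front of the device and two in the last 512 KB. The old FreeNAS pool used the whole disk (its label points at /dev/ada3), so repartitioning overwrote the front copies with the GPT and swap, while one of the back copies survived; that's what `zdb -l` finds on the raw device. A rough way to confirm it's sitting at the end of the disk (just a sketch, the exact offset math is zdb's job):
Code:
# mediasize in sectors is the 4th field of diskinfo(8) output
secs=$(diskinfo da3 | awk '{print $4}')
# read the last 1 MB of the raw disk and look for the old pool's name
dd if=/dev/da3 bs=512 skip=$((secs - 2048)) count=2048 | strings | grep Big10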

How can I get rid of it?

Thanks!
 
For future reference, this command should clear a label:
Code:
# zpool labelclear /dev/da3
Make sure you apply it to the correct device.
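
A word of caution before anyone copies that blindly: if I read the label layout right, labelclear zeroes the label areas at both the front and the back of whatever device it's given, so pointed at a partitioned disk like this one the front pass also overwrites the GPT. Untested on this exact layout, but roughly:
Code:
# gpart backup da3 > /root/da3.gpt
# zpool labelclear -f /dev/da3
# zdb -l /dev/da3
The `gpart backup` line keeps a copy of the partition table in case it needs restoring with `gpart restore`, `-f` is only needed if labelclear refuses because the label still looks active, and the final `zdb -l` should then fail to unpack all four labels.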
 
It's been a few months, so I'm not completely sure, but I believe I tried `zpool labelclear` to no avail and ended up zeroing the first and last few megabytes of the drive with `dd`.
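
For anyone who lands on this thread later, what I ran was along these lines (reconstructed from memory, so treat it as a sketch; the device name and sizes are from this thread, and it flattens the GPT and anything else at both ends of the disk, so only use it on a drive you're re-initializing, with any pools on it exported first):
Code:
secs=$(diskinfo da3 | awk '{print $4}')    # mediasize in sectors
dd if=/dev/zero of=/dev/da3 bs=1m count=4  # first 4 MB: GPT + front labels
dd if=/dev/zero of=/dev/da3 bs=512 seek=$((secs - 8192)) count=8192  # last 4 MB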
 