ZFS zpool cannot import after Linux

I have an SSD that I used in FreeBSD for a long time for bhyve guests. One day I connected it to a Linux box (Debian 10).
After importing/exporting the zpool in Linux, I cannot use this disk in FreeBSD anymore:
Code:
# zpool import
   pool: zoo
     id: 3177752991675440138
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
    devices and try again.
   see: http://illumos.org/msg/ZFS-8000-3C
config:

    zoo                     UNAVAIL  insufficient replicas
      17532632771465056131  UNAVAIL  cannot open
I tried to clear the labels:
Code:
# zpool labelclear /dev/ada1
failed to open /dev/ada1: Operation not permitted
As you can see, the labels contain the wrong device node, /dev/sdc:
Code:
% zdb -l /dev/ada1
------------------------------------
LABEL 0
------------------------------------
failed to unpack label 0
------------------------------------
LABEL 1
------------------------------------
    version: 5000
    name: 'zoo'
    state: 1
    txg: 18618516
    pool_guid: 3177752991675440138
    errata: 0
    hostname: 'aspen32'
    top_guid: 17532632771465056131
    guid: 17532632771465056131
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 17532632771465056131
        path: '/dev/sdc'
        whole_disk: 1
        metaslab_array: 37
        metaslab_shift: 31
        ashift: 12
        asize: 256055705600
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
------------------------------------
LABEL 2
------------------------------------
    version: 5000
    name: 'zoo'
    state: 1
    txg: 18618516
    pool_guid: 3177752991675440138
    errata: 0
    hostname: 'aspen32'
    top_guid: 17532632771465056131
    guid: 17532632771465056131
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 17532632771465056131
        path: '/dev/sdc'
        whole_disk: 1
        metaslab_array: 37
        metaslab_shift: 31
        ashift: 12
        asize: 256055705600
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
------------------------------------
LABEL 3
------------------------------------
    version: 5000
    name: 'zoo'
    state: 1
    txg: 18618516
    pool_guid: 3177752991675440138
    errata: 0
    hostname: 'aspen32'
    top_guid: 17532632771465056131
    guid: 17532632771465056131
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 17532632771465056131
        path: '/dev/sdc'
        whole_disk: 1
        metaslab_array: 37
        metaslab_shift: 31
        ashift: 12
        asize: 256055705600
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
Thanks for any ideas!
 
Sounds like it needs to know where the device files are. I would try re-importing with the -d option.

Code:
zpool import -d /dev zoo
 
I get the same error message reusing an SSD that was in a test machine. I just want to wipe ZFS completely from this SSD and start again, but there is a permanent ghost of a previous 'zroot' pool that just can't be exorcised.

I have tried (roughly the commands sketched below):
wiping the partition table with gpart destroy
clearing the ZFS labels with zpool labelclear
newfs -E /dev/ada0
dd'ing the first and last 1 MB of the drive

Nothing seems to work
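For the record, this is roughly what I ran; the exact flags are from memory and ada0 is assumed to be the SSD:
Code:
# force-destroy the partition table and clear the ZFS labels
gpart destroy -F ada0
zpool labelclear -f /dev/ada0
# newfs -E erases the disk contents before creating a filesystem
newfs -E /dev/ada0
# zero the first 1 MB...
dd if=/dev/zero of=/dev/ada0 bs=1m count=1
# ...and (roughly) the last 1 MB; field 3 of diskinfo is the size in bytes
dd if=/dev/zero of=/dev/ada0 bs=1m count=1 \
    oseek=$(( $(diskinfo ada0 | awk '{print $3}') / 1048576 - 1 ))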

Is there a way of destroying an 'UNAVAIL' pool and device using the id number?
The 'UNAVAIL' ghost pool is called 'zroot'; it has an id of 7735432697577356694.
zroot has an 'UNAVAIL' device with an id of 7437532231762765140.

Is it safe to dd the entire SSD with zeros to wipe it? This machine also has four SAS drives connected that had ZFS on them from other machines, but they are no longer grumbling since I zeroed them with dd.
 
In a state of frustration I used dd to zero the entire SSD...

Code:
dd status=progress bs=1m if=/dev/zero of=/dev/ada0

It failed to get rid of the ghost zroot. zpool import still finds the ghost. I don't really want to replace the SSD, but what else can I do?
 
I found a workaround...

Installing OpenIndiana Hipster 20180427 onto the SSD with my ghost 'zroot' pool effectively destroyed the ghost pool, whereas any number of FreeBSD reinstalls could not.

Installing FreeBSD over OpenIndiana gives me a nice clean ZFS install.
I wonder if installing OpenIndiana and FreeBSD from DVD-R media has made a difference here. Could installing FreeBSD from USB flash be the reason the ghost zpool kept appearing?
 
In a state of frustration I used dd to zero the entire SSD...
It failed to get rid of the ghost zroot. zpool import still finds the ghost.
I really don't understand how zeroing the disk didn't help. The only explanation I can envision is that the second copy of the GPT (at the end of the disk) somehow survived, i.e. the last blocks remained in the SSD's buffer and were never transferred to flash.
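If that is what happened, something like the following should confirm it and re-zero the very end of the disk precisely; this is untested and assumes the SSD is ada0 with 512-byte sectors:
Code:
# check whether a partition table or any ZFS label survived
gpart show ada0
zdb -l /dev/ada0
# zero exactly the last 2 MB (4096 sectors); field 4 of diskinfo is the
# media size in sectors, and conv=fsync makes dd fsync(2) the output
# before exiting
dd if=/dev/zero of=/dev/ada0 bs=512 count=4096 conv=fsync \
    oseek=$(( $(diskinfo ada0 | awk '{print $4}') - 4096 ))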
 
I really don't understand how zeroing the disk didn't help. The only explanation I can envision is that the second copy of the GPT (at the end of the disk) somehow survived, i.e. the last blocks remained in the SSD's buffer and were never transferred to flash.
I just experienced something like this yesterday as well. It was during a flurry of multiple installs: FreeBSD, OpenIndiana, and OmniOSce, back and forth. The fix in my case was to use the FreeBSD install image on a USB stick, go into a shell, and use gpart to sequentially delete the partitions (i3...i2...i1) and then (and only then) destroy the GEOM. Trying to just destroy the GEOM threw an error that it "was busy". The partitions had to die first. It was like re-living the days when your BIOS changes wouldn't be recognized unless you turned off your computer for a few minutes, or the CMOS battery was almost depleted. BTW, the always great FreeBSD Handbook Sections 19.3 and 19.4 were really helpful here. (I'm also staying the hell away from UEFI on my old-ish machines.)
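In gpart terms, the sequence was roughly the following (ada0 and a three-partition layout assumed, so adjust the indexes to whatever gpart show reports):
Code:
# delete the partitions highest index first, then destroy the
# now-empty partition table (the GEOM)
gpart delete -i 3 ada0
gpart delete -i 2 ada0
gpart delete -i 1 ada0
gpart destroy ada0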
 