I seem to have run into an issue with a pool. In short: I moved all the data off a pool and destroyed it. Then I added a single slice to each drive and labeled the slices using glabel, then created three four-device raidz vdevs. All was well so far, so I copied all the data back to the new, nicely set up pool. After a reboot, I don't know what the hell happened, but the pool is now showing as unavailable and the first four disks don't want to cooperate. Running
# zpool import storage
gives the following error...
Code:
# zpool import storage
cannot import 'storage': more than one matching pool
import by numeric ID instead
So doing a
# zpool import
gives me this:
Code:
  pool: storage
    id: 2169223940234886392
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        storage     ONLINE
          raidz1    ONLINE
            da0     ONLINE
            da4     ONLINE
            da5     ONLINE
            da2     ONLINE

  pool: storage
    id: 4935707693171446193
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        storage                  UNAVAIL  insufficient replicas
          raidz1                 ONLINE
            label/storage_02_1   ONLINE
            label/storage_02_2   ONLINE
            label/storage_02_3   ONLINE
            label/storage_02_4   ONLINE
          raidz1                 ONLINE
            label/storage_03_1   ONLINE
            label/storage_03_2   ONLINE
            label/storage_03_3   ONLINE
            label/storage_03_4   ONLINE
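As the first error suggests, the numeric ID is how you disambiguate two pools with the same name. As a sketch, the UNAVAIL pool's id can be pulled out of a saved copy of the listing above (the `listing` variable here is a trimmed copy of that output, embedded as text for illustration only; on the live system it would come from `zpool import` itself):

```shell
# Sketch: find the id of the UNAVAIL pool in saved `zpool import`
# output, so it can be passed to `zpool import <id>`.
# 'listing' is a trimmed copy of the output above, for illustration.
listing='pool: storage
id: 2169223940234886392
state: ONLINE
pool: storage
id: 4935707693171446193
state: UNAVAIL'
echo "$listing" | awk '$1 == "id:" { id = $2 } $2 == "UNAVAIL" { print id }'
# → 4935707693171446193
```

`zpool import 4935707693171446193` would then target the damaged pool by GUID instead of by name, though it can still refuse to import while the first raidz vdev has no importable members.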
For some reason I now have duplicate pools named storage. The first one, which claims to be comprised of da{0,4,5,2}, corresponds to the missing drives from the 'real' pool, except it should be listing slices (da0s1), not whole disks, and the pool was originally created with the respective glabels, not device names. I've searched and read the zfs mailing lists for a few hours now and I'm at a loss. It seems that the zfs/zpool labels are corrupted on the first raidz vdev. Running
# zdb -l /dev/da0s1
(one of the non-cooperating disks) gives the following:
Code:
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
    version=13
    name='storage'
    state=1
    txg=154704
    pool_guid=4935707693171446193
    hostid=3798766754
    hostname='unset'
    top_guid=17696126969775704657
    guid=2203261993905846015
    vdev_tree
        type='raidz'
        id=0
        guid=17696126969775704657
        nparity=1
        metaslab_array=23
        metaslab_shift=34
        ashift=9
        asize=2000401989632
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=2203261993905846015
                path='/dev/label/storage_01_1'
                whole_disk=0
                DTL=31
        children[1]
                type='disk'
                id=1
                guid=8995448228292161600
                path='/dev/label/storage_01_2'
                whole_disk=0
                DTL=30
        children[2]
                type='disk'
                id=2
                guid=5590467752431399831
                path='/dev/label/storage_01_3'
                whole_disk=0
                DTL=29
        children[3]
                type='disk'
                id=3
                guid=4709121270437373818
                path='/dev/label/storage_01_4'
                whole_disk=0
                DTL=28
--------------------------------------------
LABEL 3
--------------------------------------------
    version=13
    name='storage'
    state=1
    txg=154704
    pool_guid=4935707693171446193
    hostid=3798766754
    hostname='unset'
    top_guid=17696126969775704657
    guid=2203261993905846015
    vdev_tree
        type='raidz'
        id=0
        guid=17696126969775704657
        nparity=1
        metaslab_array=23
        metaslab_shift=34
        ashift=9
        asize=2000401989632
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=2203261993905846015
                path='/dev/label/storage_01_1'
                whole_disk=0
                DTL=31
        children[1]
                type='disk'
                id=1
                guid=8995448228292161600
                path='/dev/label/storage_01_2'
                whole_disk=0
                DTL=30
        children[2]
                type='disk'
                id=2
                guid=5590467752431399831
                path='/dev/label/storage_01_3'
                whole_disk=0
                DTL=29
        children[3]
                type='disk'
                id=3
                guid=4709121270437373818
                path='/dev/label/storage_01_4'
                whole_disk=0
                DTL=28
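For what it's worth, ZFS writes four copies of the vdev label on every member device: two 256 KiB labels at the front (L0, L1) and two at the back (L2, L3). Labels 0 and 1 failing to unpack while 2 and 3 come back intact suggests something overwrote the start of the slice rather than the whole device. A minimal sketch of where the copies live (the device size here is hypothetical, for illustration only):

```shell
# ZFS vdev label layout: four 256 KiB copies per device.
# L0 at offset 0, L1 at 256K, L2 at size-512K, L3 at size-256K.
# 'devsize' is a hypothetical device size, for illustration only.
devsize=$((500 * 1024 * 1024 * 1024))   # pretend 500 GiB slice
label=$((256 * 1024))
echo "L0=0 L1=$label L2=$((devsize - 2 * label)) L3=$((devsize - label))"
# → L0=0 L1=262144 L2=536870387712 L3=536870649856
```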
To be continued... ran out of characters...