ZFS Zpool import failing after server crash

Hi folks,

Recently we had network maintenance during which the switches connected to the node were rebooted. As a result, one of the nodes in the cluster hit a CPU panic, crashed, and was eventually rebooted. After the reboot we tried to bring the services back up and found a problem with one of the zpools.

Code:
# zpool status lasoracle-dev_zpool 
cannot open 'lasoracle-dev_zpool': no such pool

Code:
# zpool status
  pool: rootpool
 state: ONLINE
  scan: resilvered 23.1G in 0h36m with 0 errors on Mon Oct 7 17:42:46 2013
config:

        NAME                         STATE     READ WRITE CKSUM
        rootpool                     ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            c0t5000CCA02143970Cd0s0  ONLINE       0     0     0
            c0t5000CCA012D1BC1Cd0s0  ONLINE       0     0     0
            c0t5000CCA012D22E60d0s0  ONLINE       0     0     0
        spares
          c0t5000CCA012D2625Cd0s0    AVAIL

errors: No known data errors
Code:
# zpool list
NAME       SIZE  ALLOC  FREE  CAP  HEALTH  ALTROOT
rootpool   556G  31.8G  524G   5%  ONLINE  -

The problematic zpool is lasoracle-dev_zpool.

Below is the output from our documentation, captured before the server crash:

Code:
# zpool status lasoracle-dev_zpool
  pool: lasoracle-dev_zpool
 state: ONLINE
  scan: none requested
config:

        NAME                         STATE     READ WRITE CKSUM
        lasoracle-dev_zpool          ONLINE       0     0     0
          /dev/vx/dmp/st2540-0_15s2  ONLINE       0     0     0
          /dev/vx/dmp/st2540-0_17s2  ONLINE       0     0     0
          /dev/vx/dmp/st2540-0_1s2   ONLINE       0     0     0


When we tried an import, this is what we got:
Code:
# zpool status lasoracle-dev_zpool
cannot open 'lasoracle-dev_zpool': no such pool
# zpool import
  pool: apache_zpool
    id: 2899374187356876317
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        apache_zpool   ONLINE
          c15t3d203s2  ONLINE

  pool: lasoracle-dev_zpool
    id: 9815117877345004397
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

        lasoracle-dev_zpool  FAULTED  corrupted data
          st2540-0_15s2      ONLINE
          st2540-0_17s2      ONLINE

  pool: lasoracle-prod_zpool
    id: 6782994953300400143
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        lasoracle-prod_zpool  ONLINE
          mirror-0            ONLINE
            st2540-0_3s2      ONLINE
            st2540-0_5s2      ONLINE
          mirror-1            ONLINE
            st2540-0_7s2      ONLINE
            st2540-0_9s2      ONLINE
          mirror-2            ONLINE
            st2540-0_11s2     ONLINE
            st2540-0_13s2     ONLINE

We then tried importing it, both plainly and with the -F (recovery) flag:
Code:
# zpool import lasoracle-dev_zpool 
cannot import 'lasoracle-dev_zpool': I/O error 
Destroy and re-create the pool from 
a backup source.
Code:
# zpool import -F lasoracle-dev_zpool 
cannot import 'lasoracle-dev_zpool': I/O error 
Destroy and re-create the pool from 
a backup source.
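
For reference, these are the import variants we have tried or are considering. As I understand the docs, -f only overrides the "last accessed by another system" check, while -F attempts recovery by discarding the most recent transactions. I am not certain every option below (the readonly import in particular) is supported on our pool version (29), so please treat this as a sketch:

```shell
# Force import: overrides the "last accessed by another system" guard only;
# it does not repair corrupted metadata.
zpool import -f lasoracle-dev_zpool

# Recovery mode: discard the last few transactions and roll the pool back
# to an earlier consistent state. With -n it performs a dry run that reports
# whether recovery would succeed, without modifying anything.
zpool import -fF -n lasoracle-dev_zpool
zpool import -fF lasoracle-dev_zpool

# If recovery succeeds, importing read-only first would limit further damage
# while the data is copied off (not sure this flag exists on our ZFS version).
zpool import -f -o readonly=on lasoracle-dev_zpool
```

The -n dry run seemed like the safest first step, since it reports whether recovery would succeed without writing anything.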

We also asked our support contact; according to them, the zpool label is missing on one of the devices, which is why the import fails. I would still like to confirm that with the experts here. :)


Code:
bash-3.2# zdb -l /dev/vx/rdmp/st2540-0_1s2

--------------------------------------------
LABEL 0
--------------------------------------------
    version: 29
    state: 3
    guid: 15743663041286833375
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 29
    state: 3
    guid: 15743663041286833375
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 29
    state: 3
    guid: 15743663041286833375
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 29
    state: 3
    guid: 15743663041286833375
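
If I read this zdb output correctly, all four labels on st2540-0_1s2 carry only a version, state, and guid; the pool name, pool_guid, and vdev_tree fields that appear on the healthy disks are gone, and the state is 3 rather than 0. A quick loop like the one below (device paths taken from our old documentation) should show which members still carry a label naming the pool:

```shell
# Count, per device, how many of the four ZFS labels still name the pool.
# Healthy members should report 4; the damaged disk should report 0.
for dev in /dev/vx/rdmp/st2540-0_1s2 \
           /dev/vx/rdmp/st2540-0_15s2 \
           /dev/vx/rdmp/st2540-0_17s2; do
    printf '%s: ' "$dev"
    zdb -l "$dev" | grep -c "name: 'lasoracle-dev_zpool'"
done
```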

Below is the output for the other disks in the same pool:

Code:
bash-3.2# zdb -l /dev/vx/rdmp/st2540-0_15s2

--------------------------------------------
LABEL 0
--------------------------------------------
    version: 29
    name: 'lasoracle-dev_zpool'
    state: 0
    txg: 3618500
    pool_guid: 9815117877345004397
    hostid: 2247221516
    hostname: 'lasoracle2'
    top_guid: 18241282699049948857
    guid: 18241282699049948857
    vdev_children: 3
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 18241282699049948857
        path: '/dev/vx/dmp/st2540-0_15s2'
        whole_disk: 0
        metaslab_array: 33
        metaslab_shift: 31
        ashift: 9
        asize: 298481614848
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 29
    name: 'lasoracle-dev_zpool'
    state: 0
    txg: 3618500
    pool_guid: 9815117877345004397
    hostid: 2247221516
    hostname: 'lasoracle2'
    top_guid: 18241282699049948857
    guid: 18241282699049948857
    vdev_children: 3
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 18241282699049948857
        path: '/dev/vx/dmp/st2540-0_15s2'
        whole_disk: 0
        metaslab_array: 33
        metaslab_shift: 31
        ashift: 9
        asize: 298481614848
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 29
    name: 'lasoracle-dev_zpool'
    state: 0
    txg: 3618500
    pool_guid: 9815117877345004397
    hostid: 2247221516
    hostname: 'lasoracle2'
    top_guid: 18241282699049948857
    guid: 18241282699049948857
    vdev_children: 3
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 18241282699049948857
        path: '/dev/vx/dmp/st2540-0_15s2'
        whole_disk: 0
        metaslab_array: 33
        metaslab_shift: 31
        ashift: 9
        asize: 298481614848
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 29
    name: 'lasoracle-dev_zpool'
    state: 0
    txg: 3618500
    pool_guid: 9815117877345004397
    hostid: 2247221516
    hostname: 'lasoracle2'
    top_guid: 18241282699049948857
    guid: 18241282699049948857
    vdev_children: 3
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 18241282699049948857
        path: '/dev/vx/dmp/st2540-0_15s2'
        whole_disk: 0
        metaslab_array: 33
        metaslab_shift: 31
        ashift: 9
        asize: 298481614848
        is_log: 0
        create_txg: 4
Code:
bash-3.2#  zdb -l /dev/vx/rdmp/st2540-0_17s2
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 29
    name: 'lasoracle-dev_zpool'
    state: 0
    txg: 3618500
    pool_guid: 9815117877345004397
    hostid: 2247221516
    hostname: 'lasoracle2'
    top_guid: 2671360783932084730
    guid: 2671360783932084730
    vdev_children: 3
    vdev_tree:
        type: 'disk'
        id: 1
        guid: 2671360783932084730
        path: '/dev/vx/dmp/st2540-0_17s2'
        whole_disk: 0
        metaslab_array: 30
        metaslab_shift: 31
        ashift: 9
        asize: 298481614848
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 29
    name: 'lasoracle-dev_zpool'
    state: 0
    txg: 3618500
    pool_guid: 9815117877345004397
    hostid: 2247221516
    hostname: 'lasoracle2'
    top_guid: 2671360783932084730
    guid: 2671360783932084730
    vdev_children: 3
    vdev_tree:
        type: 'disk'
        id: 1
        guid: 2671360783932084730
        path: '/dev/vx/dmp/st2540-0_17s2'
        whole_disk: 0
        metaslab_array: 30
        metaslab_shift: 31
        ashift: 9
        asize: 298481614848
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 29
    name: 'lasoracle-dev_zpool'
    state: 0
    txg: 3618500
    pool_guid: 9815117877345004397
    hostid: 2247221516
    hostname: 'lasoracle2'
    top_guid: 2671360783932084730
    guid: 2671360783932084730
    vdev_children: 3
    vdev_tree:
        type: 'disk'
        id: 1
        guid: 2671360783932084730
        path: '/dev/vx/dmp/st2540-0_17s2'
        whole_disk: 0
        metaslab_array: 30
        metaslab_shift: 31
        ashift: 9
        asize: 298481614848
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 29
    name: 'lasoracle-dev_zpool'
    state: 0
    txg: 3618500
    pool_guid: 9815117877345004397
    hostid: 2247221516
    hostname: 'lasoracle2'
    top_guid: 2671360783932084730
    guid: 2671360783932084730
    vdev_children: 3
    vdev_tree:
        type: 'disk'
        id: 1
        guid: 2671360783932084730
        path: '/dev/vx/dmp/st2540-0_17s2'
        whole_disk: 0
        metaslab_array: 30
        metaslab_shift: 31
        ashift: 9
        asize: 298481614848
        is_log: 0
        create_txg: 4

Questions: Is it possible to import the zpool without that label? How can we find out what happened to the label, i.e. how it went missing?
Is there any way to re-create the label without losing data?
Any other information that would help bring the pool back would be very welcome; I'm not a sysadmin. :(
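
For completeness, these are the last-resort options I found while searching but have not dared to run. Both -X and -T are undocumented and may not even exist in our ZFS version, so corrections are very welcome:

```shell
# Extreme rewind: like -F, but searches much further back through older
# uberblocks; reportedly slow and risky (undocumented option).
zpool import -fFX lasoracle-dev_zpool

# Import as of a specific transaction group; 3618500 is the txg reported
# by the surviving labels (undocumented option, may not be available).
zpool import -f -T 3618500 lasoracle-dev_zpool
```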

Thanks
Amar
 