ZFS Zpool I/O Error and no drive attached to the pool

Gpop

New Member


Messages: 1

#1
Hi Everyone,

I had my PSU die on me and now I'm facing an issue with the zpool that I can't figure out.


Code:
[root@freenas ~]# zpool import
   pool: gDisk
     id: 4321208912538017444
  state: FAULTED
 status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
    see: http://illumos.org/msg/ZFS-8000-72
 config:

        gDisk                                           FAULTED  corrupted data
          raidz1-0                                      ONLINE
            gptid/db835a9b-665a-11e2-b37c-a0b3cce25a83  ONLINE
            gptid/dc882ef3-665a-11e2-b37c-a0b3cce25a83  ONLINE
            gptid/3a5b5d0b-943e-11e4-8c0e-a01d48c76648  ONLINE
            gptid/bd2204b8-3024-11e4-beb2-6805ca1cb42a  ONLINE


Doing a zpool import -F gDisk gives:

Code:
cannot import 'gDisk': I/O error
        Destroy and re-create the pool from
        a backup source.


However, once I do this I get the following messages in the console, with the same four vdev GUIDs repeating over and over.


Code:
Nov 17 09:43:59 freenas ZFS: vdev state changed, pool_guid=4321208912538017444 vdev_guid=13109049489029127203
Nov 17 09:43:59 freenas ZFS: vdev state changed, pool_guid=4321208912538017444 vdev_guid=4774203770015519164
Nov 17 09:43:59 freenas ZFS: vdev state changed, pool_guid=4321208912538017444 vdev_guid=9019238602065831635
Nov 17 09:43:59 freenas ZFS: vdev state changed, pool_guid=4321208912538017444 vdev_guid=11673891713223961018
Nov 17 09:43:59 freenas ZFS: vdev state changed, pool_guid=4321208912538017444 vdev_guid=13109049489029127203
Nov 17 09:43:59 freenas ZFS: vdev state changed, pool_guid=4321208912538017444 vdev_guid=4774203770015519164
Nov 17 09:43:59 freenas ZFS: vdev state changed, pool_guid=4321208912538017444 vdev_guid=9019238602065831635


The output of zdb -l /dev/ada0p2 gives me the following (the same for all drives):


Code:
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'gDisk'
    state: 0
    txg: 34596237
    pool_guid: 4321208912538017444
    hostid: 2970101908
    hostname: 'freenas.local'
    top_guid: 16522616241267246162
    guid: 11673891713223961018
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16522616241267246162
        nparity: 1
        metaslab_array: 31
        metaslab_shift: 36
        ashift: 12
        asize: 11993762234368
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 11673891713223961018
            path: '/dev/gptid/db835a9b-665a-11e2-b37c-a0b3cce25a83'
            phys_path: '/dev/gptid/db835a9b-665a-11e2-b37c-a0b3cce25a83'
            whole_disk: 1
            DTL: 487
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 13109049489029127203
            path: '/dev/gptid/dc882ef3-665a-11e2-b37c-a0b3cce25a83'
            phys_path: '/dev/gptid/dc882ef3-665a-11e2-b37c-a0b3cce25a83'
            whole_disk: 1
            DTL: 486
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 4774203770015519164
            path: '/dev/gptid/3a5b5d0b-943e-11e4-8c0e-a01d48c76648'
            whole_disk: 1
            DTL: 485
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 9019238602065831635
            path: '/dev/gptid/bd2204b8-3024-11e4-beb2-6805ca1cb42a'
            whole_disk: 1
            DTL: 484
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
------------------------------------
LABEL 1
------------------------------------
(identical to LABEL 0)
------------------------------------
LABEL 2
------------------------------------
(identical to LABEL 0)
------------------------------------
LABEL 3
------------------------------------
(identical to LABEL 0)


smartctl also doesn't report any issue with any of the disks.
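
(For what it's worth, I checked each disk with something along these lines; the exact invocation is from memory, and the adaN names are just how the disks show up on my box:)

Code:
# smartctl -H /dev/ada0    # overall health self-assessment, repeated for each of the four disks
# smartctl -a /dev/ada0    # full attribute and error-log dump, nothing suspicious in any of them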

I tried all the obvious import commands (-f, -F, -X, readonly, ...) and they all lead to the same result. The GUI says the drives are unused, so I'm not sure what is going on here. Is there a way to relink the drives to the pool and access the data?
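
Roughly, these are the variants I tried (all of them end in the same I/O error):

Code:
# zpool import -f gDisk
# zpool import -F gDisk
# zpool import -FX gDisk
# zpool import -o readonly=on -f gDisk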

Any help would be more than welcome and I'm willing to try anything at this stage.
 

ralphbsz

Daemon

Thanks: 765
Messages: 1,289

#2
What messages about the disks are there in dmesg or /var/log/messages? If there is nothing obvious recently, can you search /var/log/messages back to when the PSU actually failed? There must be a cause for the drives to have I/O errors or go offline, and the error messages will probably tell us what it is.
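
Something along these lines should surface them (the adaN pattern and the rotated-log names are assumptions based on a default FreeBSD/FreeNAS setup, adjust as needed):

Code:
# dmesg | grep -iE 'ada[0-9]|ahcich|error|timeout'
# grep -iE 'ada[0-9].*(error|timeout|retr)' /var/log/messages
# bzcat /var/log/messages.*.bz2 | grep -iE 'ada[0-9].*(error|timeout|retr)'   # older rotated logs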
 

ShelLuser

Son of Beastie

Thanks: 1,569
Messages: 3,411

#3
First of all: FreeNAS isn't FreeBSD; you're better off asking about this on the FreeNAS forums. Even so, this is the major risk (and downside) of ZFS: when the pool gets corrupted you lose all your filesystems, versus only one when using UFS.

However, one thing you can do is try to access the pool read-only. In my experience most problems surface when the system tries to write to the pool, which it does pretty much constantly. The bad news is that read-only access also prevents filesystem maintenance (unlike with UFS), but on the upside you may be able to access (and secure) all your data.

# zpool import -o readonly=on -fR /mnt gDisk

Follow that with zfs list to check whether it actually found any filesystems, which you're going to have to mount manually.
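
Something like this, in order (the altroot means everything ends up under /mnt; the dataset names are whatever zfs list reports, I'm only sketching the sequence):

Code:
# zpool import -o readonly=on -fR /mnt gDisk
# zfs list -r gDisk                      # see which datasets exist and where they want to mount
# zfs mount -a                           # or mount individual datasets by name
# zfs get -r mounted,mountpoint gDisk    # double-check what actually got mounted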

Next time I wouldn't immediately try -F; instead use -Fn first, so you don't risk corrupting the pool even further (due to that constant writing).
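
That is, something like:

Code:
# zpool import -Fn gDisk    # dry run: only reports whether discarding the last transactions would make the pool importable
# zpool import -F gDisk     # the real recovery attempt, only once the dry run looks sane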
 