Hi all,
So, my ZFS raidz pool is reporting that it has "insufficient replicas" because the raidz1 has "corrupted data." The four drives that make up the array are online and they all pass SMART. There are no faulted drives.
This happened after a power outage in which the server had a perfectly clean shutdown from the UPS signal. Not being a complete twit, I have a backup, so no panic. But my OCD means that I really, really want to understand what happened here.
My utterly unfounded guess is that the data on the drives is intact and fine, but that the vdev label for the array is corrupt. I am curious if there is a workaround to recreate the raidz vdev and keep the data. I can't see a way forward at this point. Suggestions would be much appreciated.
FreeBSD 8.2-STABLE with ZFS v4 and ZPOOL v15
Configured with a four disk raidz: ada0, ada1, ada2, ada3.
After exporting the pool:
Code:
[root@neruda:~]# zpool import mpool
cannot import 'mpool': invalid vdev configuration
More generally:
Code:
[root@neruda:~]# zpool import
  pool: mpool
    id: 3532879862857622473
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

        mpool       UNAVAIL  insufficient replicas
          raidz1    UNAVAIL  corrupted data
            ada2    ONLINE
            ada3    ONLINE
            ada1    ONLINE
            ada0    ONLINE
From /var/log/messages:
Code:
neruda root: ZFS: vdev failure, zpool=mpool type=vdev.bad_label
I understand, perhaps incorrectly, that this kind of trouble can result from a bad ZFS disk label. If that were the problem, surely we would expect to see the "failed to unpack label" message in the output of zdb -l for one or more of the drives. But, as near as I can tell, all of my labels are fine. Output of zdb -l for each of the four drives follows in the next post.
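For reference, this is how I checked the labels (just a loop over the raw devices; the device names are the ones from my setup above):
Code:
```shell
# Dump the four on-disk ZFS vdev labels from each member disk.
# zdb -l prints "failed to unpack label N" for any label it
# cannot read, so a clean run means every label parses.
for d in ada0 ada1 ada2 ada3; do
    echo "=== /dev/${d} ==="
    zdb -l "/dev/${d}"
done
```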
Any suggestions?
Thanks!
lev.