Recovering from a double drive failure in a raidz1 — help

Based on the types of failures, I'm hoping all is not lost. I have three drives in a raidz1. Drive 0's controller quit working, and several GB of data were written to the array while it was degraded. A few hours later, Drive 1 started clicking and screeching. I have since gotten Drive 0 back into a functioning state, and I'm hoping I can bring the array up long enough to recover some data that wasn't backed up; I don't care about the data written while the pool was degraded.

Here is the current status zpool reports:
Code:
  pool: raid
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scan: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        raid                      UNAVAIL      0     0     0
          raidz1-0                UNAVAIL      0     0     0
            126397095549615168    REMOVED      0     0     0  was /dev/ad4
            10679210838851715372  UNAVAIL      0     0     0  was /dev/ad6
            ada1                  ONLINE       0     0     0
The drive marked REMOVED is Drive 0, and the drive marked UNAVAIL is Drive 1, which I can't simply reattach. I've tried to get the pool to mount and to clear the error. I haven't tried detaching Drive 1, since my last-ditch effort will be to cannibalize another drive to see if I can resurrect it. Is there a way to force Drive 0 back into the array and accept some loss of data?
 
Have you already tried onlining it as the message instructs, using the device node the removed drive is now known by? E.g.
# zpool online raid /dev/ad4

If that doesn't work: in my experience, removed drives that won't reattach usually will after a reboot and a subsequent re-import of the pool. But I've never tried that on a pool with insufficient replicas.
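A minimal sketch of that reboot-and-reimport sequence, assuming the pool name `raid` from the status output above and a FreeBSD system (the commands would need to be run as root, and `camcontrol` is FreeBSD-specific):

```shell
# Reboot so the controller/drive is redetected:
shutdown -r now

# After the reboot, confirm the kernel sees the drive again:
camcontrol devlist        # FreeBSD: list detected disks; look for the former ad4

# List importable pools without actually importing anything:
zpool import

# Re-import the pool by name, then check the vdev states:
zpool import raid
zpool status raid
```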
 
If you boot to single-user mode (so the pool isn't imported), does the following work:
Code:
# zpool import -F -d /dev -o readonly=on raid
  • -F tells it to import in recovery mode, where it discards the last few transactions if that's what it takes to make the pool importable (it only rolls back a few seconds' worth, but that may be enough here)
  • -d /dev tells it to manually search /dev/* for disks with ZFS metadata on them, instead of relying on the zpool.cache file. This helps when disk names change.
  • -o readonly=on tells it to mount the pool read-only, which can work around some corruption issues
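If the import succeeds, a sketch of getting the un-backed-up data off before attempting any repairs. The paths here are placeholders: `/raid` assumes the pool's default mountpoint, and `/mnt/rescue` stands in for wherever a separate rescue disk is mounted:

```shell
# Verify the pool came up and see where its datasets mounted:
zpool status raid
zfs list -r raid

# Copy the important data to a separate disk before touching the pool further.
# /raid/important and /mnt/rescue are placeholder paths:
rsync -a /raid/important/ /mnt/rescue/important/
```

With the pool imported read-only, nothing you do here can make the on-disk state worse, which is why copying data off comes before any detach/replace attempts.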
 