Based on the type of failures, I'm hoping all is not lost. I have three drives in a raidz1. Drive 0's controller quit working, and several GB of data were written to the array while it was degraded. A few hours later, Drive 1 started clicking and screeching. I have since gotten Drive 0 back to a functioning state and am hoping I can get the array working again to recover some data that wasn't backed up; I don't care about the data written while the array was degraded.
The drive marked REMOVED is Drive 0, and the drive marked UNAVAIL is Drive 1, which I can't just reattach. I've tried to get the pool to mount and also to clear the error. I haven't tried detaching Drive 1 yet, since my last-ditch effort will be to cannibalize another drive for parts to see if I can resurrect it. Is there a way to force Drive 0 back into the array and accept whatever data loss that entails?
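For reference, my attempts so far looked roughly like this (reconstructed from memory, so the exact invocations may have differed slightly; the pool name and the Drive 0 GUID are taken from the zpool status output):

```shell
# Try to bring Drive 0 (was /dev/ad4) back online by its GUID
zpool online raid 126397095549615168

# Clear the pool's error counters/state
zpool clear raid

# Force the pool to import so it can be mounted
zpool import -f raid
```

None of these got the pool out of the UNAVAIL state.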
Here is the current status zpool reports:
Code:
  pool: raid
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
  scan: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        raid                      UNAVAIL      0     0     0
          raidz1-0                UNAVAIL      0     0     0
            126397095549615168    REMOVED      0     0     0  was /dev/ad4
            10679210838851715372  UNAVAIL      0     0     0  was /dev/ad6
            ada1                  ONLINE       0     0     0