pool degraded, now faulted - possible recovery options?

My raidz pool was degraded because one of the three devices wasn't showing up correctly, even though FreeBSD detected all three and SMART tests seemed to say the disk was fine. zpool status said the faulted device was previously ada2p2, and gpart showed that it had several partitions, which it shouldn't have. The pool was originally created on FreeBSD 7 or 8; I ran it under "native" ZFS on Linux, where it performed quite well for a few months, before moving it back to FreeBSD 9, where these problems started. All disks should be entirely ZFS, since I never partitioned them myself - though the one that wasn't showing up may originally have been a Windows disk that I threw into the pool when another drive failed 8 months ago.
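
For reference, this is roughly what I was looking at ("tank" here is a stand-in for my actual pool name):
Code:
zpool status -v tank    # showed the faulted device as previously ada2p2
gpart show ada2         # listed partitions that shouldn't exist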

I pulled what I believed to be the faulty disk (I checked with zpool status: the pool was degraded before, and the remaining devices were still online with the disk removed), then formatted the disk as a single NTFS partition just to wipe it, so I could throw it back into the pool and resilver. Only, ZFS now says I'm missing two disks from the pool. And this is where a warm sinking feeling comes over me and I realize how stupid it was not to back up the entire pool before trying this.
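
In hindsight, the safer sequence would probably have been roughly this (pool and device names assumed) rather than an NTFS quick format:
Code:
zpool offline tank ada2p2       # take the suspect member offline first
gpart destroy -F ada2           # clear the old partition table
zpool replace tank ada2p2 ada2  # resilver onto the wiped disk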

So I have to ask: is there anything I can do to recover from this? Two of the disks should be intact, though FreeBSD insists one isn't - they all show up in dmesg, and all report
Code:
SMART overall-health self-assessment test result: PASSED
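
That line comes from smartctl; assuming smartmontools from ports and ada2 as an example device, something like this is what I was running - though a PASSED overall result only means none of the drive's own thresholds were tripped:
Code:
smartctl -a /dev/ada2 | grep -i overall-health   # quick health summary
smartctl -t long /dev/ada2                       # full-surface self-test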

I realize I probably hosed the one I formatted as NTFS, but it was a quick format, so if I understand correctly the majority of the disk should still be intact.
 
warm sinking feeling...
Oh, that sums it up pretty well; been there. I'm not sure there's anything you can do once it's in a FAULTED state, but let's wait for someone more knowledgeable on the subject. I believe you could have recovered by simply reattaching that second drive after detaching it, but since you wiped it with NTFS, that copy of the data is gone.

Next time, back up your most important data, since raidz != backup.
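
Even a simple snapshot-and-send to another machine would have saved you here; a minimal sketch, assuming a pool called tank and a receiving box with a backup/tank dataset:
Code:
zfs snapshot -r tank@weekly
zfs send -R tank@weekly | ssh backuphost zfs receive -d backup/tank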
 
The thing is, I've tried many times to reattach this drive to the pool. Somehow it seems that I formatted the wrong disk, despite zpool saying that I had taken the correct one out (I ran zpool status a good twenty times to make sure I had the right disk disconnected, and it kept telling me the member was in fact still online).
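
For what it's worth, next time I'll match serial numbers against the sticker on the drive instead of trusting device names; something like this should do it (ada2 just as an example):
Code:
smartctl -i /dev/ada2 | grep -i serial     # serial number via SMART
camcontrol identify ada2 | grep -i serial  # same thing via CAM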

Again, I'm just hoping that two of these disks are still intact, even if it takes a little trouble, so that I can resilver everything.
 
@warinthepocket

Destroy. Erase. Improve.

Use labels on the hard drives next time, to avoid pulling out the wrong one(s).
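
With GPT labels the pool members show up under /dev/gpt/<name> no matter which port they're plugged into; a quick sketch, where the partition index, label, and pool name are only examples:
Code:
gpart modify -i 2 -l disk0 ada0   # label partition 2 on ada0 as "disk0"
zpool import -d /dev/gpt tank     # import the pool by label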

/Sebulon
 
I'll go out on a limb here; I'm a *little* determined to fix this. The disk that originally wasn't working seems to have its data in place after offset 0x00004420; before that, it looks like the EFI BIOS overwrote it (see the beginning of the partition here: http://pastebin.com/qcdHVERJ).

The beginning of the healthy partition looks like this: http://pastebin.com/diRN2Szq - could this be something as simple as repairing the beginning of the damaged partition?

The drive that I quick-formatted with NTFS may be much more complicated to do anything with - I'm not sure how much data was written, or where. But the data does seem intact from 0x00004400 onward for quite a way.
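
One thing I found while reading up: ZFS keeps four copies of its vdev label, two in the first 512 KiB of the partition and two in the last 512 KiB, so even with a trashed beginning the trailing labels might have survived. If I understand the tooling right, something like this should dump whatever labels are still readable (device name assumed):
Code:
zdb -l /dev/ada2p2    # prints any of the four vdev labels it can read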

Could a [cmd=]gpart create[/cmd] possibly fix these partition beginnings? I don't know much of anything about ZFS internals, but I'm willing to snoop around :D
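
What I have in mind is roughly the following, though the ZFS partition would have to start at exactly the same LBA as before, so this is pure guesswork until I know the original layout - and I'd try it against a dd image of the disk first:
Code:
gpart create -s gpt ada2               # new GPT in place of the damaged one
gpart add -t freebsd-boot -s 128 ada2  # assuming p1 was a small boot partition
gpart add -t freebsd-zfs ada2          # rest of the disk as p2 (the old ada2p2)
zpool import                           # see whether the pool reappears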
 