ZFS RAID-Z drive failure issue

I have a four-drive GPT RAID-Z setup. The four drives are connected through a RAID controller, each set up as a single-disk RAID0 virtual drive. Obviously I didn't think this through all that well, because I just had a drive failure and I'm running into an issue trying to get the replacement drive back in and functioning.

I popped in a new drive, set up the virtual drive in the RAID controller, and tried to boot, but the BIOS told me there were no boot devices. So I decided to plug the new drive into a SATA port, boot into FreeBSD, and use gpart to create the boot partition, etc. That all went fine. I powered down, moved the new drive back to the RAID controller, and powered back on. Now the virtual drive configuration in the RAID controller was missing - I'm assuming because I wrote data to the drive directly over SATA.
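
In case it helps anyone searching later, the gpart side of that looks roughly like the sketch below. The device name (da4), the labels, and the partition sizes are placeholders, not my actual layout - the real scheme should mirror the surviving pool members.

    # da4, the labels, and the 512k boot partition size are assumptions -
    # copy the partition scheme from one of the healthy drives.
    gpart create -s gpt da4
    gpart add -t freebsd-boot -s 512k -l gptboot4 da4
    gpart add -t freebsd-zfs -l disk4 da4
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da4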

I'm not quite sure what I can do to fix this. Any ideas?
 
It just hit me: the drive that failed was connected to port 0 on the RAID controller. I did not have any of the 4 drives set as the boot device in the RAID BIOS, so I'm assuming it just picked drive 0. I set the boot device to be one of the 3 good drives (they still have boot blocks) and it took right off :)

Now I get to see how this thing rebuilds /crosses fingers.
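
For anyone following along, kicking off and watching the resilver looks roughly like this; the pool name (tank) and the partition names are placeholders rather than my real setup:

    # tank and the partition names stand in for the real pool layout.
    zpool replace tank da0p2 da4p2   # tell ZFS to rebuild onto the new partition
    zpool status tank                # shows resilver progress and estimated time left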
 
Just hope that absolutely nothing goes wrong with any of the other drives during the rebuild phase. I tend to be scared of single-redundancy schemes for that reason, especially with larger drives... rebuilding a 3 TB drive in a RAID-Z2 gives me a bit more peace of mind ;)
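
For comparison, a double-parity pool is created the same way, just with raidz2 instead of raidz; the pool and disk names here are made up:

    # Hypothetical 6-disk example; raidz2 keeps the pool intact through
    # two simultaneous drive failures instead of one.
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5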
 
Yeah, for that reason I stick to 1 TB drives. I've seen a few rebuilds fail over the course of my work. That, and I buy drives from two different sources, because I've seen batches of drives fail within a few weeks of each other.

I had about 1 TB of data and it rebuilt in about a half hour. All is good :)
 