more questions about ZFS and degraded raidz

I'm wondering about the best way to go about rebuilding/resilvering a ZFS filesystem. Here's my situation.

My RAID card has most of the usual RAID modes, RAID 1/0/5 and all that. The JBOD mode it has is not exactly what I expected... you can make a "JBOD" with more than one disk. When I originally got it, I emailed the manufacturer about its JBOD mode and about possibly moving devices to other controllers. Long story short, they explained that I can run disks in a "legacy" mode, which is what I REALLY wanted to begin with.

This brings me to the issue of failing the drives, formatting them, and resilvering them one by one. The last thing I want to do is destroy my data, which is working fine right now, but I WOULD like to switch the drives. Is it safe to just start the machine with one of the drives disconnected, then format said drive on another machine and "replace" the old spot in the array?


When I originally set it up, I DID make sure to use glabel for the devices to keep any problems from cropping up if the device names change.

The thing that threw me off is that when I set the system up with empty drives, they didn't get "seen" by the RAID card because they weren't "initialized".

If you format a drive on another machine and connect it to the array, it automatically shows up in this "legacy" mode. If I had known that from the start, I might not have made this mistake. In all honesty it's running fine right now, but I would like the ability to drop the drives into another machine or use another RAID card in the future if I so desire. Thanks.
 
Are these hot-swappable drives/bays? If so, then you don't need to reboot the ZFS host at all. Just pull the drive, reformat it on the other system, plug it back in, glabel it, and do a "zpool replace".
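
In commands, that's roughly the following; a minimal sketch, assuming the pool is named tank, the old glabel is disk5, and the replacement disk shows up as ada2 (all example names, not from this thread):

  # optionally take the device offline cleanly before pulling it
  zpool offline tank label/disk5

  # ...pull the drive, reformat it on the other machine, plug it back in...

  # write a GEOM label onto the replacement, then resilver onto it
  glabel label disk5 /dev/ada2
  zpool replace tank label/disk5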

If these are not hot-swappable, then you will need to reboot. If you have the time, I would do it like so:
  1. turn off ZFS host
  2. remove drive
  3. format drive in other system
  4. add drive to ZFS host
  5. boot ZFS host
  6. relabel drive
  7. do a "zpool replace" to add the drive back into the pool

Repeat for each drive, after the resilver has completed.
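
To see when a resilver has finished, check the pool status, e.g. (pool name again just an example):

  zpool status tank

Once the output shows the resilver complete and the pool back ONLINE, it's safe to move on to the next drive.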

Since you use labelled drives, you should be able to boot with the drive missing, and still have a usable, degraded pool. That's the point of using the labels. :)
 
phoenix said:
Are these hot-swappable drives/bays? If so, then you don't need to reboot the ZFS host at all. Just pull the drive, reformat on other system, plug back in, glabel it, and do a "zpool replace".

They are supposed to be hot-swappable, but I haven't TRIED it yet for fear that something could go wrong. Technically they are hot-swap drives with bays and whatnot... I've had trouble on another machine with hot-swap drives in Linux.

6. relabel drive
Is there a way to delete the glabels I already have set and/or reassign them to the new device?
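
For reference, glabel(8) does have verbs for this; a minimal sketch, reusing the example names from above (label disk5 on device ada2):

  # turn off the label device (this alone doesn't touch on-disk metadata)
  glabel stop disk5
  # then wipe the label metadata from the underlying provider
  glabel clear /dev/ada2
  # write a fresh label for the replacement disk
  glabel label disk5 /dev/ada2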

Since you use labelled drives, you should be able to boot with the drive missing, and still have a usable, degraded pool. That's the point of using the labels. :)

Yes, this was due to your help in other threads. I'm very happy I set it up this way; I was more worried about the pool not starting up automatically because it's degraded.


I'd like to do this, but the last thing I want to do is lose 6 TB of movies and TV shows.

Thanks again for your help.
 