Hi,
I moved four of my Dell MD1000s from Linux with LSI hardware RAID to FreeBSD 8.3 with software RAID using ZFS, to take advantage of compression. This storage is used for backup purposes, and it has been very stable so far. I have a few questions about potential issues if I start using ZFS on a larger scale in the future. Hopefully somebody in the forum can help me with these.
Right now some of the raidz2 vdevs span enclosures (half the disks on one JBOD, half on the next). What happens if one enclosure fails due to a power supply issue or a SAS expander failure? I assume the affected raidz2 vdevs would be marked as faulted. But once the JBOD comes back after the hardware issue is fixed, does ZFS detect on its own that the offline drives have returned, or is a manual step such as a zpool import needed to fix it? The same concern applies at larger scale: with a few SAS switches and JBODs hooked up to those switches, a switch or JBOD could go down due to hardware problems, or something as simple as a PDU failure. How resilient is ZFS on FreeBSD to these kinds of temporary hardware failures?
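For reference, here is the kind of manual recovery sequence I imagine would be involved after the enclosure comes back; the pool name (backup) and device name (da12) are just placeholders, not my actual setup:

```shell
# Check pool health after the enclosure returns; a pool that lost
# half of a raidz2 vdev's disks will typically show DEGRADED or UNAVAIL.
zpool status -v backup

# Ask ZFS to bring a previously missing/offlined disk back into service
# (da12 is a made-up device name).
zpool online backup da12

# Once the devices are back and any resilver has finished cleanly,
# clear the accumulated error counters.
zpool clear backup

# If the pool itself went unavailable and had to be exported,
# rescan the devices and import it again.
zpool import backup
```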
Thanks