Hi,
I have set up a 12x1TB array in RAID6, giving me an approximately 10TB virtual disk, and I configured ZFS with a single pool on top of it. I know a lot of people will argue I should have used the disks as JBOD with raw disks and RAIDZ2, but I personally feel more comfortable with a hardware solution than with ZFS, since ZFS is fairly new and the FreeBSD implementation has not really been tested at scale (please don't argue, I won't change my mind on this).
My question is in regard to the BBU cache of the RAID controller. I'm using a PowerEdge R510 and an H700 RAID card with 512 MB of cache. The driver for it is mfi.
There are 14 disks in the server: the 12x1TB as RAID6 + ZFS, and 2 SAS drives running in RAID1 for the operating system.
Code:
# mfiutil cache 0
mfi0 volume mfid1 cache settings:
I/O caching: disabled
write caching: write-back
read ahead: adaptive
drive write cache: default
Both of my volumes (RAID1 and RAID6) have the same cache settings. Is that optimal, or should I use something else?
I thought of using "enable" (Enable caching for both read and write I/O operations) for the RAID1 and "writes" (Enable caching only for write I/O operations) for the RAID6 + ZFS volume.
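In other words, I'm considering something like the following, assuming the OS RAID1 is mfid0 and the ZFS RAID6 is mfid1 (I'd confirm the actual volume names with `mfiutil show volumes` first):

```shell
# Hypothetical volume names -- verify with: mfiutil show volumes
mfiutil cache mfid0 enable   # RAID1 (OS): cache both reads and writes
mfiutil cache mfid1 writes   # RAID6 (ZFS): cache writes only
```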
I absolutely do not want to compromise data integrity, but I would like to optimize performance as much as possible. Are my proposed settings better, or should I stick with the current ones?
Any comments would be more than welcome.
Thanks a lot in advance.