ZFS raidz2 with hardware RAID controller: do I need to enable/disable cache?

Hi all,

I'm using a Dell H710 RAID controller with 8 SAS disks to create a raidz2 zpool. Since the H710 does not support JBOD mode, each SAS disk is configured as a single-disk RAID 0 volume.
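For reference, the pool was created roughly like this (device names are placeholders; on FreeBSD the H710's single-disk RAID 0 volumes typically show up as mfid devices, so adjust to whatever your system reports):

Code:
# Sketch only: assumes the eight RAID 0 volumes appear as /dev/mfid0..7
# and "tank" is just an example pool name
zpool create tank raidz2 /dev/mfid0 /dev/mfid1 /dev/mfid2 /dev/mfid3 \
    /dev/mfid4 /dev/mfid5 /dev/mfid6 /dev/mfid7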

My question is about the RAID controller's read/write cache. In the past I was told that when using ZFS, the RAID controller's read/write cache should be disabled.

The Dell H710 RAID controller only supports the modes below:

Code:
Read:
No Read Ahead
Read Ahead
Adaptive Read Ahead

Write:
Write back
Write through

Which mode should I use for read and write to get the best performance?
 
A hardware RAID controller with onboard memory and a BBU is great for ZFS! For writes, I believe that if you enable write back, you will get improved write performance and still be safe. And you don't lose ZFS features such as self-healing, because you've exported each disk as a single drive and created the actual RAID with ZFS.
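You can verify that ZFS really does see each disk individually, and let it prove the self-healing works, with the standard pool commands (pool name is just a placeholder):

Code:
# Each single-disk volume should be listed as its own member of the raidz2 vdev
zpool status tank
# A scrub makes ZFS read and verify every block, repairing from parity if
# a device returns bad data
zpool scrub tank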

For reads, I'm not really sure. You would basically be caching reads both on the controller and in the ARC, which is in my opinion a waste.
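If you want to see how much read caching ZFS is already doing on its own, the ARC counters are exposed via sysctl on FreeBSD (assuming the usual kstat names):

Code:
# Current ARC size plus hit/miss counters
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses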
 
Hi all, the Dell H710 Mini comes with NV cache and a battery, so does that mean I should use write back to get better performance with SAS disks? And what about reads? And in the future, if I use SSDs, what should the read/write settings be?
 
If the controller has NVRAM/battery-backed cache, then it should honor write commits that have been reported to ZFS.

That's the entire point of having NVRAM / battery backup.

In other words, you should be OK to leave the controller's caching turned on.
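If you want to check or change this from the OS, the H710 is a MegaRAID-based card, so MegaCli may work against it (not guaranteed on every Dell setup, and the binary name varies; Dell's own tools are an alternative). Something along these lines:

Code:
# Show the current cache policy for all logical drives on adapter 0
MegaCli64 -LDGetProp -Cache -LAll -a0
# Enable write back, but drop to write through if the BBU goes bad
MegaCli64 -LDSetProp WB -LAll -a0
MegaCli64 -LDSetProp -NoCachedBadBBU -LAll -a0
# Check BBU health
MegaCli64 -AdpBbuCmd -GetBbuStatus -a0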
 
throAU said:
If the controller has NVRAM/battery-backed cache, then it should honor write commits that have been reported to ZFS.

That's the entire point of having NVRAM / battery backup.

In other words, you should be OK to leave the controller's caching turned on.

So do you mean that for both SSDs and HDDs, I should use:

Read:
Read Ahead

Write:
Write back
 
I've used a 3Ware SATA RAID card with a BBU quite successfully with ZFS in the past, by exporting six drives as Single Disk units and creating a pool with those.

With a functioning BBU, the on-card cache RAM can be considered "safe storage" as far as disk cache flush commands are concerned, so even those can be acknowledged right away. If you do not trust the BBU enough for that, you can normally configure the card to pass flushes through to the physical drives in such a scenario, even when a BBU is present.
Of course you should always flush to the platters when the BBU fails or is not present... a good RAID card will warn you quite clearly when you attempt to set a potentially dangerous caching policy without a working BBU. My card would automatically change policies when the BBU status changed in any way (low charge, failure, etc.).
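On MegaRAID-based cards like the H710 you can also check whether the physical drives' own write caches are enabled, separately from the controller cache (again assuming MegaCli works on your setup):

Code:
# Show the drives' own write cache setting per logical drive
MegaCli64 -LDGetProp -DskCache -LAll -a0
# Disable it so only the BBU-protected controller cache holds unflushed writes
MegaCli64 -LDSetProp -DisDskCache -LAll -a0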

So yeah, write back cache settings are safe with a BBU present. The impact of setting read ahead will depend on your workload; I can imagine it doing useless reads that ZFS knows nothing about...
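Rather than guessing, you could benchmark a read-heavy workload once with Read Ahead and once with No Read Ahead and compare; a rough fio sketch (paths and sizes are placeholders):

Code:
# Repeat with each controller read policy and compare bandwidth/IOPS
fio --name=randread --directory=/tank/test --rw=randread --bs=128k \
    --size=4g --numjobs=4 --runtime=60 --time_based --group_reporting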
 