Howdy,
I have a 3Ware 9550SX (4 port) RAID controller and I'm using it with 4 1TB WD RE3 drives in an 8.0 box. I'm using ZFS with two mirrors in a pool. Performance seems OK in benchmarks, but not quite what I'd expect, and the system gets quite laggy during writes - for example, tab completion in the shell will have a delay of 1-3 seconds if there is a continuous write operation happening on the pool.
When I set this box up, I simply configured the 3Ware to pass all the drives through in "JBOD" mode. I just installed 3DM and tw_cli to see if there's anything worth tuning on the controller. I see a few options there regarding write caching (the card has 256MB of cache, but no BBU) and some various "performance" settings. This is where I get lost - I'm not sure whether, if I enable the write cache, the ZFS layer will be aware of that and take it into consideration. I'm also not sure how to verify whether this controller is enabling NCQ for the drives - they support it, and I know that when the controller is handling RAID tasks it uses NCQ if the drives support it, but in JBOD mode I'm finding conflicting info. "camcontrol" claims a queue depth of 254 on the drives, which I believe exceeds the spec for SATA NCQ (32 tags), so that number is presumably the controller's own queue rather than the drives'.
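For reference, here's roughly what I've been running to poke at this - a sketch assuming the card is controller /c0 and the disks show up as da0-da3 (adjust the device names and unit IDs for your setup):

```sh
# Ask CAM for the negotiated tagged-queueing state and depth of each disk
for d in da0 da1 da2 da3; do
    camcontrol tags "$d" -v
done

# Show the controller's units and settings, then the queue (NCQ) policy
# and cache state for a given unit - unit IDs here are assumptions
tw_cli /c0 show
tw_cli /c0/u0 show qpolicy
tw_cli /c0/u0 show cache
```

The `camcontrol tags -v` output (dev_openings etc.) is what reports the 254 figure I mentioned.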
I'm now seeing some info suggesting that the controller should not be set up in JBOD mode, but should instead export each drive as its own single-disk RAID unit...
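If I understand that advice correctly, the change would look something like the following - a hedged sketch only, since I haven't tried it, and creating units will almost certainly destroy the data on those disks, so the pool would need to be recreated from backup afterwards:

```sh
# Export each physical disk as its own "single" unit instead of JBOD
# (port numbers 0-3 are assumptions for a 4-port card)
tw_cli /c0 add type=single disk=0
tw_cli /c0 add type=single disk=1
tw_cli /c0 add type=single disk=2
tw_cli /c0 add type=single disk=3
```

The supposed benefit is that single-disk units get the controller's normal unit-level handling (cache policy, NCQ policy) that JBOD pass-through may bypass - which is exactly the part I'd like someone to confirm.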
Can anyone familiar with these controllers and ZFS comment on what the best practice is for this type of setup?
Thanks.