Ok, so I'm using SATA drives in a SAS array with expanders. I know (well, I do now) that's not a good idea. See this for background on why: http://garrett.damore.org/2010/08/why-sas-sata-is-not-such-great-idea.html
After upgrading to FreeBSD 10.1, I started having major problems on my pools: SCSI/CAM errors and drives being REMOVED by themselves. After almost a week of debugging, I discovered that the oid (sysctl variable) vfs.zfs.vdev.max_pending is no longer used in FreeBSD 10.1-RELEASE. I believe it's due to a change in OpenZFS.
But I was running this drive configuration for two years without any problems at all; no drives went down either. I use SATA drives in 3-way mirror vdevs to balance price, performance, redundancy, and failure risk against SAS. If I were using SAS, I certainly wouldn't be using a 3-way mirror, due to the cost.
Anyway, it turns out that the reason I was getting away with it was the following settings in /boot/loader.conf:
Code:
# Change I/O queue settings to play nice with SATA NCQ and
# other storage controller features.
vfs.zfs.vdev.min_pending="1"
vfs.zfs.vdev.max_pending="1"
So, please, please, somebody tell me: what is the equivalent of vfs.zfs.vdev.min_pending in the latest ZFS version? I haven't slept properly in over a week, nor have I been able to do a backup, since high load is causing drives to temporarily die. Amazingly, it can survive a full working day in production... just.
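In case it helps: my best guess so far (unverified, so treat these oid names and values as assumptions on my part) is that the new OpenZFS I/O scheduler replaced min_pending/max_pending with per-vdev, per-I/O-class *_active tunables. If that's right, something like this in /boot/loader.conf might approximate the old queue-depth-of-1 behaviour:
Code:
# GUESS at the new OpenZFS I/O scheduler knobs -- unverified!
# Cap total and per-class queue depth per vdev at 1, roughly
# like the old min_pending/max_pending = 1. The matching
# *_min_active oids may need lowering too, since min <= max.
vfs.zfs.vdev.max_active="1"
vfs.zfs.vdev.sync_read_max_active="1"
vfs.zfs.vdev.sync_write_max_active="1"
vfs.zfs.vdev.async_read_max_active="1"
vfs.zfs.vdev.async_write_max_active="1"
vfs.zfs.vdev.scrub_max_active="1"
If anyone can confirm whether these oids exist in 10.1-RELEASE (sysctl -a | grep vdev should show them), I'd be very grateful.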
It's a bit too late to go back to FreeBSD 8.4 now. I could, but who would want to do that? And I would have to move 4 TB to new pools created with an older zpool version, which takes about two days to send/recv.