I'm at my wits' end here; any thoughts or advice would be greatly appreciated, no matter how far-fetched!
This is under 8.2-STABLE; I can provide whatever output is useful.
I'm running into some performance problems with two SATA drives connected via an LSI 1068 controller. There are a total of 7 drives on this HBA; 5 of them perform normally (100-120 MB/s sequential read/write), but two perform very poorly.
When running
[cmd=]dd bs=1m of=/dev/null if=/dev/da6[/cmd]
I see this in gstat:
Code:
dT: 5.505s w: 1.000s
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
1 122 122 15626 8.1 0 0 0.0 99.2| da6
High busy, high latency, low throughput. The other drives (da1 for example) behave normally:
Code:
dT: 5.505s w: 1.000s
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
1 742 742 94923 1.3 0 0 0.0 96.6| da1
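For what it's worth, I watch both drives at once by filtering gstat with a regex (the -f flag; quoting from memory, so treat the exact syntax as approximate):
Code:
gstat -f 'da1|da6'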
I pulled both drives, connected them to my Windows 7 laptop via eSATA, and benchmarked them; they performed normally, with around 100 MB/s sequential read and write. I have checked the SMART attributes and found nothing out of the ordinary. I have tried swapping them around in the hot-swap bay and swapping the physical connections to the HBA, to no avail. I have tried everything I can think of; I need new ideas, no matter how random.
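For the record, this is roughly what I ran for the SMART check, using smartctl from the sysutils/smartmontools port (da6 as the example device):
Code:
smartctl -a /dev/da6        # all attributes plus the error and self-test logs
smartctl -t short /dev/da6  # queue a short self-test; results show up in -a output later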
If the HBA were the problem, whether hardware or software, then the other 5 drives would exhibit the same problems, right?
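One thing I did look at, in case someone asks: the kernel messages from mpt(4), which I believe is the driver handling this LSI 1068, for any timeouts or resets. Nothing stood out:
Code:
dmesg | grep -i mpt
grep -E 'mpt|da6' /var/log/messages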
I have seen only two things that pique my curiosity, but I don't know if they matter.
The first is the GEOM output for those two drives. There is a "Mode" value: they have r0w0e0 where all the other drives have r1w1e1. However, this could simply be because they are not part of a zpool while all the other drives are?
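My understanding is that the Mode field is just GEOM's open counts (readers/writers/exclusive), so r0w0e0 would simply mean nothing had the provider open when I looked. A quick sketch of how I'd confirm that theory (assuming geom disk list shows the Mode line, as it does here):
Code:
# terminal 1: hold the device open with a read
dd bs=1m of=/dev/null if=/dev/da6 &
# terminal 2: Mode should read r1w0e0 while dd has da6 open
geom disk list da6 | grep Mode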
The second is the [cmd=]camcontrol devlist[/cmd] output:
Code:
<ATA SAMSUNG HD154UI 1118> at scbus0 target 0 lun 0 (da0,pass0)
<ATA SAMSUNG HD154UI 1118> at scbus0 target 1 lun 0 (da1,pass1)
<ATA SAMSUNG HD154UI 1118> at scbus0 target 3 lun 0 (pass6,da6)
<ATA SAMSUNG HD154UI 1118> at scbus0 target 4 lun 0 (pass2,da2)
<ATA ST31500541AS CC34> at scbus0 target 5 lun 0 (da3,pass3)
<ATA Hitachi HDS5C301 A580> at scbus0 target 6 lun 0 (da4,pass4)
<ATA Hitachi HDS5C301 A580> at scbus0 target 7 lun 0 (da5,pass5)
The drives that perform normally are listed as (da,pass), but the two behaving oddly are (pass,da). Does that mean anything?
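If it helps, I can also dump the per-device CAM settings to see whether the slow drives negotiated anything differently; something like this (negotiate may not be supported on every controller, so treat it as a sketch):
Code:
camcontrol tags da6 -v       # tagged-queueing openings for the device
camcontrol inquiry da6       # basic inquiry data
camcontrol negotiate da6 -v  # negotiated transfer settings, if the HBA supports it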