ZFS mirror: high %busy on one drive

I am testing FreeBSD 9.1 with ZFS as a replacement for a file server. My initial tests show around a 30-50% drop in performance compared to OmniOS. What is strange is that I'm seeing 100% utilization on one disk and only about 50% on the other. I have tried creating the zpool with both raw drives and 2048-sector-aligned partitions; the output below is from the 2048-aligned setup. I have sync disabled on the zpool for testing and no overrides in /boot/loader.conf.
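
For reference, this is roughly how the 2048-aligned setup was created. The exact command sequence is reconstructed from memory rather than pasted from my shell history; the ashift check at the end is just a sanity check, since these drives report 512-byte sectors.

Code:
gpart create -s gpt ada1
gpart add -b 2048 -t freebsd-zfs -l disk1 ada1
gpart create -s gpt ada2
gpart add -b 2048 -t freebsd-zfs -l disk2 ada2
zpool create tank mirror gpt/disk1 gpt/disk2
zfs set sync=disabled tank
# sanity check: what ashift did the pool end up with?
zdb | grep ashift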

Code:
dmesg
CPU: Intel(R) Core(TM)2 Quad CPU    Q6600  @ 2.40GHz (2394.05-MHz K8-class CPU)
  Origin = "GenuineIntel"  Id = 0x6fb  Family = 6  Model = f  Stepping = 11
ahci0: <Intel ICH9 AHCI SATA controller> port 0x2408-0x240f,0x2414-0x2417,0x2400-0x2407,0x2410-0x2413,0x2020-0x203f mem 0xe1a21000-0xe1a217ff irq 21 at device 31.2 on pci0
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <ST2000DL003-9VT166 CC45> ATA-8 SATA 3.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
ada2: <ST2000DL003-9VT166 CC45> ATA-8 SATA 3.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)

Code:
zpool iostat 5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         113G  1.70T      0    291      0  32.6M
tank         113G  1.70T      0    441      0  45.9M
tank         113G  1.70T      0    495      0  50.7M
tank         113G  1.70T      0    543      0  58.6M
tank         114G  1.70T      0    432      0  46.3M
tank         114G  1.70T      0    376      0  39.5M
tank         114G  1.70T      0    330      0  33.9M
tank         114G  1.70T      0    436      0  41.7M
tank         115G  1.70T      0    468      0  47.4M

Code:
gstat
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    1    487      0      0    0.0    485  56743   17.7   95.0| ada1p1
    0    487      0      0    0.0    484  56719   12.2   67.2| ada2p1

    1    392      0      0    0.0    391  44963   20.1   94.7| ada1p1
    0    294      0      0    0.0    292  32423   11.4   41.7| ada2p1

   10    399      0      0    0.0    399  50651   24.6  100.6| ada1p1
    0    467      0      0    0.0    467  59191   10.9   51.7| ada2p1

Code:
gpart show
=>        34  3907029101  ada1  GPT  (1.8T)
          34        2014        - free -  (1M)
        2048  3907027080     1  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)

=>        34  3907029101  ada2  GPT  (1.8T)
          34        2014        - free -  (1M)
        2048  3907027080     1  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)

Code:
zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0
            gpt/disk2  ONLINE       0     0     0

Code:
zfs get sync
NAME  PROPERTY  VALUE     SOURCE
tank  sync      disabled  local
 
Ignore the %busy; it's pretty much useless. However, if you look at ms/w, you can see that one disk is quite a bit slower than the other. I'd run benchmarks and smartctl(8) (from sysutils/smartmontools) against each disk individually to find out why.

Maybe the first disk is starting to die?
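
Something like this would be a starting point (device names taken from the dmesg output above):

Code:
# SMART health, error log and reallocated-sector counts for each disk
smartctl -a /dev/ada1
smartctl -a /dev/ada2
# raw sequential transfer test on each disk, outside of ZFS
diskinfo -t /dev/ada1
diskinfo -t /dev/ada2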
 
I should have tested the drives individually first. I'm seeing poor performance with dd and with a zpool created from a single drive. It looks like I need to swap the drive slots and see whether the issue follows the drive. It could just be a coincidence that the drive started failing after the earlier testing.

I know the Seagate Green ST2000DL003 2TB drives are not ideal, but they are what I had on hand for testing. For production I will be moving to Hitachi HDS72101 1TB drives and an LSI 1068E PCIe controller.

Code:
dd if=/dev/zero of=/dev/ada1p1 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 16.431876 secs (65345054 bytes/sec)

dd if=/dev/zero of=/dev/ada2p1 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 10.433210 secs (102915769 bytes/sec)
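
The single-drive numbers below came from recreating the pool on one partition at a time, roughly like this (the exact sequence is a reconstruction, not pasted from my shell history):

Code:
zpool destroy tank
zpool create -f tank ada1p1   # -f since the partition still carries the old pool label
zfs set sync=disabled tank
# run the same write workload, then repeat with ada2p1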

Code:
zpool iostat 5   (single-drive pool on ada1p1)
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         418M  1.81T      0    441      0  51.9M
tank         665M  1.81T      0    468      0  55.4M
tank         898M  1.81T      0    333      0  37.5M
tank        1.01G  1.81T      0    277      0  32.5M
tank        1.26G  1.81T      0    439      0  49.6M

zpool iostat 5   (single-drive pool on ada2p1)
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         824M  1.81T      0    713      0  83.8M
tank        1.26G  1.81T      0    726      0  85.2M
tank        1.66G  1.81T      0    703      0  83.5M
tank        2.06G  1.81T      0    835      0  99.8M
tank        2.49G  1.81T      0    712      0  83.6M
 