Poor RAID performance

We have an HP ProLiant 385 G5 with a built-in Smart Array 6i controller. It runs FreeBSD 7.2 amd64.
Code:
~# dmesg | grep ciss
ciss0: <HP Smart Array 6i> port 0x5000-0x50ff mem 0xf7ef0000-0xf7ef1fff,0xf7e80000-0xf7ebffff irq 24 at device 4.0 on pci2
ciss0: [ITHREAD]
da0 at ciss0 bus 0 target 0 lun 0
XXX ~> uname -a
FreeBSD XXX 7.2-RELEASE-p3 FreeBSD 7.2-RELEASE-p3 #1: Mon Sep  7 13:58:02 CEST 2009     XXX:/usr/obj/usr/src/sys/XXX  amd64

The problem is exceptionally poor disk performance:
Code:
~# diskinfo -c /dev/da0
/dev/da0
        512             # sectorsize
        1199980951552   # mediasize in bytes (1.1T)
        2343712796      # mediasize in sectors
        145889          # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.

I/O command overhead:
        time to read 10MB block      0.324969 sec       =    0.016 msec/sector
        time to read 20480 sectors  20.509008 sec       =    1.001 msec/sector
        calculated command overhead                     =    0.986 msec/sector

~# diskinfo -t /dev/da0
/dev/da0
        512             # sectorsize
        1199980951552   # mediasize in bytes (1.1T)
        2343712796      # mediasize in sectors
        145889          # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.

Seek times:
        Full stroke:      250 iter in   2.310384 sec =    9.242 msec
        Half stroke:      250 iter in   1.814658 sec =    7.259 msec
        Quarter stroke:   500 iter in   2.048750 sec =    4.098 msec
        Short forward:    400 iter in   1.807276 sec =    4.518 msec
        Short backward:   400 iter in   2.223202 sec =    5.558 msec
        Seq outer:       2048 iter in   2.106676 sec =    1.029 msec
        Seq inner:       2048 iter in   2.100308 sec =    1.026 msec
Transfer rates:
        outside:       102400 kbytes in   3.229837 sec =    31704 kbytes/sec
        middle:        102400 kbytes in   2.959831 sec =    34597 kbytes/sec
        inside:        102400 kbytes in   3.346279 sec =    30601 kbytes/sec

~# dd if=/dev/zero of=/usr/testfile bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 15.431867 secs (6794875 bytes/sec)

The drives seem to be in good health:
Code:
~# cciss_vol_status /dev/ciss0
/dev/ciss0: (Smart Array 6i) RAID 5 Volume 0 status: OK.   At least one spare drive designated.  At least one spare drive remains available.

I tried updating all the firmware on the server, but that made no difference to the disk performance.

Does anybody know what could be causing the problem?
 
What do you get from dd if you use /dev/random instead of /dev/zero? I've found writing zeroes into a file to be deceptive when benchmarking. And what do you get if you swap if= and of= and use /dev/null as the destination (i.e., read the written file back)?
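Something along these lines (the file name and sizes are only placeholders; on FreeBSD /dev/random doesn't block, but generating the random data still costs some CPU, so take the write number with a grain of salt):
Code:
# write non-compressible data instead of zeroes
~# dd if=/dev/random of=/usr/testfile bs=1m count=100

# read the file back; use a file larger than RAM (or remount /usr first)
# so you measure the array rather than the buffer cache
~# dd if=/usr/testfile of=/dev/null bs=1m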

What do you get if you use an actual disk benchmarking program like bonnie++ or iozone? After all, dd is not a benchmarking tool. :)
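For example, a rough sketch assuming bonnie++ is installed from ports (benchmarks/bonnie++); /usr/bench is just a placeholder directory, and the -s size should be at least twice the machine's RAM so the buffer cache doesn't skew the numbers:
Code:
# -d test directory, -s file size, -u user to run as
# (bonnie++ refuses to run as root unless -u is given)
~# bonnie++ -d /usr/bench -s 4g -u root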
 
ShruggingAtlas: You were right! It had a faulty BBWC (battery-backed write cache). I replaced it and I'm now getting 130 MB/s, talk about a difference. Thanks! Awesome! :)
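For anyone else who lands on this thread: a failed or missing battery normally makes a Smart Array controller disable its write cache, which matches the kind of write numbers seen above. If HP's hpacucli utility is available on the box (it's not part of the base system, so treat this as a pointer rather than a recipe), the controller, cache and battery state can be checked with something like:
Code:
# hypothetical check; availability and output format vary by hpacucli version
~# hpacucli ctrl all show status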
 