[Solved] Poor RAID performance Areca 1220

Hi,
I hope I'm posting this in the right subforum, as I'm not sure whether this is a hardware or a software problem.
I'm setting up a new (old) Samba/Plex server for my home network using FreeBSD 10.0, and I've run into a problem with drive performance.

Code:
CPU: AMD Opteron 185
RAM: 2 GB
Graphics: old Matrox
OS disk: OCZ SSD 32 GB
Areca 1220 RAID w/ 256 MB cache
4x1 TB HDDs in RAID 5
Gbit network

Code:
 # diskinfo -tv /dev/da0
/dev/da0
        512             # sectorsize
        2999999791104   # mediasize in bytes (2.7T)
        5859374592      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        364729          # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
        0000000813199109        # Disk ident.

Seek times:
        Full stroke:      250 iter in   5.572017 sec =   22.288 msec
        Half stroke:      250 iter in   4.433951 sec =   17.736 msec
        Quarter stroke:   500 iter in   1.950317 sec =    3.901 msec
        Short forward:    400 iter in   0.373038 sec =    0.933 msec
        Short backward:   400 iter in   3.355420 sec =    8.389 msec
        Seq outer:       2048 iter in  24.515767 sec =   11.971 msec
        Seq inner:       2048 iter in   0.120662 sec =    0.059 msec
Transfer rates:
        outside:       102400 kbytes in  18.074818 sec =     5665 kbytes/sec
        middle:        102400 kbytes in  17.395760 sec =     5886 kbytes/sec
        inside:        102400 kbytes in  17.638768 sec =     5805 kbytes/sec
Code:
# Device        Mountpoint      FStype  Options Dump    Pass#
/dev/ada0p2     /               ufs     rw      1       1
/dev/ada0p3     none            swap    sw      0       0
/dev/da0s1      /raid           ufs     rw      2       2
In comparison, the SSD yields 138550 kbytes/sec.

I used to run this machine with FreeBSD 8.0 as a pure Samba server, with the same disks in software RAID on the internal SATA controller, and performance was superb. Since I wanted to start using Plex I had to upgrade to 10.0-RELEASE, so I installed the Areca card, moved the disks to it, and added the SSD for the OS. Everything is pretty much a standard install.

What could be the problem here?
 
Re: Poor RAID performance Areca 1220

One reason things may not perform as they should is newer hard disks with 4K sectors: if the partitions aren't correctly aligned, performance drops significantly.
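As a quick way to see whether a given partition start is 4K-aligned, you can check the start sector (as printed by `gpart show`) with plain shell arithmetic. A minimal sketch; the sector numbers below are just examples, not read from this system:

```shell
# A partition start is 4K-aligned when its byte offset
# (start sector * 512) is a multiple of 4096.
check_align() {
    start_sector=$1
    offset_bytes=$((start_sector * 512))
    if [ $((offset_bytes % 4096)) -eq 0 ]; then
        echo "sector $start_sector: aligned"
    else
        echo "sector $start_sector: MISALIGNED"
    fi
}

check_align 63   # classic MBR default start -> misaligned on 4K disks
check_align 64   # 32 KiB boundary -> aligned
```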
 
Re: Poor RAID performance Areca 1220

The real question is: what performance do you need? What is your workload? Is this "good enough"? Most likely, with only 6 MB/s from the four disks together (they should be capable of several hundred), the answer is that it is not "good enough".

What IO size does diskinfo use? You'll have to look that up in the source code.
What is the stripe or track or block size of the RAID array (all these terms mean fundamentally the same thing)? That should be a configurable option in the RAID configuration interface.
Do you have a battery backup module on the RAID controller? Some controllers refuse to cache data without battery backup, while other controllers will do a read cache and writethrough write cache without battery, but writebehind only when a battery (or NVRAM/Flash/supercap etc.) is present.

My suspicion: The IOs that diskinfo uses are considered "small" by the RAID array, and cause it to do partial track reads (at least you are using diskinfo on the whole disk, not a partition, so sector size should not be an issue). For best performance, the IO size of the benchmark / filesystem / database that uses a RAID array needs to be tuned. On some RAID controllers, partial track reads (at least of certain sizes) can be promoted to full track reads and then cached, but support for that varies. Even if diskinfo uses large blocks, it's possible that the RAID controller has deliberately misaligned the blocks (to compensate for partitioning schemes), and the large-block IOs from diskinfo are being split across tracks because of that.

I would study the documentation for your RAID card, and then do hand-written low-level benchmarks (where you control the benchmark tool). Try this: unmount the file system and access the raw RAID device with dd, like this: dd if=/dev/daxx of=/dev/zero bs=16m count=1000, and see what it reports for IO times (I'm assuming that the block size of the RAID controller is 16 MiB or less). If the result matches diskinfo, then you'll have to be tricky to find out what IO size and alignment give better results.
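One way to "be tricky" about IO sizes is to sweep dd's block size and compare the reported throughput. A sketch of that sweep, run against a scratch file here so it is safe to copy and paste; point SRC at the raw device (e.g. /dev/da0, with the file system unmounted) for the real measurement:

```shell
# Sketch: sweep dd block sizes to see which IO size the RAID
# controller handles best. SRC is a scratch file here for safety;
# substitute the raw device for a real test.
SRC=/tmp/ddtest.bin
dd if=/dev/zero of=$SRC bs=1048576 count=64 2>/dev/null   # 64 MiB scratch file

# 4 KiB, 32 KiB, 128 KiB, 1 MiB, 16 MiB
for bs in 4096 32768 131072 1048576 16777216; do
    printf 'bs=%s: ' "$bs"
    dd if=$SRC of=/dev/null bs=$bs 2>&1 | tail -1   # dd's summary line
done

rm -f $SRC
```

On a real array the small block sizes should show the partial-track penalty described above, while the larger ones should approach the streaming rate of the disks.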
 
Re: Poor RAID performance Areca 1220

Not good enough, yeah, that's a fair assumption. :)
I think this has been a combination of partition misalignment and poor RAID card settings. Just about everything that would help the HDDs was turned off in the RAID card, duh!

So I've done a full wipe of the system and aligned the partition to the block size of the RAID set:
Code:
gpart create -s gpt da0
gpart add -t freebsd-ufs -a 32k da0
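
To double-check that the new partition start actually lands on a stripe boundary, you can compare the start sector reported by `gpart show` against the array's stripe size. A minimal sketch with example values (not read from the card):

```shell
# Check that a partition start (in 512-byte sectors, as printed by
# "gpart show") falls on a stripe boundary of the RAID set.
check_stripe() {
    start_sector=$1
    stripe_bytes=$2
    if [ $((start_sector * 512 % stripe_bytes)) -eq 0 ]; then
        echo "start $start_sector OK for ${stripe_bytes}-byte stripe"
    else
        echo "start $start_sector NOT aligned to ${stripe_bytes}-byte stripe"
    fi
}

check_stripe 64 32768   # 64 * 512 = 32768 bytes -> on a 32 KiB stripe boundary
check_stripe 63 32768   # classic sector-63 start -> misaligned
```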

Tested the entire disk:
Code:
diskinfo -tv /dev/da0
[snip]
Seek times:
        Full stroke:      250 iter in   6.663280 sec =   26.653 msec
        Half stroke:      250 iter in   3.953738 sec =   15.815 msec
        Quarter stroke:   500 iter in   2.010512 sec =    4.021 msec
        Short forward:    400 iter in   0.368244 sec =    0.921 msec
        Short backward:   400 iter in   3.520995 sec =    8.802 msec
        Seq outer:       2048 iter in   0.659357 sec =    0.322 msec
        Seq inner:       2048 iter in   0.236420 sec =    0.115 msec
Transfer rates:
        outside:       102400 kbytes in   0.389416 sec =   262958 kbytes/sec
        middle:        102400 kbytes in   0.596504 sec =   171667 kbytes/sec
        inside:        102400 kbytes in   0.869253 sec =   117802 kbytes/sec

Tested the aligned partition:
Code:
diskinfo -t /dev/da0p1
[snip]
Seek times:
        Full stroke:      250 iter in   1.894673 sec =    7.579 msec
        Half stroke:      250 iter in   1.652895 sec =    6.612 msec
        Quarter stroke:   500 iter in   1.948587 sec =    3.897 msec
        Short forward:    400 iter in   0.369785 sec =    0.924 msec
        Short backward:   400 iter in   3.525371 sec =    8.813 msec
        Seq outer:       2048 iter in   0.695330 sec =    0.340 msec
        Seq inner:       2048 iter in   0.138957 sec =    0.068 msec
Transfer rates:
        outside:       102400 kbytes in   0.380201 sec =   269331 kbytes/sec
        middle:        102400 kbytes in   0.595459 sec =   171968 kbytes/sec
        inside:        102400 kbytes in   0.875437 sec =   116970 kbytes/sec

Code:
# dd if=/dev/da0p1 of=/dev/zero bs=16m count=1000
1000+0 records in
1000+0 records out
16777216000 bytes transferred in 64.406877 secs (260487960 bytes/sec)
Code:
dd if=/dev/da0p1 of=/dev/zero bs=32k count=1000000
1000000+0 records in
1000000+0 records out
32768000000 bytes transferred in 127.989783 secs (256020436 bytes/sec)


dd is consistent with diskinfo. I'm happy with these numbers; it's a huge improvement, from ~5 MB/s to 270 MB/s peak.

What is really interesting is how drastically the seek times dropped going from the whole disk to the aligned partition. An almost 20 ms improvement is going to be noticed. :)

Thank you both for the pointers that helped solve my problem :)
 