RAID performance

urosgruber

Member


Messages: 30

Hi,

I was testing two servers yesterday and, unless I misread something, the more powerful one has very bad performance.

The first one is an i3 3GHz with 4GB RAM and 4 SATA disks in RAIDZ1, running FreeBSD 8.1.

I ran bonnie++ with

[CMD=""]bonnie++ -u root -d . -s 7876M -n 10:102400:1024:1024[/CMD]

and results are

Code:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
backup-zfs.co 7876M   181  99 157889  23 85432  15   411  98 223227  18 105.0   1
Latency             46275us    1830ms    2083ms   66716us     356ms    1034ms
Version  1.96       ------Sequential Create------ --------Random Create--------
backup-zfs.computer -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
10:102400:1024/1024  1162  25  1707  12  9153  43  1172  24   357   3 11428  54
Latency              3003ms   95637us      91us    3158ms     178ms     127us
The second one is a dual quad-core Xeon 2.5GHz with 16GB RAM and 8 SATA disks in RAID10 connected to a 3ware 9650SE, running FreeBSD 7.2.

bonnie++ was run with

[CMD=""]bonnie++ -u root -d . -s 36372M -n 10:102400:1024:1024[/CMD]

and results are

Code:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
host1        36372M   547  99 226442  44 57387  14   932  96 202035  29 419.0  14
Latency             27493us     571ms     488ms     112ms     244ms     249ms
Version  1.96       ------Sequential Create------ --------Random Create--------
host1               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
10:102400:1024/1024  1441  20  6322  44 +++++ +++  1843  23  8104  58 +++++ +++
Latency               237ms     212us   15279us     698ms   59818us      66us

Write cache is enabled on the 3ware:

Code:
//host1> info c0

Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-10   OK             -       -       256K    931.281   RiW    OFF
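The same tw_cli shell can also show and change the other unit policies; if I remember the syntax right (take this as a rough sketch, not gospel): show all lists the current cache/storsave/NCQ settings, storsave=perform trades some safety for write speed, and qpolicy=on enables NCQ.

Code:
//host1> /c0/u0 show all
//host1> /c0/u0 set storsave=perform
//host1> /c0/u0 set qpolicy=on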
I upgraded the firmware to the latest version:

Code:
twa0: INFO: (0x15: 0x1300): Controller details:: Model 9650SE-8LPML, 8 ports, Firmware FE9X 4.10.00.007, BIOS BE9X 4.08.00.002
The FreeBSD driver is:

Code:
3ware device driver for 9000 series storage controllers, version: 3.70.05.001
I would think that HW RAID10 with double the number of disks should give better results than SW RAIDZ. I doubt that 8.1 is so much faster than 7.2.

Is there anything I can test or optimize to make that 3ware faster?
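One thing I plan to try myself is reading straight from the raw device with dd(1) to take the filesystem out of the picture; the device name below is just a guess for this box:

Code:
# read 8 GB sequentially from the raw array device, bypassing the filesystem
dd if=/dev/da0 of=/dev/null bs=1m count=8192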

regards

Uros
 

danbi

Active Member

Reaction score: 30
Messages: 227

Welcome to the wonderful world of system design :)

Back in the days of their glory, "hardware" RAID controllers shone, because CPUs were generally slower, I/O channels were restricted in availability, and file systems were not that advanced. Also, it was very 'modern' to encapsulate storage management away from the rest of the system.

If you connect your drives to a fast enough bus (or buses), then 'software' RAID will always be much, much faster than 'hardware' RAID. Truth is, there is no such thing as hardware RAID! It is just RAID software running embedded in the controller's micro-computer. Now, you can easily imagine the difference in power between any current CPU and whatever might be embedded in the controller, unless you spend way more money on the controller than on the host. Also, miniaturization costs more...

Things are worse for 'hardware' controllers now as disks become faster and faster. It is just plain stupid to connect SSDs via 'hardware' RAID controllers.
 

danstoner

New Member


Messages: 4

Maybe *I* am misreading something, but my reading of your bonnie++ output is that the second system (Xeon with 8 drives) is faster in almost every respect (latency is lower).

us = microsecond
ms = millisecond


If you look at the Random Read column:

59818us < 178ms
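(Converting to the same unit: 178 ms = 178,000 µs, so 59,818 µs ≈ 60 ms, roughly three times lower.)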

Which metrics are you concerned about?
 
urosgruber

Member


Messages: 30

It looks like I misread the Random Read column; I didn't notice the microseconds. But there is another metric I'm concerned about: sequential read, which is kind of slow. With RAID10 there are 4 RAID1 devices striped together into RAID0, so on paper, if one drive can output about 80-100 MB/s, the sum of all four should be a little above 320 MB/s. I managed to get around 240 MB/s by changing the read_max sysctl to

Code:
vfs.read_max=128
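For the record, setting it at runtime and keeping it across reboots looks like this:

Code:
# apply immediately
sysctl vfs.read_max=128
# persist across reboots
echo 'vfs.read_max=128' >> /etc/sysctl.conf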
In about two weeks I'll be testing similar HW and will see whether it's better to switch the controller to plain SATA and run ZFS instead, with a 30G SSD for caching.
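If I go that route, attaching the SSD as an L2ARC cache device should be just one command; the pool and device names below are only placeholders:

Code:
# add an SSD as a read cache (L2ARC) to an existing pool
zpool add tank cache ada2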
 

danbi

Active Member

Reaction score: 30
Messages: 227

Is your 3ware volume UFS? If so, you are also comparing two very different filesystems.
There are noticeable performance improvements in 8-stable (post 8.1) with ZFS v15.
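You can see which pool version a system is actually running with zpool upgrade (without arguments it only reports, it does not change anything):

Code:
# report the pool version in use and any pools that could be upgraded
zpool upgrade
# list all pool versions supported by this kernel
zpool upgrade -v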
 
urosgruber

Member


Messages: 30

Yes, it's true that I'm testing two different filesystems, but still ;) One thing I'm curious about is sector size. I was reading

http://forums.freebsd.org/showpost.php?p=76148&postcount=38

and started asking myself whether it matters for a HW RAID device. What is the sector size anyway? diskinfo reports 512B. Does stripe size make any difference? Right now the RAID is configured with a 256kB stripe, which is relatively high.
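For reference, this is what I ran to get that number (assuming the array shows up as da0 on this box):

Code:
# print sector size, media size and other geometry details
diskinfo -v /dev/da0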

regards

Uros
 