RAID5 is far too slow

Hi!
I decided to upgrade my file storage to RAID5 with 3x 2TB WDC WD20EARS drives. This is actually my first experience with RAID. I did everything according to the manual and the RAID is working, but it is very slow: copying a file to the RAID runs at about 100 KB/s.
Code:
Source
video/srt.avi
Target
/raid/video/srt.avi
   [                                        ]  36%
ETA 0:03.24 (94.85 KB/s)
─────────────── Total: 66M of 253M   ─────────────
   [                                        ]  26%
Files processed: 1 of 2
Time: 0:10.21  ETA 0:29.37 (108.17 KB/s)
Read speed from the RAID looks good (I can't say for sure, but copying the same file back finishes in about a second).
I know RAID5 is not as fast as RAID0 on write operations, but it should not be this slow.
I am running FreeBSD 8.1-RELEASE with a custom kernel.
Platform: Intel E3400, Gigabyte GA-G31M-ES2L, 2 GB RAM, WDC WD5000AAKX as the system drive.
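For what it's worth, part of RAID5's write cost is structural: a small write has to read the old data block and the old parity, recompute parity by XOR, and write both back (four I/Os per logical write), which RAID0 never does. A toy sketch of the parity update with made-up byte values:

```shell
# RAID5 parity is plain XOR across the data disks, so a small write
# updates parity as: new_parity = old_parity XOR old_data XOR new_data.
# The values below are arbitrary single-byte examples.
old_data=170      # 0xAA, block contents before the write
new_data=85       # 0x55, block contents after the write
old_parity=240    # 0xF0, parity before the write
new_parity=$(( old_parity ^ old_data ^ new_data ))
echo "${new_parity}"   # prints 15 (0x0F)
```

That penalty alone should still leave write speeds in the tens of MB/s on these disks, though, so 100 KB/s points at something else.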
Gvinum output is:
Code:
gvinum -> l                                                                 
3 drives:                                                                   
D hdd1                  State: up       /dev/ad1        A: 1/1907728 MB (0%)
D hdd2                  State: up       /dev/ad2        A: 0/1907727 MB (0%)
D hdd3                  State: up       /dev/ad3        A: 1/1907728 MB (0%)

1 volume:                                                                   
V raid                  State: up       Plexes:       1 Size:       3726 GB 

1 plex:                                                                     
P raid.p0            R5 State: up       Subdisks:     3 Size:       3726 GB 

3 subdisks:                                                                 
S raid.p0.s0            State: up       D: hdd1         Size:       1863 GB 
S raid.p0.s1            State: up       D: hdd2         Size:       1863 GB 
S raid.p0.s2            State: up       D: hdd3         Size:       1863 GB
df and mount output:
Code:
[root@dev]# df -h                                 
Filesystem          Size    Used   Avail Capacity  Mounted on
/dev/ad0s1a         496M    355M    101M    78%    /
devfs               1.0K    1.0K      0B   100%    /dev
/dev/ad0s1e         496M     16K    456M     0%    /tmp
/dev/ad0s1f         433G    2.5G    396G     1%    /usr
/dev/ad0s1d         2.9G    2.6M    2.6G     0%    /var
/dev/ad0s1g          10G    926M    8.8G     9%    /www
/dev/gvinum/raid    3.6T    8.0K    3.3T     0%    /raid

[root@dev]# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local, multilabel)
/dev/ad0s1e on /tmp (ufs, local, soft-updates)
/dev/ad0s1f on /usr (ufs, local, soft-updates)
/dev/ad0s1d on /var (ufs, local, soft-updates)
/dev/ad0s1g on /www (ufs, local, soft-updates)
/dev/gvinum/raid on /raid (ufs, local)
My kernel:
Code:
cpu             I686_CPU     
ident           DEV          
                             
options         SCHED_ULE    
options         PREEMPTION   
options         INET         
options         FFS          
options         SOFTUPDATES  
options         UFS_ACL      
options         UFS_DIRHASH  
options         UFS_GJOURNAL 
options         PROCFS       
options         PSEUDOFS     
options         GEOM_PART_GPT
options         GEOM_LABEL   
options         COMPAT_43TTY 
options         SYSVSHM
options         SYSVMSG
options         SYSVSEM
options         P1003_1B_SEMAPHORES
options         _KPOSIX_PRIORITY_SCHEDULING
options         PRINTF_BUFR_SIZE=128
options         HWPMC_HOOKS        
options         AUDIT              
options         MAC                
options         FLOWTABLE          
options         SMP 

device          apic
device          cpufreq
device          acpi
device          pci 
device          ata    
device          atadisk
device          ataraid
options         ATA_STATIC_ID
device          atkbdc
device          atkbd 
device          vga
device          sc
device          agp
device          pmtimer
device          uart
device          miibus
device          alc   
device          loop  
device          random
device          ether 
device          pty   
device          bpf
Could someone help with this?
 
Yes, they are 4K drives. But should I preformat all the disks before adding them to the RAID? I was pretty sure gvinum handled all of that when creating the RAID.
Here is what I did:
Create gvinum.conf:
Code:
drive hdd1 device /dev/ad1
drive hdd2 device /dev/ad2
drive hdd3 device /dev/ad3
volume raid               
plex org raid5 256k       
sd len 1907727m drive hdd1
sd len 1907727m drive hdd2
sd len 1907727m drive hdd3
and then:
Code:
[root@dev]# gvinum create /usr/local/etc/gvinum.conf
[root@dev]# newfs /dev/gvinum/raid
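One side note on the steps above (not the main fix): newfs was run without -U, and the mount output indeed shows /raid as the only UFS filesystem here without soft-updates. If the volume gets rebuilt anyway, enabling them is cheap; a sketch of both options:

```shell
# Sketch only: re-creating the filesystem destroys the data on it.
# -U enables soft updates, which the other UFS mounts here already use.
newfs -U /dev/gvinum/raid

# ...or toggle them on an existing, unmounted filesystem instead:
tunefs -n enable /dev/gvinum/raid
```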
 
I haven't come across them myself, but I've seen several posts about slow performance that seem to be related to those 4K sectors. As far as I understand, it's mainly due to misalignment.
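The arithmetic behind the misalignment theory, as I understand it: these drives emulate 512-byte sectors on top of physical 4096-byte ones, so any write whose starting byte offset is not a multiple of 4096 straddles physical sectors and forces a read-modify-write inside the drive. The numbers are easy to check:

```shell
# A 512e drive hides 4096-byte physical sectors behind 512-byte logical
# ones; a write is aligned only if its byte offset is a multiple of 4096.
echo $(( 63   * 512 % 4096 ))   # classic partition start at LBA 63: 3584 -> misaligned
echo $(( 2048 * 512 % 4096 ))   # start at LBA 2048 (1 MiB):        0    -> aligned
```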

Try adding a partition to the RAID set as indicated by the thread I mentioned before, and see if that improves things.
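I don't have these drives to test against, but an aligned setup might look roughly like this on 8.x: partition each disk with GPT, start the partition on a 4 KiB boundary, and point gvinum at the partitions instead of the raw disks. Device names are taken from this thread; treat it as a sketch, not a recipe:

```shell
# Sketch only -- this destroys any existing data on ad1/ad2/ad3.
for d in ad1 ad2 ad3; do
    gpart create -s gpt ${d}
    # Start at 512-byte LBA 2048 (byte offset 1 MiB, a multiple of 4096).
    gpart add -b 2048 -t freebsd-vinum ${d}
done
# gvinum.conf would then reference /dev/ad1p1, /dev/ad2p1, /dev/ad3p1
# in its "drive ... device ..." lines instead of the raw disks.
```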
 