ZFS write performance

Hello all. I have done a bit of googling, but have come up empty so far.
I recently installed FreeBSD 8.2-STABLE so I could try out the ZFS v28 pools.
I have a 7-disk raidz configured and am getting terrible write performance.
A test write using rsync between my boot drive and the array came back with:
Code:
 /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=30000
30000+0 records in
30000+0 records out
30720000 bytes transferred in 7.447608 secs (4124814 bytes/sec)
	9.20s real		0.00s user		0.38s sys

My drives are WD Green drives (boot and array).

Code:
zpool status array
  pool: array
 state: ONLINE
 scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	array       ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    ad4     ONLINE       0     0     0
	    ad6     ONLINE       0     0     0
	    ad10    ONLINE       0     0     0
	    ad12    ONLINE       0     0     0
	    ad14    ONLINE       0     0     0
	    ad16    ONLINE       0     0     0
	    ad20    ONLINE       0     0     0

errors: No known data errors
Any suggestions for tuning, or is there a common newbie mistake I might have made when I configured it?
Any pointers would be welcome.
 
With 7 disks, these layouts are possible to increase performance:
RAIDZ(3) + RAIDZ(3) + HOTSPARE(1)
MIRROR(2) + MIRROR(2) + MIRROR(2) + HOTSPARE(1)
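For illustration, assuming the same device names as in the zpool status above, the first layout would be created along these lines (a sketch, not a tested command for your exact hardware):

Code:
# zpool create array raidz ad4 ad6 ad10 raidz ad12 ad14 ad16 spare ad20

and the mirrored layout like this:

Code:
# zpool create array mirror ad4 ad6 mirror ad10 ad12 mirror ad14 ad16 spare ad20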
 
This kind of abysmal performance is most likely the result of misaligned access to 4 kB-sector disks. Are the Windows XP compatibility jumpers set? What is the value of ashift, 9 or 12? (It should never be less than 12 on 4 kB-sector disks.)
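One quick way to check both (a sketch; ad4 stands in for any pool member, and the exact zdb output wording varies between versions):

Code:
# diskinfo -v ad4 | grep sectorsize
# zdb | grep ashift

The first line shows the sector size the drive reports (many 4 kB drives lie and report 512), and the second greps the cached pool configuration for the pool's ashift.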
 
ravage382 said:
Hello all. I have done a bit of googling, but have come up empty so far.
I recently installed FreeBSD 8.2-STABLE so I could try out the ZFS v28 pools.
I have a 7-disk raidz configured and am getting terrible write performance.
A test write using rsync between my boot drive and the array came back with:
Code:
 /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=30000
30000+0 records in
30000+0 records out
30720000 bytes transferred in 7.447608 secs (4124814 bytes/sec)
	9.20s real		0.00s user		0.38s sys

The bs you give dd is very small. Give it a larger block size and write a few hundred MB in total, and things will fly.
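For example, something like this writes ~500 MB of zeroes in 1 MB blocks (the same kind of test, just sized sensibly):

Code:
# /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1m count=500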
 
I have checked, and none of the drives have jumpers set. Also, during real-world usage the write speeds are nearly identical to the sample dd I posted.
I really need to keep the setup with the 6 data drives and 1 hot spare. This array is going to be used for multi-machine backups. I was hoping to be able to use ZFS for its deduplication ability.
Is there anything else I should check tuning-wise?
When I created the array, I had fresh-out-of-the-box drives and created it with:
Code:
# zpool create array raidz drive1 drive2 drive3 drive4 drive5 drive6 drive7
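(For what it's worth, dedup on v28 pools is a per-dataset property; assuming the pool name above, I understand it would be enabled with something like the line below, though it reportedly needs a lot of RAM for the dedup table:)

Code:
# zfs set dedup=on array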
 
Today I booted off a Linux CD and formatted the individual drives in the array. I did rsync tests on each of the drives, and they ranged from 73 MB/s to 85 MB/s. I performed the tests with a 1 GB file each time. I had thought one of the drives was spinning slowly (possibly defective), but I no longer think this is the case.
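(For anyone wanting to repeat this from FreeBSD instead of a Linux CD, a rough raw sequential read test per drive could look like this; ad4 is just one example member, and reading from the raw device, unlike writing, won't touch the data:)

Code:
# dd if=/dev/ad4 of=/dev/null bs=1m count=1000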
 
Why do you say rsync, when you used dd?

You apparently have 4k sector drives. You need to create your zpool like this:

# gnop create -S 4096 drive1
# zpool create array raidz drive1.nop drive2 drive3 drive4 drive5 drive6 drive7
# zpool export array
# gnop destroy drive1.nop
# zpool import array

This will make sure your zpool is created with ashift=12, which is necessary if you use 4k sector drives that lie about their real geometry (in order to support legacy Windows).
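After the re-import you can double-check it (the exact output wording varies by version, but the pool should now report ashift: 12):

Code:
# zdb | grep ashift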

Then test write speed with

# /usr/bin/time -h dd if=/dev/zero of=/array/sometestfile bs=128k count=30000
 