I've seen a couple of other posts about using ZFS on WD EARS drives, which have the fun
feature of using 4096-byte sectors while reporting 512-byte sectors to the OS.
I tried the trick of using gnop to create a new device that reports itself as having 4096-byte sectors, but the improvement seems marginal.
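For reference, this is roughly what I did; the device name is just an example, and I repeated it for all of the disks:
Code:
# create a .nop device on top of the disk that advertises 4096-byte sectors
gnop create -S 4096 /dev/da0
# confirm the reported sector size on the new device
diskinfo -v /dev/da0.nop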
System specs:
22x 2TB WD20EARS attached to a single 3ware 9690SA-4I with a SAS expander in the backplane.
They are configured as 22 separate 'single' units on the controller, and the cache is on.
[CMD=]tw_cli /c0 show[/cmd]
Code:
Unit  UnitType  Status  %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    SINGLE    OK      -       -       -       1862.63   RiW    OFF
CPU is a Xeon 3450 @ 2.67 GHz.
8 GB ECC DDR3
FreeBSD 8.1-BETA1 amd64
The drives are in a 3x (7-disk raidz) configuration. I've tried using the last drive as a
hot spare, as a log device, and as a cache device, all of which show the same performance issues.
ZFS-related tunables I have set:
Code:
vm.kmem_size="4G"
vm.kmem_size_max="4G"
vfs.zfs.arc_max="2G"
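For completeness, the pool was created along these lines (device names are placeholders; I used the gnop .nop devices from above so ZFS would pick ashift=12, and da21 is the disk I've swapped between spare, log and cache duty):
Code:
zpool create store \
    raidz da0.nop da1.nop da2.nop da3.nop da4.nop da5.nop da6.nop \
    raidz da7.nop da8.nop da9.nop da10.nop da11.nop da12.nop da13.nop \
    raidz da14.nop da15.nop da16.nop da17.nop da18.nop da19.nop da20.nop \
    spare da21.nop
# check that ZFS picked up the 4k sectors (ashift: 12)
zdb store | grep ashift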
Before gnop, I was getting about 1.5 MB/s write speeds.
Now it looks like I'm getting about 3 MB/s.
I've tested the bandwidth to the drives and found that, using 20 simultaneous dd processes, I can write 15-30 MB/s to each drive.
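That test was essentially the sketch below (a rough reconstruction, not the exact commands; device names are examples, and writing to the raw devices this way destroys any data on them):
Code:
# write 1 GB of zeroes to 20 disks in parallel and watch the per-disk throughput
for i in $(jot 20 0); do
    dd if=/dev/zero of=/dev/da$i bs=1m count=1024 &
done
wait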
I've done tests dd'ing into a file on the ZFS filesystem, copying a file over rsync and NFS,
and just copying a file from a local UFS2 partition.
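The ZFS-side dd test was roughly this (the file name and mountpoint are just examples):
Code:
dd if=/dev/zero of=/store/testfile bs=1m count=4096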
Looking at the pool with zpool iostat, it looks like writes only occur sporadically (this is with gnop):
[CMD=]zpool iostat 1[/cmd]
Code:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
store        114G  37.8T      1     19   127K  1002K
store        114G  37.8T      0      0      0      0
store        114G  37.8T      0      0      0      0
store        114G  37.8T      2    252  12.0K  12.2M
store        114G  37.8T      0      0      0      0
store        114G  37.8T      1    118  7.97K  10.9M
I've also seen posts about ZFS writes starving everything else, causing stalls and only intermittent disk activity,
so I attempted to follow the advice for that by setting vfs.zfs.txg.write_limit_override.
I am unsure of the units for that tunable, so I tried a range of values: 256, 262144, 268435456, and some others.
Mostly I saw either no change in speed, or it dropping to under 100 KB/s (262144 did that).
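In case it matters, this is how I was setting it; the 268435456 run assumed the value is in bytes, i.e. a 256 MB limit:
Code:
# assuming the value is in bytes, this would cap each txg at 256 MB; 0 removes the override
sysctl vfs.zfs.txg.write_limit_override=268435456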
I've also tried a pool with only 4 disks, and it looked like it was working better, but I did not do thorough testing.
I'm sure there is more useful information that I haven't provided, and I'm more than willing to post anything I've left out.