ZFS slow write

I just can't understand why, on 3 different disks, ZFS always has a lower write speed than UFS. In some cases the difference is dramatic.
Also, I'm seeing interesting things in gstat. While UFS is in use, the busy column never goes red and the busy percentage stays steady. When I'm copying files to ZFS, at some moments the busy column shows 0%, but in the next few seconds it goes red at some 87-110%.
In the moments when the ZFS busy column is red, rsync has some kind of freeze - the current speed report stops updating.
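
For reference, I'm watching the disks with something like the following (the one-second interval is just my choice):

Code:
# watch per-device load while the copy runs; refresh every second
gstat -I 1s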

OS: FreeBSD 8.1 RC2
Controller: 9690SA-4I
RAM: 8 GB
Controller write and read cache is on for all drives.
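
(For others with the same 3ware/AMCC controller family: the per-unit cache status can be checked from the CLI, roughly like this - controller and unit numbers are examples.)

Code:
# show unit 0 on controller 0; the output includes the cache setting
tw_cli /c0/u0 show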

When the disks are formatted with ZFS, each is a simple single-disk pool (no raidz, etc.).
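
Each pool was created along these lines (the device names here are hypothetical; the real ones depend on the controller):

Code:
# one plain single-disk pool per drive, no redundancy
zpool create zfs-ssd da0
zpool create zfs-1tb da1
zpool create zfs-150gb da2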

Intel SSD X25-M G2 80GB
with UFS
Code:
[root@host /]# rsync --progress /home/user/test.img  /ufs-ssd/test.img
test.img
  3222128640 100%   78.84MB/s    0:00:38 (xfer#1, to-check=0/1)
sent 3222522038 bytes  received 31 bytes  81582837.19 bytes/sec

with ZFS
Code:
[root@host /]# rsync -av --progress /home/user/test.img /zfs-ssd/test.img
sending incremental file list
test.img
  3222128640 100%   46.29MB/s    0:01:06 (xfer#1, to-check=0/1)
sent 3222522042 bytes  received 31 bytes  48458978.54 bytes/sec

Seagate DiamondMax 1TB 7.2k
with UFS
Code:
[root@host /]# rsync -av --progress /home/user/test.img /ufs-1b/test.img 
test.img
  3222128640 100%   74.97MB/s    0:00:40 (xfer#1, to-check=0/1)
sent 3222522042 bytes  received 31 bytes  77651134.29 bytes/sec

with ZFS
Code:
[root@host /]# rsync -av --progress /home/user/test.img /zfs-1tb/test.img
test.img
  3222128640 100%   61.32MB/s    0:00:50 (xfer#1, to-check=0/1)
sent 3222522042 bytes  received 31 bytes  63812318.28 bytes/sec

Western Digital Raptor 150GB 10k
with UFS
Code:
[root@host /]# rsync -av --progress /home/user/test.img /ufs-150gb/test.img
test.img
  3222128640 100%   73.47MB/s    0:00:41 (xfer#1, to-check=0/1)
sent 3222522042 bytes  received 31 bytes  75824048.78 bytes/sec

with ZFS
Code:
[root@host /]# rsync -av --progress /home/user/test.img /zfs-150gb/test.img
test.img
  3222128640 100%   44.93MB/s    0:01:08 (xfer#1, to-check=0/1)
sent 3222522042 bytes  received 31 bytes  46367224.07 bytes/sec

Any suggestions to try out?
 
There is no RAID in this case; I just copied a file from a single disk to a single disk.
 
When you say "to disk", that means copying to /dev/adX. In your case you are copying to a ZFS pool with 1 disk ;)

Btw, what do you expect to see?

So why is it slower in this case?
1. It's software.
2. It's a stripe with 1 disk. It's still RAID.
3. (Too much) caching.
4. ZFS reserving (writing to unused) blocks to do snapshots etc. (and God knows what other reasons).
5. You get scalability, but you must lose something. For example, if you add a mirror vdev you will get greater read speed (sketch below).. "Law of nature", as you wish =)
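
(Sketch for point 5, with hypothetical pool and device names: attaching a second disk to an existing single-disk vdev turns it into a mirror.)

Code:
# make da1 a mirror of the existing da0 vdev in pool "tank"
zpool attach tank da0 da1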

About the write blackouts - I think you need to reduce the write cache timeouts/buffers.
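
The tunable meant here is the ZFS transaction group timeout; on FreeBSD 8.x it would normally go in loader.conf (the value 5 is just one of those tried below):

Code:
# /boot/loader.conf
vfs.zfs.txg.timeout="5"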
 
Tried it with 5 and 1, no big difference.
Also, I don't want to believe that ZFS gets close to 2x slower writes as the price of the benefits it can give.
 
I have two quad-core Xeon CPUs, so the problem can't be a slow CPU.
Why would you not recommend using rsync? It's not optimized for ZFS :)?
I believe it's quite realistic daily usage, not some synthetic benchmarking tool.
I also got very similar results with a simple cp command. The SSD and the 10K WD seem to be close to 2x faster at writing with UFS than with ZFS.
But for me the most annoying thing with ZFS is the periodic freeze while it's writing to the disks.
 
miks said:
I have two quad-core Xeon CPUs, so the problem can't be a slow CPU.
Why would you not recommend using rsync? It's not optimized for ZFS :)?

Because what you are comparing is copying files UFS->UFS vs. UFS->ZFS.

miks said:
I believe it's quite realistic daily usage, not some synthetic benchmarking tool.

So your daily usage is copying from UFS->ZFS? On single drives??

miks said:
I also got very similar results with a simple cp command. The SSD and the 10K WD seem to be close to 2x faster at writing with UFS than with ZFS.

Yes, because rsync is just copying your files too.

miks said:
But for me the most annoying thing with ZFS is the periodic freeze while it's writing to the disks.

I don't think you will be happy with ZFS ... not in that way ^^
 
Because what you are comparing is copying files UFS->UFS vs. UFS->ZFS.
No, I'm copying from a ZFS mirror.

So your daily usage is copying from UFS->ZFS? On single drives??
The results are the same if I copy data from the single-disk ZFS pool back to the mirrored one.
Will raidz with 4 x 1TB drives give a much more stable write speed than a 2-drive mirror?

I don't think you will be happy with ZFS ... not in that way
Is write performance a known problem with ZFS?
 
I use the command

# sysctl vfs.zfs.txg.write_limit_override=1048576000

to limit the write speed to around 100 MB/s.

Tweaking the above value may alleviate the periodic freezes.
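
To keep the setting across reboots it can also go into /etc/sysctl.conf (same value as above; it should be tuned to your own disks):

Code:
# /etc/sysctl.conf
# cap on dirty data per ZFS transaction group, in bytes (~1000 MB here)
vfs.zfs.txg.write_limit_override=1048576000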
 
t1066 said:
I use the command

# sysctl vfs.zfs.txg.write_limit_override=1048576000

to limit the write speed to around 100 MB/s.

Tweaking the above value may alleviate the periodic freezes.

Thanks, now ZFS writing is even faster than UFS!
 
t1066 said:
I use the command

# sysctl vfs.zfs.txg.write_limit_override=1048576000

to limit the write speed to around 100 MB/s.

Tweaking the above value may alleviate the periodic freezes.

I've been using it for a couple of weeks now and it gives me the best results so far.
 
Can someone explain what this tunable is doing?
From the name it seems that it's limiting writes, but where? Even with UFS I got close to 80 MB/s max write speed.
 
miks said:
Can someone explain what this tunable is doing?
From the name it seems that it's limiting writes, but where? Even with UFS I got close to 80 MB/s max write speed.

You tune this variable to the maximum amount of data your HD can handle per transaction group; this way there are a lot fewer write stalls.
Before this tunable, you would lower the txg wait time to 4 s or 5 s, but that is only convenient if you're writing at full speed.

So now the txg gets written when (a) 30 seconds have passed, or (b) x amount of data is in the txg (x = the max your hard disk can handle).
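
As a back-of-the-envelope example with the numbers from this thread (illustrative only, not a recommendation): a disk that sustains about 75 MB/s with a 5 s txg interval can absorb roughly 75 * 5 = 375 MB per txg, so the cap would be set around that:

Code:
# ~75 MB/s sustained write x 5 s txg interval = ~375 MB per txg
sysctl vfs.zfs.txg.write_limit_override=393216000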
 
I have tested with the default

Code:
sysctl vfs.zfs.txg.write_limit_override=0

and

Code:
sysctl vfs.zfs.txg.write_limit_override=1048576000


on a single disk with ZFS and I see no change in the write speed.

FreeBSD 8.1-RC1 amd64
 
User23 said:
I have tested with the default

Code:
sysctl vfs.zfs.txg.write_limit_override=0

and

Code:
sysctl vfs.zfs.txg.write_limit_override=1048576000


on a single disk with ZFS and I see no change in the write speed.

FreeBSD 8.1-RC1 amd64

Did you remove the txg timeout from loader.conf?
And it's not so much about the speed as about getting rid of the write stalls.
 
User23 said:
I have tested with the default

Code:
sysctl vfs.zfs.txg.write_limit_override=0

and

Code:
sysctl vfs.zfs.txg.write_limit_override=1048576000


on a single disk with ZFS and I see no change in the write speed.

FreeBSD 8.1-RC1 amd64

What kind of disk and controller do you have?
 
A 3ware 9550SXU-8LP with a WD5002ABYS configured as a single device.
I'll check this again with a faster WD3000HLFS. Unfortunately I have no free SSD :)
 
User23 said:
A 3ware 9550SXU-8LP with a WD5002ABYS configured as a single device.
I'll check this again with a faster WD3000HLFS. Unfortunately I have no free SSD :)

I think the RAID controller cache helps quite a bit. ;)
 
Exactly this value, sysctl vfs.zfs.txg.write_limit_override=1048576000, or a little bit higher?
 
miks said:
Exactly this value, sysctl vfs.zfs.txg.write_limit_override=1048576000, or a little bit higher?

I'm at work right now, but as far as I remember around 180,000,000. I'll check when I get home.
 