ZFS Write Speed Very Inconsistent

And Last:

Code:
tank/lptdrv   type                  filesystem             -
tank/lptdrv   creation              Wed Jul 14 14:37 2010  -
tank/lptdrv   used                  24.0G                  -
tank/lptdrv   available             2.16T                  -
tank/lptdrv   referenced            24.0G                  -
tank/lptdrv   compressratio         1.04x                  -
tank/lptdrv   mounted               yes                    -
tank/lptdrv   quota                 none                   default
tank/lptdrv   reservation           none                   default
tank/lptdrv   recordsize            128K                   default
tank/lptdrv   mountpoint            /mnt/lptdrv            local
tank/lptdrv   sharenfs              off                    default
tank/lptdrv   checksum              on                     default
tank/lptdrv   compression           gzip-9                 local
tank/lptdrv   atime                 off                    inherited from tank
tank/lptdrv   devices               on                     default
tank/lptdrv   exec                  on                     default
tank/lptdrv   setuid                on                     default
tank/lptdrv   readonly              off                    default
tank/lptdrv   jailed                off                    default
tank/lptdrv   snapdir               hidden                 default
tank/lptdrv   aclmode               groupmask              default
tank/lptdrv   aclinherit            restricted             default
tank/lptdrv   canmount              on                     default
tank/lptdrv   shareiscsi            off                    default
tank/lptdrv   xattr                 off                    temporary
tank/lptdrv   copies                1                      default
tank/lptdrv   version               3                      -
tank/lptdrv   utf8only              off                    -
tank/lptdrv   normalization         none                   -
tank/lptdrv   casesensitivity       sensitive              -
tank/lptdrv   vscan                 off                    default
tank/lptdrv   nbmand                off                    default
tank/lptdrv   sharesmb              off                    default
tank/lptdrv   refquota              none                   default
tank/lptdrv   refreservation        none                   default
tank/lptdrv   primarycache          all                    default
tank/lptdrv   secondarycache        all                    default
tank/lptdrv   usedbysnapshots       0                      -
tank/lptdrv   usedbydataset         24.0G                  -
tank/lptdrv   usedbychildren        0                      -
tank/lptdrv   usedbyrefreservation  0                      -
 
Thank you for the pastie advice :p

Another update: this morning I woke up to find the server had crashed once again.

Code:
Fatal trap 12: page fault while in kernel mode
current process = 42 (arc_reclaim_thread)
 
Also, I get the same issues with
Code:
vfs.zfs.txg.timeout="5"
removed.

In the interim I learned somewhere that if that setting is specified, ZFS will not abide by the
Code:
vfs.zfs.txg.write_limit_override=524288000
setting.

So it's one or the other, but neither has solved my issues.
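
For reference, a minimal /boot/loader.conf sketch of trying them one at a time could look like the following (the values are just the ones from this thread, not recommendations):

Code:
# /boot/loader.conf -- enable only one of these per boot
# Option A: cap the amount of dirty data per transaction group (~500 MB)
vfs.zfs.txg.write_limit_override=524288000
# Option B: shorten the txg sync interval instead (commented out here)
#vfs.zfs.txg.timeout="5"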
 
I had a rather unstable system until I followed the advice to set kmem_size to 1.5 times the RAM. So, for 8 GB of RAM I have the following in /boot/loader.conf:

Code:
vm.kmem_size="12G"
vfs.zfs.arc_max="2G"

The arc_max limit is there because the computer is rather memory-loaded (it serves several diskless KDE desktops). Since this 'fix' I have had no issues with ZFS at all. If your usage is primarily storage, this may not be useful for you. On another (single-user, but rather loaded) system with the same spec I don't limit arc_max, with no ill effects.
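
If it helps, after a reboot the effective values can be sanity-checked; something like this should work on FreeBSD 8.x (the exact sysctl names are my assumption for that version):

Code:
# verify that the loader.conf tunables took effect
sysctl vm.kmem_size                     # should report the 12G, in bytes
sysctl vfs.zfs.arc_max                  # the ARC ceiling set above
sysctl kstat.zfs.misc.arcstats.size     # current ARC usage, should stay below arc_max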

If you need 'stable' write performance, I guess you should decrease vfs.zfs.txg.write_limit_override until you reach the point where there are no hiccups. You can monitor drive load with gstat and tune until the load stays just under 100% :)
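
Something along these lines, for example (the gstat flags and the idea that write_limit_override can also be adjusted at runtime via sysctl are assumptions on my part, so double-check on your version):

Code:
# watch per-disk %busy while writing a large file
gstat -I 1s
# then lower the per-txg write limit step by step and re-test, e.g. ~384 MB
sysctl vfs.zfs.txg.write_limit_override=402653184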

Also, the advice to set

Code:
vfs.zfs.vdev.min_pending=4
vfs.zfs.vdev.max_pending=8

is very reasonable. You could even try lower values :)
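
If you want to experiment without a reboot, I believe these are plain sysctls on 8.x, so something like the following should let you try values on the fly (verify they exist on your build first):

Code:
# show the current queue depth settings
sysctl vfs.zfs.vdev.min_pending vfs.zfs.vdev.max_pending
# try lower values at runtime
sysctl vfs.zfs.vdev.min_pending=2
sysctl vfs.zfs.vdev.max_pending=4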
 