ZFS working strangely

Hi all,
I decided to try out ZFS on one of my FreeBSD servers, because my 160 GB HDD was running out of space.

My System:
Code:
FreeBSD 8.1 Release, i386
CPU: Intel(R) Pentium(R) Dual  CPU  E2220  @ 2.40GHz (2394.02-MHz 686-class CPU)
2 GB DDRII
ad7: old drive, 160 GB SATA II, UFS2, system drive
ad8, ad9: new drives, 1 TB SATA II
diskinfo -t output:
Code:
ad7 >
Transfer rates:
        outside:       102400 kbytes in   0.982162 sec =   104260 kbytes/sec
        middle:        102400 kbytes in   1.109304 sec =    92310 kbytes/sec
        inside:        102400 kbytes in   1.868130 sec =    54814 kbytes/sec

ad8 >
Transfer rates:
        outside:       102400 kbytes in   0.825577 sec =   124034 kbytes/sec
        middle:        102400 kbytes in   0.969241 sec =   105650 kbytes/sec
        inside:        102400 kbytes in   1.607994 sec =    63682 kbytes/sec

ad9 >
Transfer rates:
        outside:       102400 kbytes in   0.780139 sec =   131259 kbytes/sec
        middle:        102400 kbytes in   0.962239 sec =   106418 kbytes/sec
        inside:        102400 kbytes in   1.629290 sec =    62849 kbytes/sec

I created a ZFS mirror pool named 'zfs' from ad8 and ad9, with a filesystem mounted at /usr that replaced the system's /usr.
The old /usr is now mounted at /oldoldusr.
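
For reference, the pool and filesystem were created with the usual commands, roughly like this (from memory, so treat the exact dataset name as an approximation):
Code:
zpool create zfs mirror ad8 ad9
zfs create zfs/usr
zfs set mountpoint=/usr zfs/usr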

The kernel was rebuilt with
Code:
options         KVA_PAGES=512
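
The rebuild itself was just the standard custom-kernel procedure, something along these lines (the kernel config name is a placeholder):
Code:
cd /usr/src
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL
shutdown -r now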

/boot/loader.conf:
Code:
zfs_load="YES"
u3g_load="YES"
speaker_load="YES"

vm.kmem_size_min=999M
vm.kmem_size_max=999M
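
After the reboot, the effective values can be double-checked with sysctl, e.g.:
Code:
sysctl vm.kmem_size vm.kmem_size_min vm.kmem_size_max
sysctl vfs.zfs.arc_min vfs.zfs.arc_max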
Part of the arc_summary.pl output:
Code:
ZFS Tunable (sysctl):
        kern.maxusers=384
        vfs.zfs.l2c_only_size=0
        vfs.zfs.mfu_ghost_data_lsize=0
        vfs.zfs.mfu_ghost_metadata_lsize=0
        vfs.zfs.mfu_ghost_size=0
        vfs.zfs.mfu_data_lsize=75070976
        vfs.zfs.mfu_metadata_lsize=906752
        vfs.zfs.mfu_size=75977728
        vfs.zfs.mru_ghost_data_lsize=0
        vfs.zfs.mru_ghost_metadata_lsize=0
        vfs.zfs.mru_ghost_size=0
        vfs.zfs.mru_data_lsize=94218240
        vfs.zfs.mru_metadata_lsize=718336
        vfs.zfs.mru_size=109753856
        vfs.zfs.anon_data_lsize=0
        vfs.zfs.anon_metadata_lsize=0
        vfs.zfs.anon_size=0
        vfs.zfs.l2arc_norw=1
        vfs.zfs.l2arc_feed_again=1
        vfs.zfs.l2arc_noprefetch=0
        vfs.zfs.l2arc_feed_min_ms=200
        vfs.zfs.l2arc_feed_secs=1
        vfs.zfs.l2arc_headroom=2
        vfs.zfs.l2arc_write_boost=8388608
        vfs.zfs.l2arc_write_max=8388608
        vfs.zfs.arc_meta_limit=163676160
        vfs.zfs.arc_meta_used=20034788
        vfs.zfs.mdcomp_disable=0
        vfs.zfs.arc_min=81838080
        vfs.zfs.arc_max=654704640
        vfs.zfs.zfetch.array_rd_sz=1048576
        vfs.zfs.zfetch.block_cap=256
        vfs.zfs.zfetch.min_sec_reap=2
        vfs.zfs.zfetch.max_streams=8
        vfs.zfs.prefetch_disable=1
        vfs.zfs.check_hostid=1
        vfs.zfs.recover=0
        vfs.zfs.txg.write_limit_override=0
        vfs.zfs.txg.synctime=5
        vfs.zfs.txg.timeout=30
        vfs.zfs.scrub_limit=10
        vfs.zfs.vdev.cache.bshift=16
        vfs.zfs.vdev.cache.size=10485760
        vfs.zfs.vdev.cache.max=16384
        vfs.zfs.vdev.aggregation_limit=131072
        vfs.zfs.vdev.ramp_rate=2
        vfs.zfs.vdev.time_shift=6
        vfs.zfs.vdev.min_pending=4
        vfs.zfs.vdev.max_pending=35
        vfs.zfs.cache_flush_disable=0
        vfs.zfs.zil_disable=0
        vfs.zfs.zio.use_uma=0
        vfs.zfs.version.zpl=3
        vfs.zfs.version.vdev_boot=1
        vfs.zfs.version.spa=14
        vfs.zfs.version.dmu_backup_stream=1
        vfs.zfs.version.dmu_backup_header=2
        vfs.zfs.version.acl=1
        vfs.zfs.debug=0
        vfs.zfs.super_owner=0
        vm.kmem_size=1047527424
        vm.kmem_size_scale=3
        vm.kmem_size_min=1047527424
        vm.kmem_size_max=1047527424
------------------------------------------------------------------------
Now I reboot, and after the system comes back up I run:
Code:
cmdwatch "perl arc_summary.pl | sed -n '/ARC Size:/,/(c_max)/p'"
cmdwatch "vmstat -h"
dd if=/dev/zero of=/usr/testzero1 bs=1024k count=2048
all running inside screen(1) in split-screen mode ("several windows on one screen"), or something like that.

Before I run anything:
Code:
memory  
avm    fre   
787M  1257M   

ARC Size:
        Current Size:            61.35%   383.06M (arcsize)
        Target Size: (Adaptive)  100.00%  624.38M (c)
        Min Size (Hard Limit):   12.50%   78.05M (c_min)
        Max Size (High Water):   ~8:1     624.38M (c_max)
After dd:
Code:
memory  
avm    fre   
844M   824M

ARC Size:
        Current Size:            96.92%   605.15M (arcsize)
        Target Size: (Adaptive)  96.88%   604.86M (c)
        Min Size (Hard Limit):   12.50%   78.05M (c_min)
        Max Size (High Water):   ~8:1     624.38M (c_max)

2147483648 bytes transferred in 28.818070 secs (74518649 bytes/sec)
A normal speed, but it should be better.

Retrying the previous operation:
Code:
2147483648 bytes transferred in 28.573045 secs (75157676 bytes/sec)

The other numbers stay almost the same.
BUT, after a dd to /oldoldusr
Code:
dd if=/dev/zero of=/oldoldusr/testzero1 bs=1024k count=2048
the situation changes for the worse:
Code:
memory  
avm    fre   
841M    97M

ARC Size:
        Current Size:            37.46%   233.86M (arcsize)
        Target Size: (Adaptive)  37.46%   233.91M (c)
        Min Size (Hard Limit):   12.50%   78.05M (c_min)
        Max Size (High Water):   ~8:1     624.38M (c_max)

2147483648 bytes transferred in 20.898474 secs (102757917 bytes/sec)
By the way, UFS2 is faster than ZFS!!!

Rerunning dd to /usr:
Code:
memory  
avm    fre   
839M   140M

ARC Size:
        Current Size:            35.16%   219.52M (arcsize)
        Target Size: (Adaptive)  35.16%   219.52M (c)
        Min Size (Hard Limit):   12.50%   78.05M (c_min)
        Max Size (High Water):   ~8:1     624.38M (c_max)

2147483648 bytes transferred in 46.483443 secs (46198894 bytes/sec)
So slow!
If I rerun dd to /oldoldusr a couple of times and then rerun dd to /usr, the situation becomes dramatically worse:
Code:
memory  
avm    fre   
846M    85M

ARC Size:
        Current Size:            12.50%   78.04M (arcsize)
        Target Size: (Adaptive)  12.50%   78.05M (c)
        Min Size (Hard Limit):   12.50%   78.05M (c_min)
        Max Size (High Water):   ~8:1     624.38M (c_max)

2147483648 bytes transferred in 347.816170 secs (6174192 bytes/sec)
After a reboot, everything is OK again.

Please, somebody help me defeat this beast!
 
dd is not the best choice for benchmarking your hard drives. Choose something else, preferably multithreaded. Search the forum for "iozone", for example.

To reduce the OS's caching effects, test with a file size that is at least twice the size of your RAM.
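
For example, with 2 GB of RAM, an iozone run along these lines (sizes and options are only a suggestion) keeps the test file well above RAM size:
Code:
# sequential write and read tests with a 4 GB test file (2x RAM)
iozone -s 4g -r 128k -i 0 -i 1 -f /usr/iozone.tmp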
 
The reason I suggested limiting the UFS cache (which you do by adjusting kern.maxfiles, IIRC) is that the UFS cache and the ZFS cache don't share memory very well (if at all).

In order to improve ZFS performance, you need to ensure ZFS has access to adequate amounts of memory. :)
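
One way to illustrate that (the values here are only an example starting point, not a recommendation for this particular box) is to pin the ARC limits in /boot/loader.conf so the ARC can't be squeezed all the way down to its hard minimum:
Code:
# example only: reserve a reasonable ARC range on a 2 GB i386 machine
vfs.zfs.arc_min="256M"
vfs.zfs.arc_max="512M"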
 