ZFS / UFS / Soft Updates / GJournal / Bonnie Performance

Thanks to the OP; only his test seems credible, since the others lack comparative results to set alongside their ZFS numbers.
 
4x 1 TB Samsung F3 disks on the onboard SATA300 controller
Asus A8N with AMD X2 4600 CPU
4 GB DDR RAM
8-STABLE
ZFS raid10 (striped mirrors) with a 2.7 GB ARC cache; the pool is 48% full and prefetch is enabled
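For reference, a striped-mirror ("raid10") pool like the one described above can be created along these lines. The pool and device names here are examples, not the OP's actual configuration:

```shell
# Create a striped-mirror pool from four disks: two mirror vdevs,
# striped together by ZFS. Names (tank, ada0..ada3) are hypothetical.
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# The ~2.7 GB ARC cap would be set at boot in /boot/loader.conf, e.g.:
#   vfs.zfs.arc_max="2700M"
```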

Code:
Record Size 128 KB
	File size set to 4194304 KB
	No retest option selected
	Command line used: iozone -C -t1 -r128k -s4g -i0 -i1 -i2 -+n
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 1 process
	Each process writes a 4194304 Kbyte file in 128 Kbyte records

	Children see throughput for  1 initial writers 	=  222891.94 KB/sec
	Parent sees throughput for  1 initial writers 	=  174595.94 KB/sec
	Min throughput per process 			=  222891.94 KB/sec 
	Max throughput per process 			=  222891.94 KB/sec
	Avg throughput per process 			=  222891.94 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  222891.94 KB/sec

	Children see throughput for  1 readers 		=  349713.72 KB/sec
	Parent sees throughput for  1 readers 		=  349674.88 KB/sec
	Min throughput per process 			=  349713.72 KB/sec 
	Max throughput per process 			=  349713.72 KB/sec
	Avg throughput per process 			=  349713.72 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  349713.72 KB/sec

	Children see throughput for 1 random readers 	=   24939.10 KB/sec
	Parent sees throughput for 1 random readers 	=   24938.90 KB/sec
	Min throughput per process 			=   24939.10 KB/sec 
	Max throughput per process 			=   24939.10 KB/sec
	Avg throughput per process 			=   24939.10 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =   24939.10 KB/sec

	Children see throughput for 1 random writers 	=  195978.80 KB/sec
	Parent sees throughput for 1 random writers 	=  174361.77 KB/sec
	Min throughput per process 			=  195978.80 KB/sec 
	Max throughput per process 			=  195978.80 KB/sec
	Avg throughput per process 			=  195978.80 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  195978.80 KB/sec
 
sys: FreeBSD | amd64 | 8.0-RELEASE-p2
mob: Intel Q35 motherboard
cpu: Intel E6320 1.86 GHz
ram: 4 GB DDR2 800 MHz
hdd: Samsung F3 1TB (3x)

Code:
% zpool status
  pool: basefs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0s3  ONLINE       0     0     0
            ada1s3  ONLINE       0     0     0
            ada2s3  ONLINE       0     0     0

errors: No known data errors

Each drive is partitioned this way:
Code:
512m   ufs (root on gmirror)
  1g   swap
930g   zpool
  4g   vfat

/boot/loader.conf
[CMD=""]vfs.zfs.arc_max=128M[/CMD]
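To confirm after a reboot that the cap took effect, the configured maximum and the current ARC size can be read from sysctl (a quick sanity check, not part of the OP's post):

```shell
# Configured ARC maximum (bytes)
sysctl vfs.zfs.arc_max
# Current ARC size (bytes)
sysctl kstat.zfs.misc.arcstats.size
```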

Code:
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.326 $
		Compiled for 64 bit mode.
		Build: freebsd 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

	Run began: Wed Feb 10 22:03:37 2010

	Record Size 128 KB
	File size set to 4194304 KB
	No retest option selected
	Command line used: iozone -C -t1 -r128k -s4g -i0 -i1 -i2 -+n
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 1 process
	Each process writes a 4194304 Kbyte file in 128 Kbyte records

	Children see throughput for  1 initial writers 	=   99438.62 KB/sec
	Parent sees throughput for  1 initial writers 	=   98864.46 KB/sec
	Min throughput per process 			=   99438.62 KB/sec 
	Max throughput per process 			=   99438.62 KB/sec
	Avg throughput per process 			=   99438.62 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =   99438.62 KB/sec

	Children see throughput for  1 readers 		=  103475.85 KB/sec
	Parent sees throughput for  1 readers 		=  103395.10 KB/sec
	Min throughput per process 			=  103475.85 KB/sec 
	Max throughput per process 			=  103475.85 KB/sec
	Avg throughput per process 			=  103475.85 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  103475.85 KB/sec

	Children see throughput for 1 random readers 	=    8650.13 KB/sec
	Parent sees throughput for 1 random readers 	=    8649.90 KB/sec
	Min throughput per process 			=    8650.13 KB/sec 
	Max throughput per process 			=    8650.13 KB/sec
	Avg throughput per process 			=    8650.13 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =    8650.13 KB/sec

	Children see throughput for 1 random writers 	=   84414.15 KB/sec
	Parent sees throughput for 1 random writers 	=   84224.03 KB/sec
	Min throughput per process 			=   84414.15 KB/sec 
	Max throughput per process 			=   84414.15 KB/sec
	Avg throughput per process 			=   84414.15 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =   84414.15 KB/sec



iozone test complete.
 
@Matty

I needed to reduce it, since the unixbench benchmark caused a kernel panic with the defaults; maybe I will increase it to see how it influences performance.
 
Results with the same hardware but with new settings:

/boot/loader.conf
vfs.zfs.arc_max=1024M [color="gray"](I still need to check whether [FILE]unixbench[/FILE] panics with this)[/color]
vfs.zfs.prefetch_disable=0



Code:
Iozone: Performance Test of File I/O
	        Version $Revision: 3.326 $
		Compiled for 64 bit mode.
		Build: freebsd 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

	Run began: Sun Feb 14 17:34:18 2010

	Record Size 128 KB
	File size set to 4194304 KB
	No retest option selected
	Command line used: iozone -C -t1 -r128k -s4g -i0 -i1 -i2 -+n
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 1 process
	Each process writes a 4194304 Kbyte file in 128 Kbyte records

	Children see throughput for  1 initial writers 	=  169460.83 KB/sec
	Parent sees throughput for  1 initial writers 	=  156999.18 KB/sec
	Min throughput per process 			=  169460.83 KB/sec 
	Max throughput per process 			=  169460.83 KB/sec
	Avg throughput per process 			=  169460.83 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  169460.83 KB/sec

	Children see throughput for  1 readers 		=  157057.41 KB/sec
	Parent sees throughput for  1 readers 		=  156875.65 KB/sec
	Min throughput per process 			=  157057.41 KB/sec 
	Max throughput per process 			=  157057.41 KB/sec
	Avg throughput per process 			=  157057.41 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  157057.41 KB/sec

	Children see throughput for 1 random readers 	=    9479.04 KB/sec
	Parent sees throughput for 1 random readers 	=    9478.56 KB/sec
	Min throughput per process 			=    9479.04 KB/sec 
	Max throughput per process 			=    9479.04 KB/sec
	Avg throughput per process 			=    9479.04 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =    9479.04 KB/sec

	Children see throughput for 1 random writers 	=  158118.22 KB/sec
	Parent sees throughput for 1 random writers 	=  149010.88 KB/sec
	Min throughput per process 			=  158118.22 KB/sec 
	Max throughput per process 			=  158118.22 KB/sec
	Avg throughput per process 			=  158118.22 KB/sec
	Min xfer 					= 4194304.00 KB
	Child[0] xfer count = 4194304.00 KB, Throughput =  158118.22 KB/sec



iozone test complete.
 
ZFS on SSD

I've run this benchmark on an ADATA 32 GB 500-series SSD (read: up to 260 MB/s, write: up to 120 MB/s); the FreeBSD version is 8.2-RELEASE-p3 GENERIC.
gjournal didn't work (kernel panic). One thing to note: while using ZFS the system almost hangs (very unresponsive for other tasks); typical RAM usage was about 1.5 GB during the tests.
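The table below looks like classic Bonnie output for an 8192 MB test file; the exact invocation isn't shown, but it would be something like the following (the target directory is hypothetical):

```shell
# Run Bonnie with an 8192 MB file on the filesystem under test.
# -d selects the working directory, -s the file size in MB.
bonnie -d /mnt/bench -s 8192
```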

Code:
     -------Sequential Output--------       ---Sequential Input--       --Random--
     -Per Char-    --Block---  -Rewrite--   -Per Char-    --Block---    --Seeks---
MB    K/sec %CPU   K/sec %CPU  K/sec %CPU   K/sec %CPU    K/sec %CPU     /sec %CPU

[I]ufs[/I]
8192  62122 68.2   58604  4.2  28637  6.7   70934 94.6   217208 16.3   8603.8 22.8
[I]ufs async[/I]
8192  62991 72.1   57936  4.2  29714  8.2   74173 96.4   222479 16.6   8560.7 22.8
[I]ufs async+noatime[/I]
8192  62586 71.4   56944  4.1  29829  8.4   74337 96.5   223479 16.4   9383.6 23.8
[I]ufs+su[/I]
8192  62140 71.1   56024  4.5  30733  8.3   72549 94.4   221915 16.1   9168.1 22.8
[I]ufs+su async[/I]
8192  63294 72.5   59221  4.8  28707  8.0   72880 94.6   223916 16.5   7993.1 20.1
[I]ufs+su noatime[/I]
8192  63345 72.5   59745  4.8  29388  8.2   67834 88.1   223271 16.7   8544.4 23.1
[I]ufs+su async+noatime[/I]
8192  62949 72.0   59529  4.8  28889  7.9   63622 82.8   222666 16.3   8523.2 22.0
[I]zfs[/I]
8192  81482 88.3   61524  9.9  58340 10.6   67106 87.3   227557 17.1   3760.3 11.3
[I]zfs comp=lzjb[/I]
8192  85707 91.8  464302 65.7 206292 29.2   66975 86.8   618647 35.4  10031.2 30.6
[I]zfs comp=gzip-1[/I]
8192  76810 82.4  131913 18.2 157721 21.2   70064 90.8   618953 33.1   9715.3 24.2
[I]zfs comp=gzip[/I]
8192  62788 67.4   63425  9.7  64563  9.8   64678 83.7   461306 24.3   8611.2 24.9
[I]zfs comp=gzip-9[/I]
8192  77806 83.3   60555 10.6  63580  9.8   72415 93.7   476410 26.2   9343.9 28.2
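The compression variants above correspond to the per-dataset ZFS compression property; they can be switched between runs like this (the dataset name is hypothetical):

```shell
# lzjb is fast with a modest ratio; gzip-1..gzip-9 trade CPU for ratio
# (plain "gzip" means gzip-6). Dataset name tank/bench is an example.
zfs set compression=lzjb tank/bench
zfs set compression=gzip-1 tank/bench
zfs set compression=gzip-9 tank/bench
# Back to no compression:
zfs set compression=off tank/bench
```

Note that the property only affects newly written blocks, so each benchmark pass needs to write fresh data after changing it.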
 