These are some benchmarks from my geli-encrypted ZFS setup. My system is pretty average, so maybe these will help someone else who's thinking about a similar setup. I'm curious to know if the performance seems reasonable, so any opinions are welcome.
I have a mirrored pool for the root file system and a 3x1.5TB raidz pool, both running on geli-encrypted providers (256-bit AES).
The system is a 2.2GHz Core 2 with 2GB RAM running FreeBSD 8.0-p1 (amd64). Currently all my disks are on the on-board SATA channels.
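For reference, the geli side of a provider like this is set up roughly as follows. This is only a sketch: the device name is an example and the glabel/geli layering details of my setup aren't shown, only the 256-bit key length matches what I use.
Code:
# example only: create and attach a 256-bit AES geli provider
geli init -l 256 /dev/ad4        # prompts for a passphrase
geli attach /dev/ad4             # makes /dev/ad4.eli available for the pool
Current pool layout: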
Code:
mylo# zpool status
  pool: zdata
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        zdata             ONLINE       0     0     0
          raidz1          ONLINE       0     0     0
            label/raidz0  ONLINE       0     0     0
            label/raidz1  ONLINE       0     0     0
            label/raidz2  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        zroot              ONLINE       0     0     0
          mirror           ONLINE       0     0     0
            label/mirror0  ONLINE       0     0     0
            label/mirror1  ONLINE       0     0     0

errors: No known data errors
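Creating the pools themselves was nothing special, essentially the following (a sketch using the same labels as above, and leaving out the extra steps needed to make zroot bootable):
Code:
zpool create zdata raidz  label/raidz0 label/raidz1 label/raidz2
zpool create zroot mirror label/mirror0 label/mirror1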
pool: zroot full log
Code:
iozone -R -l 5 -u 5 -r 1024k -s 100m -F f1 f2 f3 f4 f5
"Record size = 1024 Kbytes "
"Output is in Kbytes/sec"
" Initial write " 82310.30
" Rewrite " 92564.76
" Read " 1433235.20
" Re-read " 1501567.17
" Reverse Read " 903336.27
" Stride read " 1081217.57
" Random read " 1035546.47
" Mixed workload " 1040485.66
" Random write " 83499.81
" Pwrite " 77014.28
" Pread " 1095567.33
iozone test complete.
pool: zdata full log
Code:
iozone -R -l 5 -u 5 -r 1024k -s 100m -F f1 f2 f3 f4 f5
"Record size = 1024 Kbytes "
"Output is in Kbytes/sec"
" Initial write " 113448.10
" Rewrite " 103605.23
" Read " 1188466.19
" Re-read " 348667.80
" Reverse Read " 881118.20
" Stride read " 694349.38
" Random read " 870844.94
" Mixed workload " 908451.61
" Random write " 100673.13
" Pwrite " 67140.55
" Pread " 138807.33
iozone test complete.
Larger file size, pool: zdata full log
Code:
iozone -R -l 5 -u 5 -r 1024k -s 1000m -F f1 f2 f3 f4 f5
"Record size = 1024 Kbytes "
"Output is in Kbytes/sec"
" Initial write " 60572.50
" Rewrite " 58039.86
" Read " 87605.36
" Re-read " 91687.08
" Reverse Read " 73461.97
" Stride read " 73672.66
" Random read " 73506.54
" Mixed workload " 64688.74
" Random write " 57895.67
" Pwrite " 58445.31
" Pread " 88124.95
iozone test complete.
There are some really fast reads/writes from iozone with the 100MB test file; I guess that is down to caching in RAM.
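If you want to see how much of that is the ZFS ARC rather than the disks, you can watch the cache size while iozone runs; if it grows to cover the whole test file, the reads never touch the disks. (A sketch; the arcstats sysctls are there on 8.0 as far as I know.)
Code:
# ZFS ARC size in bytes
sysctl kstat.zfs.misc.arcstats.size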
I ran a few unscientific "real world" tests. These are the results of copying a 1.8GB file between the two zpools:
30630 Kbytes/sec read from zroot write to zdata (60-99% system load)
31651 Kbytes/sec read from zdata write to zroot (80-99% system load)
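For anyone who wants to reproduce this, one simple way is to let dd do the copy and report the average rate (paths here are just examples):
Code:
# copy the 1.8GB file between pools; dd prints bytes/sec at the end
dd if=/zdata/test/bigfile of=/zroot/tmp/bigfile bs=1m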
System load in top is very high during disk activity. I assume that's a combination of geli and ZFS, although I have no comparison against a system without encryption.
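To see how much of the load is geli specifically, breaking out kernel threads in top should show the g_eli worker threads on their own lines (a sketch):
Code:
# -S includes system (kernel) processes, -H shows individual threads
top -SH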
Some reads and writes between my Windows box and zdata over Samba, using the same 1.8GB file:
19182 Kbytes/sec write over Samba (~90% system load during disk activity)
26014 Kbytes/sec read over Samba (~30% system load during disk activity)
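For a rough cross-check from a Unix client, smbclient prints an average transfer rate after each put or get (server, share, and file names here are just examples):
Code:
smbclient //mylo/zdata -U someuser -c 'put bigfile'
smbclient //mylo/zdata -U someuser -c 'get bigfile'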
Looking at the system load, I guess more RAM and more CPU power is the way to increase performance. Not that I really need to at the moment, since for me 90% of large file transfers will be over Samba at far less than 26MB/sec.
Thanks for any comments.