ZFS benchmark on geli encrypted providers

Postby phatfish » 30 Dec 2009, 16:30

These are some benchmarks from my geli encrypted setup running ZFS. My system is pretty average, so maybe they will help someone else who is thinking about a similar setup. I'm curious to know whether the performance seems reasonable, so any opinions are welcome.

I have a mirrored pool for the root file system and a 3x1.5TB raidz pool, both running on geli encrypted providers (256-bit AES).

The system is a 2.2GHz Core 2 with 2GB RAM running FreeBSD 8.0-RELEASE-p1 (amd64). Currently all my disks are on the on-board SATA channels.
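
For reference, the layering looks roughly like this. This is a sketch rather than my exact commands: the device names (ad6, ad8, ad10) are placeholders, only the 256-bit key length is taken from my real setup, and the ordering (geli underneath the glabel) is inferred from the vdev names in the status output below.

Code:
# initialise geli on each raw disk with a 256-bit key
geli init -l 256 /dev/ad6
geli init -l 256 /dev/ad8
geli init -l 256 /dev/ad10

# attach them (prompts for the passphrase), creating ad6.eli etc.
geli attach /dev/ad6
geli attach /dev/ad8
geli attach /dev/ad10

# label the encrypted providers so the pool sees stable names
glabel label raidz0 /dev/ad6.eli
glabel label raidz1 /dev/ad8.eli
glabel label raidz2 /dev/ad10.eli

# build the raidz pool on the labelled, encrypted providers
zpool create zdata raidz label/raidz0 label/raidz1 label/raidz2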

Code:
mylo# zpool status
  pool: zdata
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        zdata             ONLINE       0     0     0
          raidz1          ONLINE       0     0     0
            label/raidz0  ONLINE       0     0     0
            label/raidz1  ONLINE       0     0     0
            label/raidz2  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        zroot              ONLINE       0     0     0
          mirror           ONLINE       0     0     0
            label/mirror0  ONLINE       0     0     0
            label/mirror1  ONLINE       0     0     0

errors: No known data errors


pool: zroot full log
Code:
iozone -R -l 5 -u 5 -r 1024k -s 100m -F f1 f2 f3 f4 f5

"Record size = 1024 Kbytes "
"Output is in Kbytes/sec"
"  Initial write "   82310.30
"        Rewrite "   92564.76
"           Read " 1433235.20
"        Re-read " 1501567.17
"   Reverse Read "  903336.27
"    Stride read " 1081217.57
"    Random read " 1035546.47
" Mixed workload " 1040485.66
"   Random write "   83499.81
"         Pwrite "   77014.28
"          Pread " 1095567.33
iozone test complete.

pool: zdata full log
Code:
iozone -R -l 5 -u 5 -r 1024k -s 100m -F f1 f2 f3 f4 f5

"Record size = 1024 Kbytes "
"Output is in Kbytes/sec"
"  Initial write "  113448.10
"        Rewrite "  103605.23
"           Read " 1188466.19
"        Re-read "  348667.80
"   Reverse Read "  881118.20
"    Stride read "  694349.38
"    Random read "  870844.94
" Mixed workload "  908451.61
"   Random write "  100673.13
"         Pwrite "   67140.55
"          Pread "  138807.33
iozone test complete.

Larger file size, pool: zdata full log
Code:
iozone -R -l 5 -u 5 -r 1024k -s 1000m -F f1 f2 f3 f4 f5

"Record size = 1024 Kbytes "
"Output is in Kbytes/sec"
"  Initial write "   60572.50
"        Rewrite "   58039.86
"           Read "   87605.36
"        Re-read "   91687.08
"   Reverse Read "   73461.97
"    Stride read "   73672.66
"    Random read "   73506.54
" Mixed workload "   64688.74
"   Random write "   57895.67
"         Pwrite "   58445.31
"          Pread "   88124.95
iozone test complete.


There are some really fast reads/writes from iozone with the 100MB test files; I guess that is down to caching in RAM, since the files fit comfortably in the ARC.
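
If you want to see how much of that is the ARC, the stats are exported via sysctl. Something like this (counter names as I remember them on 8.0, sizes in bytes) shows the current and maximum ARC size plus the hit/miss counters:

Code:
# current ARC size and its ceiling
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.c_max

# hits vs misses give a feel for how much iozone was served from RAM
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses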

I ran a few unscientific "real world" tests. These are the results of copying a 1.8GB file between the two zpools:

30630 Kbytes/sec read from zroot write to zdata (60-99% system load)
31651 Kbytes/sec read from zdata write to zroot (80-99% system load)
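
Nothing scientific behind those numbers, just timing a copy and dividing the file size by the elapsed time, roughly like this (paths are made up for the example):

Code:
# time the copy, then divide the size by the elapsed seconds
/usr/bin/time -h cp /zdata/media/bigfile.mkv /zroot/tmp/
# 1.8GB is roughly 1,887,000 Kbytes; 60s elapsed gives ~31,450 Kbytes/sec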

System load in top(1) is very high during disk activity. I assume it's a combination of geli and ZFS, although I have no comparison against an unencrypted system.
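
If you tell top to show system processes and threads, the geli workers and the ZFS kernel threads show up separately, which gives some idea of where the time goes (thread names from memory, they may differ slightly on 8.0):

Code:
# -S includes system processes, -H shows kernel threads individually
top -SH
# look for the g_eli[...] worker threads next to the ZFS kernel threads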

Some reads and writes between my Windows box and zdata over Samba, using the same 1.8GB file:

19182 Kbytes/sec write over Samba (~90% system load during disk activity)
26014 Kbytes/sec read over Samba (~30% system load during disk activity)

Looking at the system load, I guess more RAM and more CPU power are the way to increase performance. Not that I really need to at the moment, since for me 90% of large file transfers will be over Samba at well under 26MB/sec.

Thanks for any comments.

Postby gkontos » 31 Dec 2009, 08:40

Those are nice results. I have a similar setup without the encryption. During Samba copies of large files from Windows machines to my raidz1 I get 100% CPU usage. I would be curious to see how your box behaves during large file transfers (100GB), given that you only have 2GB of RAM. Bear in mind that my box, with 4GB, got a lot of panics until I limited the ARC size.
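
For what it's worth, capping the ARC is just a couple of loader tunables. Something along these lines in /boot/loader.conf does it; the values below are only an example, size them to your own box:

Code:
# /boot/loader.conf -- example values only, tune for your RAM
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="512M"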

Regards,

George
Powered by BareBSD

Postby phatfish » 31 Dec 2009, 18:41

I'll be copying some bigger files around soon (not quite 100GB though), so I'll see how it behaves.

I would like to upgrade to 4GB of RAM, as it's a cheap upgrade. That would also enable prefetch, since it is disabled by default on systems with less than 4GB of RAM; I'll see what effect it has when I do.
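
As far as I understand you can also flip it regardless of RAM size with a loader tunable, something like this in /boot/loader.conf (my reading of the tunable, not tested yet):

Code:
# 0 enables ZFS file-level prefetch, 1 disables it
vfs.zfs.prefetch_disable="0"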

One annoying thing is that because writes are so intensive, everything else slows very noticeably while the system load nears 100%. I was a little surprised that FreeBSD didn't give other programs more processor time.

I would prefer more CPU time to go to other processes so they remain reasonably responsive. I assume it's possible to tune the system to increase the priority of userland processes? I have X11 running on this box with some simple applications.
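
In the meantime I may just run the heavy copies at a reduced priority. As far as I know something like this works (paths are just an example); idprio means the copy only gets CPU time when nothing else wants it:

Code:
# run a big copy niced down...
nice -n 20 cp /zdata/media/bigfile.mkv /zroot/tmp/
# ...or at idle priority so interactive processes always win
idprio 31 cp /zdata/media/bigfile.mkv /zroot/tmp/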

