Benchmark of ZFS on geli on a single SSD

The following benchmarks were done some time ago. I hope they shed some light on the performance of ZFS on geli.

Relevant hardware:

CPU: Intel Xeon E3-1230 (with AES-NI)
SSD: Plextor PX-128M2P

Benchmark software: benchmarks/bonnie++ (the output below is from bonnie++ 1.96)
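
For reference, a bonnie++ run producing output like the results below would be started roughly as follows; the target directory is just an illustrative mountpoint on the pool, and -s 32g matches the 32G test size shown in the results:

Code:
# bonnie++ -d /pool/bench -s 32g -u root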

Results of ZFS on the SSD without encryption:

Code:
Version      1.96   ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
aaa.local       32G   203  99 272466  38 169484  29   538  99 458453  32  2301  72
Latency               139ms   21429us     643ms   18865us     244ms   32009us
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
aaa.local        16 32035  95 +++++ +++ +++++ +++ 32243  98 +++++ +++ +++++ +++
Latency             11082us      87us      99us   21944us      24us      56us

Results of ZFS on the SSD with geli (hardware-accelerated AES-XTS, 128-bit key):

Code:
Version      1.96   ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
aaa.local       32G   212  99 160276  21 86818  13   528  99 192479  12  1869  17
Latency               185ms     827ms    2452ms   23599us     355ms     216ms
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
aaa.local        16 32383  92 +++++ +++ +++++ +++ 31316  97 +++++ +++ 32624  98
Latency             12228us     100us     793us   23288us      32us     815us
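
For context, a geli-backed pool like this is typically set up along the following lines. The GPT label gpt/ssd0 matches the gstat output below, and AES-XTS with a 128-bit key matches the configuration above, but the 4K sector size and pool name are assumptions, not the exact commands from the original setup:

Code:
# geli init -e AES-XTS -l 128 -s 4096 /dev/gpt/ssd0
# geli attach /dev/gpt/ssd0
# zpool create tank /dev/gpt/ssd0.eli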

Looking at gstat, I could see that while /dev/gpt/ssd0 was only 50% busy, /dev/gpt/ssd0.eli was 100% busy. Also, top showed that the g_eli[0] gpt/ssd0 kernel thread was using 100% of a single CPU thread. So it can be inferred that hardware-accelerated geli encryption runs single-threaded (one worker thread per provider), and that thread is the bottleneck here.
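
For anyone who wants to reproduce the observation: the g_eli kernel threads only show up in top when system processes and threads are displayed, so something like the following is enough to watch both the provider load and the encryption thread (the gstat filter is just an example matching the labels above):

Code:
# gstat -f 'gpt/ssd0'
# top -SH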
 