
GELI Benchmarks

Discussion in 'Storage' started by Sebulon, Apr 17, 2012.

  1. Sebulon

    Code:
    [B][U]HW[/U][/B]
    CHA: HP DL180 G6
    CPU: Xeon E5620 @ 2.40GHz
    RAM: 32GB DDR3 REG ECC
    HBA: LSI 9211 (PH13 FW)
    HDD: HP(WD) MB2000EAZNL
    Code:
    [B][U]SW[/U][/B]
    [CMD="#"]uname -a[/CMD]
    FreeBSD hostname 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan  3 07:46:30 UTC 2012     root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64
    [CMD="#"]kldstat[/CMD]
    Id Refs Address            Size     Name
     1   19 0xffffffff80200000 11cd9b0  kernel
     2    1 0xffffffff813ce000 203d70   zfs.ko
     3    2 0xffffffff815d2000 5c50     opensolaris.ko
     4    1 0xffffffff81812000 51b3     tmpfs.ko
     5    1 0xffffffff81818000 ce78     geom_eli.ko
     6    2 0xffffffff81825000 1b11e    crypto.ko
     7    1 0xffffffff81841000 a4d9     zlib.ko
     8    1 0xffffffff8184c000 1a3f     aesni.ko
    Code:
    [B][U]PART[/U][/B]
    [CMD="#"]gpart create -s gpt da(0,1,2,3,4)[/CMD]
    [CMD="#"]gpart add -t freebsd-zfs -l disk(1,2,3,4,5) -b 2048 -a 4k da(0,1,2,3,4)[/CMD]
    Code:
    [B][U]GELI[/U][/B]
    [CMD="#"]dd if=/dev/random of=/boot/geli/disks.key bs=64 count=1[/CMD]
    [CMD="#"]geli init -s 4096 -K /boot/geli/disks.key -P -l (128,192,256) -e (AES-XTS,AES-CBC,Blowfish-CBC,Camellia-CBC,3DES-CBC) /dev/gpt/disk(1,2,3,4,5)[/CMD]
    [CMD="#"]geli attach -p -k /boot/geli/disks.key /dev/gpt/disk(1,2,3,4,5)[/CMD]
    Code:
    [B][U]MO[/U][/B]
    [CMD="#"]mdmfs -s 2048m md0 /mnt/ram[/CMD]
    [CMD="#"]umount /mnt/ram[/CMD]
    (Because I don't know the [FILE]mdconfig[/FILE] syntax to do the same thing.)
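    (For reference, a bare md device without the file system can probably be created in one step with [CMD="#"]mdconfig -a -t swap -s 2048m -u 0[/CMD], skipping the newfs/mount/umount dance.)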
    [CMD="#"]dd if=/dev/random of=/dev/md0 bs=1024000 count=2048[/CMD]
    
    [CMD="#"]dd if=/dev/md0 of=/dev/gpt/disk(1,2,3,4,5).eli bs=1024000 count=2048[/CMD]
    [CMD="#"]dd if=/dev/md0 of=/dev/gpt/disk(1,2,3,4,5).eli bs=1024000 count=2048[/CMD]
    [CMD="#"]dd if=/dev/md0 of=/dev/gpt/disk(1,2,3,4,5).eli bs=1024000 count=2048[/CMD]
    Code:
    [B][U]GELI SCORE[/U][/B]
                  [B]Bit  MB/s[/B]
    Raw                146
    AES-XTS       128  70.5
    AES-CBC       128  [B]114.4[/B] (65.5 without aesni.ko loaded)
    Blowfish-CBC  128  27.8
    Camellia-CBC  128  43.0

    3DES-CBC      192  14.6

    AES-XTS       256  67.7
    AES-CBC       256  [B]106.5[/B]
    Blowfish-CBC  256  27.8
    Camellia-CBC  256  37.6

    Proceeding by choosing the fastest GELI option (128-bit AES-CBC) and testing the performance of a filesystem on top of that.
    Code:
    [B][U]ZFS/GELI MO:[/U][/B]
    
    [CMD="#"]zpool create -O mountpoint=legacy -O compress=on tank mirror gpt/disk{1.eli,2.eli} mirror gpt/disk{3.eli,4.eli}
    mirror gpt/disk{5.eli,6.eli} mirror gpt/disk{7.eli,8.eli}[/CMD]
    [CMD="#"]mount -t zfs tank /mnt/tank/[/CMD]
    [CMD="#"]bonnie++ -d /mnt/tank/ -u 0 -s 64g[/CMD]
    Using uid:0, gid:0.
    Writing a byte at a time...done
    Writing intelligently...done
    Rewriting...done
    Reading a byte at a time...done
    Reading intelligently...done
    start 'em...done...done...done...done...done...
    Create files in sequential order...done.
    Stat files in sequential order...done.
    Delete files in sequential order...done.
    Create files in random order...done.
    Stat files in random order...done.
    Delete files in random order...done.
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    hostname          64G   138  99 383394  69 292000  63   337  98 891332  82 506.4  28
    Latency               399ms    5960ms    8642ms     168ms   31746us     182ms
    Version  1.96       ------Sequential Create------ --------Random Create--------
    hostname              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 24120  95 +++++ +++ 22920  96 21911  94 +++++ +++ 24014  97
    Latency             13852us     139us    2164us   14589us      92us     174us
    1.96,1.96,hostname,1,1334621912,64G,,138,99,383394,69,292000,63,337,98,891332,82,506.4,28,16,,,,,24120,95,+++++,+++,22920,96,21911,94,+++++,+++,24014,97,399ms,5960ms,8642ms,168ms,31746us,182ms,13852us,139us,2164us,14589us,92us,174us
    
    [CMD="#"]zpool create -O mountpoint=legacy -O compress=on tank raidz2 gpt/disk{1.eli,2.eli,3.eli,4.eli} raidz2 gpt/disk{5.eli,6.eli,7.eli,8.eli}[/CMD]
    [CMD="#"]mount -t zfs tank /mnt/tank/[/CMD]
    [CMD="#"]bonnie++ -d /mnt/tank/ -u 0 -s 64g[/CMD]
    Using uid:0, gid:0.
    Writing a byte at a time...done
    Writing intelligently...done
    Rewriting...done
    Reading a byte at a time...done
    Reading intelligently...done
    start 'em...done...done...done...done...done...
    Create files in sequential order...done.
    Stat files in sequential order...done.
    Delete files in sequential order...done.
    Create files in random order...done.
    Stat files in random order...done.
    Delete files in random order...done.
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    hostname          64G   135  99 407474  74 297102  65   334  99 797934  72 372.7   7
    Latency             77224us    1074ms    2944ms   79237us   62840us     305ms
    Version  1.96       ------Sequential Create------ --------Random Create--------
    hostname              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 24776  98 +++++ +++ 23919  97 24210  95 15289  99  4257  98
    Latency             13907us     140us     169us   17596us     324us    7133us
    1.96,1.96,hostname,1,1334627599,64G,,135,99,407474,74,297102,65,334,99,797934,72,372.7,7,16,,,,,24776,98,+++++,+++,23919,97,24210,95,15289,99,4257,98,77224us,1074ms,2944ms,79237us,62840us,305ms,13907us,140us,169us,17596us,324us,7133us
    
    [CMD="#"]zpool create -O mountpoint=legacy -O compress=on tank raidz2 gpt/disk{1.eli,2.eli,3.eli,4.eli,5.eli,6.eli,7.eli,8.eli}[/CMD]
    [CMD="#"]mount -t zfs tank /mnt/tank/[/CMD]
    [CMD="#"]bonnie++ -d /mnt/tank/ -u 0 -s 64g[/CMD]
    Using uid:0, gid:0.
    Writing a byte at a time...done
    Writing intelligently...done
    Rewriting...done
    Reading a byte at a time...done
    Reading intelligently...done
    start 'em...done...done...done...done...done...
    Create files in sequential order...done.
    Stat files in sequential order...done.
    Delete files in sequential order...done.
    Create files in random order...done.
    Stat files in random order...done.
    Delete files in random order...done.
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    hostname          64G   136  99 426207  76 296597  65   331  98 784113  71 313.5  19
    Latency             64337us     493ms    2187ms   94173us   53133us     286ms
    Version  1.96       ------Sequential Create------ --------Random Create--------
    hostname              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 23478  93 +++++ +++ 24146  97 22300  97 +++++ +++ 23719  97
    Latency             14004us     139us     171us   29080us     142us     170us
    1.96,1.96,hostname,1,1334624351,64G,,136,99,426207,76,296597,65,331,98,784113,71,313.5,19,16,,,,,23478,93,+++++,+++,24146,97,22300,97,+++++,+++,23719,97,64337us,493ms,2187ms,94173us,53133us,286ms,14004us,139us,171us,29080us,142us,170us
    Code:
    [B][U]ZFS/GELI SCORE[/U][/B]
              [B]Write  Rewrite  Read[/B]  (MB/s)
    4xmirror  374    285      870
    2xraidz2  397    290      779
    1xraidz2  416    289      765
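    (The MB/s figures are the bonnie++ block K/sec values above divided by 1024, e.g. 383394 K/sec ≈ 374 MB/s.)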

    /Sebulon
     
  2. t1066

    Have you tried AES-XTS with aesni.ko loaded? It should also be hardware-accelerated.
     
  3. Sebulon

    It was. It was loaded during the whole first suite of tests. After that, I tried one more time with only AES-CBC (because it was the fastest) with the driver unloaded, just to know the difference between software and hardware crypto with the same algorithm.
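    (For anyone reproducing this: [CMD="#"]geli list /dev/gpt/disk1.eli[/CMD] should print a Crypto: line showing whether that provider runs hardware or software crypto, at least on the geli(8) versions I have seen.)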

    /Sebulon
     
  4. lockdoc

    Does the size of the key file you use (in your case 64 bytes) actually affect the performance of the encryption itself?
     
  5. Sebulon

    @lockdoc

    I think someone asked me that before as well, but I have only ever tried using the same sized key, as documented in the Handbook section on setting up GELI. I figured it was best to go by the book :)
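    (Testing it would be cheap, though; a sketch with a hypothetical 256-byte key: [CMD="#"]dd if=/dev/random of=/tmp/big.key bs=256 count=1[/CMD], then re-run the geli init/attach and dd suite above, pointing -K/-k at /tmp/big.key.)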

    /Sebulon
     
  6. MorgothV8

    Benchmarks are benchmarks (copied from my post):

    I'm using GELI on 10-CURRENT, processor Core i5 3450, GELI in hardware.
    I have a mirror of two identical disks (1TB each): zpool create zmirr mirror ada1.eli ada2.eli
    Also sync=disabled, atime=off.
    RAM: 16 GB, system 10.0-CURRENT from one week ago.
    GELI: AES-CBC 128-bit, aesni.ko loaded.

    Write/read speed is in practice about the same, around 150-180 MB/s:
    dd bs=16M if=./some_8GB_file of=/dev/null gives 175 MB/s
    dd bs=16M if=/dev/zero of=./some_8GB_file count=512 gives 158 MB/s

    A second dd from the same file gives about 2.5 GB/s (but then it is fetched from the ZFS ARC).
     
  7. Sebulon

    Amen to that, brother, thanks for sharing! Although I would really like to compare bonnie++ results instead. Would you please install benchmarks/bonnie++ and run:
    # bonnie++ -d /some/zfs/dir -u 0 -s 32g
    (the -u 0 only if running as root)

    /Sebulon
     
  8. MorgothV8

    OK, this is the output:

    install -o root -g wheel -m 444 bonnie++.8 zcav.8 getc_putc.8 /usr/local/man/man8
    install -o root -g wheel -m 444 /usr/ports/benchmarks/bonnie++/work/bonnie++-1.96/readme.html /usr/local/share/doc/bonnie++
    ===> Compressing manual pages for bonnie++-1.96_1
    ===> Registering installation for bonnie++-1.96_1
    Installing bonnie++-1.96_1... done
    ===> Cleaning for bonnie++-1.96_1
    root@darkstar /usr/ports/benchmarks/bonnie++$ cd /data/
    root@darkstar /data$ mkdir tmp
    root@darkstar /data$ cd tmp
    root@darkstar /data/tmp$ bonnie++ -d /data/tmp/ -u 0 -s 32g
    Using uid:0, gid:0.
    Writing a byte at a time...done
    Writing intelligently...done
    Rewriting...done
    Reading a byte at a time...done
    Reading intelligently...done
    start 'em...done...done...done...done...done...
    Create files in sequential order...done.
    Stat files in sequential order...done.
    Delete files in sequential order...done.
    Create files in random order...done.
    Stat files in random order...done.
    Delete files in random order...done.
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    darkstar          32G   208  99 132266  15 78895  10   458  92 220835   9 228.9   1
    Latency             44213us     721ms    1107ms     474ms     310ms     275ms
    Version  1.96       ------Sequential Create------ --------Random Create--------
    darkstar              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 26956  68  7332   9 18348  99 +++++ +++ +++++ +++ +++++ +++
    Latency             57745us     767ms     339us   11919us      87us     216us
    1.96,1.96,darkstar,1,1351361104,32G,,208,99,132266,15,78895,10,458,92,220835,9,228.9,1,16,,,,,26956,68,7332,9,18348,99,+++++,+++,+++++,+++,+++++,+++,44213us,721ms,1107ms,474ms,310ms,275ms,57745us,767ms,339us,11919us,87us,216us
    root@darkstar /data/tmp$
     
  9. MorgothV8

    BTW: is there any chance of working for FreeBSD? I've been working for companies in Poland, Norway and the USA up to today, I've got my own company, and I could work for my favourite OS (ever since '99) as a volunteer... :p
     
  10. jb_fvwm2

    Fix up unmaintained ports that have open PRs? (The Handbook may give a more precise answer, or even freebsd.org, via a link from there.)
     
  11. Sebulon

    @MorgothV8

    Code:
                [B]Write   Rewrite    Read[/B]    (MB/s)
    1x mirror   129     77         215
    That is awesome! You get the write performance of one drive and the read performance of two; in a two-way mirror every write has to go to both disks, while reads can be spread across them. No penalty, either from ZFS or GELI.

    /Sebulon
     
  12. MorgothV8

    Looks so.
    It really works great on my current setup.
    ARC consumes 12 GB of RAM, but it seems to be quite clever, fast and efficient :)
    And if I need M$ Windoze, VirtualBox runs Win 7 like a charm :) I think faster than native; the Windows VDI file just sits in ARC and *rocks*!
     
  13. listentoreason

    If I am reading this correctly, you are creating a RAM disk (md0) with 2 GB of reasonably cryptographically random data, and then using that disk as a fast cache to pre-fill your GELI devices with that random data. I have seen this step (without the RAM disk part) in GELI HOWTOs before, and my presumption is that it helps hide the amount and boundaries of your real, encrypted data (which looks mathematically random) against the random "background" on which it resides. However, it looks like you're putting the same random data on all five disks, since you're reusing the pre-generated RAM disk. I'd think that an attacker sophisticated enough to consider using the "real data footprint" to aid their attack (or simply to disprove your frantic assertion that you've stored nothing there) would be able to compare the five devices and note where the "common randomness" ends.

    Is my understanding of the utility of an initial "random fill" correct? Also, it looks like you perform the dd overwrite three times; I presume here you're just wiping away past data on the disk? Is this for cleanliness, or is there a concern that prior data/metadata could interfere with the operation of the file system or GELI?

    Thanks! I covet your benchmarks; I didn't see a smilie for "drool", so I guess this is closest: :p
     
  14. Sebulon

    @listentoreason,

    No, god no, it's got nothing to do with that at all :) I'm prefilling an md device to use for benchmarking the raw partitions with dd, before making any assumptions about how it'll play out with a filesystem on top. You can't rely on /dev/zero, since there's no effort in writing just a bunch of zeros, so you make a disk in RAM and prefill it with gibberish, just to have something better than zeros to benchmark with. And always run a benchmark at least three times to account for variance; you post the "middle" score of the three.

    The absolute most important part, after making sure you've got AES-NI loaded, is how you initialise the GELI devices:

    # dd if=/dev/random of=/boot/geli/disks.key bs=64 count=1
    # geli init -s 4096 -K /boot/geli/disks.key -P -l 128 -e AES-CBC /dev/gpt/diskX

    As you can see from the raw device benchmarking, performance varies heavily from one algorithm to another. Make sure to use 128-bit AES-CBC and you should get just as bad-ass performance as I did. Perhaps even better, since I was only using eight drives and you have twelve, if I remember correctly.

    /Sebulon
     
  15. listentoreason

    Oh! I forgot that dd reports transfer speed after it completes. :r
    Along those lines: I am not using a key file. I perceived it as an additional hard-core defense of the provider ("You may have beaten the passphrase out of me, but by now my partner has completely dissolved the microSD card with the only copy of the key file in her car battery!"). If it were to sit in plaintext on the system (and therefore be readily available to a thief), would it still provide additional benefit to GELI?
     
  16. fonz

    It's not supposed to.
     
  17. listentoreason

    I ask because many of the examples I've read show the key file being generated on a local file system. The man page does include examples with /mnt/pendrive, and after careful re-reading I see that /boot/ is described as:
    The disk encryption documentation, however, just suggests making the key file on /root/, and does not suggest that /root/ should somehow be physically separated from the system (can it even be?). From your comment I'd presume that approach would not be useful in protecting the provider, and that any security would really only come from the passphrase in that example?
     
  18. Sebulon

    It's not really about that for us, in a general fileserver type of application. It's rather a time-saver: we don't have to deal with safe destruction of data when a drive has failed. I know many others who spend an enormous amount of time trying their hardest to make sure they can say they have safely destroyed the data on a failed hard drive, because many people are paranoid about that stuff. We just yank it out and toss it, since the data's always been encrypted. Good luck rummaging through that garbage :)

    /Sebulon
     
  19. listentoreason

    Thanks, that's an interesting benefit I hadn't considered. If you have a physically secure data center, then burglary is presumably a lower risk than it is for a home user.