Inspired by the well-written posts here, I decided to post the performance of my raidz2 vdev and ask a couple of questions.
Code:
        NAME                 STATE     READ WRITE CKSUM
        storage              ONLINE       0     0     0
          raidz2-0           ONLINE       0     0     0
            label/disk0.eli  ONLINE       0     0     0
            label/disk1.eli  ONLINE       0     0     0
            label/disk2.eli  ONLINE       0     0     0
            label/disk3.eli  ONLINE       0     0     0
            label/disk4.eli  ONLINE       0     0     0
            label/disk5.eli  ONLINE       0     0     0
The disks are Samsung SpinPoint F4, 2 TB each. As far as I know they are Advanced Format drives, which means the pool should be created with ashift=12 (4K sectors), and that is what I did at the time.
Code:
zdb | grep ashift
ashift: 12
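For reference, the usual way to force ashift=12 on FreeBSD at pool-creation time is the gnop(8) trick. A minimal sketch, assuming the same geli labels as above (device names are from my setup; this is only done once, at creation):

```shell
# Build a temporary 4K-sector shim on top of one provider
gnop create -S 4096 label/disk0.eli
# Create the pool through the shim; ZFS reads the 4K sector size
# from the .nop device and sets ashift=12 for the whole vdev
zpool create storage raidz2 label/disk0.eli.nop label/disk1.eli \
    label/disk2.eli label/disk3.eli label/disk4.eli label/disk5.eli
# The shim is only needed at creation time
zpool export storage
gnop destroy label/disk0.eli.nop
zpool import storage
```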
The disks were given to ZFS whole; they were not partitioned to start at a 1 MB boundary. In an earlier post (which eludes me right now), Phoenix suggested that disks should be partitioned anyway, rather than handed to ZFS whole.
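If I understand the partitioned approach correctly, it would look roughly like this per disk (a sketch only; the device name ada0 and label disk0 are placeholders, and this destroys whatever is on the disk):

```shell
# Put a GPT scheme on the raw disk
gpart create -s gpt ada0
# One freebsd-zfs partition starting at and aligned to a 1 MB boundary,
# with a GPT label so the pool is independent of device numbering
gpart add -t freebsd-zfs -b 1m -a 1m -l disk0 ada0
# geli would then be layered on /dev/gpt/disk0 instead of the whole disk
```

The 1 MB alignment keeps the partition start on a 4K boundary regardless of the drive's reported sector size.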
I said before that this is just a storage server that only needs to saturate a GigE link, but I'm curious whether the performance could be better.
So here are the results:
# bonnie++ -d /storage/test -u 0:0 -s 24g
Code:
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
zhenbox.home.do 24G 140 99 97651 17 60965 16 347 89 215720 18 126.9 4
Latency 361ms 875ms 2083ms 350ms 204ms 890ms
Version 1.96 ------Sequential Create------ --------Random Create--------
zhenbox.home.domain -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 17048 95 +++++ +++ 9910 93 10899 98 20674 100 6289 95
Latency 9517us 176us 44776us 12602us 111us 61520us
1.96,1.96,zhenbox.home.domain,1,1335966952,24G,,140,99,97651,17,60965,16,347,89,215720,18,126.9,4,16
,,,,,17048,95,+++++,+++,9910,93,10899,98,20674,100,6289,95,361ms,875ms,2083ms,350ms,204ms,890ms,9517us
,176us,44776us,12602us,111us,61520us
The configuration is an Intel i7 920 with 12 GB RAM. Also, importantly, I use geli with AES-XTS 256 on all disks. I noticed during the test that CPU usage goes to 90%, which makes me think encryption is the limiting factor. Still, I'd like to know whether performance could be better despite the encryption.
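One thing worth noting: the i7 920 is a Nehalem part, which predates the AES-NI instructions, so geli has to do AES-XTS entirely in software on this CPU. Whether the kernel sees AES-NI can be checked roughly like this (a sketch; on hardware without AES-NI the grep simply finds nothing):

```shell
# CPU features are printed at boot; AESNI shows up under Features2 if present
grep AESNI /var/run/dmesg.boot
# If the CPU has it, loading the aesni(4) driver lets geli use it;
# "geli list" then reports Crypto: hardware instead of Crypto: software
kldload aesni
geli list
```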
Was it a mistake to feed whole disks to ZFS rather than partitions? And would a separate SSD for the ZIL and L2ARC make a significant difference here, or is performance limited purely by geli?
Any opinion is welcome.