Hi all!
Current Highscore
Code:
[B]Local writes 4k[/B]
5.4k rpm 160GB     = 32MB/s
OCZ Vertex 2 60GB  = 33MB/s
Intel 320 120GB    = 52MB/s
Zeus IOPS 16GB     = 55MB/s
OCZ Vertex 2 120GB = 56MB/s
Intel S3700 200GB  = 58MB/s
HP 10k SAS 146GB   = 59MB/s
SAVVIO 15k.2 146GB = 60MB/s
OCZ Deneva 2 200GB = 60MB/s
OCZ Vertex 3 240GB = 61MB/s
Intel X25-E 32GB   = 72MB/s

[B]Local writes 128k[/B]
OCZ Vertex 2 60GB  = 51MB/s
OCZ Vertex 2 120GB = 61MB/s
HP 10k SAS 146GB   = 101MB/s
Intel 320 120GB    = 128MB/s
Zeus IOPS 16GB     = 133MB/s
SAVVIO 15k.2 146GB = 165MB/s
Intel X25-E 32GB   = 197MB/s
OCZ Vertex 3 240GB = 271MB/s
OCZ Deneva 2 200GB = 284MB/s
Intel S3700 200GB  = 295MB/s
Code:
[B][U]NFS Mirrored ZIL[/U][/B]

[B]Ordinary HW[/B]
Intel 320 40GB     = 30MB/s
OCZ Vertex 2 60GB  = 32MB/s
OCZ Vertex 2 120GB = 36MB/s
Intel 320 120GB    = 52MB/s
Zeus IOPS 16GB     = 55MB/s
Intel X25-E 32GB   = 60MB/s
Intel S3700 200GB  = 65MB/s
OCZ Deneva 2 200GB = 67MB/s
OCZ Vertex 3 240GB = 70MB/s

[B]HP DL380 G5[/B] (default controller settings)
Controller write cache = on
Drive write cache = off
OCZ Vertex 2 120GB = 49MB/s
Intel 320 120GB    = 52MB/s
SAVVIO 15k.2 146GB = 56MB/s
HP 10k SAS 146GB   = 58MB/s
Intel X25-E 32GB   = 67MB/s
I've been experimenting with a high-performance NAS based on FreeBSD and ZFS, and I have some questions about NFS write performance with SSD ZIL accelerators. First, the specs:
Hardware
Supermicro X7SBE
Intel Core2 Duo 2.13GHz
8GB 667MHz RAM
3x Lycom SATA II PCI-X controllers
2x Supermicro CSE-M35T-1
Code:
# camcontrol devlist
<WDC WD30EZRS-00J99B0 80.00A80>    at scbus0 target 0 lun 0 (ada0,pass0)
<SAMSUNG HD103SJ 1AJ10001>         at scbus1 target 0 lun 0 (ada1,pass1)
<SAMSUNG HD103SJ 1AJ10001>         at scbus2 target 0 lun 0 (ada2,pass2)
<SAMSUNG HD103SJ 1AJ10001>         at scbus5 target 0 lun 0 (ada3,pass3)
<SAMSUNG HD103SJ 1AJ10001>         at scbus6 target 0 lun 0 (ada4,pass4)
<SAMSUNG HD103SJ 1AJ10001>         at scbus7 target 0 lun 0 (ada5,pass5)
<SAMSUNG HD103SJ 1AJ10001>         at scbus8 target 0 lun 0 (ada6,pass6)
<SAMSUNG HD103SJ 1AJ10001>         at scbus9 target 0 lun 0 (ada7,pass7)
<SAMSUNG HD103SJ 1AJ10001>         at scbus10 target 0 lun 0 (ada8,pass8)
<OCZ-VERTEX2 1.29>                 at scbus12 target 0 lun 0 (ada9,pass9)
<OCZ-VERTEX2 1.29>                 at scbus13 target 0 lun 0 (ada10,pass10)
Code:
# zpool status
  pool: pool1
 state: ONLINE
 scrub: none requested
config:

	NAME                STATE     READ WRITE CKSUM
	pool1               ONLINE       0     0     0
	  raidz2            ONLINE       0     0     0
	    label/rack-1:2  ONLINE       0     0     0
	    label/rack-1:3  ONLINE       0     0     0
	    label/rack-1:4  ONLINE       0     0     0
	    label/rack-1:5  ONLINE       0     0     0
	    label/rack-2:1  ONLINE       0     0     0
	    label/rack-2:2  ONLINE       0     0     0
	    label/rack-2:3  ONLINE       0     0     0
	    label/rack-2:4  ONLINE       0     0     0
	logs
	  mirror            ONLINE       0     0     0
	    gpt/ssd-1:1     ONLINE       0     0     0
	    gpt/ssd-2:1     ONLINE       0     0     0

errors: No known data errors

  pool: pool2
 state: ONLINE
 scrub: none requested
config:

	NAME              STATE     READ WRITE CKSUM
	pool2             ONLINE       0     0     0
	  label/rack-1:1  ONLINE       0     0     0

errors: No known data errors
I've partitioned the SSDs to serve as both mirrored ZIL and L2ARC devices, but I'm only using the ZIL partitions for the moment. I noticed a rather big performance hit when using them as both at the same time: about a 30% drop, in fact.
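For anyone curious, the layout was created along these lines. This is a from-memory sketch rather than a paste of my shell history; the label names match the `gpart` and `zpool status` output in this post:

```shell
# Sketch: one 16G ZIL partition plus the remainder for L2ARC on each SSD
gpart create -s gpt ada9
gpart add -t freebsd-zfs -l ssd-1:1 -s 16G ada9
gpart add -t freebsd-zfs -l ssd-1:2 ada9
# ...and the same for ada10, with labels ssd-2:1 and ssd-2:2

# Mirrored ZIL from the first partition on each SSD:
zpool add pool1 log mirror gpt/ssd-1:1 gpt/ssd-2:1

# The L2ARC half, currently left unused:
# zpool add pool1 cache gpt/ssd-1:2 gpt/ssd-2:2
```

Note that cache (L2ARC) devices cannot be mirrored; listing two just stripes reads across them, so the mirroring only applies to the log vdev.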
Code:
# gpart show ada9
=>       34  117231341  ada9  GPT  (55G)
         34         30        - free -  (15k)
         64   33554432     1  freebsd-zfs  (16G)
   33554496   83676879     2  freebsd-zfs  (39G)

# gpart show ada10
=>       34  117231341  ada10  GPT  (55G)
         34         30         - free -  (15k)
         64   33554432      1  freebsd-zfs  (16G)
   33554496   83676879      2  freebsd-zfs  (39G)
Now for local performance:
Code:
# dd if=/dev/random of=/tmp/test16GB.bin bs=1m count=16384

# dd if=/tmp/test16GB.bin of=/dev/zero bs=4096 seek=$RANDOM
17179869184 bytes transferred in 50.711092 secs (338779318 bytes/sec)

# dd if=/tmp/test16GB.bin of=/dev/gpt/ssd-1\:2 bs=4k seek=$RANDOM
17179869184 bytes transferred in 381.576081 secs (45023444 bytes/sec)

# dd if=/dev/zero of=/dev/gpt/ssd-1\:2 bs=4k seek=$RANDOM
42738441728 bytes transferred in 623.003697 secs (68600623 bytes/sec)

In comparison to:

# newfs -b 32768 /dev/gpt/ssd-1\:2
# mount /dev/gpt/ssd-1\:2 /mnt/ssd/
# dd if=/tmp/test16GB.bin of=/mnt/ssd/test16GB.bin bs=1m
17179869184 bytes transferred in 348.755907 secs (49260439 bytes/sec)

And also:

# dd if=/dev/zero of=/dev/gpt/ssd-1\:2 bs=1m
42842562048 bytes transferred in 187.442289 secs (228564015 bytes/sec)
So writing zeros is fine at 228MB/s, but writing actual random data, as in everyday use, tops out at about 45-50MB/s no matter how, when, or where I write it. Weird.
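One possible explanation (an assumption on my part, not something I've verified): the Vertex 2's SandForce controller compresses data on the fly, so writing all zeros measures the controller's compressor rather than the flash itself. Also, `dd` with `seek=$RANDOM` only picks a single starting offset and then writes sequentially from there, so none of the tests above are true random-write tests. Something like fio (benchmarks/fio in ports) would get closer; a sketch, with example sizes and runtime rather than my exact setup:

```shell
# Sketch of a genuine 4k random-write test against the L2ARC partition;
# --size and --runtime are illustrative values
fio --name=randwrite-test --filename=/dev/gpt/ssd-1\:2 \
    --rw=randwrite --bs=4k --size=8g \
    --direct=1 --ioengine=posixaio --runtime=60
```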
Accordingly, I get about that level of performance through NFS: roughly 30MB/s write speed.
Am I missing something? The SSDs' data sheet boasts 50,000 4k random-write IOPS, and I had expected to get at least 100MB/s NFS writes.
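As a quick sanity check on that expectation (assuming the data sheet means 4096-byte I/Os), the advertised IOPS figure translates to a theoretical throughput ceiling of:

```shell
# Data-sheet ceiling: 50,000 random-write IOPS at 4096 bytes per I/O
iops=50000
bs=4096
echo "$((iops * bs / 1000000)) MB/s"   # prints "204 MB/s"
```

So the advertised ceiling is around 204MB/s, which is why hoping for 100MB/s over NFS didn't seem unreasonable to me.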
/Sebulon