Performance of Raidz2 (6 SAS)

Hi all, I'm using six SAS disks in a raidz2 pool, but when I ran a speed test today the throughput seemed lower than expected. I'm on FreeBSD 9.1. Is there anything I can do to improve ZFS performance?

Code:
root@www:~ # time mkfile 10g test
0.007u 8.741s 0:53.06 16.4%     5+1471k 7+81922io 0pf+0w
root@www:~ # time dd if=test of=/dev/null bs=100M
102+1 records in
102+1 records out
10737418240 bytes transferred in 45.998255 secs (233430991 bytes/sec)
0.000u 8.807s 0:46.01 19.1%     25+2763k 82268+1io 0pf+0w
root@www:~ # time dd if=test of=/dev/null bs=100M
102+1 records in
102+1 records out
10737418240 bytes transferred in 41.781585 secs (256989251 bytes/sec)
0.000u 8.233s 0:41.79 19.6%     25+2763k 81920+1io 0pf+0w
root@www:~ # time dd if=test of=/dev/null bs=10M
1024+0 records in
1024+0 records out
10737418240 bytes transferred in 43.710727 secs (245647213 bytes/sec)
0.007u 7.271s 0:43.71 16.6%     25+2801k 81920+1io 0pf+0w
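For reference, the pool was created along these lines (the pool name and device names here are placeholders, not necessarily my exact ones):

Code:
# six SAS disks in a single raidz2 vdev; "tank" and da0-da5 are
# placeholder names for the real pool and devices
zpool create tank raidz2 da0 da1 da2 da3 da4 da5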
 
meteor8488 said:
Hi all, I'm using six SAS disks in a raidz2 pool, but when I ran a speed test today the throughput seemed lower than expected. I'm on FreeBSD 9.1. Is there anything I can do to improve ZFS performance?
Can you provide some info on the drive models and controller(s) being used, as well as the amount of memory and CPU model in the system?

The first result below is from before the cache was primed, the second from after. Since your equivalent tests showed the same number of I/O operations on both runs, I'd first investigate whether you're allocating enough memory to ZFS. In my case there is no separate L2ARC device; the results come entirely from the pool plus main memory.

Code:
(0:1) rz3:/sysprog/terry# time dd if=/data/test of=/dev/null bs=100m
102+0 records in
102+0 records out
10695475200 bytes transferred in 18.609073 secs (574745193 bytes/sec)
0.000u 5.402s 0:18.61 29.0%     26+1518k 81600+0io 0pf+0w

(0:2) rz3:/sysprog/terry# time dd if=/data/test of=/dev/null bs=100m
102+0 records in
102+0 records out
10695475200 bytes transferred in 3.159562 secs (3385113134 bytes/sec)
0.000u 3.150s 0:03.16 99.6%     26+1507k 0+0io 0pf+0w
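If you want to see how much memory the ARC is actually getting, something like this should work on 9.x (these are the stock FreeBSD sysctl names; the 24G figure is only an example, not a recommendation):

Code:
# current ARC size and its ceiling, in bytes
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
# the boot-time tunable that caps the ARC
sysctl vfs.zfs.arc_max
# to change the cap, put a line like this in /boot/loader.conf and reboot:
#   vfs.zfs.arc_max="24G"

The sysutils/zfs-stats port also gives a readable ARC summary via zfs-stats -A.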
 
Terry_Kennedy said:
Can you provide some info on the drive models and controller(s) being used, as well as the amount of memory and CPU model in the system?

... Since your equivalent tests showed the same number of I/O operations on both runs, I'd first investigate whether you're allocating enough memory to ZFS.

My server has two 16-core Xeon CPUs and 32 GB of memory, and it's using a Dell H710 RAID card. I haven't changed any ZFS settings in rc.conf or via sysctl.
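If it helps, I can pull the exact controller and drive details with something like this (assuming the H710 attaches via the mfi(4) driver):

Code:
# pool layout as ZFS sees it
zpool status
# CPU, memory, and the current ARC ceiling
sysctl hw.model hw.ncpu hw.physmem vfs.zfs.arc_max
# controller and physical drive details for an mfi(4)-attached H710
mfiutil show adapter
mfiutil show drives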
 