So, I've been looking around for a few days (months, really, but I never got around to sitting down and troubleshooting properly) and haven't found anything that applies to my setup/configuration, so I thought I'd ask for any advice folks have regarding ZFS read performance. My sequential read performance is pretty bad, hovering around 50 MB/s for any non-cached read. Cached reads, as you might guess, are extremely fast, but my main use for this storage system is copying large files back and forth. I've tried troubleshooting but haven't found a solution. Writes are faster than reads, hovering in the 100+ MB/s range, which is more than good enough for me since I'm limited by Gigabit Ethernet anyway.
My pool is a v28 raidz2 of 5 drives, each 2TB. In case it might make a difference, four of the drives are SAMSUNG HD204UI 1AQ10001, the last one is SAMSUNG HD203WI 1AN10002.
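For context on why 50 MB/s feels low to me: raidz2 serves full-stripe reads from the data disks only, so as a rough sketch (assuming ~100 MB/s streaming per drive, which I haven't actually measured on these Samsungs) the naive ceiling works out to:

```shell
# Back-of-envelope ceiling for sequential reads on a 5-disk raidz2.
# ASSUMPTION: each 2TB drive streams roughly 100 MB/s on large
# sequential reads -- a typical figure for this class of disk,
# not a measured value.
DISKS=5
PARITY=2            # raidz2 dedicates two disks' worth of space to parity
PER_DISK_MBS=100

# Full-stripe reads come off the (DISKS - PARITY) data disks,
# so the naive upper bound in MB/s is:
echo $(( (DISKS - PARITY) * PER_DISK_MBS ))
```

So even with pessimistic per-disk numbers I'd expect sequential reads to be several times faster than the 50 MB/s I'm seeing.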
I used iozone as my benchmarking utility and ran three different tests, varying the number of threads and the file size. The configuration I see on a lot of forums (32 threads, file size 40960) performs well, but since it doesn't match how I actually use the drives, I also tested 5 threads with 1G files and 1 thread with a 10G file.
Here are the three sets of tests:
http://pastebin.com/wpfDrLN9
http://pastebin.com/0jFSxB4N
http://pastebin.com/QJhW6kz5
One thing I'll note: on the third test (1 thread, 10G file), gstat never showed the %busy of any drive in the zpool going past 30% during read operations, while write operations pushed the %busy values to 100% (in bursts, as expected).
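In case it helps, the three tests were roughly of the following shape. These invocations are reconstructions rather than a paste of my shell history; iozone's `-s` takes KB by default, and the 128k record size here is an assumption, not necessarily what the pastebins used.

```shell
# Test 1: 32 threads, 40960 (KB) files -- the config seen on many forums
iozone -i 0 -i 1 -t 32 -s 40960 -r 128k

# Test 2: 5 threads, 1G file per thread
iozone -i 0 -i 1 -t 5 -s 1g -r 128k

# Test 3: single thread, one 10G file (sequential write, then read/reread)
iozone -i 0 -i 1 -s 10g -r 128k
```

(`-i 0` is the sequential write test, `-i 1` the read/reread test; `-t` switches iozone into throughput mode with one file per thread.)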
Part of my dmesg:
Code:
FreeBSD 9.0-RELEASE-p3 #0: Tue Jun 12 02:52:29 UTC 2012
root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
CPU: Intel(R) Pentium(R) Dual CPU E2160 @ 1.80GHz (1804.13-MHz K8-class CPU)
Origin = "GenuineIntel" Id = 0x6fd Family = 6 Model = f Stepping = 13
Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,
DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
Features2=0xe39d<SSE3,DTES64,MON,DS_CPL,EST,TM2,SSSE3,CX16,xTPR,PDCM>
AMD Features=0x20100800<SYSCALL,NX,LM>
AMD Features2=0x1<LAHF>
TSC: P-state invariant, performance statistics
real memory = 9395240960 (8960 MB)
avail memory = 8235347968 (7853 MB)
Relevant line in /etc/sysctl.conf:
Code:
kern.maxvnodes=250000
Relevant lines in /boot/loader.conf:
Code:
vfs.zfs.prefetch_disable="1"
vfs.zfs.txg.timeout="5"
vm.kmem_size=9G
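In case anyone wants to compare against their own box, the running values of those tunables can be read back with sysctl once the system is up, to confirm loader.conf actually took effect:

```shell
# Read back the live values of the tunables set above
sysctl vfs.zfs.prefetch_disable
sysctl vfs.zfs.txg.timeout
sysctl vm.kmem_size
sysctl kern.maxvnodes
```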
Any help would be greatly appreciated.