slow/erratic downloads from fileserver

Hi, I'm relatively new to FreeBSD proper (I've used NAS4Free before) and have set up a fileserver for a friend, serving downloads from a ZFS zpool. The files range from a few hundred KB to a few gigabytes, with the majority of the use tending towards the larger files.

The server is running FreeBSD 10.2 with the nginx webserver using AIO and sendfile. Users are complaining of very poor download speeds, and in my own testing I have noticed a download start off high (~3 MB/s), then deteriorate to around 25 kB/s before working its way back up, sometimes more than once.
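For reference, the relevant nginx directives are along these lines (illustrative, not my exact config):

Code:
sendfile   on;
aio        on;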

The networking hardware appears to be an Intel PRO/1000 using the em(4) driver. A dd test on the zpool suggests it's nowhere near being strained, and iftop shows the network barely being used. Downloading to another server I've been able to get 101 MB/s, so the capability is certainly there.
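The dd test was something along these lines (the file path is just a placeholder):

Code:
# sequential read from the pool, to rule out the disks
dd if=/tank/downloads/somefile.bin of=/dev/null bs=1M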

Here are the TCP networking sysctls; is there anything that stands out as wrong?

Code:
sysctl net.inet.tcp
net.inet.tcp.rfc1323: 1
net.inet.tcp.mssdflt: 536
net.inet.tcp.keepidle: 7200000
net.inet.tcp.keepintvl: 75000
net.inet.tcp.sendspace: 65535
net.inet.tcp.recvspace: 65535
net.inet.tcp.keepinit: 75000
net.inet.tcp.delacktime: 100
net.inet.tcp.v6mssdflt: 1220
net.inet.tcp.nolocaltimewait: 0
net.inet.tcp.maxtcptw: 27767
net.inet.tcp.per_cpu_timers: 0
net.inet.tcp.v6pmtud_blackhole_mss: 1220
net.inet.tcp.pmtud_blackhole_mss: 1200
net.inet.tcp.pmtud_blackhole_failed: 0
net.inet.tcp.pmtud_blackhole_activated_min_mss: 0
net.inet.tcp.pmtud_blackhole_activated: 0
net.inet.tcp.pmtud_blackhole_detection: 0
net.inet.tcp.rexmit_drop_options: 0
net.inet.tcp.keepcnt: 8
net.inet.tcp.finwait2_timeout: 60000
net.inet.tcp.fast_finwait2_recycle: 0
net.inet.tcp.always_keepalive: 1
net.inet.tcp.rexmit_slop: 200
net.inet.tcp.rexmit_min: 30
net.inet.tcp.msl: 30000
net.inet.tcp.syncache.rst_on_sock_fail: 1
net.inet.tcp.syncache.rexmtlimit: 3
net.inet.tcp.syncache.hashsize: 512
net.inet.tcp.syncache.count: 0
net.inet.tcp.syncache.cachelimit: 15375
net.inet.tcp.syncache.bucketlimit: 30
net.inet.tcp.syncookies_only: 0
net.inet.tcp.syncookies: 1
net.inet.tcp.soreceive_stream: 0
net.inet.tcp.isn_reseed_interval: 0
net.inet.tcp.icmp_may_rst: 1
net.inet.tcp.pcbcount: 534
net.inet.tcp.do_tcpdrain: 1
net.inet.tcp.tcbhashsize: 524288
net.inet.tcp.log_debug: 0
net.inet.tcp.minmss: 216
net.inet.tcp.sack.globalholes: 23
net.inet.tcp.sack.globalmaxholes: 65536
net.inet.tcp.sack.maxholes: 128
net.inet.tcp.sack.enable: 1
net.inet.tcp.reass.overflows: 0
net.inet.tcp.reass.cursegments: 0
net.inet.tcp.reass.maxsegments: 255000
net.inet.tcp.sendbuf_max: 2097152
net.inet.tcp.sendbuf_inc: 8192
net.inet.tcp.sendbuf_auto: 1
net.inet.tcp.tso: 1
net.inet.tcp.path_mtu_discovery: 1
net.inet.tcp.recvbuf_max: 2097152
net.inet.tcp.recvbuf_inc: 16384
net.inet.tcp.recvbuf_auto: 1
net.inet.tcp.insecure_rst: 0
net.inet.tcp.ecn.maxretries: 1
net.inet.tcp.ecn.enable: 0
net.inet.tcp.abc_l_var: 2
net.inet.tcp.rfc3465: 1
net.inet.tcp.experimental.initcwnd10: 1
net.inet.tcp.rfc3390: 1
net.inet.tcp.rfc3042: 1
net.inet.tcp.drop_synfin: 0
net.inet.tcp.delayed_ack: 1
net.inet.tcp.blackhole: 0
net.inet.tcp.log_in_vain: 0
net.inet.tcp.hostcache.purge: 0
net.inet.tcp.hostcache.prune: 300
net.inet.tcp.hostcache.expire: 3600
net.inet.tcp.hostcache.count: 0
net.inet.tcp.hostcache.bucketlimit: 30
net.inet.tcp.hostcache.hashsize: 512
net.inet.tcp.hostcache.cachelimit: 0
net.inet.tcp.cc.htcp.rtt_scaling: 0
net.inet.tcp.cc.htcp.adaptive_backoff: 0
net.inet.tcp.cc.available: newreno, htcp
net.inet.tcp.cc.algorithm: htcp

Many thanks for any help you can offer
 
Try turning off the sendfile option in nginx. Sometimes sendfile improves things, sometimes it doesn't. Doesn't hurt to try either way ;)
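If it helps, the toggle is just this in the http or server block of nginx.conf, followed by a reload:

Code:
sendfile off;
# then: service nginx reload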
 
How big are the files you're serving? You may want to try tuning the ZFS recordsize (downward) on the dataset where the files are stored.
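Something like this, assuming the files live on a dataset called tank/downloads (the name is just an example, and recordsize only affects files written after the change):

Code:
zfs get recordsize tank/downloads       # default is 128K
zfs set recordsize=64K tank/downloads   # only applies to newly written files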
 
We are testing sendfile settings and I will report back after we've had time to see what happens.

Jeckt: the files vary in size from hundreds of kilobytes to three or four gigabytes and all sorts in between.
 
Changing the sendfile setting has had no noticeable effect; connections mostly seem to be capping out at around 8 Mbit/s according to iftop. Sometimes they go higher, but not often. iperf managed 217 Mbit/s, and I've managed to get around that downloading to another server, but clients all seem to be getting bad speeds with very few exceptions.
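For reference, the iperf run was along these lines (classic iperf 2; the hostname is a placeholder):

Code:
iperf -s                                # on the fileserver
iperf -c fileserver.example -t 30 -i 5  # from the remote test machine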
 