Reducing latency

Hi,

I recently came across these statistics on nameserver response times:
http://www.solvedns.com/dns-comparison/2016/04

I realized that the best I could do was 90 msec, on our colocated server without a real firewall in front.
The nameservers with our ISGs in front come in at about 100 to 120 msec response time (we're trying to reduce this by modifying some settings on the ISGs).

I was curious whether it's possible to reduce the response times further - either through configuration settings in BIND itself, or through FreeBSD sysctls.

The servers all run FreeBSD 10.3p2-amd64.
Our own servers are HP DL380 G9s, 6-core, bge(4) NIC, 16 GB RAM, UFS.
The rented server in the colocation is an E3-1245 v2 (quad-core), 16 GB RAM, zfsroot, em(4) NIC.


I've enabled cc_htcp in loader.conf (along with the DNS accept filter).
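
For reference, the loader.conf entries are roughly these (module names from memory, so double-check them):

Code:
# H-TCP congestion control module (selected via net.inet.tcp.cc.algorithm in sysctl.conf)
cc_htcp_load="YES"
# DNS accept filter, so named only gets woken up once a complete request has arrived
accf_dns_load="YES"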

I've enabled the following settings in sysctl.conf:

Code:
kern.ipc.shm_use_phys=1
kern.ipc.somaxconn=16384
kern.maxfiles=131072
kern.maxfilesperproc=104856
kern.threads.max_threads_per_proc=4096
net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.finwait2_timeout=15000
net.inet.tcp.msl=5000
machdep.panic_on_nmi=0
net.inet6.ip6.auto_flowlabel=0
security.bsd.see_other_gids=0
security.bsd.see_other_uids=0
net.inet.ip.portrange.hifirst=10000
security.bsd.unprivileged_proc_debug=0
net.inet.ip.redirect=0
net.inet6.ip6.redirect=0
net.inet.icmp.drop_redirect=1
net.inet6.icmp6.rediraccept=0
security.bsd.hardlink_check_uid=1
security.bsd.hardlink_check_gid=1
kern.coredump=0
kern.nodump_coredump=1
net.inet.ip.random_id=1
net.inet.ip.check_interface=1
net.inet.tcp.blackhole=1
net.inet.udp.blackhole=1
security.bsd.unprivileged_read_msgbuf=0
net.inet.tcp.cc.algorithm=htcp
net.inet.tcp.cc.htcp.adaptive_backoff=1
net.inet.tcp.cc.htcp.rtt_scaling=1
net.inet.tcp.syncache.rexmtlimit=1

(they're mostly from calomel.org).
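
On the BIND side I haven't really changed anything yet; the kind of options I had in mind are along these lines (authoritative-only servers, and these values are untested guesses on my part):

Code:
options {
        recursion no;              // authoritative-only, don't offer recursion
        minimal-responses yes;     // keep answers small so they're quicker to build and send
        zone-statistics no;        // skip per-zone counters
};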


Anyone got any more ideas?
Switching out a NIC isn't that easy in this case.
 
The latency largely depends on how many hops there are between the client and the server, what type of connections are involved (ATM has higher latency than Ethernet, for example), and so on. There isn't much you can do about that. And although signals travel at roughly the speed of light, it still takes time to cover the distance - about 3.3 microseconds per kilometer in a vacuum, and closer to 5 in fiber. Each router, bridge or media converter along the path adds a bit of latency on top of that.
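
As a rough back-of-the-envelope example (using about 5 µs per kilometer in fiber):

Code:
1,000 km of fiber  ~ 1,000 x 5 µs = 5 ms one way
round trip         ~ 10 ms, before a single router has added anything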
 
OK, so you say it's mostly a question of how good the peering of various ISPs is?
I really wonder how the top five get single-digit response times.
 
Hm. I know that anybody who is really "serious" about DNS needs anycast servers. We have too many domains to outsource to anybody - but I'm pretty sure it's not enough to justify building our own anycast network. I'd love to have one, just for the kick of being able to play with it and to cross it off the bucket-list ;-)
 
See if you can get your ISP or one of those DNS providers to slave your domains. That way you still have control over the domain but they will handle the requests.
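
On your side that's mostly just a matter of allowing zone transfers to them and notifying them; something along these lines, where the zone name, file path and addresses are all placeholders for whatever the provider gives you:

Code:
zone "example.com" {
        type master;
        file "master/example.com";
        also-notify { 192.0.2.53; };      // provider's transfer server (placeholder address)
        allow-transfer { 192.0.2.53; };
};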
 
Well, we are the ISP in this case ;-)
We just don't specialize in DNS. We have about 4k or 5k domains, so I'm not sure it's worth setting up an anycast network of our own. We've also never done that, and AFAIK we've never been asked to (until very recently).

That said, some of the domains are actually quite important (in the sense that the websites are the only business of those particular customers, generating substantial amounts of money), and it would be a serious problem if someone DDoSed our DNS servers.
I was also thinking of just having those domains slaved by someone with a large anycast infrastructure.

Fun fact: I have pre-ordered the paper version of this book:
http://shop.oreilly.com/product/0636920034148.do
originally slated for release in December 2014 - it's now supposed to come out in late August this year - but the date has been pushed back more often than Cricket and Liu have released updates to their classic BIND bible...
The author owns a large DNS provider, so at least he has an excuse.
;-)
 
Interesting book. Please keep us updated on it. I'm currently contracted to a company that deals mostly in SaaS solutions, and I have to manage 600+ Linux machines (the horror!). We have several DNS servers as part of the infrastructure. Most of them are simple caching resolvers, but we also have a bunch of authoritative masters hosting customer domains. It's all running fine, but I'm always interested in knowing how we could improve the infrastructure :)
 