BIND9 Forwarding & Caching Server

Hi there,

I am trying to set up a system whereby we have two DNS servers, one on-site and one off-site.

The off-site one will be hosted in a data centre with a high-bandwidth internet connection and will be a recursive, non-caching BIND9 setup.

The on-site server should forward all DNS requests to the off-site server and cache the results; also, if the off-site server is not available, it should do a recursive lookup itself.

The problem I have is that I don't know how to set this up. I have read through most of the BIND section of the FreeBSD Handbook, as well as BIND's own user manual, and I can't find any way of changing the way BIND caches.

Can I configure BIND9 to check its cache first and then, if it doesn't have the answer, forward the request on to another server?

Thanks in advance.
 
I don't understand why the off-site one, which has a high-bandwidth connection, should not cache the results it finds.
  1. It is not friendly to all the other nameservers it is consulting for the recursive lookups.
  2. Caching minimizes the megabytes you consume. You pay for those megabytes, especially if they exceed a certain limit, don't you?

So IMHO both nameservers should be caching.
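For what it's worth, the off-site box needs very little: recursion switched on, and restricted so only your own machines can use it. A minimal named.conf sketch, where 192.0.2.1 is just a placeholder for your on-site server's real address:
Code:
options {
        recursion yes;
        // only our own on-site server may use this resolver;
        // 192.0.2.1 is a placeholder, put the real address here
        allow-recursion { 192.0.2.1; };
};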

Can I configure BIND9 to check its cache first and then, if it doesn't have the answer, forward the request on to another server?
This is the default behaviour.
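BIND only goes out on a cache miss. For the forwarding part, something like this in the on-site server's named.conf should do, where 203.0.113.10 is a placeholder for your off-site server's address:
Code:
options {
        recursion yes;
        // send cache misses to the off-site resolver first;
        // 203.0.113.10 is a placeholder for its real address
        forwarders { 203.0.113.10; };
        // 'first' means: try the forwarder, and fall back to
        // normal recursion if it does not respond
        forward first;
};
With forward only; instead, the server would never recurse on its own, so forward first; gives you the fallback behaviour you asked about.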

To see the cache-first behaviour at work, just look at the query time of the following:
Code:
$ dig www.tnt.com

; <<>> DiG 9.4.2-P2 <<>> www.tnt.com
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58841
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.tnt.com.                   IN      A

;; ANSWER SECTION:
www.tnt.com.            1800    IN      CNAME   www.tnt.com.edgesuite.net.
www.tnt.com.edgesuite.net. 1800 IN      CNAME   a1939.g.akamai.net.
a1939.g.akamai.net.     20      IN      A       82.94.229.11
a1939.g.akamai.net.     20      IN      A       82.94.229.19

;; Query time: 614 msec
;; SERVER: 192.168.222.10#53(192.168.222.10)
;; WHEN: Sun Feb 28 21:20:58 2010
;; MSG SIZE  rcvd: 129
Note the query time of 614 msec. Clearly the answer was not found in the cache.
Code:
$ dig www.tnt.com 

; <<>> DiG 9.4.2-P2 <<>> www.tnt.com
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45798
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.tnt.com.                   IN      A

;; ANSWER SECTION:
www.tnt.com.            1792    IN      CNAME   www.tnt.com.edgesuite.net.
www.tnt.com.edgesuite.net. 1792 IN      CNAME   a1939.g.akamai.net.
a1939.g.akamai.net.     12      IN      A       82.94.229.11
a1939.g.akamai.net.     12      IN      A       82.94.229.19

;; Query time: 2 msec
;; SERVER: 192.168.222.10#53(192.168.222.10)
;; WHEN: Sun Feb 28 21:21:06 2010
;; MSG SIZE  rcvd: 129
The query time of the repeated request, made 8 seconds later, is only 2 msec, so it was clearly answered from the cache.
Notice that the TTL of 1800 from the first query has been decremented by 8 seconds to 1792. So for another 1792 seconds the nameserver will keep this answer cached.
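If you want to watch the TTL counting down yourself, a quick shell loop will do; this just repeats the same query from the example above every 5 seconds:
Code:
$ while true; do dig +noall +answer www.tnt.com; sleep 5; done
The +noall +answer options suppress everything except the answer section, so the decreasing TTL in the second column is easy to follow.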

You can also run tcpdump to watch the DNS traffic; that way you can really see what is happening.
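For example, something like this, where em0 is a placeholder for your actual interface:
Code:
# tcpdump -n -i em0 port 53
The -n keeps tcpdump from doing reverse lookups of its own, which would otherwise add extra DNS traffic to the very thing you are watching.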
 
Thanks for the feedback.

The idea is that the on-site server has a low-bandwidth internet connection; I need to make every effort to reduce bandwidth usage as much as possible, hence why I am looking into this.

We already have an off-site server I can use. My plan was to have the on-site server answer the request from its cache; if it doesn't have the answer, it "passes the burden" on to the higher-bandwidth off-site server, which does all of the hard work and recursively answers the query. It then passes the answer back to the on-site server, which caches it and passes it on to the client.

I was under the impression that if you forward to a caching DNS server, you only want one server to be caching, otherwise it will increase the amount of time it takes for DNS changes to be "seen". I.e. if the domain example.com has a TTL of 10800 (3 hours) and the first server caches it, then when the second server asks the first server for example.com, the second server caches the answer with a TTL of 10800. Let's say the second server cached its result 2 hours after the first server did. That would mean that if the owner of example.com changed the DNS record a couple of minutes after server 1 cached it, the change would be seen by server 2 five hours later, instead of the 3 hours specified by the owner of the domain.

If I'm wrong about this and the second server would cache the decremented TTL value, then that would make my life a lot easier, and I would understand why you think both servers should be caching.


@DutchDaemon

Probably the main reason is that I have only ever set up BIND, admittedly just to host zones, never anything like this. However, we will probably use this to serve our internal domain as well.
 
As far as I know, the dig program issues the same DNS requests as a BIND server does. ;)

You have seen how my recursive resolving nameserver, which BTW is dnscache, answered my second dig request with a decreased TTL of 1792, not with the original 1800.

You now know how to test, so configure the two servers and report here whether your impression or my impression is correct ;)
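For example, query each server directly and compare the TTLs in the answers; both addresses here are placeholders for your real ones:
Code:
$ dig @203.0.113.10 A example.com
$ dig @192.0.2.1 A example.com
If the second answer shows a TTL lower than the first by roughly the time between the two queries, the counted-down TTL is being passed along.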
 
As far as I know, a second caching server will simply inherit the (counted-down) TTL from the first caching server. So if the second server gets a TTL of 1000 seconds on a query to the first server (even though the original record had an 86400 TTL), it will simply inherit the 1000 TTL, even if its own caching settings would allow a 10800 TTL.

Example:

first caching nameserver retrieves 'original record':
Code:
$ dig A www.xs4all.nl
www.xs4all.nl.          28800   IN      A       194.109.6.92

second caching nameserver retrieves the same record from the first caching nameserver:
Code:
$ dig @192.168.2.1 A www.xs4all.nl
www.xs4all.nl.          28786   IN      A       194.109.6.92

The TTL lost 14 seconds in the meantime.
 