Hey You Guys,
I'm posting here because I spent a little over four years working on a firewall problem, and I ultimately found a solution which relies on FreeBSD's PF firewall as the engine. Now the funny thing is that I haven't actually implemented the solution with PF yet, but that is only because the final steps seem trivial and my immediate need to work on it went away. The final solution is therefore not empirically proven, but I feel safe taking it for granted because I trust that PF's tables work the way the documentation claims they do. My development testing was done on a SonicWall TZ300, which worked great under low load but was prone to glitching under higher load because its internal DNS cache implementation was not well integrated with its "IP tables" implementation.
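For anyone unfamiliar with PF tables, the relevant feature is that a table referenced by a rule can be modified at runtime with pfctl, without reloading the ruleset. A sketch of what I expect the PF side to look like (untested, and the interface name and table name are just placeholders):

```pf
ext_if = "em0"                 # assumed external interface, adjust to taste
table <allowed_out> persist    # starts empty; the DNS proxy fills it in

block out on $ext_if all                             # deny-by-default outbound
pass out on $ext_if to <allowed_out> keep state      # except destinations in the table

# The proxy would push updates at runtime, no ruleset reload needed:
#   pfctl -t allowed_out -T add 203.0.113.7
#   pfctl -t allowed_out -T replace -f /var/db/allowed_out.txt
```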
Four years of concentrated effort is a lot for me. I went through three generations of experimental code before I fully understood the problem and the essential solution was clear. Someone with more prior knowledge of DNS would have gotten there quicker, but I feel the technique deserves wide consideration. I did my fair share of research on forums looking for solutions before I committed to four years of development, but I couldn't even find anyone asking the same questions -- which is usually a good sign that your thinking is off-track, but in my estimation I was on the right track, and the sysadmins whose threads I searched were simply not concerned enough with the problem.
The problem is this: technology has unpatched security vulnerabilities at every software and hardware level, and the only real defenses are isolation, segmentation, and deny-by-default. When the internet was simpler, it was always possible, practical in many cases, and necessary in some, to configure your firewall to deny all outbound connections by default, excepting specific hosts. The tactical reasoning for this is obvious when you consider that machines on your network will absolutely be executing virus code and attempting to "phone home". The industry-standard antivirus solutions may work as advertised in many cases, but these solutions are naive, and there is a world of difference between a solution that always works and one that seems to work most of the time, as long as patches and antivirus definitions are up to date -- especially considering that a properly crafted virus is never detected at all.
Ah, let's not get off-track. Patches should remain highly important and necessary until a future time when AI is able to generate perfect software.
The immediate issue preventing me from using my off-the-shelf SonicWall in a deny-by-default configuration is that it doesn't work when you specify the allowed remote hosts by their FQDNs instead of their IP addresses. But neither approach is practical on its own, because everything important is hosted on cloud networks whose IP addresses change every 20 seconds or so, and every machine on my network, including the firewall itself, gets a different IP for the same FQDN. So I wrote a DNS proxy in Perl that sits in front of Unbound to smooth out the churn of the cloud networks' load balancing, aggregate definitive IP tables for groups of FQDNs, and push table updates to the firewall immediately, so that the default deny rule never applies to connections which should be allowed. Additionally, I implemented a DNS firewall in the same process, which was a natural thing to do because it already had the scope and the position in the network topology to handle that role too.
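The core of the aggregation step is simple enough to sketch. My implementation is in Perl, but here is the idea in Python, with every name invented for illustration: for each FQDN, accumulate every address ever seen in an answer, and emit only the addresses not yet pushed to the firewall table, so the push can happen before the client acts on the DNS reply.

```python
# Hypothetical sketch of the proxy's aggregation step (names are invented).
# group_table maps fqdn -> set of every IP ever seen for it; pushed is the
# set of IPs already loaded into the firewall table.

def aggregate_updates(group_table, fqdn, answers, pushed):
    """Record the latest A-record answers and return the IPs that still
    need to be pushed to the firewall before the client connects."""
    seen = group_table.setdefault(fqdn, set())
    seen.update(answers)           # tables only grow; old IPs stay allowed
    new_ips = seen - pushed
    pushed.update(new_ips)
    return sorted(new_ips)

pushed = set()
table = {}
# first lookup: both addresses are new, both get pushed
print(aggregate_updates(table, "api.example.com", {"192.0.2.10", "192.0.2.11"}, pushed))
# → ['192.0.2.10', '192.0.2.11']
# a later lookup returns one old and one new address; only the new one is pushed
print(aggregate_updates(table, "api.example.com", {"192.0.2.10", "192.0.2.12"}, pushed))
# → ['192.0.2.12']
```

The "grow-only" behavior is deliberate: a load balancer that rotates back to an earlier address must not trigger a window where the default deny rule bites.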
The deficiency in the SonicWall's (and probably most other firewalls') internal routines causes problems today for just about any rule that hinges on FQDNs, but the problem only becomes obvious when users make it obvious because they can't get on their websites. I think user complaints can scare sysadmins more than breaches do, which makes it seem natural that the industry has avoided pursuing a difficult solution to this difficult problem.
A bit about my particular implementation. At present, my Perl DNS proxy downloads its FQDN groupings from the SonicWall via SSH on startup, but if I shift to PF, I would store the configuration either in a .conf file or in a SQLite database. The process is multi-threaded and also hosts an HTTPS web interface for remote administration. In the future it will support failover and high availability.
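For the SQLite option, the schema could be as small as one table mapping group names to FQDNs, where each group name corresponds to a firewall table. A hypothetical sketch (all table and column names invented), again in Python for illustration:

```python
# Hypothetical SQLite layout for the FQDN groupings; a real deployment
# would open a file path instead of an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE fqdn_group (
    group_name TEXT NOT NULL,   -- maps 1:1 onto a firewall table name
    fqdn       TEXT NOT NULL,
    UNIQUE (group_name, fqdn))""")
rows = [("allowed_out", "api.example.com"),
        ("allowed_out", "cdn.example.net"),
        ("dns_block",   "tracker.example.org")]
conn.executemany("INSERT INTO fqdn_group VALUES (?, ?)", rows)

# On startup the proxy would load one set of FQDNs per firewall table:
for (name,) in conn.execute("SELECT DISTINCT group_name FROM fqdn_group"):
    fqdns = [f for (f,) in conn.execute(
        "SELECT fqdn FROM fqdn_group WHERE group_name = ?", (name,))]
    print(name, fqdns)
```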