Firewall gateway on the server itself?

Hi all,

The classic topology is to put the firewall router on a separate box. However, I would like some technical insight into whether, and why, it would be a bad idea to run a network firewall on the same physical machine as an intranet server.

Given a server with three NICs, this would be the target setup:
  • NIC 1: "raw" internet in, "external" IP, connects to an ADSL modem or similar
  • NIC 2: filtered traffic out, internal IP, serves as the internal default gateway, connects directly to the intranet switch
  • NIC 3: server's internal NIC, internal IP, connects to the intranet switch
Please point out real weak points, if any, rather than just declaring this a no-go setup. In my opinion it comes down to proper configuration. What do you think?
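For illustration, the intended layout could be sketched in pf.conf roughly like this (the interface names em0/em1/em2 and the service ports are my assumptions, not part of the setup above):

```shell
# Sketch only -- interface names and ports are hypothetical.
ext_if = "em0"   # NIC 1: raw internet in (ADSL modem), external IP
int_if = "em1"   # NIC 2: filtered traffic out to the intranet switch
srv_if = "em2"   # NIC 3: the server's own internal interface

# NAT the intranet out through the external interface
nat on $ext_if from $int_if:network to any -> ($ext_if)

block in all                # default deny on every interface
pass out all keep state     # stateful outbound traffic

# intranet clients may reach the outside world
pass in on $int_if from $int_if:network to any keep state

# the server's own services are reached via its dedicated NIC
pass in on $srv_if proto tcp from $int_if:network \
    to ($srv_if) port { smtp, imap } keep state
```

A real ruleset would of course need antispoof rules, ICMP handling, and so on; this only shows how the three interfaces map onto the rules.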

Thanks in advance!
 
Ideally you'd split functionality across separate machines, but there's nothing wrong with putting everything on one box if that's all you have.

The only real downside is when they hack the box. Then they'll have the keys to your entire network.
 
Yes, hacking the box is indeed a downside. However, if someone manages to hack the firewall, it is only a matter of time before the server is hacked too, given that both run FreeBSD and assuming the entry point wasn't a bug or misconfiguration in the firewall software itself.

But yes, chances are the server would resist the attack even without the firewall's protection.
 
A firewall only needs two interfaces, or three if you want a DMZ. It's not clear what the last interface in that list above is supposed to be doing, but it's probably not necessary.
 
I wouldn't go there personally.

For several reasons:
  • Application servers require more installed software than basic firewalls (which can get by with zero ports installed). More software means more to exploit, and also more to patch.
  • Increased urgency to patch, because the machine faces the outside; this means more frequent internal service downtime (the services run on the same box).
  • Every time you patch the box there is an increased chance something will break, due to more complex software dependencies, further extending downtime.
  • Performance: a firewall has to process every packet hitting the box.
  • A DDoS on your firewall will impact productivity apps hosted on the same box.

IMHO, the "savings" due to running one less machine are more than outweighed by the additional complexity and risk.

In short: keeping your firewall as simple as possible reduces the chance of unintentional exposure due to configuration error and makes patching far less complex and less risky.

If you can make your external firewall a different OS (to prevent the same exploit being able to compromise both your edge and your inside hosts), even better.
 
wblock@ said:
A firewall only needs two interfaces, or three if you want a DMZ. It's not clear what the last interface in that list above is supposed to be doing, but it's probably not necessary.

Yes, sure. With the third one I wanted to stress that we are "merging" two machines into one; in a separate-machine scenario the third interface would physically belong to the server.

By the way, in this situation using three NICs wouldn't be a bad idea at all. In my opinion this would make the firewall configuration more readable and less error prone.

throAU said:
I wouldn't go there personally.

For several reasons:
Your input is good, and these are actually the points I wanted to discuss with you guys. Here are some comments on them:

throAU said:
  • Application servers require more installed software than basic firewalls (which can get by with zero ports installed). More software means more to exploit, and also more to patch.
Yes and no. Despite running more software, from the outside world's point of view only one piece of software is listening on the open ports: the firewall software, just as on any firewall-only device. Everything else consumes CPU and RAM but is effectively non-existent until after the machine is hacked. So "more to exploit" doesn't really apply; there is only ever the firewall to exploit. "More to patch" isn't true either: a security hole or other update HAS to be fixed regardless, and IMHO this is more straightforward on one machine than on two.

throAU said:
  • Increased urgency to patch, because the machine faces the outside; this means more frequent internal service downtime (the services run on the same box).
This is absolutely true.

throAU said:
  • Every time you patch the box there is an increased chance something will break, due to more complex software dependencies, further extending downtime.
Maybe... If something breaks, even if it is only the firewall, you have the downtime anyway, unless you leave the server running and accessible to users.

throAU said:
  • Performance: a firewall has to process every packet hitting the box.
Performance is actually an advantage of running only one machine. This way one could scale the machine better by adding RAM or CPUs (or replacing it altogether) than by trying to balance two boxes with possibly fluctuating loads and demands.

throAU said:
  • A DDoS on your firewall will impact productivity apps hosted on the same box.
Yes, this is true. But you could monitor the load and "pull the plug" before critical levels are reached. By that I mean blocking all incoming connections unconditionally, which of course means the DDoS was successful, but the server could still be used. With a separate firewall it is actually the same once a DDoS exceeds your hardware and network capacity. Or am I completely wrong?
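To make the "pull the plug" idea concrete, one possibility (a sketch assuming pf, with hypothetical interface names and file paths) is to keep a second, drastic ruleset ready and load it when the load becomes critical:

```shell
# /etc/pf.panic.conf -- hypothetical emergency ruleset
ext_if = "em0"
int_if = "em1"

block in on $ext_if all     # drop all outside traffic, no further filtering
pass in on $int_if all      # the intranet keeps working against the server
pass out all keep state

# Load manually (or from a load-monitoring script):
#   pfctl -f /etc/pf.panic.conf
# Revert once the attack subsides:
#   pfctl -f /etc/pf.conf
```

An unconditional block is about the cheapest rule pf can evaluate, so the per-packet cost stays flat no matter how large the flood is.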

throAU said:
IMHO, the "savings" due to running one less machine are more than outweighed by the additional complexity and risk.

In short: keeping your firewall as simple as possible reduces the chance of unintentional exposure due to configuration error and makes patching far less complex and less risky.

If you can make your external firewall a different OS (to prevent the same exploit being able to compromise both your edge and your inside hosts), even better.

My intention is by no means to save on hardware. We would run both the server and the firewall on FreeBSD, so if something needs updating it would in fact be better to fix one machine instead of two (especially when you have several/dozens/hundreds of these combos). As for simplicity, IMHO fixing one machine instead of two is less error prone (there is only one running OS) and usually faster. Does anybody agree with this?
 
vanessa said:
Yes and no. Despite running more software, from the outside world's point of view only one piece of software is listening on the open ports: the firewall software, just as on any firewall-only device.

I believe @throAU was referring to software "ports" (installed applications), not network ports. Fewer installed applications mean fewer possibilities for exploits.

vanessa said:
Performance is actually an advantage of running only one machine. This way one could scale the machine better by adding RAM or CPUs (or replacing it altogether) than by trying to balance two boxes with possibly fluctuating loads and demands.

The performance requirements of a stand-alone firewall/NAT box are minimal. Many people use a minimalist OS like pfSense running on embedded hardware such as a Soekris appliance, which takes up very little space and draws little power. Any server you run is likely to have very different requirements depending on your usage.
 
Even if the additional software is not listening on a socket, your threat surface is increased. Why? Because a combination of exploits may be used to own the box.

For example: say an exploit in an internet-facing daemon enables the attacker to get a non-privileged shell on the box. From there, he can perhaps exploit additional application software installed on the machine to gain a root shell.

Is it likely? Maybe, maybe not. But it is most definitely an increased level of risk, especially if you have a C compiler and associated development/compilation resources on the edge firewall: a local exploit could very quickly turn into an attacker compiling a rootkit on the machine at 2 am while you're asleep (for example) to properly own it.

And yeah, my comments above regarding downtime, etc. assume that if you break your firewall machine, you can still actually get work done on the application server, just without internet access (until the firewall is fixed).

edit:
Yes, my reference to "ports" was software from the FreeBSD Ports Collection. A firewall can get by without any of them, leaving a much smaller subset of software to audit, patch, and secure on an urgent basis. If a local privilege-escalation exploit is found in your application software, it is a LOT less urgent to fix if it isn't running on your edge firewall, where it could potentially be combined with a remote non-privileged shell exploit.

By DDoS, I mean that a remote DDoS could consume CPU on your machine to process/block incoming traffic. Sure, in both cases your network connection may be unusable, but if the productivity applications are running on the same host they will be starved for resources as well. In this case I am assuming your "users" are internal, on the protected LAN, and not accessing the services from outside the firewall.
 
Thanks @throAU! I definitely have to think about more hacking scenarios. But your example is pretty good.

Of course I understood what you meant by ports. In my understanding a FreeBSD software port does no harm if it doesn't listen on a network socket. However, after your compiler example I must admit that it does make a difference whether more software is installed and available once an attacker breaks into the system.

throAU said:
And yeah, my comments above regarding downtime, etc. assume that if you break your firewall machine, you can still actually get work done on the application server, just without internet access (until the firewall is fixed).
Theoretically, yes. But in our case the application server is also a mail server, so if emails stop dropping in every couple of minutes, all hell breaks loose; it is as if the phone line were dead. An application server without internet therefore equals server-down status, at least for us. Users could of course keep writing documents, but we would have to fix the connection within minutes anyway.

throAU said:
By DDoS, I mean that a remote DDoS could consume CPU on your machine to process/block incoming traffic. Sure, in both cases your network connection may be unusable, but if the productivity applications are running on the same host they will be starved for resources as well. In this case I am assuming your "users" are internal, on the protected LAN, and not accessing the services from outside the firewall.

Sure; however, blocking connections unconditionally and without filtering puts much less stress on the system, and that stress stays at the same level regardless of the scale of the DDoS attack.

So, to sum it up, by running a firewall on a server:
  • there is nothing fundamentally wrong with it (@SirDice), but
  • the threat surface is definitely increased (@throAU)

I'll leave the thread open, just in case someone discovers more pros or cons.
 