Need help with load balancing / scaling a network

I am using FreeBSD 10.2 and currently have a network that looks like this:

Code:
Client ----  DNS/NAT/Router
It works fine, but if the network ever gets bigger, I'd like to be able to handle more traffic.

How can I run multiple DNS/NAT/router boxes?
What technologies / modules can I use?

Code:
          /   DNS/NAT/Router
         /
Client   ---  DNS/NAT/Router
         \
          \   DNS/NAT/Router

Thanks
 
I looked at CARP and the other things you mentioned, but it looks like a master/slave (active/passive) setup.
Can it do an active/active/active setup?

I have an environment that was passing about 7 Gbit/s.
But after I put in PF for NAT, it was only doing about 3 Gbit/s.

Is it possible to have three active NAT/routers so I can support the 7 Gbit/s of traffic?
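For reference, this is the kind of CARP setup I was looking at, which only gives you one active box at a time (interface name, password, and addresses below are made up):

Code:
# /boot/loader.conf: carp_load="YES"
# /etc/rc.conf on the master; the backup uses the same vhid
# plus "advskew 100" so it only takes over if the master dies
ifconfig_em0="inet 192.0.2.2/24"
ifconfig_em0_alias0="inet vhid 1 pass examplepass alias 192.0.2.1/32"

Clients point at the shared 192.0.2.1 address, but only one box answers at a time.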
 
Can it do an active/active/active setup?

I have an environment that was passing about 7 Gbit/s.
But after I put in PF for NAT, it was only doing about 3 Gbit/s.
As much as I like FreeBSD, I would suggest looking into dedicated networking equipment to handle loads that big.
 
Ah, right. In that case you might want to try IPFW instead of PF. As I understand it, IPFW may perform a little better than PF, as it's FreeBSD's own native firewall. I'm assuming you're already using vtnet(4) interfaces? There have been some problems with TSO/LRO being enabled on virtual interfaces causing transmission errors; see if turning TSO/LRO off improves things.
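Something along these lines should do it (assuming a vtnet0 interface; substitute your actual device name):

Code:
# Disable TSO and LRO on the interface right now
ifconfig vtnet0 -tso -lro
# To make it persistent, append the flags in /etc/rc.conf:
ifconfig_vtnet0="DHCP -tso -lro"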
 
vtnet(4) interfaces are for KVM, right? I'm using VMware.

I think at this point I'm just trying to figure out how to horizontally scale NAT rather than squeeze as much performance as I can out of one box.
I'm still looking into some of the suggested technologies, like OSPF.
 
The vtnet(4) interfaces are what you should be using under any hypervisor where they are available, because they remove one level of hardware abstraction, such as emulating em(4) hardware, which is quite costly.
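Under VMware, the paravirtualized equivalent is the VMXNET3 adapter, driven by vmx(4). A quick way to check which driver your guest actually ended up with (the interface names below are just examples):

Code:
# em0 = emulated Intel e1000 (costly), vtnet0 = virtio,
# vmx0 = VMware's paravirtualized VMXNET3 via vmx(4)
ifconfig -a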
 
calomel.org has a nice writeup about how they achieved nearly 10 Gbit/s with PF on a single FreeBSD box (a few years ago, with FreeBSD 9.1!):
https://calomel.org/network_performance.html
Also a short article about network tuning on FreeBSD in general:
https://calomel.org/freebsd_network_tuning.html
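To give a taste, these are the kinds of knobs such tuning touches (values are purely illustrative, the same class of settings the article discusses; don't copy them blindly):

Code:
# /etc/sysctl.conf - illustrative examples, tune for your own hardware
kern.ipc.maxsockbuf=16777216        # allow larger socket buffers
net.inet.tcp.sendbuf_max=16777216   # raise the TCP send buffer ceiling
net.inet.tcp.recvbuf_max=16777216   # raise the TCP receive buffer ceiling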

So achieving the targeted throughput with PF on a single system shouldn't be a problem nowadays, at least on bare metal.
But I suspect the limiting factor in your scenario might be the virtualization layer: even with paravirtualized network hardware, the general overhead and performance penalty from virtualization might just be too high to achieve the desired throughput.
OTOH, with a network of this size you should eliminate the single point of failure regardless of what throughput can be achieved with one system/VM (though that would be rather pointless if the redundant instances ran on the same virtualization host...).

I just provisioned a new storage system (FreeBSD 11.0-RELEASE) that is connected to a smartOS host via 10 GbE and 8G FC. For the coming week I'm planning some performance tests between hosts/jails/zones/VMs: mainly iSCSI vs. FC performance will be of interest, but I'm also curious about networking performance and the impact of KVM overhead on both.
Both systems and their ZFS pools are quite beefy, so (hopefully) there shouldn't be any hardware-induced bottlenecks.
I'd be happy to share the results, and since these systems are not yet in production, I could run some tests that might be of interest to others.
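For the raw network numbers I'll probably start with something simple like iperf3 from ports (the host address and stream count below are just placeholders):

Code:
# On the FreeBSD storage box (server side)
iperf3 -s
# From the smartOS side: 4 parallel streams for 30 seconds
iperf3 -c 192.0.2.10 -P 4 -t 30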
 