Using IPFW to NAT a jail inside a VM == slow network connectivity inside jail

m0nkey_

I've been pulling my hair out over this for days! I have a VM with jails on a loopback interface, using IPFW to NAT the traffic, and throughput inside the jails slows to a crawl. I've also tested with PF and it works like a charm: network speeds within the jail are fine.

I've tested this on Vultr, Digital Ocean, and even within Hyper-V, and got the exact same result: slow network speed when using IPFW, but fine with PF.

I also found a thread from July 2016 describing a similar issue, but it received no responses.

/etc/rc.conf:
Code:
cloned_interfaces="lo1"
firewall_enable="YES"
firewall_nat_enable="YES"
firewall_script="/usr/local/etc/ipfw.rules"
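(As an aside for anyone reproducing this: with firewall_nat_enable="YES", the rc script loads the ipfw_nat kernel module at boot. You can confirm the modules are actually present with kldstat; the exact module list will vary by system.)
Code:
# kldstat | grep ipfw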
/usr/local/etc/ipfw.rules:
Code:
#!/bin/sh

WAN_IF="hn0"             # external interface
WAN_IP="a.b.c.d"         # public address on $WAN_IF
JAIL_NET="127.0.1.0/24"  # loopback network the jails live on

IPF="ipfw -q add"
NAT="ipfw -q nat"

# start from a clean ruleset
/sbin/ipfw -q -f flush

# NAT instance 1000: translate to the public address and
# forward inbound PostgreSQL (5432) into the jail network
$NAT 1000 config ip ${WAN_IP} \
        redirect_port tcp 127.0.1.0:5432 5432

$IPF 2000 allow ip from ${JAIL_NET} to ${JAIL_NET}
$IPF 2001 nat 1000 ip from ${JAIL_NET} to any via ${WAN_IF}
$IPF 2002 nat 1000 ip from any to ${WAN_IP}

$IPF 5000 allow all from any to any via ${WAN_IF}
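Once the script has run, the loaded ruleset and the NAT instance configuration can be checked with the standard ipfw(8) subcommands:
Code:
# ipfw list              # show the numbered ruleset
# ipfw nat show config   # show the NAT instance configuration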
And I've done something similar with PF:

/etc/rc.conf:
Code:
cloned_interfaces="lo1"
pf_enable="YES"
/etc/pf.conf:
Code:
ext_if="hn0"               # external interface
ext_addr=$ext_if:0         # primary address on $ext_if (aliases excluded)
int_if="lo1"               # loopback interface the jails live on
jail_net="127.0.1.0/24"

# outbound: translate jail traffic to the public address
nat on $ext_if from $jail_net to any -> $ext_addr port 1024:65535 static-port

# inbound: forward PostgreSQL (5432) to the jail
rdr pass on $ext_if inet proto tcp to port 5432 -> 127.0.1.1
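And likewise for PF, the usual pfctl(8) sanity checks:
Code:
# pfctl -nf /etc/pf.conf   # parse the ruleset without loading it
# pfctl -s nat             # show the NAT/rdr rules currently loaded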
Am I doing something wrong? Why do I get such a performance hit with IPFW while PF is perfectly fine?
 
m0nkey_ (OP)

I've just done some iperf3 tests to compare throughput when using IPFW vs. PF.

iperf3 commands used (-R runs the test in reverse, server to client):
Code:
# iperf3 -c iperf.he.net
# iperf3 -c iperf.he.net -R
With PF:
Code:
forward:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  15.0 MBytes  12.6 Mbits/sec   42             sender
[  5]   0.00-10.00  sec  14.7 MBytes  12.4 Mbits/sec                  receiver

reverse:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  31.7 MBytes  26.6 Mbits/sec  127             sender
[  5]   0.00-10.00  sec  29.9 MBytes  25.1 Mbits/sec                  receiver
With IPFW:
Code:
forward:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.01  sec   198 KBytes   162 Kbits/sec   79             sender
[  5]   0.00-10.01  sec   165 KBytes   135 Kbits/sec                  receiver

reverse:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   269 KBytes   220 Kbits/sec   60             sender
[  5]   0.00-10.00  sec   206 KBytes   169 Kbits/sec                  receiver
 

robroy

m0nkey_, are you using the acceleration features of your Ethernet driver (they're generally switched on by default)?

While using in-kernel NAT with IPFW, I found that I had to disable one or more of the offload features before it'd work smoothly.
I have a vague memory of having read that the libalias-based NAT used with IPFW gets in fist fights with those offloads sometimes.
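You can check which offloads are currently enabled on hn0 and toggle them at runtime first, so nothing has to go into rc.conf until you know it helps:
Code:
# ifconfig hn0 | grep options          # look for TSO4, RXCSUM, TXCSUM, LRO
# ifconfig hn0 -tso -rxcsum -txcsum    # disable them on the fly to test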

So here's what my rc.conf lines for my NICs look like now:

Code:
ifconfig_igb0="inet 192.168.32.48 netmask 255.255.255.0 -tso -rxcsum -txcsum -vlanmtu -vlanhwtag -vlanhwtso -vlanhwcsum"
I actually suspect that -tso -rxcsum -txcsum may be the only offloads that I really needed to disable, but I turned 'em all off long ago just to get NAT performance working well, and never looked back.

The symptom before disabling the offload features was that ssh sessions in to the computer were "choppy," with annoying delays in between keystrokes. After disabling the offloads, performance became roughly normal.

So if you feel like it, perhaps try disabling all of the offloads (there may be more/different ones present for you, depending on the Ethernet controller driver you're using, I guess), and see if that helps.
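For your Hyper-V interface that would look something like the line below; the address and netmask are placeholders, so match them to your existing hn0 configuration, and hn(4) may not expose every flag that igb(4) does:
Code:
# hypothetical hn0 equivalent -- keep your real inet settings
ifconfig_hn0="inet a.b.c.d netmask 255.255.255.0 -tso -rxcsum -txcsum"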

Joy to you m0nkey_.
 
m0nkey_ (OP)

Turning off TSO seems to have done the trick! :) Massive improvement!

Setting the sysctl net.inet.tcp.tso to 0 sped things up immediately:
Code:
root@jailhouse:~ # sysctl net.inet.tcp.tso=0
net.inet.tcp.tso: 1 -> 0
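To keep that setting across reboots, the same knob can go in /etc/sysctl.conf:
Code:
net.inet.tcp.tso=0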
And in my jail:
Code:
root@postgresql:~ # iperf3 -c iperf.he.net
Connecting to host iperf.he.net, port 5201
[  5] local 127.0.1.1 port 54265 connected to 216.218.227.10 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   292 KBytes  2.39 Mbits/sec    0   28.4 KBytes
[  5]   1.00-2.00   sec   479 KBytes  3.93 Mbits/sec    0   44.1 KBytes
[  5]   2.00-3.00   sec   664 KBytes  5.44 Mbits/sec    0   59.8 KBytes
[  5]   3.00-4.00   sec   852 KBytes  6.98 Mbits/sec    0   74.0 KBytes
[  5]   4.00-5.00   sec  1.03 MBytes  8.64 Mbits/sec    0   89.7 KBytes
[  5]   5.00-6.00   sec  1.23 MBytes  10.3 Mbits/sec    0    104 KBytes
[  5]   6.00-7.00   sec  1.39 MBytes  11.6 Mbits/sec    0    118 KBytes
[  5]   7.00-8.00   sec  1.58 MBytes  13.2 Mbits/sec    0    134 KBytes
[  5]   8.00-9.00   sec  1.75 MBytes  14.7 Mbits/sec    0    148 KBytes
[  5]   9.00-10.00  sec  1.89 MBytes  15.9 Mbits/sec    0    162 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.1 MBytes  9.31 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  10.8 MBytes  9.06 Mbits/sec                  receiver
 