PF: NATing IPsec

I am experiencing a weird problem: I cannot NAT IPsec. I have a very basic setup: the host is FreeBSD with an OPNsense VM and a VNET-based jail. The host's em0 is connected to the internet. Then there is bridge0 with network 192.168.251.0/24, to which a tap interface of the OPNsense VM is attached (simulating the "WAN" interface); the host NATs all outgoing traffic from this bridge. Then I have bridge1, which simulates the LAN network; the OPNsense LAN interface as well as the jail's epair are connected to it. The basic setup works: the jail's traffic goes through OPNsense, which sends it out via bridge0; the host then does the NAT and everything is fine.

[Image: network.png - diagram of the host, bridges, OPNsense VM and jail]

All outgoing traffic is NATed; I can test UDP, e.g. with nc or drill, as well as TCP, and see everything working. BUT: as soon as I have an IPsec tunnel established between our AWS infrastructure and the OPNsense VM, the traffic to the IPsec endpoint is not NATed. With the following rules:

Code:
nat pass on em0 proto udp from 192.168.251.100 to any -> $ip_out
nat pass on em0 proto tcp from 192.168.251.100 to any -> $ip_out

the host does not NAT the packets. tcpdump on pflog0 gives me:

Code:
00:00:00.000012 rule 22/0(match): block out on em0: 192.168.251.100 > 3.124.19.154: ip-proto-17
00:00:00.339727 rule 22/0(match): block out on em0: 192.168.251.100.4500 > 3.124.19.154.4500: UDP-encap: ESP(spi=0xc4f0d1ee,seq=0x22), length 1272

The connection I was trying to establish was from the jail to a VM inside our AWS infrastructure. The traffic goes through the IPsec tunnel, where 3.124.19.154 is the IP address of the AWS endpoint. Of course the firewall blocks those packets, because packets with a source address from our internal LAN should not pass out through the physical interface.

I have absolutely no idea what's wrong, and I would be happy about any small hint, even more so about bigger enlightenments. Thanks!
 
IPsec isn't "plain" TCP or UDP traffic. It's an entirely different set of protocols (ESP, AH and ipencap).
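In pf terms that means the port-based TCP/UDP rules above never match native IPsec. A minimal sketch of additional nat rules covering those protocols, reusing em0 and $ip_out from the original post (an untested assumption for this setup, not a confirmed fix):

```
# Sketch only: NAT the native IPsec protocols, which the existing
# "proto udp"/"proto tcp" rules cannot match.
nat pass on em0 proto esp from 192.168.251.100 to any -> $ip_out
nat pass on em0 proto ah  from 192.168.251.100 to any -> $ip_out
# NAT-T (IKE on udp/500, UDP-encapsulated ESP on udp/4500) is plain UDP,
# so the existing "proto udp" rule should already cover that case.
```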
 
I have tried that too, but rdr/nat on those protocols explicitly did not change anything. My discussion on the mailing list: https://marc.info/?t=166538516100003&r=1&w=2 - furthermore, I have set up the exact same configuration but with Linux as the host with an OPNsense VM, where NAT with iptables worked flawlessly.
 
Just enable NAT-T on your IPsec setup; then pf on the internet-connected host sees UDP traffic, which it can NAT without any problem.
strongswan and racoon can both do NAT-T.
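With strongswan (which the thread later mentions is in use), NAT-T is negotiated automatically when a NAT is detected, but it can also be forced. A sketch of an ipsec.conf fragment; the connection name is a placeholder:

```
# ipsec.conf fragment: force UDP encapsulation (NAT-T) even if no NAT
# is detected, so all ESP traffic travels as UDP on port 4500.
conn aws-tunnel          # placeholder connection name
    forceencaps=yes
```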
 
OK, so the tunnel gets established.
It's not clear to me what you are trying to do.
Do you want traffic from the jail IP to appear at the AWS end as coming from the OPNsense IP?
 
The bridge on the host connecting the host and the OPNsense VM is configured with network 192.168.251.0/24, where .1 is the host itself and .100 is the OPNsense VM. The "LAN" network of OPNsense is 192.168.1.0/24, where .1 is OPNsense and .2 is the jail (it is a bridge on the host, but the host has no IP on it).

The tunnel to AWS is established (AWS <=> OPNsense VM), and I can connect from my instance on my network in AWS (which is 10.40.0.171) to 192.168.1.2 - the jail.
However, when trying to connect from the jail to 10.40.0.171 on my AWS instance, things get weird: the host simply does not do network address translation on those packets, resulting in the pflog output from above.

We can rule out problems with AWS, OPNsense and the jail. I cannot wrap my head around what I need to do to achieve the following: destination-NAT all udp/tcp/ipsec traffic coming in at $public_vpn_ip and forward it to OPNsense, and then source-NAT all udp/tcp/ipsec traffic coming from OPNsense, translating it to $public_vpn_ip. It works for UDP/TCP packets but not IPsec. The host wants to send the IPsec packets coming from OPNsense out of the physical interface with a source IP of 192.168.251.100 (the IP of OPNsense), i.e. the NAT step is skipped.
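That bidirectional setup can be sketched as an rdr/nat pair in pf.conf. This is only an illustration, assuming em0 and the addresses from this thread; the static-port option is an assumption worth trying, since IKE peers often require the UDP source ports 500/4500 to survive NAT unchanged:

```
# Inbound: forward IKE and NAT-T arriving at the public IP to the OPNsense VM.
rdr pass on em0 proto udp from any to $public_vpn_ip port 500  -> 192.168.251.100
rdr pass on em0 proto udp from any to $public_vpn_ip port 4500 -> 192.168.251.100
# Outbound: source-NAT the VM's traffic; static-port keeps the original
# UDP source ports instead of rewriting them to a random high port.
nat pass on em0 proto udp from 192.168.251.100 to any -> $public_vpn_ip static-port
```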

At first I thought it was an issue with the IPsec configuration, so I ordered another server, installed Linux and used KVM to set up the exact same config of the OPNsense VM + IPsec (strongswan, to be accurate; and of course the public IP address changed in all settings). A simple sysctl net.ipv4.ip_forward=1 && iptables -t nat -A POSTROUTING --source 192.168.251.100 -j SNAT --to-source $public_vpn_ip and the bidirectional VPN was running as expected (please, no suggestions to just use Linux).
 
There should be no IPsec (ESP) packets reaching the host; they should be UDP, so your host does not have to deal with ESP packets.
The traffic between the OPNsense box and Amazon should be UDP 4500 <=> 4500.
Also, I can't imagine how "I can connect from my instance on my network in AWS (which is 10.40.0.171) to 192.168.1.2" works while the reverse does not:
if you can ssh from 10.40.0.171 to 192.168.1.2, you already have bidirectional tunnel traffic.
 
That's exactly the thing I do not understand: why does it not work? Even the tcpdump log tells me it is ip-proto-17 (UDP) and port 4500, but I have no explanation for why the NAT mechanism makes an exception for these packets.
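One hedged guess about the first log line: tcpdump prints UDP as bare "ip-proto-17" when the packet is a fragment that carries no UDP header, and pf's port-aware rules cannot match fragments unless they are reassembled first. A standard pf technique, not a confirmed fix for this setup, would be:

```
# Reassemble fragments before rule evaluation, so port-based nat/pass
# rules can see the full UDP header of large ESP-in-UDP datagrams.
scrub in all fragment reassemble
```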
 

this may help
 
I am starting to believe that this is a driver issue with virtio, because here and there I have experienced issues with virtio and VMware when trying to implement firewalls. I am getting a similar problem now with a similar setup. All I want to create is a simple VPN server on a FreeBSD VM at a cloud provider.


tcpdump -n -e -ttt -i pflog0:
Code:
00:00:01.960322 rule 15/0(match): block in on vtnet0: MY.IP.AD.DR.58573 > SE.RV.ER.IP.1194: UDP, bad length 1444 > 1432
00:00:00.000007 rule 15/0(match): block in on vtnet0: MY.IP.AD.DR > SE.RV.ER.IP: ip-proto-17

This is weird, since the MTU on vtnet0 is 1500 (and ifconfig shows options -rxcsum -txcsum -tso -lro), and ip-proto-17 is UDP.

First two lines in pf.conf:

Code:
pass in quick proto tcp to $ext_ip port 22
pass in quick proto udp to $ext_ip port 1194


and my last rule (nr. 15):
Code:
block log

I have only found https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240819, but this seems not so helpful. I would be happy if someone could throw me an idea how to debug this ...
 
OK, this one was resolved. I was on the wrong path and constantly searched for solutions regarding the external interface (because the weird pf log appears on the external interface); however, using OpenVPN's config option "tun-mtu 1400" fixed it.
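For reference, the fix as an OpenVPN config fragment (the value 1400 is what worked here; client and server should agree on it):

```
# Lower the tun MTU so the encrypted UDP datagrams fit within the
# path MTU and are not fragmented on the way to the server.
tun-mtu 1400
```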
 