PF [Still Unsolved] Redirect port from VPS to home server without using NAT

sadaszewski

Member

Reaction score: 16
Messages: 46

HTTP(S) is just an example; I have many other services that cannot communicate the real client IP via a proxy protocol, so a PF solution is a must.

I am referring to RDR and NAT as they are understood in the context of pf.conf (i.e. I mean the RDR and NAT statements).


Both my VPS and Home Server (HS) run FreeBSD. The two are connected using the simplest point-to-point OpenVPN setup. Within this VPN, the VPS has address 10.8.0.1 and the home server 10.8.0.2. I would like to redirect the HTTP(S) ports from the VPS to the home server without using NAT, so as not to lose the source IPs.
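For reference, a minimal static-key point-to-point OpenVPN setup of this kind looks roughly like the following (a sketch; the hostname and key path are placeholders, not values from my actual setup):

Code:
# VPS side (listens)
dev tun0
ifconfig 10.8.0.1 10.8.0.2
secret /usr/local/etc/openvpn/static.key

# HS side (dials in)
dev tun0
remote vps.example.com
ifconfig 10.8.0.2 10.8.0.1
secret /usr/local/etc/openvpn/static.key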

The following NAT-based setup works:

VPS /etc/pf.conf
Code:
rdr on vtnet0 inet proto tcp from any to (vtnet0) port { 80, 443 } -> 10.8.0.2
nat on tun0 from any to 10.8.0.2 port { 80, 443 } -> (tun0)

HS /etc/pf.conf
Code:
rdr pass on tun0 proto tcp from any to 10.8.0.2 port 80 -> 127.0.11.1
rdr pass on tun0 proto tcp from any to 10.8.0.2 port 443 -> 127.0.13.1

127.0.11.1 and 127.0.13.1 are, respectively, the IPs of the HTTP and HTTPS jails running on lo1.

The following non-NAT setup results in very slow transfers and ICMP unreachable errors:

VPS /etc/pf.conf
Code:
rdr on vtnet0 proto tcp from any to (vtnet0) port { 80, 443 } -> 10.8.0.2

HS /etc/pf.conf
Code:
rdr on tun0 proto tcp from any to 10.8.0.2 port 80 -> 127.0.11.1
rdr on tun0 proto tcp from any to 10.8.0.2 port 443 -> 127.0.13.1
pass in on tun0 reply-to (tun0 10.8.0.1) proto tcp from any to 127.0.11.1 port 80
pass in on tun0 reply-to (tun0 10.8.0.1) proto tcp from any to 127.0.13.1 port 443

The ICMP errors I am getting on the HS (on the VPS side everything looks fine, just slow):
Code:
12:57:55.578405 IP localhost > 10.8.0.2: ICMP another.external.ip.from.outside.both.the.VPN.and.HS.networks unreachable - need to frag (mtu 1500), length 60

Does anyone have experience in this type of non-NAT redirects from VPS to HS? Thank you in advance.
 
OP

sadaszewski


Hi, thanks for the suggestion; it crossed my mind as well. However, I have a similar situation with many other services that cannot communicate the real IP within a proxy protocol, so I would appreciate a PF solution, or a hint as to why this might not be possible with PF. It sounds like it should be a basic thing to do, though...
 

Zirias

Son of Beastie

Reaction score: 1,553
Messages: 2,673

To get that straight: an rdr rule is (static) NAT. You're rewriting network addresses of IP packets.

I wonder why you want to do that at all? What's the gain, compared to just having the clients connect to your home server directly?
 
OP

sadaszewski


To get that straight: an rdr rule is (static) NAT. You're rewriting network addresses of IP packets.

I wonder why you want to do that at all? What's the gain, compared to just having the clients connect to your home server directly?
Thanks for the feedback. I was referring to RDR and NAT here as they are understood in the context of pf.conf (i.e. I mean the statements in pf.conf).

The advantage is that the VPS has a static IP address, which I need for SMTP and other IP-reputation-based services.
 

Zirias


The advantage is that the VPS has a static IP address, which I need for SMTP and other IP-reputation-based services.
For SMTP, use a "smart host" or "gateway". That's what I'm doing on my VPS, it's the primary MX for my domain and has an MTA (in my case exim) installed, delegating everything to my home server.

For everything else, have a look at nsupdate(1) ;) – most of the time, you don't need a static IPv4 address, you just need a hostname that's known in DNS :cool:
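A dynamic update fed to nsupdate(1) is just a few lines; roughly like this (all the names and the address here are placeholders):

Code:
server ns.example.com
zone example.com
update delete home.example.com A
update add home.example.com 300 A 203.0.113.45
send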
 
OP

sadaszewski


For SMTP, use a "smart host" or "gateway". That's what I'm doing on my VPS, it's the primary MX for my domain and has an MTA (in my case exim) installed, delegating everything to my home server.

For everything else, have a look at nsupdate(1) ;) – most of the time, you don't need a static IPv4 address, you just need a hostname that's known in DNS :cool:
Thank you for the reminder; I am aware of the alternatives. While it could be interesting to discuss my reasons for choosing a PF solution (if possible) and opting for a static IP address, I think that is beyond the scope of this thread. It seems like a simple packet-juggling exercise; I would hope PF is capable of it. route-to / reply-to certainly look promising... what am I missing?
 
OP

sadaszewski


Let me write down how I understand what should be happening, maybe this can help in the thought process.

1. A packet arrives from a third-party IP (third.x.y.z) at the VPS's external network interface vtnet0, targeting its IP address (vps.x.y.z) and port 443.
2. There, the VPS's rdr rule is triggered and the destination IP address is changed from vps.x.y.z to 10.8.0.2.
3. IPv4 forwarding is enabled (not sure it is even necessary where virtual network interfaces are involved), so the system looks up the route to 10.8.0.2 and determines that the packet should leave via the tun0 interface.
4. The packet goes out the VPS's tun0 and arrives on the HS's tun0.
5. The HS's rdr rule is triggered and rewrites the destination address to 127.0.13.1.
6. The HS's pass rule is triggered and designates any reply packets to go via (tun0 10.8.0.1), that is, via interface tun0 with the first hop being the VPS's VPN IP address 10.8.0.1.
7. IPv4 forwarding is enabled on the HS as well (again, probably not even necessary), so the system looks up the route to 127.0.13.1 (interface lo1) and "sends" the packet via that interface.
8. The jail receives the packet and "queues" a reply 127.0.13.1 -> third.x.y.z on lo1.
9. The HS's PF recognizes this as the reply in a stateful connection (points 5, 6), so the source address is changed back to 10.8.0.2 and the packet is sent out via interface tun0 with first hop 10.8.0.1.
10. The packet arrives at the VPS's tun0, where it is detected as part of the stateful connection originating in point (2) above, and its source IP address is therefore changed back to vps.x.y.z.
11. IPv4 forwarding kicks in again and the packet is sent back out via vtnet0 with the correct source and destination addresses.

Voilà. Hope this helps us think about it.
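For clarity, by "forwarding is enabled" in points 3, 7 and 11 I mean the usual knobs, roughly:

Code:
# /etc/rc.conf (persistent)
gateway_enable="YES"

# or at runtime
sysctl net.inet.ip.forwarding=1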
 

Eric A. Borisch

Aspiring Daemon

Reaction score: 359
Messages: 586

Do you have different MTUs? Just curious about the “need to frag” statement in your initial post.
 

covacat

Well-Known Member

Reaction score: 223
Messages: 467

I suck at pf, but for an equivalent ipfw scenario I'm pretty sure you need a fwd (route-to?) on the HS for packets coming from the jails back to the client IPv4; otherwise they will go through the HS's default route.
If the HS also takes requests directly from clients (not only via the VPS), then it complicates things a bit.
There is also some complexity added by the way OpenVPN works:
if the VPS dials in to the HS (the HS is the OpenVPN server), then you need some iroutes, because the OpenVPN server on the HS has no idea which client to push the packets to.
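For reference, an iroute in a per-client ccd file looks something like this (just an illustration of the syntax; the path and network are made up):

Code:
# /usr/local/etc/openvpn/ccd/<client-cn>
iroute 10.8.0.0 255.255.255.0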
 
OP

sadaszewski


Do you have different MTUs? Just curious about the “need to frag” statement in your initial post.
Hi Eric, thank you for your reply. Indeed, there are all sorts of things "not right" about that error.

1) I do not see why fragmentation would cause such a catastrophic drop in overall throughput (it is essentially non-functional).
2) My MTUs are as follows: VPS's vtnet0 - 1500; VPS's tun0 - 1500; HS's tun0 - 1500; HS's lo0 and lo1 - 16384. I have tried changing the MTU of lo0 and lo1 to 1500 and 1360, without any effect.
3) The NAT-based setup in the OP traverses exactly the same route; the only difference is that the original source address is NAT-ed from third.x.y.z to 10.8.0.1. No ICMP "need to frag" messages there.
4) The ICMP message occurs on the HS, goes in the direction localhost > 10.8.0.2, and talks about the third.x.y.z address. The HS (localhost) shouldn't even be concerned with any reachability issues of third.x.y.z, as the packet is supposed to go via tun0 after all, right?
 
OP

sadaszewski


Hi covacat, thanks for the suggestions.
I suck at pf, but for an equivalent ipfw scenario I'm pretty sure you need a fwd (route-to?) on the HS for packets coming from the jails back to the client IPv4; otherwise they will go through the HS's default route.
The reply-to part is supposed to take care of that, and this is indeed what happens, albeit very slowly (probably stuttering due to those ICMP errors).
If the HS also takes requests directly from clients (not only via the VPS), then it complicates things a bit.
Interesting, but why? I can see that being a problem only if state tracking got confused somehow, but why would that be the case?
There is also some complexity added by the way OpenVPN works:
if the VPS dials in to the HS (the HS is the OpenVPN server), then you need some iroutes, because the OpenVPN server on the HS has no idea which client to push the packets to.
The HS dials in to the VPS in a point-to-point fashion. Also, I do not see any ambiguity, as I am specifying the next hop via the reply-to directive.
 
OP

sadaszewski


For what it's worth, I have established that the MTU in the error message follows the HS's tun0 MTU (i.e. if I change the HS's tun0 MTU, the error message shows the new value). Still no clue what is going on here, or why the NAT-ed setup just works.

Also, I am leaning towards it indeed being a fragmentation problem after all, based on the following pieces of evidence:
1) Small packets traverse without any hiccups; e.g. a simple HTTP GET that returns a very short document works flawlessly.
2) For regular requests, after a while of hiccups, the algorithm eventually seems to settle on a smaller MTU, at which point communication accelerates and the page finishes loading very fast. So every time there is a long stretch of jittery transfer with ICMP errors, but once it settles on smaller transfer units, the page loads rapidly.

But why would this be the case in the non-NAT setup, while an almost identical NAT setup works flawlessly?
 
OP

sadaszewski


I've stumbled upon the following article and started wondering whether something similar could also be the culprit here. However, scrubbing outgoing traffic on the HS's tun0 with max-mss set to 1200 didn't help. It is such an annoying mystery. Could it be that on FreeBSD not only does route-to somehow fail to take MTUs into account, but the max-mss option does not work either? o_O Looks like one needs to dig into the source code...
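For completeness, the scrub attempt I mention above looked roughly like this on the HS (a sketch):

Code:
scrub out on tun0 proto tcp all max-mss 1200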

More reading: pf.c

pf-stalls-connection-when-using-route-to
 