OpenVPN network remapping

Hi,
I have a question regarding OpenVPN.

I have FreeBSD 13 running OpenVPN. The FreeBSD server has tun0 (10.8.0.0/24) and genet0 (192.168.1.0/24, the "server network"). The client (on its own subnet 192.168.1.0/24, with no relation to the "server network") connects to the OpenVPN server. Now I want to connect from the client through OpenVPN to some host on the "server network" (192.168.1.0/24). What I want is to remap the server's network 192.168.1.0/24 to 10.8.1.0/24. On Debian this was done using this:
Code:
# Remap our network 1:1 to a different IP space. Update openvpn.conf file too
# push "route 10.8.1.0 255.255.255.0"
iptables -w -t nat -A PREROUTING -i $WLAN -d 10.8.1.0/24 -j NETMAP --to 192.168.1.0/24
iptables -w -t nat -A POSTROUTING -o vmbr0 -j MASQUERADE

Is something like this possible on FreeBSD as well? I'm using PF and tried to use NAT there, but it didn't work.

Appreciate any help.
 
You don't need NAT at all if you have control over the routing. Just make sure the traffic for the VPN clients is correctly routed to the VPN server. NAT is a kludge you use if you cannot control the routing.

NB What they call "masquerading" on Linux is just a form of NAT.
 
You don't need NAT at all if you have control over the routing. Just make sure the traffic for the VPN clients is correctly routed to the VPN server. NAT is a kludge you use if you cannot control the routing.

NB What they call "masquerading" on Linux is just a form of NAT.
Thank you for the reply.

Could you post some examples, please? I can't get my head around remapping 192.168.1.0/24 to 10.8.1.0/24.
 
He needs to use 1:1 NAT because of the conflicting networks: both the client-side network and the server-side network are 192.168.1.0/24. It's much easier if you can change your server subnet to a non-conflicting network and use plain routing as SirDice suggests.

If you use pf as your firewall, read about binat.
 
He needs to use 1:1 NAT because of the conflicting networks: both the client-side network and the server-side network are 192.168.1.0/24. It's much easier if you can change your server subnet to a non-conflicting network and use plain routing as SirDice suggests.

If you use pf as your firewall, read about binat.
Thanks for the reply.

Unfortunately I cannot change the network.

I've tried binat but with no luck. Probably I have something wrong. Here are the pf rules:
Code:
ext_if="genet0"
vpn_if="tun0"

## Skip loop back interface - Skip all PF processing on interface ##
set skip on lo

## Sets the interface for which PF should gather statistics such as bytes in/out and packets passed/blocked ##
set loginterface $ext_if
set loginterface $vpn_if

# Deal with attacks based on incorrect handling of packet fragments
scrub in all

# NAT 10.8.1.0/24 <-> 192.168.1.0/24
binat on $vpn_if inet from 192.168.1.0/24 to any -> 10.8.1.0/24

pass log quick all
Trying to ping 10.8.1.200 and running tcpdump on pflog0, I got this:
Code:
 00:00:00.335749 rule 0/0(match): pass in on tun0: 10.8.0.6 > 192.168.1.200: ICMP echo request, id 1, seq 251, length 40
 00:00:00.000037 rule 0/0(match): pass out on genet0: 10.8.0.6 > 192.168.1.200: ICMP echo request, id 1, seq 251, length 40
But there is no traffic on genet0 while using tcpdump on it.
 
Why did you get this translation "10.8.0.6 > 192.168.1.200"? Did you reload your pf after making the changes to the binat?

Edit:
Ohh, it's from pflog0, after the translation. Use tcpdump -i tun0 to see the actual request before the translation; it should look like 10.8.0.6 > 10.8.1.200.
Make sure that 192.168.1.200 has a route to 10.8.0.0/24 via your OpenVPN server (192.168.1.1), or has it as default gateway, so it can respond to this ping, and also enable forwarding (gateway_enable="YES") in /etc/rc.conf.
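On 192.168.1.200, assuming it also runs FreeBSD, such a static route could be added like this (a sketch; the OpenVPN server address follows this post, later in the thread it turns out to be 192.168.1.4):

```
# On the 192.168.1.200 host: return traffic for the VPN client
# subnet goes back through the OpenVPN server.
route add -net 10.8.0.0/24 192.168.1.1
```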
 
Thank you for the reply.

Using tcpdump -i tun0 there is a request:
Code:
2021-08-07 10:39:24.668721 AF IPv4 (2), length 64: 10.8.0.6 > 10.8.1.200: ICMP echo request, id 1, seq 263, length 40
Make sure that 192.168.1.200 has a route to 10.8.0.0/24 via your OpenVPN server (192.168.1.1), or has it as default gateway, so it can respond to this ping, and also enable forwarding (gateway_enable="YES") in /etc/rc.conf.
I think I don't understand this. I was under the impression that NAT would translate the IP addresses at the tun0 boundary? Anyway, OpenVPN (192.168.1.4) isn't the default gateway for 192.168.1.0/24. I have set gateway_enable="YES" in /etc/rc.conf.
 
In this type of 1:1 NAT (also known as binat) you are translating the entire range 10.8.1.0/24 (intermediate network) to 192.168.1.0/24 (local network) in order to distinguish between the client subnet and the server subnet, which overlap. So the actual request that arrives at your server comes from the 10.8.0.0/24 network (host 10.8.0.6); that's why the server at 192.168.1.200 needs a route for 10.8.0.0/24 pointing to your OpenVPN server (192.168.1.4) to be able to respond back to 10.8.0.6.
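To see the 1:1 mapping concretely: binat swaps only the /24 network part and keeps the host part, so 10.8.1.200 on the intermediate network corresponds to 192.168.1.200 on the LAN. A trivial shell illustration of that correspondence (addresses taken from this thread):

```shell
# 1:1 NAT (binat) keeps the host octet and swaps the network prefix:
# 10.8.1.x <-> 192.168.1.x
vpn_addr="10.8.1.200"
host_part="${vpn_addr##*.}"        # strip everything up to the last dot -> 200
lan_addr="192.168.1.${host_part}"
echo "$lan_addr"                   # -> 192.168.1.200
```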

Depending on the network topology: if your OpenVPN server runs on the same host that acts as default gateway for the network, then you don't have to do anything.
If your OpenVPN server is a separate host and not the default gateway for your network, then you have to advertise that route to all hosts via a routing protocol, DHCP, or a static (manual) route on every server that will be accessed.
Another option is a static route on your default gateway pointing to 192.168.1.4 (the OpenVPN server) for the 10.8.0.0/24 network, but then all traffic takes an additional hop to your default gateway and back to your OpenVPN server, which can saturate that link.

If you have only one server that needs to be accessed via the OpenVPN, then your other option is NAT overload, i.e. PAT (port address translation): translate all requests to 192.168.1.4. This way all requests will be seen by your server 192.168.1.200 as coming from the 192.168.1.4 host, and you will not need to create any additional routes on the server.
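In pf, that PAT variant could be sketched roughly like this (an assumption, not from the thread; it would sit alongside the binat rule so clients can still address the server as 10.8.1.200):

```
# pf.conf sketch: rewrite the source of VPN client traffic leaving
# genet0 to the OpenVPN server's own LAN address, so 192.168.1.200
# needs no route for 10.8.0.0/24.
nat on genet0 inet from 10.8.0.0/24 to 192.168.1.0/24 -> 192.168.1.4
```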

(Diagram: example_network.png)


When the OpenVPN client 10.8.0.6 pings 10.8.1.200, the echo request is routed via the 10.8.0.5 OpenVPN server and then translated by the 1:1 binat to 192.168.1.200.
The server 192.168.1.200 receives the echo request from 10.8.0.6, checks its routing table for 10.8.0.6, matches the 10.8.0.0/24 route via 192.168.1.4, and responds with an echo reply.
The OpenVPN server translates the echo reply source from 192.168.1.200 back to 10.8.1.200 via the 1:1 binat and forwards the packet to the client 10.8.0.6.
 
Thank you very much for your detailed answer and for your time.

I'm sorry for the late reply; I haven't had time to try your suggestion until now.

So, as a proof of concept, I first tried setting a static route to 10.8.0.0/24 via 192.168.1.4 on 192.168.1.200. But the ping from the OVPN client to 10.8.1.200 still times out. Pinging 10.8.0.1 from 192.168.1.200 works, so the route is OK. The problem is that on the OVPN server (192.168.1.4) there is no traffic on genet0 when using tcpdump. My pf rules didn't change (same as in my previous post). I've also stripped the pf ruleset down to a minimum (pass in/out all), but it's still the same.

Anyway, I'm interested in how this works on Linux with the iptables rules shown in the previous post. I didn't add any static routes on the gateway (or anywhere else) and it works fine there.
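A hedged guess at why the Debian setup needed no routes: the second iptables rule (MASQUERADE on vmbr0) source-NATed the forwarded packets to the gateway's own address, so the LAN hosts replied to a local address and never had to know about 10.8.0.0/24. The pf counterpart of that masquerade rule would be roughly:

```
# pf.conf sketch ("masquerade"): source-NAT VPN client traffic
# leaving the LAN interface to that interface's own address.
nat on genet0 inet from 10.8.0.0/24 to any -> (genet0)
```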

And one more question: what software are you using for the network diagram? I wanted to draw something like yours for a better explanation but haven't found anything that produces that kind of output.
 
Thank you for the reply,
here is the output:
Bash:
root@rpi4:~ # sysctl net.inet.ip.forwarding
net.inet.ip.forwarding: 1
root@rpi4:~ # pfctl -sr
scrub in all fragment reassemble
pass in log quick all flags S/SA keep state
pass out log quick all flags S/SA keep state
root@rpi4:~ # pfctl -sn
binat on tun0 inet from 192.168.1.0/24 to any -> 10.8.1.0/24
 
It looks good to me. When you ping 10.8.1.4 from the OpenVPN client, do you get a response while monitoring it with tcpdump -ni tun0 icmp?
And when you ping 10.8.1.200 from the OpenVPN client, do you get any packets going out on your internal interface?
 
It looks good to me. When you ping 10.8.1.4 from the OpenVPN client, do you get a response while monitoring it with tcpdump -ni tun0 icmp?
Yes, the ping works and the echo shows up in the tcpdump output.

And when you ping 10.8.1.200 from the OpenVPN client, do you get any packets going out on your internal interface?
When pinging 10.8.1.200, the ICMP request is only shown on tun0 and pflog0. On genet0 (192.168.1.0/24) there is nothing.
 
Then there's an issue with the genet0 driver, as it doesn't forward the packets. You can try the same setup on other hardware (x86/amd64) to make sure your configuration is OK, or test it on FreeBSD current/13.
 
You may need iroute 192.168.1.0/24 so the OpenVPN server knows which client to send packets with this destination to,
but it should not matter if you ping from the box with the VPN client.
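For reference, iroute applies when a subnet sits behind an OpenVPN client; it goes into that client's ccd file on the server (the client name below is hypothetical), together with a matching route statement in the server config:

```
# ccd/client1 (hypothetical): tells the OpenVPN server this client
# owns 192.168.1.0/24. Not needed when the subnet is the server's
# directly connected LAN, as it is here.
iroute 192.168.1.0 255.255.255.0
```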
 
192.168.1.0/24 is already in the routing table on the OpenVPN server, as it's connected via its interface genet0, unless its netmask is messed up. Anyway, good point. Can you show the output of ifconfig and netstat -rn4?
 
Then there's an issue with the genet0 driver, as it doesn't forward the packets. You can try the same setup on other hardware (x86/amd64) to make sure your configuration is OK, or test it on FreeBSD current/13.
This could be the issue, as it is an RPi4. I'll try to upgrade to the most recent FreeBSD 13.
192.168.1.0/24 is already in the routing table on the OpenVPN server, as it's connected via its interface genet0, unless its netmask is messed up. Anyway, good point. Can you show the output of ifconfig and netstat -rn4?
Bash:
root@rpi4:~ # ifconfig
genet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=68000b<RXCSUM,TXCSUM,VLAN_MTU,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether dc:a6:32:78:b0:ad
        inet 192.168.1.4 netmask 0xffffff00 broadcast 192.168.1.255
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
pflog0: flags=141<UP,RUNNING,PROMISC> metric 0 mtu 33160
        groups: pflog
tun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1500
        options=80000<LINKSTATE>
        inet 10.8.0.1 --> 10.8.0.2 netmask 0xffffffff
        groups: tun
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        Opened by PID 1071
root@rpi4:~ # netstat -rn4
Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
default            192.168.1.1        UGS      genet0
10.8.0.0/24        10.8.0.2           UGS        tun0
10.8.0.1           link#4             UHS         lo0
10.8.0.2           link#4             UH         tun0
127.0.0.1          link#2             UH          lo0
192.168.1.0/24     link#1             U        genet0
192.168.1.4        link#1             UHS         lo0
 
Thank you for the replies.
As I don't have time to try current/13 now, I'll wait for the new release and try it then.
Regarding the NAT, do you mean something like this:
Code:
nat on $ext_if from 192.168.1.200 to any -> 10.8.0.1
 