IPFW: TCP stops working for a PPTP client after ipfw nat

Hi everyone!

I need help with a strange situation. There is a host (FreeBSD 12.2) with jails; one of these jails is a VPN server (mpd5) with internal interface eth1=192.168.1.9, and another one is a mail server with internal eth1=192.168.1.4. The VPN jail is a vnet jail, and the mail jail is a plain jail.

A Windows client (Win10 or Win7) establishes a PPTP connection to the VPN jail and obtains IP 192.168.0.130. To give the client access to the internal network I run "route add 192.168.0.0/24 192.168.1.9" on the host, and at this stage all is fine: on the client side both UDP and ICMP (ping 192.168.1.4) and TCP (curl 192.168.1.4) work without any problem.
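For completeness, this is how the route is added on the host; the rc.conf lines are my assumption of how one would persist it, not part of my running config:
Code:
# one-off on the host: route the PPTP pool to the vpn jail
route add -net 192.168.0.0/24 192.168.1.9

# assumed rc.conf equivalent, to persist it across reboots:
# static_routes="pptp"
# route_pptp="-net 192.168.0.0/24 192.168.1.9"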

Then I enable ipfw nat for VPN client traffic in the VPN jail (just modeling the situation with NAT to WAN for such clients, because web conferences work badly through Squid):
Code:
ipfw -q nat 1 config if eth1 reset
#NAT for incoming traffic
ipfw -q add 00002 nat 1 ip4 from any to any in recv eth1
#loopback
ipfw add 00290 allow ip from any to any via lo0
#NAT for outgoing traffic
ipfw -q add 00800 nat 1 ip4 from any to any out xmit eth1
#open all
ipfw add 00900 allow ip from any to any

This ipfw nat configuration is a proven variant; a similar configuration gives the jails access to the WAN at the host firewall. But from this moment TCP on the client side becomes non-working, although UDP stays fine. ICMP traffic between the client (192.168.0.130) and the mail server (192.168.1.4) passes in both directions, but when I run "curl 192.168.1.4" on the client machine I don't receive any reply from 192.168.1.4, although tcpdump on the VPN server's interfaces shows that the corresponding TCP reply packets pass right through to the client.

I suppose that for some reason the PPTP tunnel (by the way, the same thing happens with L2TP/IPsec) on the client side does not accept incoming TCP packets after ipfw nat has been "digging" into them. I would be grateful for any help in solving this problem, except for the variant of replacing ipfw with pf.
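To make the symptom reproducible for others, this is roughly how I compare the traffic on both sides of the NAT (interface names as in my setup):
Code:
# inside the vpn jail: the tunnel side, before NAT on the way out
tcpdump -ni ng0 host 192.168.1.4
# the LAN side, after NAT
tcpdump -ni eth1 host 192.168.1.4
# meanwhile on the Windows client:  curl 192.168.1.4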
 
This is the current mpd5 configuration:
Code:
#cat ./mpd.conf
startup:
        # configure mpd users
        set user foo bar admin
        set user foo1 bar1
        # configure the console
        set console self 127.0.0.1 5005
        set console open
        # configure the web server
        set web self 0.0.0.0 5006
        set web open
#       log +all

default:
#       load l2tp_server
        load pptp_server
l2tp_server:
...
pptp_server:

# Define dynamic IP address pool.
#       set ippool add pool1_l2tp 192.168.1.100 192.168.1.199

# Create clonable bundle template named B_pptp
        create bundle template B_pptp
        set iface enable proxy-arp
        set bundle enable compression
        set iface enable tcpmssfix
#       set iface mtu 1380
        set ipcp yes vjcomp
# Specify IP address pool for dynamic assignment.
        set ipcp ranges 192.168.1.129/25 ippool pool_pptp

# Create clonable link template named P_pptp
        create link template P_pptp pptp
# Set bundle template to use
        set ccp yes mppc
        set mppc yes e40
        set mppc yes e128
        set mppc yes stateless
        set link action bundle B_pptp
        set link keep-alive 10 60
# Multilink adds some overhead, but gives full 1500 MTU.
        set link yes acfcomp protocomp
        set link enable multilink
        set link no pap chap eap
        set link enable chap-msv2
# We can use RADIUS authentication/accounting by including
# another config section with label 'radius'.
        load radius
# We reduce the link mtu to avoid GRE packet fragmentation.
        set link mtu 1380
#       set link mru 1460
# Configure PPTP
        set pptp self 10.41.2.26
# Allow to accept calls
        set link enable incoming

###########################################################
#set radius config /usr/home/nas/conf/radius.conf
set radius server 10.41.2.22 111 1812 1813
set radius retries 3
set radius timeout 3
set radius me 10.41.2.26
set auth acct-update 300
set auth enable radius-auth
set auth enable radius-acct
set radius enable message-authentic
set auth max-logins 1
set link enable peer-as-calling

radius:
# You can use radius.conf(5), its useful, because you can share the
# same config with userland-ppp and other apps.
#       set radius config /etc/radius.conf
# or specify the server directly here
        set radius server 10.41.2.22 111 1812 1813
        set radius retries 3
        set radius timeout 3
# send the given IP in the RAD_NAS_IP_ADDRESS attribute to the server.
        set radius me 10.41.2.26
# send accounting updates every 5 minutes
        set auth acct-update 300
# enable RADIUS, and fallback to mpd.secret, if RADIUS auth failed
        set auth enable radius-auth
# enable RADIUS accounting
        set auth enable radius-acct
# protect our requests with the message-authenticator
        set radius enable message-authentic

I have tried changing the link MTU to 1280, 1360, 1400, 1460 - no effect.
ifconfig with a connected client:
Code:
eth1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8<VLAN_MTU>
        ether 00:a0:98:8b:7b:db
        hwaddr 02:23:dc:b7:64:0b
        inet 192.168.1.9 netmask 0xffffff00 broadcast 192.168.1.255
        groups: epair
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
ng0: flags=88d1<UP,POINTOPOINT,RUNNING,NOARP,SIMPLEX,MULTICAST> metric 0 mtu 1380
        inet 192.168.1.129 --> 192.168.0.130 netmask 0xffffffff
        inet6 fe80::2a0:98ff:fe28:7ca7%ng0 prefixlen 64 scopeid 0x4
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
 
I always put the loopback rule before the NAT rules. Also, for mpd5 I need to add a rule allowing everything through the netgraph interfaces:

ipfw -q add 10 allow ip from any to any via ng*

In addition, check whether sysctl net.inet.ip.fw.one_pass=0 makes a difference.
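Something like this; the sysctl can also be flipped with ipfw disable one_pass:
Code:
# check the current value
sysctl net.inet.ip.fw.one_pass
# 0 = packets continue through the ruleset after a nat/pipe rule,
# 1 = they are accepted immediately after it
sysctl net.inet.ip.fw.one_pass=0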
 
I tried this ipfw conf, without any effect:
Code:
ipfw disable one_pass
ipfw -q nat 1 config if eth1 reset
#loopback
ipfw -q add 00001 allow ip from any to any via lo0
ipfw -q add 00002 allow ip from any to any via ng*
#NAT for incoming traffic
ipfw -q add 00009 nat 1 ip4 from any to any in recv eth1
#ipfw -q add 00010 check-state
#NAT for outgoing traffic
ipfw -q add 00800 nat 1 ip4 from any to any out xmit eth1
#open all
ipfw add 00900 allow ip from any to any
Packets do pass to the client:
Code:
# tcpdump host 192.168.1.4 -i ng0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ng0, link-type NULL (BSD loopback), capture size 262144 bytes
15:18:06.866755 IP 192.168.0.130.61220 > 192.168.1.4.http: Flags [R.], seq 2889217804, ack 448433478, win 0, length 0
15:18:08.423219 IP 192.168.0.130.61235 > 192.168.1.4.http: Flags [S], seq 2182640762, win 65280, options [mss 1340,nop,wscale 8,nop,nop,sackOK], length 0
15:18:08.423342 IP 192.168.1.4.http > 192.168.0.130.61235: Flags [S.], seq 2564401607, ack 2182640763, win 65535, options [mss 1460,nop,wscale 6,sackOK,eol], length 0
15:18:08.423749 IP 192.168.0.130.61235 > 192.168.1.4.http: Flags [.], ack 1, win 8207, length 0
15:18:08.423838 IP 192.168.0.130.61235 > 192.168.1.4.http: Flags [P.], seq 1:76, ack 1, win 8207, length 75: HTTP: GET / HTTP/1.1
15:18:08.424119 IP 192.168.1.4.http > 192.168.0.130.61235: Flags [P.], seq 1:390, ack 76, win 1026, length 389: HTTP: HTTP/1.1 302 Found
15:18:08.738883 IP 192.168.0.130.61235 > 192.168.1.4.http: Flags [P.], seq 1:76, ack 1, win 8207, length 75: HTTP: GET / HTTP/1.1
15:18:08.738981 IP 192.168.1.4.http > 192.168.0.130.61235: Flags [.], ack 76, win 1026, options [nop,nop,sack 1 {1:76}], length 0
15:18:09.424683 IP 192.168.1.4.http > 192.168.0.130.61235: Flags [P.], seq 1:390, ack 76, win 1026, length 389: HTTP: HTTP/1.1 302 Found
15:18:11.624174 IP 192.168.1.4.http > 192.168.0.130.61235: Flags [P.], seq 1:390, ack 76, win 1026, length 389: HTTP: HTTP/1.1 302 Found
15:18:13.436184 IP 192.168.1.4.http > 192.168.0.130.61235: Flags [F.], seq 390, ack 76, win 1026, length 0
15:18:13.436940 IP 192.168.0.130.61235 > 192.168.1.4.http: Flags [.], ack 1, win 8207, length 0
15:18:15.828503 IP 192.168.1.4.http > 192.168.0.130.61235: Flags [FP.], seq 1:390, ack 76, win 1026, length 389: HTTP: HTTP/1.1 302 Found
15:18:24.028455 IP 192.168.1.4.http > 192.168.0.130.61235: Flags [FP.], seq 1:390, ack 76, win 1026, length 389: HTTP: HTTP/1.1 302 Found
 
It looks like an MTU/MSS issue.
My mpd configuration has this set of related options:
Code:
set iface enable tcpmssfix
set ipcp no vjcomp
set link mtu 1460

Also, try to add 'allow' rules for local traffic in both directions between 192.168.0 and 192.168.1 before the nat.
Usually you should not NAT locally routed traffic.

Please check the output of traceroute. In some cases local traffic leaves the machine via the default gateway.

Use the ipfw log feature for NAT debugging; it has to be enabled via sysctl.
Just add similar rules before and after the NAT and use tail -f /var/log/security to debug.
ipfw add xxx count log logamount 0 tcp from 192.168.0.0/24 to 192.168.1.0/24
or
ipfw add xxx count log logamount 0 tcp from any to any
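To switch the logging itself on, something like:
Code:
# write ipfw log records via syslog (they end up in /var/log/security)
sysctl net.inet.ip.fw.verbose=1
# 0 = no per-rule limit; logamount 0 in the rule has the same effect
sysctl net.inet.ip.fw.verbose_limit=0
tail -f /var/log/security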


Finally, try to rewrite your NAT rules to reduce the affected traffic.
Example:
incoming: ipfw add nat 1 ip4 from any to NAT_EXT_IP in recv eth1
outgoing: ipfw add nat 1 ip4 from LOCALNET to not LOCALNETS out xmit eth1
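With the addresses from this thread substituted in (just an illustrative sketch, adjust to your real setup), that would be something like:
Code:
# incoming: only translate traffic addressed to the NAT IP itself
ipfw add 00020 nat 1 ip4 from any to 192.168.1.9 in recv eth1
# outgoing: skip NAT for traffic that stays inside the local nets
ipfw add 00800 nat 1 ip4 from 192.168.0.0/24 to not 192.168.0.0/24,192.168.1.0/24 out xmit eth1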
 
Well, I tried all the latest recommendations, but without any effect.
The latest version of the ipfw conf:
Code:
ipfw -f flush
ipfw -f pipe flush
ipfw -f queue flush
ipfw disable one_pass
ipfw -q nat 1 config if eth1 reset

ipfw -q add 00001 allow ip from any to any via lo0
ipfw -q add 00002 allow ip from any to any via ng*
ipfw -q add 00003 allow gre from any to any
ipfw -q add 00004 allow ip from 192.168.1.0/24 to 192.168.0.0/24
ipfw -q add 00005 allow ip from 192.168.0.0/24 to 192.168.1.9

ipfw -q add 00006 count log logamount 0 tcp from 192.168.0.0/24 to 192.168.1.0/24
ipfw -q add 00007 count log logamount 0 tcp from 192.168.1.9 to 192.168.1.4
ipfw -q add 00008 count log logamount 0 tcp from 192.168.1.4 to 192.168.1.9
ipfw -q add 00009 count log logamount 0 tcp from 192.168.1.9 to 192.168.0.130
ipfw -q add 00010 count log logamount 0 tcp from 192.168.1.4 to 192.168.0.130

ipfw -q add 00020 nat 1 ip4 from any to 192.168.1.9 in recv eth1
ipfw -q add 00800 nat 1 ip4 from 192.168.0.130 to 192.168.1.4 out xmit eth1

ipfw -q add 00801 count log logamount 0 tcp from 192.168.0.0/24 to 192.168.1.0/24
ipfw -q add 00802 count log logamount 0 tcp from 192.168.1.9 to 192.168.1.4
ipfw -q add 00803 count log logamount 0 tcp from 192.168.1.4 to 192.168.1.9
ipfw -q add 00804 count log logamount 0 tcp from 192.168.1.9 to 192.168.0.130
ipfw -q add 00805 count log logamount 0 tcp from 192.168.1.4 to 192.168.0.130

ipfw -q add 00899 allow ip from any to any
Result of logging TCP packets while executing curl 192.168.1.4 on the client machine:
Code:
Jul  8 11:02:10 jHost1 kernel: ipfw: 6 Count TCP 192.168.0.130:51187 192.168.1.4:80 out via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 802 Count TCP 192.168.1.9:55064 192.168.1.4:80 out via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 8 Count TCP 192.168.1.4:80 192.168.1.9:55064 in via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 805 Count TCP 192.168.1.4:80 192.168.0.130:51187 in via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 6 Count TCP 192.168.0.130:51187 192.168.1.4:80 out via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 802 Count TCP 192.168.1.9:55064 192.168.1.4:80 out via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 6 Count TCP 192.168.0.130:51187 192.168.1.4:80 out via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 802 Count TCP 192.168.1.9:55064 192.168.1.4:80 out via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 8 Count TCP 192.168.1.4:80 192.168.1.9:55064 in via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 805 Count TCP 192.168.1.4:80 192.168.0.130:51187 in via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 6 Count TCP 192.168.0.130:51187 192.168.1.4:80 out via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 802 Count TCP 192.168.1.9:55064 192.168.1.4:80 out via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 8 Count TCP 192.168.1.4:80 192.168.1.9:55064 in via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 805 Count TCP 192.168.1.4:80 192.168.0.130:51187 in via eth1
Jul  8 11:02:11 jHost1 kernel: ipfw: 8 Count TCP 192.168.1.4:80 192.168.1.9:55064 in via eth1
Jul  8 11:02:11 jHost1 kernel: ipfw: 805 Count TCP 192.168.1.4:80 192.168.0.130:51187 in via eth1
Jul  8 11:02:13 jHost1 kernel: ipfw: 8 Count TCP 192.168.1.4:80 192.168.1.9:55064 in via eth1
Jul  8 11:02:13 jHost1 kernel: ipfw: 805 Count TCP 192.168.1.4:80 192.168.0.130:51187 in via eth1
Jul  8 11:02:15 jHost1 kernel: ipfw: 6 Count TCP 192.168.0.130:51187 192.168.1.4:80 out via eth1
Jul  8 11:02:15 jHost1 kernel: ipfw: 802 Count TCP 192.168.1.9:55064 192.168.1.4:80 out via eth1
Wireshark on the client machine doesn't detect the PPTP connection (it sees only the wired Local Connection) for some unknown reason (installed and running under an admin account), but I'll try to use *nix on the client side instead of Windows - maybe that will reveal more information about what TCP is doing.
 
To give the client access to the internal network I run "route add 192.168.0.0/24 192.168.1.9" on the host
Did you run the command on the FreeBSD host?


Jul 8 11:02:10 jHost1 kernel: ipfw: 6 Count TCP 192.168.0.130:51187 192.168.1.4:80 out via eth1
Jul 8 11:02:10 jHost1 kernel: ipfw: 802 Count TCP 192.168.1.9:55064 192.168.1.4:80 out via eth1
Jul 8 11:02:10 jHost1 kernel: ipfw: 8 Count TCP 192.168.1.4:80 192.168.1.9:55064 in via eth1
Jul 8 11:02:10 jHost1 kernel: ipfw: 805 Count TCP 192.168.1.4:80 192.168.0.130:51187 in via eth1
In my opinion, the NAT seems to work fine:
rule 6 - original packet, outbound
rule 802 - packet after the NAT, outbound
rule 8 - reply from the destination host to the NAT IP
rule 805 - the reply as a final packet, after the reverse NAT translation.

I have some questions about your client machine 192.168.0.130.
So you have a PC with a Windows OS and PPTP.
As far as I know, PPTP can't provide routing information.
Do you use the PPTP connection on the client PC as the default gateway?
Show me the output of the Windows tracert -d command for these IPs: 8.8.8.8, 192.168.1.9, 192.168.1.4
 
Did you run the command on the FreeBSD host?
Yes, on the mother host which contains the jails.
Do you use the PPTP connection on the client PC as the default gateway?
Show me the output of the Windows tracert -d command for these IPs: 8.8.8.8, 192.168.1.9, 192.168.1.4
Yes, in the PPTP connection properties the option "Use default gateway on remote network" is enabled.
Routing table before the PPTP connection:
[screenshot: 1.jpg]

and after:
[screenshot: 2.jpg]

tracert -d 8.8.8.8:
Code:
1 <1 ms <1 ms <1 ms 192.168.1.129
2 1 ms <1 ms <1 ms 192.168.1.1
3 1 ms * *                some_secret_IP

4 4 ms 1 ms 1 ms
5 10 ms 9 ms 10 ms
6 * * *
7 15 ms 10 ms 10 ms
8 11 ms 11 ms 11 ms 108.170.248.146
9 39 ms 39 ms 39 ms 216.239.46.121
10 39 ms 39 ms 39 ms 216.239.35.133
11 45 ms 49 ms 53 ms 142.250.37.209
12 41 ms 41 ms 41 ms 142.250.238.3
 13 39 ms 41 ms 39 ms 8.8.8.8
tracert -d 192.168.1.9:
Code:
1 1 ms 1 ms <1 ms 192.168.1.9
tracert -d 192.168.1.4:
Code:
1 1 ms <1 ms <1 ms 192.168.1.129
2 1 ms 1 ms <1 ms 192.168.1.4
 
That's the desired result (to clarify the situation):
Code:
pptp client 192.168.0.130 <-> vpn-server 192.168.1.9 <-> jail host 192.168.1.1+real_external_ip <-> Internet
At first I set up double NAT (ipfw): traffic from the client was NATed on the VPN server and passed to jHost as if it came from the VPN server itself, where it was NATed again on the external interface out to the web. This is exactly the scheme by which the VPN server itself obtains direct access to the Internet, excluding the NAT on its own interface, of course. And when this scheme turned out to be inoperative for TCP traffic from the VPN client, in search of a solution I decided to route traffic from the VPN client to a local web resource without NAT by adding the corresponding routes. I can do that with traffic to local resources but not with traffic to the Internet, and by using these two ways of delivering traffic to the web server I found that the problem is exactly in the NAT after the VPN tunnel. UDP+PPTP+ipfwNAT = OK, TCP+PPTP = OK, TCP+ipfwNAT = OK, but TCP+PPTP+ipfwNAT = problem.
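For reference, the host-side half of that double NAT is essentially the same ruleset again; a rough sketch, where em0 stands in for the real WAN interface of jHost (name assumed here):
Code:
# on jailHost: second translation, jail/VPN traffic -> WAN
ipfw -q nat 2 config if em0 reset
ipfw -q add 00100 nat 2 ip4 from any to any in recv em0
ipfw -q add 00200 nat 2 ip4 from 192.168.1.0/24 to any out xmit em0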
 
I have read the whole topic and I have found some weak spots.

1) Is there a reason for using 192.168.0 instead of 192.168.1 for the PPTP client?
In my opinion, the client PC can simply use the 192.168.1 network without additional NAT.
Your mpd5 config already includes the proxy-arp option, so the client will behave as if it were inside your local network.

2) Your previous posts and configs still use NAT between the local networks 192.168.0 and 192.168.1.
Try to avoid NAT for local IPs; use routing instead.
ipfw -q add 00005 allow ip from 192.168.0.0/24 to 192.168.1.9
should be more like
Code:
ip from 192.168.0.0/24 to 192.168.1.0/24

ipfw -q add 00800 nat 1 ip4 from 192.168.0.130 to 192.168.1.4 out xmit eth1
should be more like
Code:
nat 1 ip4 from 192.168.0.130 to not 192.168.0.0/24,192.168.1.0/24 out xmit eth1
This is important for keeping NAT away from local traffic.

set ipcp ranges 192.168.1.129/25
Is there a wrong netmask in the mpd5 config? (A /25 at 192.168.1.129 spans 192.168.1.128-192.168.1.255, while the LAN itself is a /24.) I predict some network issues with different masks within one network.

And finally:
what about the routing table on 192.168.1.4?
Show me the traceroute from 192.168.1.4 to 192.168.0.130.

Thanks!
 
I started to compose an answer to the last questions, changed the client address to 192.168.1.130 (instead of 192.168.0.130) to simplify the situation, and suddenly realised that I cannot ping the PPTP client either from the host machine or from the VPN server. From the PPTP client ICMP packets pass to any internal IP (192.168.1.0/24) and back without any problems, but from the host or the VPN server to the client IP ICMP packets are lost - and not all of them: approximately 1 in 100 packets still gets through.
I found that the problem is in the host firewall, no matter what rules are in it:
Code:
ipfw -f flush
ipfw -f pipe flush
ipfw -f queue flush

ipfw add 00001 allow ip from any to any
If the firewall with the above rules is enabled, ping is OK only from the client side; if the host firewall is disabled, everything starts working (inside the local network, of course): ICMP packets pass from both sides, and even with the firewall with NAT rules enabled in the VPN jail I start to get an answer to curl 192.168.1.4 from the client machine.
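The toggling itself is nothing more than:
Code:
# on jailHost: turn packet filtering off/on without touching the rules
ipfw disable firewall
ping -c 3 192.168.1.130   # replies arrive
ipfw enable firewall
ping -c 3 192.168.1.130   # replies are lost again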

What could be the problem? Here is the list of loaded modules on jailHost (with the GENERIC kernel):
Code:
# kldstat
Id Refs Address                Size Name
1   62 0xffffffff80200000  227ae70 kernel
2    1 0xffffffff8247b000     8410 ng_l2tp.ko
3   11 0xffffffff82484000    17b60 netgraph.ko
4    1 0xffffffff8249c000   3bad38 zfs.ko
5    2 0xffffffff82857000     a448 opensolaris.ko
6    1 0xffffffff82863000     62d8 ng_ksocket.ko
7    1 0xffffffff8286a000     2128 ng_tcpmss.ko
8    1 0xffffffff8286d000     9df0 ng_ppp.ko
9    1 0xffffffff82877000     6ce0 ng_socket.ko
10    1 0xffffffff8287e000     22b0 ng_tee.ko
11    1 0xffffffff82881000     4c00 ng_iface.ko
12    1 0xffffffff82886000     4c28 ng_ether.ko
13    1 0xffffffff8288b000     6498 ng_mppc.ko
14    2 0xffffffff82892000      be0 rc4.ko
15    1 0xffffffff82b23000     1860 uhid.ko
16    1 0xffffffff82b25000     2908 ums.ko
17    2 0xffffffff82b28000    25248 ipfw.ko
18    1 0xffffffff82b4e000     2430 ipfw_nat.ko
19    1 0xffffffff82b51000     a652 libalias.ko
20    1 0xffffffff82b5c000      acf mac_ntpd.ko
21    1 0xffffffff82b5d000     2940 nullfs.ko
22    1 0xffffffff82b60000     1a20 fdescfs.ko
23    1 0xffffffff82b62000     88c0 tmpfs.ko
24    1 0xffffffff82b6b000     191c if_epair.ko
25    1 0xffffffff82b6d000     7000 if_bridge.ko
26    1 0xffffffff82b74000     4038 bridgestp.ko
27    1 0xffffffff82b79000     2df0 ng_pptpgre.ko
 
I cannot ping the PPTP client
Have you enabled ECHO reply on your Windows? As far as I know, no modern Windows replies to ping by default.

if the host firewall is disabled, everything starts working
Was the firewall "disabled" using a rule like "ipfw add 00001 allow ip from any to any"?

Your goal is NAT for the PPTP client with access to some local resources.
So try to add only simple NAT rules and look at the result.
Code:
###remove temp rule 1 allow all ###
nat 1 ip4 from 192.168.0.130 to not 192.168.0.0/24,192.168.1.0/24
nat 1 ip4 from any to EXT_IP
allow [log logamount 0] all from any to any

Use ipfw -at list to watch which rules are matching.
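Spelled out as full commands (EXT_IP replaced by the jail's eth1 address 192.168.1.9; still only a sketch):
Code:
ipfw -q nat 1 config if eth1 reset
# leave local-to-local traffic alone, translate the rest of the client's traffic
ipfw -q add 00100 nat 1 ip4 from 192.168.0.130 to not 192.168.0.0/24,192.168.1.0/24 out xmit eth1
ipfw -q add 00200 nat 1 ip4 from any to 192.168.1.9 in recv eth1
ipfw -q add 00900 allow log logamount 0 ip from any to any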
 
allow protocol 47 for gre
In my test firewall (I can switch to it for a while) on jailHost there is only one single rule: allow ip from any to any. Without any nat, keep-state and so on. Doesn't this rule include GRE packets? On the VPN server (which is a vnet jail) the firewall type is "open". Anyway, adding the rule ipfw add 00002 allow gre from any to any did not give any result.
Have you enabled ECHO reply on your Windows? As far as I know, no modern Windows replies to ping by default.
The Windows machine starts replying to ping as soon as I disable ipfw on the host machine - therefore it's unlikely that the problem is on the client side.
 
The latest version of mpd.conf:
Code:
default:
#       load l2tp_server
        load pptp_server
l2tp_server:
....
pptp_server:

# Define dynamic IP address pool.

# Create clonable bundle template named B_pptp
        create bundle template B_pptp
        set iface enable proxy-arp
        set bundle disable compression
        set iface enable tcpmssfix
        set ipcp no vjcomp
# Specify IP address pool for dynamic assignment.

# Create clonable link template named P_pptp
        create link template P_pptp pptp
# Set bundle template to use
        set ccp yes mppc
        set mppc yes e40
        set mppc yes e128
        set mppc yes stateless
        set link action bundle B_pptp
        set link keep-alive 10 60
# Multilink adds some overhead, but gives full 1500 MTU.
        set link yes acfcomp protocomp
        set link enable multilink
        set link no pap chap eap
        set link enable chap-msv2
# We can use RADIUS authentication/accounting by including
# another config section with label 'radius'.
        load radius
# We reduce the link mtu to avoid GRE packet fragmentation.
        set link mtu 1460
# Configure PPTP
        set pptp self 10.41.2.26
# Allow to accept calls
        set link enable incoming

###########################################################
#set radius config /usr/home/nas/conf/radius.conf
#radius stuff here
ifconfig on the VPN server with a connected PPTP client:
Code:
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
eth0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8<VLAN_MTU>
        ether 00:a0:98:28:7c:a7
        hwaddr 02:0d:57:64:a9:0b
        inet 10.41.2.26 netmask 0xffffff00 broadcast 10.41.2.255
        groups: epair
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
eth1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8<VLAN_MTU>
        ether 00:a0:98:8b:7b:db
        hwaddr 02:5f:cb:1c:fc:0b
        inet 192.168.1.9 netmask 0xffffff00 broadcast 192.168.1.255
        groups: epair
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
ng0: flags=88d1<UP,POINTOPOINT,RUNNING,NOARP,SIMPLEX,MULTICAST> metric 0 mtu 1400
        inet 10.41.2.26 --> 192.168.1.130 netmask 0xffffffff
        inet6 fe80::2a0:98ff:fe28:7ca7%ng0 prefixlen 64 scopeid 0x4
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
traceroute -d 192.168.1.130 on the jailHost machine with ipfw disabled:
Code:
traceroute to 192.168.1.130 (192.168.1.130), 64 hops max, 40 byte packets
 1  192.168.1.9 (192.168.1.9)  0.092 ms  0.096 ms  0.049 ms
 2  192.168.1.130 (192.168.1.130)  1.275 ms *  1.013 ms
 
Was the firewall "disabled" using a rule like "ipfw add 00001 allow ip from any to any"?
Nope. Even with such a firewall on jHost the PPTP client doesn't reply to ping from jHost or the VPN server. Only after executing ipfw disable firewall on jHost do I start to receive reply packets from ping 192.168.1.130.
 
Code:
Jul  8 11:02:10 jHost1 kernel: ipfw: 6 Count TCP 192.168.0.130:51187 192.168.1.4:80 out via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 802 Count TCP 192.168.1.9:55064 192.168.1.4:80 out via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 8 Count TCP 192.168.1.4:80 192.168.1.9:55064 in via eth1
Jul  8 11:02:10 jHost1 kernel: ipfw: 805 Count TCP 192.168.1.4:80 192.168.0.130:51187 in via eth1
Sorry for interrupting - how did you get ipfw logging to work? According to the mailing lists it is not implemented, and I fought with it and could not make it work, neither on release 11 nor 12. That is a really major pain - I would be grateful for that fix!

Concerning the topic: I have a rather similar configuration up and running, with a few major differences: 1) I am using plain OpenVPN for the VPN; 2) I don't have a NAT for these addresses - the NAT would need to translate private addresses to other private addresses, and that is not what it is primarily designed for, so there are a few caveats to get that right; and 3) I don't use a mixture of epair and netgraph, but do everything with netgraph.

The bad news is: I don't have an instant solution to make such things work, either. When something doesn't work, I fix it step by step: understand the path a packet should take, tcpdump at each step, add ipfw logging before and after each relevant ruleset, and then with that data see where the packets actually go, or go astray.
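In practice that means bracketing each NAT rule with counters and watching the wire at the same time, e.g.:
Code:
# counters before and after the NAT rule (numbers are placeholders)
ipfw add 798 count log logamount 0 ip from any to any via eth1
ipfw add 800 nat 1 ip4 from any to any out xmit eth1
ipfw add 802 count log logamount 0 ip from any to any via eth1
# and on each interface along the path:
tcpdump -ni ng0 tcp and host 192.168.1.4
tcpdump -ni eth1 tcp and host 192.168.1.4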
 
Sorry for interrupting - how did you get ipfw logging to work? According to the mailing lists it is not implemented, and I fought with it and could not make it work, neither on release 11 nor 12. That is a really major pain - I would be grateful for that fix!
As I understand it, ipfw logging in vnet jails is implemented, but the corresponding log messages are written to /var/log/security on the mother host instead of in the jail itself. It is not a problem for me because I own both the jails and the host; it just took a few minutes to find those messages.
 
As I understand it, ipfw logging in vnet jails is implemented, but the corresponding log messages are written to /var/log/security on the mother host instead of in the jail itself. It is not a problem for me because I own both the jails and the host; it just took a few minutes to find those messages.
Yes, thank you - that is exactly my observation, too. So you collect them on the host, via the host's syslogd - sure, that does work (but it would be too much stuff accumulating in my case). I looked at that "jHost1" tag and thought you had found some means to log natively within the vnet jail (except for the usually recommended way of using firewall_logif with ipfw0, which sadly does not log the rule number).
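For reference, that ipfw0 way is just the following (with the caveat above that the rule number is not shown):
Code:
# rc.conf: firewall_logif="YES" creates the ipfw0 logging pseudo-interface
# with net.inet.ip.fw.verbose=0, logged packets are copied to ipfw0
sysctl net.inet.ip.fw.verbose=0
tcpdump -ni ipfw0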
 
I just tried to use OpenVPN instead of mpd5, and my whole scheme - vpn client <-> vpn server+NAT <-> jailHost+NAT <-> WAN - started working right away. All my ipfw rules in the VPN jail and on the jail host work as expected; there are no errors in ipfw, in the jails, or in the overall logic.
I would prefer to use a PPTP connection because it is handled by native Windows tools without key/certificate generation and so on, but TCP traffic from a PPTP (and L2TP) client becomes non-working after ipfw NAT (I suspect because of the GRE packets), so it seems I have no chance with mpd5.
 
I had some time to play with mpd5 and NAT.
I tried to check how ipfw_nat works with an mpd5 PPTP client.

First of all, I built a network map similar to yours.

Scheme:
[diagram: 20210718-freebsd-mpd5-and-nat.png]

Server1 MPD5+NAT: FreeBSD 11.1-RELEASE-p4; mpd5-5.8_2
NAT configuration:
Code:
ipfw nat 1 config ip 10.99.9.1
nat 1 ip from 10.99.10.99 to 192.168.0.0/24
nat 1 ip from 192.168.0.0/24 to 10.99.9.1
I forced the IP 10.99.10.99 for the PPTP client.
The server doesn't have any interface on a network like 10.99.10.0/24.
So I have received in mpd.log while connecting:
Code:
   10.99.9.1 -> 10.99.10.99
IFACE: No interface to proxy arp on for 10.99.10.99
IFACE: Up event

Server2 has TCP port 22 open on 192.168.0.1.

ClientPC:
I used the telnet command to check a TCP connection to port 22 (ssh) of Server2.
When I entered the command telnet 192.168.0.1 22 I received the remote SSH banner inside the telnet screen,
and I pressed some random keys in the active telnet session.
At the same time, on Server2 I saw a record in auth.log: "sshd[26946]: Bad protocol version identification 'uygygyyggy' from 10.99.9.1".

So I am sure that a NATed TCP connection originating from a PPTP client works fine for me.
You could try to simplify your network configuration to have a better chance of finding the issue.

P.S.
A default installation of Windows 7 doesn't reply to any ICMP ping query until I enabled the specific Windows firewall rule:
"File and Printer Sharing (Echo Request - ICMPv4-In)"
 
I found the source of my problem.
It is in jailHost's ipfw. When the PPTP client (192.168.0.130) is connected to the VPN server (192.168.1.9) and the host's ipfw is enabled:
Code:
ipfw -f flush
ipfw -f pipe flush
ipfw -f queue flush

ipfw add 00003 allow gre from any to any
ipfw add 00010 allow ip from any to any
ipfw add 00011 allow udp from any to any
ipfw add 00012 allow tcp from any to any
I get no answer to ping 192.168.0.130 from the VPN server. If I disable ipfw on the host system (with ipfw disable firewall), ping 192.168.1.9 -> 192.168.0.130 is OK, and curl 192.168.1.4 from the PPTP client is OK.
That is to say, the problem is not even in the ipfw rules on jailHost (allow all from any to any) but in the mere enabling of the ipfw firewall on the host system.
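One thing I still plan to check, purely an assumption on my side since kldstat shows if_bridge and if_epair on the host: whether enabling ipfw makes the host filter the bridged layer-2 traffic of the jails. The knobs for that would be:
Code:
# does traffic of bridge member interfaces pass through the firewall? (default 1)
sysctl net.link.bridge.pfil_member
# is traffic filtered on the bridge interface itself? (default 0)
sysctl net.link.bridge.pfil_bridge
# e.g. to keep ipfw away from intra-bridge traffic:
# sysctl net.link.bridge.pfil_member=0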
 