ISP switch from Cable/DHCP to Fiber/PPPoE

obsigna

I switched ISPs from 10 MBit/s cable to 200 MBit/s fiber. The modems of both ISPs can be put into bridge mode.

I was somewhat disappointed at having to deal with the PPP dinosaur once again; I thought that was a DSL leftover from 20 years ago. And the FreeBSD handbook has us use an almost 30-year-old PPP implementation, which pushes the CPU load up to 10 %, depending on the WAN traffic - horrific. Fortunately we have net/mpd5. With that I have it working reasonably well now, and according to top(1) no significant CPU load is imposed by WAN network traffic.

My recommendation to everybody in a similar situation: forget ppp(8) and ignore the PPPoE chapter of the handbook.

Here is my /usr/local/etc/mpd5/mpd.conf:
Code:
startup:
# configure mpd users
    set user <ADMIN_USER_NAME> <ADMIN_PASSWORD> admin
# configure the console
    set console self 127.0.0.1 5005
    set console open
# configure the web server
    set web self 192.168.0.1 5006
    set web open

default:
    load pppoe_client

pppoe_client:
#
# PPPoE client: only outgoing calls, auto reconnect,
# ipcp-negotiated address, one-sided authentication,
# default route points to the ISP's end
#
    create bundle static B_pppoe
    set iface name ppp0
    set iface description WAN
    set iface up-script "/root/config/ppp-up.sh"
    set iface enable tcpmssfix
    set iface route default
    set ipcp ranges 0.0.0.0/0 0.0.0.0/0
 
    create link static L_pppoe pppoe
    set link action bundle B_pppoe
    set auth authname <PPPoE_USERNAME>
    set auth password <PPPoE_PASSWORD>
    set link max-redial 0
    set link keep-alive 10 60
    set pppoe iface re0
    set pppoe service ""
    open
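To start mpd5 at boot, the rc script installed by the net/mpd5 port can be enabled in /etc/rc.conf (the knob should be mpd5_enable; a minimal sketch):
Code:
# /etc/rc.conf -- let the port's rc script start the daemon at boot
mpd5_enable="YES"

# start it once by hand without rebooting
# service mpd5 start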

Here is the iface-up script:
Code:
#!/bin/sh

# $3 is the negotiated local address handed to the up-script by mpd5;
# strip a trailing /prefix, if any, to get the bare IP
NEWIP=`echo $3 | cut -d/ -f1`
# push the new WAN address to the dynamic-DNS updater
/root/config/dyndns-update.sh $NEWIP
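The dyndns-update.sh helper is not shown here; a minimal sketch of such a script, assuming an HTTP-based update API (the URL, hostname and token are placeholders for whatever the DNS hoster expects):
Code:
#!/bin/sh
# hypothetical dynamic-DNS updater: push the new WAN address to the provider
NEWIP="$1"
[ -n "$NEWIP" ] || exit 1
fetch -q -o /dev/null "https://dyndns.example.com/update?hostname=myhost.example.com&myip=${NEWIP}&token=<API_TOKEN>"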

Throughout the adjacent LAN it is good to have the MTU set to 1492 as well.
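For example, on the router's LAN interface this can be done in /etc/rc.conf (the interface name em1 is only a placeholder for the actual LAN NIC):
Code:
# /etc/rc.conf -- LAN side, MTU matched to the 1492-byte PPPoE link
ifconfig_em1="inet 192.168.0.1 netmask 255.255.255.0 mtu 1492"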
 
mpd5 is way better/faster/more stable than ppp! It's a shame it's not even mentioned in the handbook as an alternative to ppp.
 
Thank you for making this post, obsigna.
I was actually trying to get a FreeBSD box to communicate with my ISP's PPPoE a few weeks ago but had no luck using ppp(8). I will certainly give net/mpd5 a try.

A question: Do you use this configuration in a scenario where your ISP provides a PPPoE link with a VLAN tag? I think the problems I encountered were mainly related to the VLAN tagging.
 
A question: Do you use this configuration in a scenario where your ISP provides a PPPoE link with a VLAN tag? I think the problems I encountered were mainly related to the VLAN tagging.
I don't know. I put the fiber modem box directly into bridge mode, and at first I tried PPPoE according to the handbook, which worked as well. The FreeBSD box is the router/firewall between our home LAN and the internet, and I was going to leave it like that, but then my son started gaming Apex Legends on his PC, and suddenly the ppp(8) daemon on the home server imposed a considerable load on the CPU. This led me to switch to mpd5. I cannot remember any info/setting mentioning a VLAN.
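For reference, if an ISP does require a VLAN tag on the PPPoE session, a common approach (untested here) is to create a tagged child interface and point mpd5 at it instead of the raw NIC; the VLAN ID 600 is only a placeholder:
Code:
# /etc/rc.conf -- create re0.600 as a tagged child of re0 at boot
ifconfig_re0="up"
vlans_re0="600"

# /usr/local/etc/mpd5/mpd.conf -- use the tagged interface for PPPoE
    set pppoe iface re0.600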

Here is the log of the last session start; perhaps this helps to identify the problem on your side:
Code:
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] Link: OPEN event
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: Open event
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: state change Initial --> Starting
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: LayerStart
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] PPPoE: Connecting to ''
Jun 16 17:59:22 server mpd[2964]: PPPoE: rec'd ACNAME "BR_SBOPL_CG4"
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] PPPoE: connection successful
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] Link: UP event
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: Up event
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: state change Starting --> Req-Sent
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: SendConfigReq #1
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   PROTOCOMP
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   MRU 1492
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   MAGICNUM 0x65873654
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: rec'd Configure Request #2 (Req-Sent)
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   MRU 1492
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   AUTHPROTO PAP
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   MAGICNUM 0xa235d90b
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: SendConfigAck #2
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   MRU 1492
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   AUTHPROTO PAP
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   MAGICNUM 0xa235d90b
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: state change Req-Sent --> Ack-Sent
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: rec'd Configure Reject #1 (Ack-Sent)
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   PROTOCOMP
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: SendConfigReq #2
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   MRU 1492
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   MAGICNUM 0x65873654
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: rec'd Configure Ack #2 (Ack-Sent)
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   MRU 1492
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   MAGICNUM 0x65873654
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: state change Ack-Sent --> Opened
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: auth: peer wants PAP, I want nothing
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] PAP: using authname "<USERNAME>"
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] PAP: sending REQUEST #1 len: 28
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: LayerUp
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] PAP: rec'd ACK #1 len: 36
Jun 16 17:59:22 server mpd[2964]: [L_pppoe]   MESG: Authentication success,Welcome!
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] LCP: authorization successful
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] Link: Matched action 'bundle "B_pppoe" ""'
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] Link: Join bundle "B_pppoe"
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] Bundle: Status update: up 1 link, total bandwidth 64000 bps
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: Open event
Jun 16 17:59:22 server charon[1155]: 07[KNL] interface ppp0 appeared
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: state change Initial --> Starting
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: LayerStart
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: Up event
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: state change Starting --> Req-Sent
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: SendConfigReq #1
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]   IPADDR 0.0.0.0
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]   COMPPROTO VJCOMP, 16 comp. channels, no comp-cid
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: rec'd Configure Request #1 (Req-Sent)
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]   IPADDR 18x.xxx.xxx.xxx
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]     18x.xxx.xxx.xxx is OK
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: SendConfigAck #1
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]   IPADDR 18x.xxx.xxx.xxx
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: state change Req-Sent --> Ack-Sent
Jun 16 17:59:22 server mpd[2964]: [L_pppoe] rec'd unexpected protocol IPV6CP, rejecting
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: rec'd Configure Reject #1 (Ack-Sent)
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]   COMPPROTO VJCOMP, 16 comp. channels, no comp-cid
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: SendConfigReq #2
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]   IPADDR 0.0.0.0
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: rec'd Configure Nak #2 (Ack-Sent)
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]   IPADDR 19x.xxx.xxx.xxx
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]     19x.xxx.xxx.xxx is OK
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: SendConfigReq #3
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]   IPADDR 19x.xxx.xxx.xxx
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: rec'd Configure Ack #3 (Ack-Sent)
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]   IPADDR 19x.xxx.xxx.xxx
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: state change Ack-Sent --> Opened
Jun 16 17:59:22 server mpd[2964]: [B_pppoe] IPCP: LayerUp
Jun 16 17:59:22 server mpd[2964]: [B_pppoe]   19x.xxx.xxx.xxx-> 18x.xxx.xxx.xxx
Jun 16 17:59:22 server charon[1155]: 07[KNL] 19x.xxx.xxx.xxx appeared on ppp0
Jun 16 17:59:28 server mpd[2964]: [B_pppoe] IFACE: Up event
Jun 16 17:59:28 server mpd[2964]: [B_pppoe] IFACE: Rename interface ng0 to ppp0
Jun 16 17:59:28 server mpd[2964]: [B_pppoe] IFACE: Add description "WAN"
 
Is this configuration needed for a SERVICE, and not just for one's own use?
This is a client setup. Actually, it doesn't matter. Here I use it on a FreeBSD box which is the router/firewall between our home LAN and the internet. For NAT and the firewall, I utilize ipfw(8). Anyway, the PPPoE part would work exactly the same on a single computer. Perhaps, for single-machine usage you don't need the iface up-script, which I utilize to update the IPs of my domains at the domain hoster.
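A minimal sketch of the NAT part with ipfw's in-kernel NAT on the ppp0 interface (only illustrative; the ruleset path and the rules are placeholders, not the actual configuration used here):
Code:
# /etc/rc.conf
gateway_enable="YES"                  # forward packets between LAN and WAN
firewall_enable="YES"
firewall_nat_enable="YES"             # load ipfw_nat for in-kernel NAT
firewall_script="/etc/ipfw.rules"     # custom ruleset (placeholder path)

# /etc/ipfw.rules
#!/bin/sh
ipfw -q flush
ipfw -q nat 1 config if ppp0 same_ports unreg_only
ipfw -q add 100 nat 1 ip4 from any to any via ppp0
ipfw -q add 65000 allow ip from any to any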
 
suddenly the ppp(8) daemon on the home server imposed a considerable load on the CPU.
This is symptomatic. The ppp daemon is a single-threaded application; it needs a single core with a high clock rate.
mpd5 alleviates that bottleneck.

Works as Intended.
So one should just make sure that net.isr.maxthreads and net.isr.numthreads are greater than 1 and switch net.isr.dispatch to "deferred", which permits NIC drivers to use netisr(9) queues to distribute the load between CPU cores.
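Concretely, net.isr.maxthreads is a boot-time tunable and net.isr.dispatch can be set either at boot or at runtime, while net.isr.numthreads is read-only and merely reports the resulting thread count; a sketch for /boot/loader.conf:
Code:
# /boot/loader.conf -- spread netisr(9) work across CPU cores
net.isr.maxthreads="-1"        # -1 = one thread per CPU; an explicit number also works
net.isr.dispatch="deferred"    # queue packets instead of processing them in the driver context

# after a reboot, check the result with:  sysctl net.isr.numthreads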
 
This is symptomatic. The ppp daemon is a single-threaded application; it needs a single core with a high clock rate.
mpd5 alleviates that bottleneck.

Works as Intended.
I tend not to accept the "threaded vs. non-threaded" explanation, because it does not fit the observations.

At this very moment, the home server is connected to the internet over the 200 MBit/s fiber connection via PPPoE using mpd5. My son is playing Apex Legends, which is a network hog, and I opened the traffic-shaping pipes for this test. The network adapter to the modem is an internal Realtek 1 GBit/s one. This is a low-end dual-core Atom 1.6 GHz machine, and according to top(1) the processor cores are almost in idle state, 99.3 to 100.0 %.

Code:
CPU: Intel(R) Atom(TM) CPU D510   @ 1.66GHz (1666.75-MHz K8-class CPU)

Code:
last pid:  1323;  load averages:  0.11,  0.09,  0.08    up 0+00:29:52  21:51:00
58 processes:  1 running, 57 sleeping
CPU:  0.2% user,  0.0% nice,  0.4% system,  0.0% interrupt, 99.4% idle
Mem: 282M Active, 24M Inact, 324M Wired, 147M Buf, 1306M Free
Swap: 4096M Total, 4096M Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 1323 root          1  20    0    14M  3748K CPU1     1   0:00   0.19% top
 1286 root          1  20    0    21M  9544K select   1   0:00   0.03% sshd
  994 www           6  20    0   162M   103M select   3   0:17   0.02% python3.8
  972 root          1  20    0    13M  2580K select   2   0:00   0.01% mountd
 1211 root          1  20    0    13M  2648K nanslp   2   0:00   0.01% cron
 1030 ntpd          1  20    0    21M  6312K select   1   0:00   0.01% ntpd
  413 root          2  20    0    24M  6848K select   2   0:00   0.01% mpd5
 1263 squid         1  20    0   103M    47M kqread   3   0:10   0.01% squid
 1231 root          1  20    0    62M    27M select   3   0:00   0.01% httpd
 1123 root          1  20    0    13M  3312K kqread   1   0:00   0.00% netatalk
  983 root          1  20    0   269M  2472K select   3   0:00   0.00% rpc.statd
  944 root          1  20    0    13M  2456K select   0   0:00   0.00% rpcbind
  777 root          1  20    0    13M  2296K select   0   0:00   0.00% nfsuserd
  780 root          1  20    0    13M  2296K select   1   0:00   0.00% nfsuserd
  755 unbound       1  20    0    49M    35M select   3   0:02   0.00% local-unbound
 1188 root          1  20    0   134M   104M select   1   0:01   0.00% smbd
 1139 root         17  47    0    56M    13M sigwai   1   0:00   0.00% charon
 1111 pgsql         1  20    0   173M    22M select   3   0:00   0.00% postgres
 1219 root          4  52    0    21M  8672K accept   3   0:00   0.00% ProjectStore
 1185 root          1  20    0    41M    15M select   2   0:00   0.00% nmbd
  918 root          1  20    0    13M  2728K select   0   0:00   0.00% syslogd
 1283 squid         1  52    0    18M  7384K sbwait   3   0:00   0.00% security_file_certg
 1288 root          1  20    0    15M  4600K wait     2   0:00   0.00% bash
 1114 pgsql         1  20    0   173M    22M select   1   0:00   0.00% postgres
 1251 www          27  50    0    74M    28M piperd   0   0:00   0.00% httpd
 1285 squid         1  45    0    14M  3404K sbwait   0   0:00   0.00% log_file_daemon
 1167 dhcpd         1  20    0    25M    11M select   1   0:00   0.00% dhcpd
 1115 pgsql         1  20    0   173M    22M select   2   0:00   0.00% postgres
 1127 root          1  20    0    13M  2640K select   2   0:00   0.00% mDNSResponderPosix
 1284 squid         1  52    0    12M  2116K sbwait   2   0:00   0.00% nukie
 1254 root          1  21    0    18M  6636K select   1   0:00   0.00% afpd
 1116 pgsql         1  20    0   173M    23M select   2   0:00   0.00% postgres
 1229 root          1  20    0   134M   105M select   3   0:00   0.00% smbd
 1117 pgsql         1  20    0    31M    12M select   3   0:00   0.00% postgres
  979 root          1  52    0    12M  2260K accept   2   0:00   0.00% nfsd
 1252 www          27  49    0    70M    28M piperd   2   0:00   0.00% httpd
 1253 www          27  50    0    70M    28M piperd   0   0:00   0.00% httpd
  986 root          1  52    0    13M  2468K rpcsvc   2   0:00   0.00% rpc.lockd
 1256 squid         1  52    0    59M    16M wait     2   0:00   0.00% squid
 1227 root          1  20    0   131M   101M select   0   0:00   0.00% smbd
 1272 root          1  47    0    13M  2292K ttyin    3   0:00   0.00% getty
 1274 root          1  47    0    13M  2292K ttyin    0   0:00   0.00% getty
 1255 root          1  20    0    13M  3092K select   1   0:00   0.00% cnid_metad
  773 root          1  20    0    21M  8200K select   1   0:00   0.00% sshd
 1273 root          1  47    0    13M  2292K ttyin    2   0:00   0.00% getty
 1228 root          1  20    0   131M   101M select   2   0:00   0.00% smbd
 1275 root          1  48    0    13M  2292K ttyin    0   0:00   0.00% getty
 1113 pgsql         1  20    0   173M    22M select   3   0:00   0.00% postgres
 1138 root          1  42    0    14M  3740K select   1   0:00   0.00% starter
  982 root         32  52    0    12M  2672K rpcsvc   3   0:00   0.00% nfsd
  589 root          1  20    0    11M  1432K select   0   0:00   0.00% devd
 1118 pgsql         1  20    0   173M    22M select   2   0:00   0.00% postgres
 1241 root          1  52    0    14M  3348K select   1   0:00   0.00% ftpd
  776 root          1  52    0    13M  2296K pause    3   0:00   0.00% nfsuserd
  779 root          1  20    0    13M  2296K select   1   0:00   0.00% nfsuserd
  778 root          1  20    0    13M  2296K select   1   0:00   0.00% nfsuserd
  993 www           1  20    0    13M  2204K piperd   1   0:00   0.00% daemon
 1144 svn           1  52    0    24M  8928K accept   2   0:00   0.00% svnserve

ipfw -d show | grep "STATE tcp 192.168.0.12" | wc -l reports 51.

51 TCP connections are open by this gaming machine, and the processor of the server is almost idle, while the ppp(8) daemon would have imposed a load of 10 %. Even if ppp were threaded and used the 4 HT cores, that would still be 2.5 % per core, wouldn't it?

I guess that mpd5 employs a different mechanism which is by far superior to what ppp(8) does - presumably because mpd5 keeps the PPP/PPPoE data path inside the kernel via netgraph(4), while ppp(8) shovels every packet through a userland process over a tun(4) device.
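For the curious, one way to see this is to list the netgraph nodes that mpd5 has wired up in the kernel (node names and IDs will vary):
Code:
# list the in-kernel netgraph nodes (PPPoE, PPP, iface) created by mpd5
ngctl list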
 