PPPoE RX traffic is limited to one queue

I recently switched providers and, very unfortunately, went from a non-PPPoE to a PPPoE fiber provider (if only I had known this beforehand).
In any case, I have the same issue described here from 2015: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203856

I have tried all the tunables I could find:

net.inet.rss.enabled: 1
net.isr.dispatch: deferred
net.isr.bindthreads: 1
net.isr.maxthreads: -1
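For reference, here is a sketch of how these settings could be persisted, assuming the usual FreeBSD split: `net.inet.rss.enabled`, `net.isr.bindthreads`, and `net.isr.maxthreads` are boot-time loader tunables, while `net.isr.dispatch` can also be changed at runtime:

```sh
# /boot/loader.conf -- boot-time tunables (take effect after a reboot)
net.inet.rss.enabled="1"
net.isr.bindthreads="1"
net.isr.maxthreads="-1"

# /etc/sysctl.conf -- runtime sysctl (can also be set immediately with sysctl(8))
net.isr.dispatch=deferred
```

After a reboot, `sysctl net.isr.numthreads` should report one netisr thread per core when maxthreads is -1.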

And although it makes a difference, I get 349 Mbps down and 511 Mbps up (should be 500 / 500).
I'm not there yet, and it does seem that there is only one core handling the IP queue.
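One way to confirm the single-core RX behaviour is to look at the netisr statistics: on FreeBSD, `netstat -Q` shows the per-CPU netisr workstream counters, and `top -P` shows per-core load (exact output layout varies by release):

```sh
# Show netisr configuration and per-workstream (per-CPU) packet counters.
# If only one workstream's counters grow for protocol "ip" during a download,
# RX processing is effectively pinned to a single core.
netstat -Q

# Per-CPU load view while running a speed test
top -P
```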

A very old test showed that NetBSD had better PPPoE performance on this connection; do / did they have another implementation?

And Linux used to have the same problem, but it seems it was solved when rp-pppoe moved from user space into the kernel.

As speeds keep getting higher, the FreeBSD implementation works as designed, but in most cases it no longer seems to be sufficient.

Is this something that would be worth fixing?

Besides the above, I was actually wondering why TX is not limited by the same issue and seems to be spread over more cores.
 
Recently got connected to fiber. Had to use PPPoE over a VLAN to get it to work. Getting around 900Mbps up and down through a FreeBSD host, close to the max of 1Gbps. So what are you using? ppp(8) or net/mpd5? I really recommend the latter. While ppp(8) worked fine, it was using a lot of CPU doing it.
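For comparison, a minimal mpd5 PPPoE-over-VLAN configuration sketch (the interface name `vlan0`, the bundle/link names, and the credentials are placeholders, not details from this thread; the VLAN interface must already exist, e.g. created with ifconfig over the physical NIC):

```
# /usr/local/etc/mpd5/mpd.conf -- sketch only
startup:

default:
        load pppoe_client

pppoe_client:
        create bundle static B1
        set iface route default
        set ipcp ranges 0.0.0.0/0 0.0.0.0/0

        create link static L1 pppoe
        set link action bundle B1
        set auth authname "user@isp.example"
        set auth password "secret"
        set link max-redial 0
        set link mtu 1492
        set pppoe iface vlan0
        set pppoe service ""
        open
```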
 

Hmm, that's strange; I'm using mpd5 on FreeBSD 13.1-RELEASE-p8.
 
Mainboard and CPU are going to be a big factor though. My previous firewall wasn't able to handle the connection at all; it hit a hardware ceiling somewhere around the 300Mbps mark. I bought a new mainboard, new CPU, etc. before I was able to hit the max throughput of the connection.
 
Well, it is strange: with RX I see one core maxed out, while with TX it's distributed over 4 cores.
 