pf.conf 4Gbps +

Hi,
Are there any plans for updating the packet filter so PF queues break the 4 Gbps barrier? I was reading about the OpenBSD work in progress, but I wonder if there are any plans for FreeBSD?
Thanks,
Seb
 
Isn't packet-pushing capability more a function of hardware speed than software? On my own dinky APU2C4 router I can already push close to a gigabit through PF, and that's a tiny embedded firewall box from 10 years ago. I don't have the kit handy here, but I'm quite sure any half-decent modern AMD/Intel platform with a reasonable NIC will blow right past 4 Gbps of PF filtering capability.
 
A quick search indicates the PF limit was due to a 32-bit counter in the HFSC scheduler. Along with locking changes, part of the fix was changing those counters to 64-bit integers.
Locking: the FreeBSD implementation of PF differs from the OpenBSD version simply because SMP differs between the two BSDs.
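For context, the ~4 Gbps figure falls straight out of the counter width: a bandwidth value stored in bits per second in an unsigned 32-bit integer cannot exceed 2^32 - 1. A quick sketch in plain shell arithmetic, just to illustrate the math (this is not actual PF code):

```shell
# Largest bandwidth (in bits/s) an unsigned 32-bit counter can hold:
max32=$(( (1 << 32) - 1 ))
echo "$max32"        # 4294967295 bits/s, i.e. ~4.29 Gbit/s

# 10 Gbit/s expressed in bits/s clearly does not fit in 32 bits:
tengig=$(( 10 * 1000 * 1000 * 1000 ))
echo "$tengig"       # 10000000000 > 4294967295
```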
I don't know how closely FreeBSD tracks OpenBSD pf (Kristof Provost is the authoritative voice on that) but I would speculate that something like changes in HFSC scheduler are at least looked at.
 
With Internet Service Providers offering faster speeds I would imagine there is some work already on it.
I don't disagree with or doubt that, but in the emails about the commits and diffs, a lot of it really seemed to be "you can't represent 10G in a 32-bit uint, so all the stats and calculations are broken".
A lot of residential service tops out at 1G (yes, I know there are some areas where 10G is available), so it's hard to test. A business may have higher bandwidth, so it would be able to test.

Now, I come from the days when a business worked just fine with T1 bandwidth to the "Internet", so 1G is huge bandwidth.
 
It already does, depending on what hardware you use and, of course, the number of rules.
There are no queues involved in whatever this is.

To be clear: PF on FreeBSD and OpenBSD can easily push packets at pretty much any line speed. I'm doing it at 25 and 40 Gbps in production, with PF running in jails and OpenBSD running on bhyve. (There have been large fluctuations in throughput and some regressions throughout various FreeBSD versions/patch levels, especially for the latter scenario, though...)
The limitation and the changes the OP is referring to are specific to queueing with the HFSC scheduler, which has been using a 32-bit u_int until now and hence was silently capped at ~4 Gbps for a single queue.

IIRC there is still no queueing supported on FreeBSD *out of the box*, as HFSC requires building a custom kernel. I suspect HFSC is still based on the original code ported from OpenBSD, so it very likely has the same limitations.
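For reference, the kernel options the ALTQ-based HFSC path needs are described in altq(4) and the FreeBSD Handbook; a rough sketch of the custom-kernel additions plus a minimal HFSC queue definition might look like this (the interface `em0` and the bandwidth split are placeholders, not a recommendation):

```
# kernel config additions for ALTQ (see altq(4)):
options ALTQ
options ALTQ_HFSC
options ALTQ_NOPCC   # required on SMP builds

# minimal pf.conf HFSC queues on top of that kernel:
altq on em0 hfsc bandwidth 1Gb queue { std, ssh }
queue std bandwidth 80% hfsc(default)
queue ssh bandwidth 20% hfsc(realtime 10Mb)
```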


Since this vital information was missing from the initial post, here is the link to the article in the OpenBSD Journal, with further links to the original discussion on the mailing list: https://undeadly.org/cgi?action=article;sid=20260319125859
 
IIRC dummynet and its tooling were primarily built around/for ipfw; the PF integration seems somewhat "hacky". TBH, queueing with PF on FreeBSD has always felt like somewhat of an afterthought/hack compared to OpenBSD. It is one of the reasons (amongst some others) why I mostly use OpenBSD for routing at the edge and for more complicated (routing) scenarios, e.g. where routing domains are beneficial. OpenBSD's rdomain approach is far superior and, IMHO, more approachable and easier to grasp than the multi-FIB approach on FreeBSD with its associated pf.conf syntax and daemon configuration.
OTOH, FreeBSD PF also has its strengths in some niches of PF/routing, so it's always a case of "choose the right tool for the job".
 
I don't know how closely FreeBSD tracks OpenBSD pf (Kristof Provost is the authoritative voice on that) but I would speculate that something like changes in HFSC scheduler are at least looked at.

FreeBSD (main) has basically all current OpenBSD pf changes at this point. I just imported a few more patches from OpenBSD's pf in the last few days, covering OpenBSD changes from the last couple of weeks.
 
What's hacky about it?
The first thing that comes to mind: queues have to be created with dnctl, which is actually just an alias for ipfw (at least that is still what pf.conf(5) states). It's somewhat odd that you have to use "the other" firewall to set up the queues and cannot just define them in pf.conf, as with HFSC queues and/or on OpenBSD.
I had a look at some recent examples and, combined with the dnctl service, it makes more sense now (not sure if this wasn't available at first or I completely missed it).
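For anyone following along, the two-step workflow looks roughly like this (interface name and bandwidth are placeholders; see dnctl(8) and the dnpipe option in pf.conf(5)):

```
# 1) create the dummynet pipe with dnctl (same syntax as ipfw pipe):
dnctl pipe 1 config bw 100Mbit/s

# 2) reference it from pf.conf on a pass rule:
pass out on em0 inet proto tcp from any to any dnpipe 1
```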

TBH, I only had a brief look at dummynet and its queueing capabilities shortly after this was introduced (and used dummynet once for troubleshooting, leveraging its simulated packet loss and delay), then struggled quite a bit at some points to actually get it working as intended. I think that may also have had to do with the fact that I didn't really get my head around the pipes/queues concept and tried to treat it like "classic"/HFSC-style queueing. I also make heavy use of queues in combination with rdomains on OpenBSD, so I have a rather specific mindset when it comes to queueing, which might have gotten in the way...

I just looked at some recent blog posts and examples regarding dummynet queueing with PF, and I really think I should give this another try (this time starting with a dead simple, bare-bones configuration with no bells and whistles). Especially FQ-CoDel may be just what I've needed for years on a flaky (in terms of bandwidth and jitter) link where our VPN sometimes acts up... It's BTW the same link where I once tried to use dummynet to troubleshoot those issues, to no avail.
So instead of me coming up with valid examples of why I disregarded it as "hacky" back then, I guess I have to thank you for finally making me take another look at it 😅
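In case anyone wants the bare-bones FQ-CoDel starting point: the recipe in those recent examples is to cap the link with a pipe slightly below the real line rate, attach an fq_codel scheduler, and point a pf rule at the resulting queue. A sketch (bandwidth and interface are placeholders; check dnctl(8)/ipfw(8) for the exact scheduler syntax):

```
# shape to just below the link rate so the queueing happens here, not upstream
dnctl pipe 1 config bw 20Mbit/s
dnctl sched 1 config pipe 1 type fq_codel
dnctl queue 1 config sched 1

# pf.conf: push outbound traffic through the FQ-CoDel queue
pass out on em0 dnqueue 1
```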
 