IPFW + dummynet with fq_codel halves download speed?

I've recently(ish) switched from PF to IPFW because I wanted to use dummynet with fq_codel to fix some rather massive bufferbloat issues I'm seeing on my cable Internet connection with one of the US Cable Monsters. The cable modem is set to pass-through, so the only firewall between me and the Internet is ipfw.

The dummynet pipe rules look like this:
Code:
# Add limiter pipes for buffer bloat elimination
${fwcmd} pipe 1 config bw 280MBit/s fq_codel
${fwcmd} pipe 2 config bw 12MBit/s fq_codel
${fwcmd} add pipe 1 ip from any to any in via ${wan_if}
${fwcmd} add pipe 2 ip from any to any out via ${wan_if}

Based on my understanding, this should limit the incoming (download) bandwidth to about 280 MBit/s and the upload to about 12 MBit/s, with the default settings for fq_codel. The good news is that activating the above rules does indeed eliminate all bufferbloat on the download side, but it has the somewhat nasty side effect of reducing my measured download speed from approx. 300 MBit/s to about 130 MBit/s. Both the pre- and post-pipe measurements were taken back to back against the DSLReports bufferbloat test site.

I'm not super familiar with dummynet pipes in general and the relatively new fq_codel code specifically, but is this reduction in download speed expected, or does it point either at a non-optimally configured pipe or hardware that can't keep up with the demands of traffic shaping?

I've got an IPv6-enabled connection and am actively using IPv6, if that makes any difference.

The current firewall configuration, excluding the dummynet configuration above, can be found in this forum post.
 
The download speed problem is most likely an issue with the fq_codel interval timer.

The default interval is 100ms, and the ipfw(8) man page says this about it in the codel section:

"interval time should be set to maximum RTT for all expected connections."

I have a cable modem connection (Comcast) that normally does 350Mbps down and 30Mbps up. I tried out the pipe configuration above (but I used 345Mbps instead) and wound up with download speeds in the 130Mbps range with the DSLReports speed test, as you did. Download bufferbloat was fixed, though.

In looking at the original CoDel papers, there is a little more information about the setup they're using:


So, I tried putting in a pipe, queue and scheduler, and tweaking the parameters a bit. It seems like the interval was the parameter that affected the download speed the most. But, if you turn it up too high, bufferbloat goes up. The quantum seems to affect it as well.

I wound up using a configuration like this:

Code:
ipfw pipe 1 config bw 346Mbits/s
ipfw queue 1 config pipe 1 queue 100
ipfw sched 1 config queue 1 type fq_codel target 5ms quantum 6000 flows 2048 interval 300
ipfw add 450 queue 1 ip from any to any in recv em0

Obviously all of that would get tweaked for your configuration, especially the bandwidth and rule number in the last line.

That resulted in close to the target speed with reasonable bufferbloat. One interesting thing in this is that it results in better speed test download bandwidth numbers from closer (lower latency) speed test servers. Previously I would get similar numbers from different servers.

Here is ipfw sched show:

Code:
# ipfw sched show
00001: 346.000 Mbit/s    0 ms burst 0
q00001  100 sl. 0 flows (1 buckets) sched 1 weight 1 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 300ms quantum 6000 limit 10240 flows 2048 ECN
   Children flowsets: 1
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip   0.0.0.0/0             0.0.0.0/0             23  14964  0  0  0


I'm using ALTQ with HFSC for the upstream connection, and I haven't tried fq_codel for that yet.
 
Code:
${fwcmd} add pipe 1 ip from any to any in via ${wan_if}
${fwcmd} add pipe 2 ip from any to any out via ${wan_if}
That ruleset is quite certainly wrong. "in via" means: a packet that is currently incoming and is passing through the via interface. That one is unambiguous. But "out via" means: a packet that is currently outgoing and that passed the via interface either on the way in or on the way out.
So you are routing downloaded traffic back through the upload pipe whenever it is forwarded onwards to another local node.
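A sketch of a corrected ruleset using the direction-specific recv/xmit keywords instead of via, so the upload pipe only ever sees traffic actually transmitted out of the WAN interface (same variable names as the original post):

```shell
# Match on the actual receive/transmit interface rather than "via",
# so forwarded downloads are not re-shaped by the upload pipe.
${fwcmd} add pipe 1 ip from any to any in recv ${wan_if}
${fwcmd} add pipe 2 ip from any to any out xmit ${wan_if}
```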

Concerning traffic shaping: a standalone pipe does not seem to work well here. It appears to need a queue feeding into a pipe, where the pipe creates the holdup and the queue sorts the traffic within that holdup.
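Applying the queue-into-pipe pattern from the earlier post to the original poster's bandwidths might look something like this. The rule numbers, queue depths, and interval values are illustrative assumptions and would need tuning:

```shell
# Download: pipe sets the rate, queue feeds it, fq_codel sorts the flows
ipfw pipe 1 config bw 280Mbit/s
ipfw queue 1 config pipe 1 queue 100
ipfw sched 1 config queue 1 type fq_codel target 5ms interval 300
ipfw add 450 queue 1 ip from any to any in recv ${wan_if}

# Upload: same pattern at the upstream rate
ipfw pipe 2 config bw 12Mbit/s
ipfw queue 2 config pipe 2 queue 50
ipfw sched 2 config queue 2 type fq_codel target 5ms interval 100
ipfw add 460 queue 2 ip from any to any out xmit ${wan_if}
```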
 