PF: dynamically divide bandwidth between the devices using it

Hi!
I have a FreeBSD box as my network gateway. We are two families living under the same roof, so we have an assortment of devices using our internet link (ADSL, 15 Mbps down / 1.5 Mbps up) at the same time. Sometimes one device uses almost all the upload bandwidth, causing a horrible slowdown on our network. I partially addressed this after reading an article on prioritizing empty TCP ACKs.
I want to know if it would be possible to divide the available bandwidth between the devices in use while allowing them to borrow free bandwidth. Ex:

set global upload limit to 1.5mbps
set global download limit to 15mbps
set min per user upload limit to 150kbps
set min per user download limit to 1500kbps

If there is plenty of bandwidth available, allow devices to use it. E.g. sometimes it's just one device; it should have access to all or almost all of the bandwidth. The moment a second device starts to use the WAN, divide the global limit between those two, and so on.

Is that possible?

My current pf.conf:
Code:
# Macros
########
ext_if = "em0"
int_if = "re0"  # macro for internal interface
localnet = $int_if:network



# Tables
########
table <sshguard> persist
table <blockedips> persist file "/etc/pf.blocked.conf"


# Options
#########
set skip on lo0


# Traffic Normalization
#######################
scrub in all no-df random-id max-mss 1440


# Queueing
##########
altq on $ext_if priq bandwidth 1350Kb queue { q_pri, q_def }
queue q_pri priority 7
queue q_def priority 1 priq(default)


# Translation
#############
nat on $ext_if from $localnet to any -> ($ext_if)


# Packet Filtering
##################
antispoof for $ext_if
antispoof for $int_if


#UPNP
pass out on $ext_if proto tcp from $ext_if to any flags S/SA \
        keep state queue (q_def, q_pri)

pass in  on $ext_if proto tcp from any to $ext_if flags S/SA \
        keep state queue (q_def, q_pri)
pass from { lo0, $localnet } to any keep state


Thanks in advance!
 
Have a look at the HFSC queueing algorithm. It can assign lower and upper bandwidth quotas to queues and only starts shaping if one queue actually deprives another one of its guaranteed share.
A good intro to HFSC queueing can be found on calomel.org: https://calomel.org/pf_hfsc.html
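To illustrate (a rough, untested sketch with made-up class names - adjust the figures to your link): realtime gives a queue its guaranteed minimum, linkshare controls how spare bandwidth is divided, and upperlimit caps how much a queue may borrow:

Code:
altq on $ext_if hfsc bandwidth 1350Kb queue { family_a, family_b }
# each class is guaranteed 150Kb (realtime) but may borrow idle
# bandwidth up to the full link (upperlimit)
queue family_a bandwidth 50% hfsc (realtime 150Kb, linkshare 50%, upperlimit 1350Kb)
queue family_b bandwidth 50% hfsc (realtime 150Kb, linkshare 50%, upperlimit 1350Kb, default)

Note this is still per-queue, not per-device; you have to steer clients into queues with filter rules.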


A note on ALTQ in general: there is still a bug [1][2] in igb(4) (IGB_LEGACY) queue locking, which gets triggered by ALTQ under high load and causes the box to reset. A workaround is to limit the hardware queues to 1, but this incurs a heavy performance impact (~30%).
As ALTQ is enabled globally for all interfaces, it doesn't matter if you only use queueing on your external interface - in fact, if you have various interfaces, use the igb one(s) as external interfaces, as these usually won't see high load (unless you have a gigabit fiber connection...).

[1] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=208409
[2] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213257
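If I remember correctly, the workaround is applied with a loader tunable (name assumed here - check igb(4) on your release):

Code:
# /boot/loader.conf - limit igb(4) to a single hardware queue as an
# ALTQ workaround (costs roughly 30% throughput)
hw.igb.num_queues="1"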
 
I don't know how much of the code is relevant or equivalent in FreeBSD, but the Tomato Shibby router software has fine-grained control for this purpose. There might be some inspiration there.
 
There are other posts on this forum about using IPFW for rate throttling per IP. Most proxy software can do it too ...

Edit: But I see now that you want it to be dynamic, and I'm not sure how to make it dynamic. My HTTP proxy lets me have multiple tiers of rate throttling, but that's not really dynamic either. I see that HAProxy apparently can do dynamic rate throttling, per a Google search, but I know nothing about that software.
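For what it's worth, IPFW's dummynet can get close to the dynamic per-IP sharing asked for here: a queue with a src-ip mask creates one dynamic queue per source address, and the scheduler divides the pipe's bandwidth evenly among whichever addresses are active at that moment. A rough, untested sketch (interface and subnet taken from the poster's config):

Code:
# one 1.5 Mbit/s pipe shared by all uploads
ipfw pipe 1 config bw 1500Kbit/s
# mask src-ip => a dynamic queue per internal address, equal weights
ipfw queue 1 config pipe 1 weight 10 mask src-ip 0xffffffff
# send all outgoing LAN traffic through it
ipfw add queue 1 ip from 192.168.1.0/24 to any out xmit em0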
 
Guys, thanks for the tips!

Have a look at the HFSC queueing algorithm. It can assign lower and upper bandwidth quotas to queues and only starts shaping if one queue actually deprives another one of its guaranteed share.
A good intro to HFSC queueing can be found on calomel.org: https://calomel.org/pf_hfsc.html


A note on ALTQ in general: there is still a bug [1][2] in igb(4) (IGB_LEGACY) queue locking, which gets triggered by ALTQ under high load and causes the box to reset. A workaround is to limit the hardware queues to 1, but this incurs a heavy performance impact (~30%).
As ALTQ is enabled globally for all interfaces, it doesn't matter if you only use queueing on your external interface - in fact, if you have various interfaces, use the igb one(s) as external interfaces, as these usually won't see high load (unless you have a gigabit fiber connection...).

[1] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=208409
[2] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213257

sko, I'm doing some tests using HFSC with partial success. I set up some queues and they seem to be working, as I can see the slots being filled. But while I can define quotas, they apply globally and not per user, which allows a single user uploading a file via the web to hog all the bandwidth, resulting in very poor (sometimes unusable) web browsing for everyone else.

This is my pf.conf:

Code:
# Macros
########
ext_if = "em0"
int_if = "re0"  # macro for internal interface
localnet = $int_if:network
tcp_services = "{ www, https, ssh }"


# Tables
########
table <sshguard> persist
#table <blockedips> persist file "/etc/pf.blocked.conf"


# Options
#########
set skip on lo0
set block-policy drop
set loginterface $ext_if


# Traffic Normalization
#######################
scrub in all no-df random-id max-mss 1440


# Queueing
##########
altq on $ext_if hfsc bandwidth 1350Kb queue {bulk, ack, dns, ssh, web, bittor}

queue ack        bandwidth 30% priority 9 qlimit 100 hfsc (realtime 25%)
queue dns        bandwidth 10% priority 8 qlimit 100 hfsc (realtime 3%)

queue ssh        bandwidth 10% priority 6 qlimit 100 hfsc (realtime 5%) {ssh_login, ssh_bulk}
queue ssh_login  bandwidth 50% priority 6 qlimit 100 hfsc
queue ssh_bulk   bandwidth 50% priority 5 qlimit 100 hfsc

queue web        bandwidth 30% priority 5 qlimit 600 hfsc (realtime  20% upperlimit 70%)
queue bulk       bandwidth 20% priority 4 qlimit 600 hfsc (realtime  10% default)


#altq on $int_if hfsc bandwidth 14899Kb queue {def_down}


# Translation
#############
nat on $ext_if from $localnet to any -> ($ext_if)


# Packet Filtering
##################
antispoof for $ext_if
antispoof for $int_if


# Filter rules
##############

block all

# sshguard
block drop in log quick on $ext_if inet from <sshguard> to any

# allow external access on some ports
pass in inet proto tcp to $ext_if port $tcp_services flags S/SA keep state

# allow all traffic from the local network to the local interface
pass in  on $int_if from $localnet to any keep state
pass out on $int_if from any to $localnet keep state

# allow all outgoing traffic through the external interface
pass out on $ext_if inet proto tcp from ($ext_if) to any flags S/SA modulate state queue (bulk, ack)
pass out on $ext_if inet proto tcp from ($ext_if) to any port 53 flags S/SA modulate state queue (dns)
pass out on $ext_if inet proto tcp from ($ext_if) to any port {www, https} flags S/SA modulate state queue (web)
pass out on $ext_if inet proto tcp from ($ext_if) to any port ssh flags S/SA modulate state queue (ssh_bulk, ssh_login)

pass out on $ext_if inet proto { udp, icmp } to any queue (bulk)

pass from { lo0, $localnet } to any keep state
 
I see that HAProxy apparently can do dynamic rate throttling, per a Google search, but I know nothing about that software.
HAProxy allows you to (dynamically) assign a 'weight' to a backend server. This causes traffic to be balanced differently. But more importantly, HAProxy is a reverse proxy meant to balance incoming traffic to one or more backend web servers. It's not a 'generic' HTTP proxy like Squid for example. We use HAProxy extensively to load-balance our webservers. This means we can handle much more traffic than we ever could with a single server. It also means I can take one or more of the web servers offline for maintenance without interrupting any of the websites.
 
HAProxy allows you to (dynamically) assign a 'weight' to a backend server. This causes traffic to be balanced differently. But more importantly, HAProxy is a reverse proxy meant to balance incoming traffic to one or more backend web servers. It's not a 'generic' HTTP proxy like Squid for example. We use HAProxy extensively to load-balance our webservers. This means we can handle much more traffic than we ever could with a single server. It also means I can take one or more of the web servers offline for maintenance without interrupting any of the websites.

I will give it a try. Does it apply to my case? I'm running a small gateway/firewall on a home network.
 
Does it apply to my case?
No, that's the point I was trying to make. It's the exact opposite end of your case. HAProxy is really only relevant if you host your own websites. Then you can put HAProxy in front of your webservers to balance traffic to one or more servers.

What you need is something to balance out outgoing web traffic (i.e. regular users browsing the internet).
 
What you need is something to balance out outgoing web traffic (i.e. regular users browsing the internet).

Yes, this is exactly what I want to do. Right now I'm only shaping it. It would be nice if I could balance it per user.
 
Does anyone know where I can get help on this? Is there a pf forum where I could ask? I can't believe I'm the only person with this kind of problem.
 
After reading a lot and not being able to achieve the desired result, I tried this approach:

I made a table and two queues. One queue is for nice and conscientious people, while the other is dedicated to toxic bandwidth hoggers.
Code:
#table containing list of priority ips
table <niceguys_ips> persist file "/etc/pf.niceguys.conf"

...
# Queueing
##########
altq on $ext_if hfsc bandwidth 1350Kb queue { dns, ssh, mestres, sabujos }

queue dns       bandwidth 5% priority 8 qlimit 100 hfsc (realtime 3%)
queue ssh       bandwidth 10% priority 9 qlimit 100 hfsc (realtime 5%) {ssh_login, ssh_bulk}
                queue ssh_login  bandwidth 50% priority 4 qlimit 100 hfsc
                queue ssh_bulk   bandwidth 50% priority 3 qlimit 100 hfsc

queue mestres   bandwidth 45% priority 7 qlimit 1000 hfsc { ack, web, bulk }
                queue ack        bandwidth 30% priority 5 qlimit 100 hfsc (realtime 5%)
                queue web        bandwidth 50% priority 2 qlimit 600 hfsc (realtime 30% upperlimit 70%)
                queue bulk       bandwidth 20% priority 1 qlimit 600 hfsc (realtime 10%)

queue sabujos   bandwidth 40% priority 6 qlimit 600 hfsc (realtime 20% default)

Now I need some help configuring those pass out lines to use the IPs from the nice guys table and to force IPs not on that table into the other queue. Please, I would really appreciate it if anyone could kindly help me with this.

Code:
### niceguys
pass out on $ext_if inet proto tcp from ($ext_if) to any port 53 flags S/SA modulate state queue (dns)
pass out on $ext_if inet proto tcp from ($ext_if) to any port ssh flags S/SA modulate state queue (ssh_bulk, ssh_login)
pass out on $ext_if inet proto tcp from ($ext_if) to any flags S/SA modulate state queue (bulk, ack)
pass out on $ext_if inet proto tcp from ($ext_if) to any port {www, https} flags S/SA modulate state queue (web)
pass out on $ext_if inet proto { udp, icmp } from ($ext_if) to any keep state queue (bulk)

### badguys
#pass out on $ext_if inet proto tcp from ($ext_if) to any flags S/SA modulate state queue (sabujos)
 
You could try abusing overload tables and the "probability" parameter to drop a given fraction of packets to slow down bandwidth hogs. This should work for outgoing connections as well as incoming ones.

https://forums.freebsd.org/threads/43428/


I only know of connection-rate-based filters for automatic table assignment (max-src-conn / max-src-conn-rate); I haven't found anything about bandwidth-based filters. You could, however, try to abuse counters to get per-address packet statistics, with an external script that monitors them and assigns hosts to a different table if they exceed a given limit.
However, if there is only a handful of users/clients who use up all bandwidth on a regular basis I'd just put them on a low-priority queue manually and call it a day.
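A rough, untested sketch of the overload-table idea (thresholds and the table name are made up; note that with stateful rules, "probability" only affects the creation of new states, not packets of already-established connections):

Code:
table <hogs> persist
# demote hosts that open more than 100 connections per 10 seconds
pass in on $int_if proto tcp from $localnet to any keep state \
        (max-src-conn-rate 100/10, overload <hogs>)
# refuse roughly 30% of new connections from demoted hosts
block in quick on $int_if proto tcp from <hogs> to any probability 30%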
 
You could try abusing overload tables and the "probability" parameter to drop a given fraction of packets to slow down bandwidth hogs. This should work for outgoing connections as well as incoming ones.

https://forums.freebsd.org/threads/43428/


I only know of connection-rate-based filters for automatic table assignment (max-src-conn / max-src-conn-rate); I haven't found anything about bandwidth-based filters. You could, however, try to abuse counters to get per-address packet statistics, with an external script that monitors them and assigns hosts to a different table if they exceed a given limit.
However, if there is only a handful of users/clients who use up all bandwidth on a regular basis I'd just put them on a low-priority queue manually and call it a day.

Thanks. I tried something simpler and even that didn't work. I have to say that while I've learned a lot over the past few days, I'm a total newbie at pf and firewalls in general.

When I try to assign my table to my pass out rules, the internet stops working on both the clients and the FreeBSD gateway itself. I can't even ping known external IPs.

I did it this way:
Code:
pass out on $ext_if inet proto tcp from <niceguys_ips> to any {www, https} flags S/SA modulate state queue (web)

Could you please clarify what I'm doing wrong?
 
Isn't pf complaining about a syntax error? I'd say your rule is missing a "port" statement just before "{www, https}".

IMHO it's also always a good idea to handle egress traffic separately, using something like "!<localnets>" instead of "any", and tagging these connections with EGRESS:

Code:
pass out on $ext_if inet proto tcp from <niceguys_ips> to !<localnets> port {www, https} flags S/SA modulate state queue (web) tag EGRESS

If you assign this tag explicitly to any outgoing traffic, you can simplify/clarify your NAT rules:

Code:
nat on $ext_if from <localnets> to any tag EGRESS -> ($ext_if)
 
Isn't pf complaining about a syntax error? I'd say your rule is missing a "port" statement just before "{www, https}".

My ruleset is primarily set up to block inbound traffic, so it's a lot different, but for outbound traffic I use:

Code:
pass out on $ext_if proto { tcp, udp, icmp } from any to any modulate state
 
Hi guys! Thanks for your support!

I have tried doing this:

Table containing our good guys' IPs:
Code:
# Tables
########
table <sshguard> persist
table <goodguys_ips> { 192.168.1.101, 192.168.1.106, 192.168.1.107, 192.168.1.164, 192.168.1.170, 192.168.1.174, 192.168.1.184, 192.168.1.196 }

My nat rule:
Code:
# Translation
#############
nat on $ext_if from $localnet to any  -> ($ext_if:0)

Two simple queues for testing:
Code:
# Queueing
##########
altq on $ext_if hfsc bandwidth 900Kb queue { mestres, sabujos }
queue mestres  bandwidth 45% qlimit 1000 hfsc (realtime 40%)
queue sabujos  bandwidth 40% qlimit 600 hfsc (realtime 40% default)

Filter rules:
Code:
# Filter rules
##############
block all
block drop in log quick on $ext_if inet from <sshguard> to any

pass from { lo0, $localnet } to any keep state

# internal network
pass in  on $int_if from $localnet to any flags S/SA keep state tag EGRESS
pass out on $int_if from any to $localnet flags S/SA keep state

# external network
pass out on $ext_if inet proto { tcp, udp } from !<goodguys_ips> to any tagged EGRESS modulate state flags S/SA queue sabujos
pass out on $ext_if inet proto { tcp, udp } from  <goodguys_ips> to any tagged EGRESS modulate state flags S/SA queue mestres
pass out on $ext_if inet proto udp all keep state



Now some debug output:

From my rules, I can see that all the traffic is matching the "not in goodguys_ips table" rule.
Code:
 # pfctl -s rules -v
scrub in all no-df random-id max-mss 1440 fragment reassemble
  [ Evaluations: 129745    Packets: 65853     Bytes: 27381479    States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
block drop in on ! em0 inet from 192.168.0.0/24 to any
  [ Evaluations: 1954      Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
block drop in on ! re0 inet from 192.168.1.0/24 to any
  [ Evaluations: 1943      Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
block drop all
  [ Evaluations: 1954      Packets: 897       Bytes: 145538      States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
block drop in log quick on em0 inet from <sshguard> to any
  [ Evaluations: 1954      Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
pass inet6 from ::1 to any flags S/SA keep state
  [ Evaluations: 1954      Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
pass on lo0 inet6 from fe80::1 to any flags S/SA keep state
  [ Evaluations: 0         Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
pass inet from 127.0.0.1 to any flags S/SA keep state
  [ Evaluations: 1954      Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
pass inet from 192.168.1.0/24 to any flags S/SA keep state
  [ Evaluations: 1954      Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
pass in on re0 inet from 192.168.1.0/24 to any flags S/SA keep state tag EGRESS
  [ Evaluations: 920       Packets: 33289     Bytes: 27772032    States: 180   ]
  [ Inserted: uid 0 pid 99523 State Creations: 516   ]
pass out on re0 inet from any to 192.168.1.0/24 flags S/SA keep state
  [ Evaluations: 1954      Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
pass out on em0 inet proto tcp from ! <goodguys_ips> to any flags S/SA modulate state queue sabujos tagged EGRESS
  [ Evaluations: 1045      Packets: 30785     Bytes: 27212202    States: 85    ]
  [ Inserted: uid 0 pid 99523 State Creations: 105   ]
pass out on em0 inet proto udp from ! <goodguys_ips> to any keep state queue sabujos tagged EGRESS
  [ Evaluations: 541       Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
pass out on em0 inet proto tcp from <goodguys_ips> to any flags S/SA modulate state queue mestres tagged EGRESS
  [ Evaluations: 541       Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
pass out on em0 inet proto udp from <goodguys_ips> to any keep state queue mestres tagged EGRESS
  [ Evaluations: 436       Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 99523 State Creations: 0     ]
pass out on em0 inet proto udp all keep state
  [ Evaluations: 541       Packets: 1716      Bytes: 414346      States: 91    ]
  [ Inserted: uid 0 pid 99523 State Creations: 436   ]

My machine's IP is in the goodguys_ips table. I started uploading a file, but the only queue being filled is the one linked to the !<goodguys_ips> rule.
Code:
queue root_em0 on em0 bandwidth 900Kb priority 0 {mestres, sabujos}
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
  [ measured:     0.0 packets/s, 0 b/s ]
queue  mestres on em0 bandwidth 405Kb qlimit 1000 hfsc( realtime 360Kb )
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/1000 ]
  [ measured:     0.0 packets/s, 0 b/s ]
queue  sabujos on em0 bandwidth 360Kb qlimit 600 hfsc( default realtime 360Kb )
  [ pkts:      23705  bytes:    3551757  dropped pkts:      0 bytes:      0 ]
  [ qlength: 142/600 ]
  [ measured:    74.7 packets/s, 277.59Kb/s ]

Any thoughts on this? Thanks again!
 
Filter rules are "last match wins" - use pass/block quick to stop evaluation at the first match.
I prefer using "quick" rules everywhere except when I really want to have some fall-through mechanism for more complex combinations of rules.
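A small illustration of the difference (untested):

Code:
# without "quick" the LAST matching rule wins:
block out on $ext_if all
pass  out on $ext_if proto tcp to any port 80    # port 80 passes
# with "quick" evaluation stops at the FIRST match:
block out quick on $ext_if proto tcp to any port 25
pass  out on $ext_if proto tcp to any port 25    # never reached for port 25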
 
Sko, thanks for your fast reply.

I think I didn't explain well what's happening. Since English isn't my native tongue, I'll blame it on the language barrier lol

I have a table with some local network addresses in it, and two rules:

1) everyone not on table: from !<goodguys_ips>

2) everyone on table: from <goodguys_ips>

Since my desktop's IP is listed in that table, and the rule assigning it comes later, it should be generating traffic in that specific queue. It looks like those rules are somehow wrong, and no matter what IP address is compared with the table, the result is always false.

IP == <goodguys_ips> is never true, in my opinion.

Does that make sense?
 
Can you verify that pf is actually evaluating the rule for packets from your desktop's IP on egress? Have a look at the state tables and grep for your IP.
Check whether your table goodguys_ips is actually still loaded and not flushed - this shouldn't be the case, as there are rules referring to it, but I mark all my manual tables as "persist" just to prevent any surprises...
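For example (table name and address taken from your earlier post):

Code:
# list the addresses currently loaded in the table
pfctl -t goodguys_ips -T show
# ask pf whether an address would match the table
pfctl -t goodguys_ips -T test 192.168.1.101
# look for states created from that address
pfctl -s states | grep 192.168.1.101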

Also try using "quick" rules, and put the rule for "goodguys" and queue "mestres" before the rule for "not goodguys" and queue "sabujos", even if only to verify whether the rules are evaluated as intended/expected.

I really can't spot a specific cause for this behaviour, but as "sabujos" is your default queue, I'd suspect your rule for the "mestres" queue isn't fully matched and packets fall through to the default queue.

Another suspicion would be that these rules on $ext_if are matched with NAT already applied - so your "pass out on $ext_if" rules see the gateway's external IP as the source.
What happens if you change the rules to
Code:
pass quick proto { tcp, udp } from  <goodguys_ips> to any tagged EGRESS modulate state flags S/SA queue mestres
pass quick proto { tcp, udp } from !<goodguys_ips> to any tagged EGRESS modulate state flags S/SA queue sabujos
?
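If NAT rewriting the source address is indeed the culprit, another (untested) option is to classify on the internal interface, where the client address is still visible, and select the queue on the external interface by tag (tag names made up):

Code:
# classify before NAT, while the source is still the client IP
pass in quick on $int_if from <goodguys_ips> to any keep state tag NICE
pass in quick on $int_if from $localnet to any keep state tag HOG
# pick the ALTQ queue on the external interface by tag
pass out quick on $ext_if tagged NICE keep state queue mestres
pass out quick on $ext_if tagged HOG  keep state queue sabujos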
 