PF: Traffic Shaping for an ISP

Hello everyone,

I'm working on a new traffic shaper and firewall generator based on pf queues. The simplest case looks like this for a single router/firewall:
- one WAN interface (either VLAN or PHY)
- one LAN interface, mostly with multiple VLANs on it
- users have an upper limit for upload and download
- OS: FreeBSD 7.3, but I'm actually moving to 8.2, so if 7.3 lacks some functionality that is not a big deal.

So, to make it clearer,
1. What is better for this situation, CBQ or HFSC?
2. I've heard that queues actually work only on physical interfaces, so I put the altq rule on the physical interface. Is that true?
3. It can happen that the sum of all queues is bigger than the actual bandwidth of the interface (1Gb), e.g. 3.2Gb. In that situation, at least with CBQ, pfctl raises a warning. So should I declare a higher bandwidth, like 4Gb? I know that situation is a bit disturbing, but according to our stats the total load never exceeds 500Mbps, even at prime time.
4. Some clients have multiple termination points, so they must have subqueues for their own traffic, still within the same bandwidth, but without hard limits between them.

I was thinking about something like this (in the case of upload, that is, because we can only shape incoming traffic on an interface):
Code:
altq on wan0 hfsc bandwidth 1Gb queue { client1, client2 }
queue client1 bandwidth 1Mb hfsc(upperlimit 1Mb)
queue client2 bandwidth 2Mb hfsc(upperlimit 2Mb) { client2_term1, client2_term2 }
   queue client2_term1 bandwidth 50% hfsc(realtime 50%)
   queue client2_term2 bandwidth 50% hfsc(realtime 50%)

So I have the queues declared. Now, the trouble is that some clients need to be behind NAT (a public IP is an additional service).
So I could create a table of IP and IP/CIDR entries with private addresses (kept in a file),
then
Code:
nat on wan0 from <clients> to ! <ournetwork> -> (wan0)

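A file-backed table like that can be declared with pf's `persist file` option, so the address list lives outside pf.conf and can be reloaded without touching the ruleset. A minimal sketch (the file path and table name are illustrative, not from the thread):

```
# clients' private addresses, one IP or CIDR per line,
# read from the file when the ruleset is loaded
table <clients> persist file "/etc/pf/clients.txt"
```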
later I make the rule to let them pass through
Code:
pass in on lan0 from 10.x.x.x to !<ournetwork> keep state queue client1
Is that correct?
Do I need rules like these for each client? If so, damn, that makes pf.conf really big.
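If the per-client rules really are that mechanical, one option is to generate them from the same client list and load them through an anchor, keeping pf.conf itself short. A hedged sketch in sh, where the "address queue" file format, the lan0 interface, and the queue names are all assumptions:

```shell
#!/bin/sh
# Read "address queue" pairs on stdin and emit one pf pass rule per client.
# The generated rules could then be loaded into an anchor with:
#   pfctl -a clients -f rules.generated
gen_rules() {
    awk '{ printf "pass in on lan0 from %s to !<ournetwork> keep state queue %s\n", $1, $2 }'
}

gen_rules <<'EOF'
10.1.1.100 client1
10.1.1.101 client2
EOF
```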

I'd really appreciate any suggestions. I've read the whole "Book of PF", so I've learned quite a lot, but I still need some guidance and I'm open to discussion. ;)
 
A traffic shaping box should be simple.

I think it's better if you first build a FreeBSD router to do all the routing and NAT with as many LAN cards as you want, and then set up a FreeBSD or OpenBSD bridge between your clients and the router to do all the traffic shaping with PF.

Regards
usman
 
mastier said:
I was thinking about something like this (in the case of upload, that is, because we can only shape incoming traffic on an interface)

To nip this in the bud: you can only queue outbound traffic (as seen from the kernel outwards). So you will have to shape outbound (Internet-bound) traffic on the WAN NIC, and inbound (client-bound) traffic on the LAN NIC.
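Applied to the original setup, that means one ALTQ declaration per interface. A minimal sketch, assuming the wan0/lan0 interface names from earlier in the thread and purely illustrative bandwidths and addresses:

```
# upload: shape Internet-bound traffic as it leaves wan0
altq on wan0 hfsc bandwidth 100Mb queue { up_q }
queue up_q bandwidth 10Mb hfsc(upperlimit 10Mb default)

# download: shape client-bound traffic as it leaves lan0
altq on lan0 hfsc bandwidth 100Mb queue { down_q }
queue down_q bandwidth 10Mb hfsc(upperlimit 10Mb default)

pass out on wan0 from 10.1.1.100 keep state queue up_q
pass out on lan0 to 10.1.1.100 keep state queue down_q
```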
 
DutchDaemon said:
To nip this in the bud: you can only queue outbound traffic (as seen from the kernel outwards). So you will have to shape outbound (Internet-bound) traffic on the WAN NIC, and inbound (client-bound) traffic on the LAN NIC.

Yeah, you're right. So if I'd like to shape upload I need ALTQ on the WAN interface, but for download on the LAN interface.


osman said:
A traffic shaping box should be simple.

I think it's better if you first build a FreeBSD router to do all the routing and NAT with as many LAN cards as you want, and then set up a FreeBSD or OpenBSD bridge between your clients and the router to do all the traffic shaping with PF.

Regards
usman
That might not be a bad idea, but then I'd need two machines.
 
Ok, thanks again. So if I have one interface with VLANs on it, I can just separate them with proper pass in and pass out rules, right? Like:

Code:
altq on em0 hfsc bandwidth 100Mb queue { luserD, luserU }
queue luserU bandwidth 50% hfsc(upperlimit 10Mb default)
queue luserD bandwidth 50% hfsc(upperlimit 10Mb)

pass in on em0 from 10.1.1.100 to !<ournet> keep state queue luserU
pass out on em0 from !<ournet> to 10.1.1.100 keep state queue luserD

luserD and luserU stand for Download and Upload.
 
Damn, I still have trouble getting this to work. It doesn't seem to work even with a plain configuration: no VLANs, just WAN and LAN interfaces. To be perfectly clear, I want to hard-limit the connection to a set speed.

Code:
ext_if="wan0"
int_if="lan0"

nat_addr="10.9.222.1"
table <int_net> { 10.7.253.0/25 }
table <nasze> { 79.110.190.0/20 }

set limit { states 300000, frags 5000 }
set loginterface none
set optimization normal
set block-policy drop
set require-order yes
set fingerprints "/etc/pf.os"

altq on $int_if cbq bandwidth 100Mb queue { down_q, qdef }
queue down_q bandwidth 5Mb cbq(red)
queue qdef bandwidth 5Mb cbq(default red)

nat on $ext_if from <int_net> to ! <nasze> -> $nat_addr

pass out quick from any to 10.7.253.126 queue down_q
pass out quick from any to 79.110.190.14 queue down_q

Downloading to that gate address still doesn't seem to work; the traffic is not shaped. Any ideas?

Code:
# wget -O /dev/null --bind-address=10.7.253.126 ftp://ftp.icm.edu.pl/ls-lR.old.gz
--2011-03-27 00:38:36--  ftp://ftp.icm.edu.pl/ls-lR.old.gz
           => `/dev/null'
Resolving ftp.icm.edu.pl... 193.219.28.140
Connecting to ftp.icm.edu.pl|193.219.28.140|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done.    ==> PWD ... done.
==> TYPE I ... done.  ==> CWD not needed.
==> SIZE ls-lR.old.gz ... 161309181
==> PASV ... done.    ==> RETR ls-lR.old.gz ... done.
Length: 161309181 (154M) (unauthoritative)

 2% [>                                                                     ] 3 400 504   2,69M/s

The same from the public address 79.110.190.14:

Code:
# wget -O /dev/null --bind-address=79.110.190.14 ftp://ftp.icm.edu.pl/ls-lR.old.gz
--2011-03-27 00:44:16--  ftp://ftp.icm.edu.pl/ls-lR.old.gz
           => `/dev/null'
Resolving ftp.icm.edu.pl... 193.219.28.140
Connecting to ftp.icm.edu.pl|193.219.28.140|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done.    ==> PWD ... done.
==> TYPE I ... done.  ==> CWD not needed.
==> SIZE ls-lR.old.gz ... 161309181
==> PASV ... done.    ==> RETR ls-lR.old.gz ... done.
Length: 161309181 (154M) (unauthoritative)

 8% [=====>                                                                ] 13 877 488  3,17M/s  eta 48s
 
Run pfctl -sq -vv and test further. See if stuff ends up in the queues, and if not, experiment. It's very difficult to read other people's rulesets out of context.
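For repeated testing it can help to condense that output to just queue names and measured rates. A small awk filter could do it; the field layout here is assumed from typical ALTQ statistics output and may need adjusting:

```shell
#!/bin/sh
# Hypothetical helper: condense `pfctl -vv -sq` output (fed on stdin)
# to "queue measured-rate" pairs, one per line.
summarize() {
    awk '/^queue/    { q = $2 }              # remember the queue name
         /measured:/ { print q, $(NF-1) }'   # rate is next-to-last field
}

# demonstration on a captured sample; live use would be:
#   pfctl -vv -sq | summarize
summarize <<'EOF'
queue  down_q on lan0 bandwidth 5Mb cbq( red )
  [ pkts:       1739  bytes:     833526  dropped pkts:      0 bytes:      0 ]
  [ measured:    77.2 packets/s, 325.24Kb/s ]
EOF
```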
 
Code:
ifconfig_fxp0_name="wan0"
ifconfig_fxp1_name="lan0"

Code:
wan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=9<RXCSUM,VLAN_MTU>
	ether 00:30:05:03:90:b1
	inet 10.8.46.252 netmask 0xffffff00 broadcast 10.8.46.255
	inet6 fe80::230:5ff:fe03:90b1%wan0 prefixlen 64 scopeid 0x1 
	inet 10.9.222.1 netmask 0xfffffff0 broadcast 10.9.222.15
	media: Ethernet autoselect (100baseTX <full-duplex>)
	status: active
lan0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=2009<RXCSUM,VLAN_MTU,WOL_MAGIC>
	ether 00:02:b3:22:54:2b
	inet 10.7.253.126 netmask 0xffffff80 broadcast 10.7.253.127
	inet6 fe80::202:b3ff:fe22:542b%lan0 prefixlen 64 scopeid 0x2 
	inet 79.110.190.14 netmask 0xfffffff8 broadcast 79.110.190.15
	media: Ethernet autoselect (100baseTX <full-duplex>)
	status: active
plip0: flags=108810<POINTOPOINT,SIMPLEX,MULTICAST,NEEDSGIANT> metric 0 mtu 1500
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
	inet6 ::1 prefixlen 128 
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4 
	inet 127.0.0.1 netmask 0xff000000

These are fxp cards, quite old. This is a test router ;-] But I tried this also on another box with an em card; unfortunately that one is a single-card system (as mentioned above). Both run FreeBSD 7.3.

All traffic seems to go to the default queue; I even set a from-any-to-any rule:

Code:
ext_if="wan0" 
int_if="lan0"

Code:
redirected_nets="{10.7.253.120/29}"
nat_addr="10.9.222.1"
table <int_net> { 10.7.253.0/25 } 
table <ournet> { 79.110.192.0/20 193.239.56.0/22 }
table <debtors>

set limit { states 300000, frags 5000 }
set loginterface none
set optimization normal
set block-policy drop
set require-order yes
set fingerprints "/etc/pf.os"

altq on $int_if cbq bandwidth 100Mb queue { down_q, qdef }
queue down_q bandwidth 5Mb cbq(red)
queue qdef bandwidth 5Mb cbq(default red)

nat on $ext_if from <int_net> to ! <ournet> -> $nat_addr

pass quick from any to any queue down_q

Oh! Now some traffic goes to that queue, but the wget speed is still about 3MB/s!

Code:
queue root_lan0 on lan0 bandwidth 100Mb priority 0 cbq( wrr root ) {down_q, qdef}
  [ pkts:       1828  bytes:     838665  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
  [ measured:    80.5 packets/s, 326.71Kb/s ]
queue  down_q on lan0 bandwidth 5Mb cbq( red ) 
  [ pkts:       1739  bytes:     833526  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
  [ measured:    77.2 packets/s, 325.24Kb/s ]
queue  qdef on lan0 bandwidth 5Mb cbq( red default ) 
  [ pkts:         89  bytes:       5139  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
  [ measured:     3.3 packets/s, 1.47Kb/s ]




EDIT: OK, it seems that the traffic must actually be routed through the box (downloading to the router's own addresses doesn't get shaped). The topic is considered solved, but the VLAN issue continues here:
http://forums.freebsd.org/showthread.php?t=22855
 