Hello everyone,
I'm running FreeBSD 6.4-STABLE with Squid, using ipfw pipes to shape traffic for the proxy users.
I'm no "guru", but I have a question about performance.
I use "nload" to watch incoming/outgoing bandwidth; it's very simple to use and I like it a lot.
Let me explain the setup: the box has two NICs, Squid listens on the LAN side, and on the WAN side we have a 512 kbit/s connection (512 kbit/s upload and 512 kbit/s download).
The ipfw rules I use are:
Code:
mynet="192.168.33.0/24"    # LAN subnet
ethlan="192.168.33.1"      # proxy's LAN address
download="256Kbit/s"
upload="128Kbit/s"
# Squid listens on TCP port 3128; ipfw only accepts ports with tcp/udp, not "ip"
ipfw add pipe 1 tcp from ${ethlan} 3128 to ${mynet}
ipfw add pipe 2 tcp from ${mynet} to ${ethlan} 3128
ipfw pipe 1 config bw ${download}
ipfw pipe 2 config bw ${upload}
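For reference, this is how I double-check what dummynet actually installed (just a quick inspection sketch; both commands need root on the shaping box):
Code:
```shell
# Show the configured pipes with their bandwidth, queue size and counters
ipfw pipe show
# Show the rule list, to confirm the two pipe rules are in place
ipfw list
```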
As you can see, I capped the download bandwidth at 256 kbit/s and the upload at 128 kbit/s.
It seems to work, but checking the WAN NIC with nload I see a lot of peaks around 500-512 kbit/s in the incoming graph; in fact, that is the maximum the line can reach. These annoying peaks disrupt the otherwise regular flow.
The download on the clients is about 240-250 kbit/s, but the bandwidth is not very steady.
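As a rough sanity check on those numbers (a back-of-the-envelope sketch, assuming a 1500-byte MTU with about 40 bytes of TCP/IP headers per packet), the payload rate to expect through a 256 kbit/s pipe is:
Code:
```shell
# Expected payload throughput (kbit/s) through a 256 kbit/s pipe,
# assuming 1460 payload bytes per 1500-byte packet (illustrative only)
echo $((256 * 1460 / 1500))
```
which comes out around 249 kbit/s, so the 240-250 kbit/s I see on the clients is consistent with the pipe limit itself.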
See the image below showing nload on the WAN interface with this setting:
At first I thought the problem was my NICs or my provider, but then I made a small change to the pipe configuration: instead of setting the bandwidth lower than the maximum from my ISP, I raised the values as follows (see the download and upload variables):
Code:
mynet="192.168.33.0/24"    # LAN subnet
ethlan="192.168.33.1"      # proxy's LAN address
download="800Kbit/s"
upload="800Kbit/s"
# Squid listens on TCP port 3128; ipfw only accepts ports with tcp/udp, not "ip"
ipfw add pipe 1 tcp from ${ethlan} 3128 to ${mynet}
ipfw add pipe 2 tcp from ${mynet} to ${ethlan} 3128
ipfw pipe 1 config bw ${download}
ipfw pipe 2 config bw ${upload}
With this setting the clients get the full bandwidth, but the flow is regular and linear; see the image below:
So here are my questions:
Is it normal to have these peaks? Can I fix this so that I get both bandwidth control and a regular flow?
Did I misconfigure the pipes?
Thanks in advance for your help.