IPFW traffic shaping question

I've a traffic shaping question. Would it be possible to restrict bandwidth via ipfw and dummynet after a defined time lapse?

For example, if I have a server that is writing to a machine at 4Gbps, I want to allow that for the first 60 seconds; thereafter the traffic shaper would kick in and reduce it to, say, 2Gbps.
Would that be possible?

Thanks in advance :).
 
As a bit of context, the ipfw rules would be written on the receiving server rather than on the transferring machine.
 
As far as I understand traffic shaping, you can only throttle outgoing traffic, not incoming.
 
Thanks SirDice
My question was badly phrased, apologies: it should be the other way around, a server writing to a machine at 4Gbps. Are there rules, or examples of scenarios, where the traffic shaper only starts restricting traffic after a time lapse, or even after a predefined amount of data has been transferred?
 
Nothing springs to mind. If I understand correctly, you want the server to send at max speed at first, and if this is sustained for longer than 60 seconds it should start throttling? You might need to script something around the firewall rules; I'm trying to figure out how to detect and keep track of this, but so far I haven't come anywhere near a potential solution.
 
** comment on the issue of the OP follows further below **

As far as I understand traffic shaping, you can only throttle outgoing traffic, not incoming.
Hehe, that's cute. ;) What precisely is outgoing traffic? This statement involves a few things which I did not manage to fully understand...

And another related question: how is flow control maintained in IP traffic?

Then, some ideas:
ipfw has no notion of "outgoing traffic". It only knows traffic that is locally generated, locally received, or routed through.
And, when employing genuine traffic-shaping, one would usually do that on the uplink router. From the viewpoint of that uplink router, there are a few hops to one side (into the LAN), and a few hops to the other side (including the uplink, the internet and the cloud). Is there any perceivable difference (technically, not logically)?

Then, naturally thinking: if we try to regulate a flow, it only makes sense to do that at the source (limit the outflux); it does not make sense to do it at the destination (as it would just spill over on the way).
But is this also true for IP traffic?

Then, some findings:
My netif says:
Code:
        media: Ethernet autoselect <flowcontrol> (1000baseT <full-duplex,flowcontrol,rxpause,txpause>)
Apparently it does some kind of flow-control. But how does that relate, and to what?

Then, when my site pushes data to an uplink (via some PPPoE/whatever), it seems to somehow magically manage that lower bandwidth. But when I put a dummynet traffic shaper before the uplink, I get UDP: No buffer space (which can be suppressed with the noerror queue parameter).
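For reference, noerror is set on the pipe itself. A minimal sketch, where the bandwidth, queue size, and rule number are placeholders rather than my actual setup:

```shell
# Placeholder pipe: shape to 10 Mbit/s with a 50-slot queue.
# 'noerror' makes dummynet report overflowing packets as delivered
# instead of returning ENOBUFS ("No buffer space") to the local sender.
ipfw pipe 1 config bw 10Mbit/s queue 50 noerror
# Send outbound UDP through that pipe:
ipfw add 100 pipe 1 udp from any to any out
```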

With TCP this does not happen, because there is a congestion-control algorithm, and that takes action - happening on the very endpoints of the flow. (And getting this to nicely interact with traffic-shaping is an endeavour in itself.) But then, if the cc algo is the only thing that manages flow-control, then there is no difference between incoming and outgoing (from/to the site).

So there is a bunch of question marks which I wasn't able to fully resolve.


Now back to the original question:

I haven't noticed anything prepackaged that solves exactly this demand. But there is an option in the dummynet section of the ipfw manual (see there for details):
burst size
If the data to be sent exceeds the pipe's bandwidth limit (and the pipe was previously idle), up to size bytes of data are allowed to bypass the dummynet scheduler, and will be sent as fast as the physical link allows. Any additional data will be transmitted at the rate specified by the pipe bandwidth.

This may or may not suit the matter. I do not know whether it is capable of the desired amount of data (60 seconds at a substantial bandwidth), and it is also a parameter specific to a pipe, not to the individual machines sending over a pipe (if the number of machines concerned is low, multiple pipes might be tried; I'm not sure about their cost).
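As a rough sketch of sizing that parameter for the OP's numbers, assuming burst must absorb the excess over the pipe's rate (4 Gbit/s arriving minus 2 Gbit/s drained, for 60 seconds):

```shell
# Excess rate while the sender runs at 4 Gbit/s against a 2 Gbit/s pipe:
excess_bps=$((2 * 1000 * 1000 * 1000))     # 2 Gbit/s, in bits per second
burst_bytes=$(( excess_bps / 8 * 60 ))     # bytes accumulated over 60 s
echo "burst = ${burst_bytes} bytes"        # prints 15000000000, i.e. ~15 GB
# The pipe would then be configured (on FreeBSD, as root) roughly as:
echo "ipfw pipe 1 config bw 2Gbit/s burst ${burst_bytes}"
```

Whether dummynet behaves sanely with a burst value of that size is exactly the part I cannot vouch for.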

Otherwise, it is certainly possible to change ipfw rules dynamically, or, more practically, to switch ipfw table contents dynamically. (It should also be possible to switch pipe bandwidth dynamically, but when trying that I usually get a message qfq_dequeue BUG/* non-workconserving leaf followed by a kernel panic; that should be looked into on occasion.)
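To illustrate the table approach (the rule/table numbers and the address are hypothetical, and on newer FreeBSD the table may need an explicit ipfw table 1 create first):

```shell
# A 2 Gbit/s pipe, applied only to senders listed in table 1:
ipfw pipe 2 config bw 2Gbit/s
ipfw add 200 pipe 2 ip from 'table(1)' to me in
# Table 1 starts empty, so traffic is unshaped.  After the grace
# period, add the sender to the table to start throttling:
ipfw table 1 add 192.0.2.10
# ...and delete it again to lift the limit:
ipfw table 1 delete 192.0.2.10
```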

Given that, it would basically be a task of somehow monitoring and counting the actual consumption and then taking appropriate action, i.e. a bit of development.
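A sketch of what such monitoring could look like, entirely untested and with hypothetical addresses and rule/pipe/table numbers: a count rule accumulates the bytes received from the sender, and once a threshold is crossed the sender is put into the table that feeds the pipe.

```shell
#!/bin/sh
# Untested sketch (FreeBSD, run as root).  SRC and LIMIT are examples.
SRC=192.0.2.10
LIMIT=30000000000                                  # ~60 s worth of 4 Gbit/s

ipfw -q add 100 count ip from ${SRC} to me in      # byte counter
ipfw -q pipe 2 config bw 2Gbit/s                   # throttle pipe
ipfw -q add 200 pipe 2 ip from 'table(1)' to me in # applied via table 1

while :; do
    # With -a, 'ipfw list' prints: rulenum packets bytes rule-body
    bytes=$(ipfw -a list 100 | awk '{print $3}')
    if [ "${bytes:-0}" -gt "${LIMIT}" ]; then
        ipfw table 1 add ${SRC}                    # start throttling
        break
    fi
    sleep 1
done
```

Resetting the counter (ipfw zero 100) and removing the table entry once the flow goes idle would be the next refinement.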
 