As far as I understand it, TCP can and will adapt to the available bandwidth via its window/ACK mechanism. So if there is a bandwidth limit, we only need to delay the packets, and TCP will automatically slow down to match.
But UDP has no such mechanism. So when the packets are delayed, they pile up, and there is nothing to do but throw them away. There is no way to tell the sender to slow down, because there is no notion of a connection.
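To make that concrete, here is a minimal sketch (not real OpenVPN code; the destination address, port, and packet size are made up) of a UDP sender that just blasts datagrams. On a FreeBSD-like stack, once the queue behind the bottleneck fills up, sendto(2) starts failing with ENOBUFS, and nothing ever told the sender to slow down beforehand:

Code:
/*
 * Minimal sketch, not real OpenVPN code: blast UDP datagrams at a
 * rate-limited path.  Destination address, port and packet size are
 * made up.  On a FreeBSD-like stack, once the queue behind the
 * bottleneck is full, sendto(2) fails with ENOBUFS.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    char payload[1400] = { 0 };

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5000);                      /* arbitrary port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* TEST-NET-1 */

    for (int i = 0; i < 100000; i++) {
        if (sendto(s, payload, sizeof(payload), 0,
                   (struct sockaddr *)&dst, sizeof(dst)) < 0) {
            /* No window/ACK feedback ever slowed us down; the kernel
             * can only refuse the packet once its queue is full. */
            fprintf(stderr, "packet %d: %s\n", i, strerror(errno));
            if (errno != ENOBUFS)
                break;
        }
    }
    return 0;
}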
So when I use a dummynet pipe to limit bandwidth, I keep getting dropped packets and ugly messages like this one:
Code:
openvpn[86893]: write UDP: No buffer space available (code=55)

And it seems there is nothing that can be done about it.
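For what it's worth, code=55 is ENOBUFS on FreeBSD: the outgoing queue, here the dummynet pipe's, is full. About the only thing an application could do is back off and retry; a hypothetical helper, not part of OpenVPN (retry count and delay are guesses):

Code:
/*
 * Hypothetical helper, not part of OpenVPN: retry a UDP send a few
 * times when the kernel reports ENOBUFS, sleeping briefly so the
 * pipe can drain.  Retry count and delay are guesses.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <unistd.h>

ssize_t
send_with_backoff(int s, const void *buf, size_t len,
                  const struct sockaddr *dst, socklen_t dlen)
{
    for (int tries = 0; tries < 5; tries++) {
        ssize_t n = sendto(s, buf, len, 0, dst, dlen);
        if (n >= 0 || errno != ENOBUFS)
            return n;          /* success, or an unrelated error */
        usleep(10 * 1000);     /* 10 ms: give the queue time to drain */
    }
    errno = ENOBUFS;
    return -1;                 /* still congested; caller has to drop */
}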
But then, how is this handled on normal network connections? There are fast Ethernet links and slow serial ones; how does the system know to feed each of them at an appropriate rate?
Or, a more complex case: my Internet connection is PPPoE. The ppp daemon encapsulates the data and puts it onto an Ethernet that runs at 100 Mbit/s. Behind that sits a DSL modem, and the DSL bandwidth is only about 2 Mbit/s. If the system pushed data onto the Ethernet at full possible speed, the DSL modem would have to throw away 98% of the packets. It obviously doesn't. But why?
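For reference, a dummynet pipe can at least be given a deeper queue, so that short bursts get buffered instead of refused outright; a hypothetical configuration (pipe number, rule number, and queue size made up):

Code:
# Hypothetical dummynet setup, numbers made up: a 2 Mbit/s pipe with a
# deeper queue, so short bursts get buffered instead of rejected.
ipfw pipe 1 config bw 2Mbit/s queue 100KBytes
ipfw add 100 pipe 1 udp from any to any out

Of course that only absorbs bursts; a sender that is persistently too fast will still overflow any finite queue.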