The idea of conservation of packets applies to a connection in equilibrium: a new packet isn't put into the network until an old one leaves. The paper identifies three ways packet conservation can fail, each with its own remedy:
- The connection doesn't get to equilibrium: if packets are sent only in response to an ack, the sender's packet spacing will come to match the packet time on the slowest link in the path; this is called self-clocking. The slow-start algorithm reaches this state by sending a single packet to start with and increasing the window by one segment for each ack that arrives.
- The sender injects a new packet before an old one has exited: a good round-trip-time estimator is the single most important factor in any protocol implementation for a heavily loaded network. The authors developed a method that estimates the variation in RTT as well as its mean, and derived a retransmit timer from it that improves on the one suggested in RFC 793.
- The equilibrium can't be reached because of resource limits along the path: if the timers are in good shape, a timeout indicates a lost packet and not a broken timer. Packet loss is far more likely to mean the network is congested and there's insufficient buffering along the path, so a congestion avoidance strategy should be adopted: multiplicative decrease (cut the window in half on a timeout), additive increase (open cwnd by one segment per round trip) on each ack of new data, and never sending more than min(advertised window, cwnd). The paper notes these amount to only a few lines of code.
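The slow-start behavior described in the first bullet can be sketched roughly as follows; this is a toy model of how the window grows (it doubles about once per round trip, since each ack opens the window by one), not the paper's actual implementation, and the function name is mine:

```python
def slow_start(advertised_window, rtts):
    """Return the window size at the start of each round trip.

    Each of cwnd acks in a round trip opens the window by one more
    segment, so cwnd roughly doubles per RTT until it hits the
    receiver's advertised window.
    """
    cwnd = 1  # start by sending a single packet
    sizes = []
    for _ in range(rtts):
        sizes.append(cwnd)
        cwnd = min(cwnd * 2, advertised_window)
    return sizes

print(slow_start(32, 6))  # [1, 2, 4, 8, 16, 32]
```

The exponential opening is what lets the sender find the self-clocked equilibrium quickly without dumping a full window onto the network at once.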
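The mean-plus-variance RTT estimator from the second bullet can be sketched like this. The gains of 1/8 for the smoothed mean and 1/4 for the deviation, and the "mean plus four deviations" timeout, follow the values discussed in the paper's appendix; the class structure and names are mine:

```python
class RTTEstimator:
    """Sketch of an RTT estimator that tracks mean and deviation."""

    def __init__(self, first_sample):
        self.srtt = first_sample          # smoothed round-trip time
        self.rttvar = first_sample / 2.0  # mean deviation estimate

    def update(self, measured):
        err = measured - self.srtt
        self.srtt += err / 8.0                         # gain 1/8 on the mean
        self.rttvar += (abs(err) - self.rttvar) / 4.0  # gain 1/4 on the deviation

    @property
    def rto(self):
        # Retransmit timeout: mean plus four deviations, replacing
        # the fixed "beta = 2" multiplier of RFC 793.
        return self.srtt + 4.0 * self.rttvar
```

Because the timeout tracks measured variance, it stays tight on a quiet network but backs off automatically as load (and RTT jitter) rises, so timeouts really do indicate loss.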
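The third bullet's additive-increase/multiplicative-decrease policy can be sketched as three small functions; this is an illustrative model (names mine), not the kernel code:

```python
def on_ack(cwnd):
    # Additive increase: about one segment per round trip,
    # i.e. 1/cwnd per ack of new data.
    return cwnd + 1.0 / cwnd

def on_timeout(cwnd):
    # Multiplicative decrease: halve the window on a loss signal,
    # never dropping below one segment.
    return max(cwnd / 2.0, 1.0)

def send_window(cwnd, advertised):
    # Never send more than min(advertised window, cwnd).
    return min(int(cwnd), advertised)
```

The asymmetry is deliberate: the window probes for spare capacity slowly (linearly) but retreats quickly (exponentially) when the network signals congestion, which is what makes the scheme stable.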
Comment: It is good to read a self-explanatory paper that gives you all the detailed explanations behind each step, while still keeping the code simple. Cal Rocks! haha. Seriously though, it wasn't easy to identify the problems that caused network congestion, particularly from the protocol implementation perspective. And I really liked the figures, which show packet size, timing, and bandwidth all in one 2D diagram.