
Algorithms designed to ensure that many users share a network fairly cannot prevent some users from hogging all the bandwidth.

When users want to send data over the Internet faster than the network can handle, congestion can occur, in the same way that traffic jams disrupt the morning commute into a big city.

Computers and devices that transmit data over the Internet break the data into smaller packets and use a special algorithm to decide how fast to send those packets. These congestion control algorithms seek to discover and fully utilize the available network capacity while sharing it equitably with other users on the same network, all while keeping the delay caused by data waiting in network queues low.

Over the past decade, researchers in industry and academia have developed several algorithms that attempt to achieve high rates while controlling delays. Some of these, such as the BBR algorithm developed by Google, are now widely used by many websites and applications.

But a team of MIT researchers has discovered that these algorithms can be deeply unfair. In a new study, they show that there will always be a network scenario in which at least one sender receives almost no bandwidth compared to the other senders; that is, a problem known as starvation cannot be avoided.

“What is really surprising about this paper and the results is that when you take into account the realistic complexity of network paths and all the things they can do to data packets, it is basically impossible for delay-controlling congestion control algorithms to avoid starvation using current methods,” says Mohammad Alizadeh, associate professor of electrical engineering and computer science (EECS).

While Alizadeh and his co-authors were unable to find a traditional congestion control algorithm that avoids starvation, there may be algorithms in a different class that can prevent the problem. Their analysis also suggests that changing how these algorithms work, allowing for larger variations in delay, could help prevent starvation in some network situations.

Alizadeh wrote the paper with first author and EECS graduate student Venkat Arun and senior author Hari Balakrishnan, the Fujitsu Professor of Computer Science and Artificial Intelligence. The research will be presented at the ACM Special Interest Group on Data Communications (SIGCOMM) conference.

Congestion control

Congestion control is a fundamental networking problem that researchers have been trying to address since the 1980s.

A user’s computer does not know how fast to send data packets over the network because it lacks information, such as the quality of the network connection or the number of other senders sharing the network. Sending packets too slowly wastes available bandwidth. But sending them too fast can overwhelm the network, which then starts dropping packets. These packets must be resent, leading to longer delays. Delays can also result from packets waiting in queues for a long time.
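To make the sending-rate trade-off concrete, the classic rule used by TCP-style congestion control is additive increase, multiplicative decrease (AIMD): grow the sending window gradually while packets get through, and cut it sharply on a loss signal. This is a minimal illustrative sketch of that general idea, not any of the specific algorithms studied in the paper:

```python
# A minimal AIMD sketch: the sender grows its congestion window additively
# while packets succeed and halves it when the network signals a loss.

def aimd_step(cwnd: float, loss: bool) -> float:
    """Return the next congestion window given a loss signal."""
    if loss:
        return max(1.0, cwnd / 2)   # multiplicative decrease on congestion
    return cwnd + 1.0               # additive increase while the path has room

# Toy run: grow until the window exceeds a pretend capacity of 10 packets,
# then back off and start climbing again (the familiar sawtooth pattern).
cwnd, history = 1.0, []
for _ in range(15):
    loss = cwnd > 10          # pretend the network drops packets past capacity
    cwnd = aimd_step(cwnd, loss)
    history.append(cwnd)
```

Real algorithms such as BBR use far richer signals than this, but the sketch shows why senders must constantly probe: capacity is never known in advance.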

Congestion control algorithms use packet losses and delays as signals to infer congestion and decide how fast to transmit data. But the Internet is complex, and packets can be delayed or lost for reasons unrelated to congestion. For example, data can be held up in a queue along the way and then released in a burst with other packets, or the receiver’s acknowledgments may be delayed. The authors call delays that are not caused by congestion “jitter.”

Even if a congestion control algorithm measures delay perfectly, it cannot distinguish delay caused by congestion from delay caused by jitter. Jitter-induced delay is unpredictable and confuses the sender. Because of this ambiguity, users begin to estimate delay differently, which causes them to send packets at unequal rates. Eventually, Arun explains, this leads to a situation in which starvation occurs and one sender gets shut out completely.
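The ambiguity can be illustrated with a deliberately simplified simulation (a toy model of my own, not the authors’ formal one): two senders share one bottleneck and both target the same small queueing delay, which each estimates by subtracting its own observed minimum round-trip time. If jitter hides the true minimum from one sender, its baseline is too high, it under-counts congestion, and it crowds the other sender out:

```python
# Toy model: two senders target 5 ms of queueing delay, estimated as
# (measured RTT - observed minimum RTT). Sender A never saw a jitter-free
# packet, so its baseline is 8 ms too high and it under-counts congestion.

BASE_RTT = 50.0          # ms of fixed propagation delay on the shared path
CAPACITY = 100.0         # bottleneck capacity in arbitrary rate units
TARGET = 5.0             # queueing delay (ms) each sender aims for

min_rtt_a = BASE_RTT + 8.0   # A's minimum observation was inflated by jitter
min_rtt_b = BASE_RTT         # B happened to observe the true minimum

rate_a = rate_b = 50.0
for _ in range(200):
    # Queueing delay grows when the offered load exceeds capacity.
    queue_delay = max(0.0, rate_a + rate_b - CAPACITY) * 0.5
    rtt = BASE_RTT + queue_delay
    # Each sender nudges its rate toward its own (mis)estimated delay target.
    rate_a = max(0.0, rate_a + 0.5 * (TARGET - (rtt - min_rtt_a)))
    rate_b = max(0.0, rate_b + 0.5 * (TARGET - (rtt - min_rtt_b)))

# Sender A's skewed baseline lets it drive sender B's rate all the way to zero.
```

Even though both senders run the identical rule, the one with the distorted delay estimate ends up with all the bandwidth, which is the starvation pattern the paper formalizes.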

“We started the project because we lacked a theoretical understanding of congestion control behavior in the presence of jitter. To place it on a firmer theoretical footing, we built a mathematical model that was simple enough to reason about, yet able to capture some of the complexities of the Internet. It has been very rewarding to have the math tell us things we didn’t know that have practical relevance,” Arun says.

Studying starvation

The researchers fed their mathematical model to a computer, gave it a series of commonly used congestion control algorithms, and asked the computer to find an algorithm that could avoid starvation under their model.

“We couldn’t do it. We tried every algorithm we knew of, and some new algorithms we invented. Nothing worked. The computer always found a situation where some people get all the bandwidth and at least one person gets practically nothing,” Arun says.

The researchers were surprised by this finding, especially since these algorithms are widely believed to be reasonably fair. They began to suspect that it may not be possible to avoid starvation, an extreme form of unfairness. This motivated them to define a class of algorithms, which they call “delay-convergent algorithms,” that they proved will always suffer from starvation under their network model. All existing congestion control algorithms that control delay (that the researchers are aware of) are delay-convergent.

Arun adds that the fact that such simple failure modes of these widely used algorithms remained unknown for so long illustrates how difficult it is to understand algorithms through empirical testing alone, and underscores the importance of a solid theoretical foundation.

But all hope is not lost. While every algorithm they tested failed, there may be other, non-convergent algorithms that are able to avoid starvation. Their analysis suggests that one way to solve the problem might be to design congestion control algorithms that vary the delay over a wider range, so the range is larger than any delay that could occur due to network jitter.

“To control delays, algorithms have also tried to bound the variations in delay about a desired equilibrium, but there is nothing wrong in potentially creating greater delay variation to get better measurements of congestive delays. It is just a new design philosophy you have to adopt,” Balakrishnan adds.
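The intuition behind this design direction can be sketched as follows (my own hypothetical illustration of the idea, not an algorithm proposed in the paper): if a sender deliberately oscillates its rate so that the delay swing it induces exceeds any possible jitter, then comparing delays between the high-rate and low-rate phases can reveal congestion even through noisy measurements.

```python
# Hypothetical sketch: the sender probes in two phases (high rate, low rate)
# and compares the minimum RTT seen in each. A swing larger than the assumed
# worst-case jitter cannot be explained by jitter alone, so it must reflect
# queueing caused by the sender's own traffic.

MAX_JITTER = 8.0     # ms: assumed bound on non-congestive delay variation

def congestive_delay_swing(high_rtts, low_rtts):
    """Estimate self-induced congestive delay from the two probing phases."""
    swing = min(high_rtts) - min(low_rtts)
    return swing if swing > MAX_JITTER else 0.0

# High-rate phase queues roughly 20 ms on a 50 ms path; jitter adds a few
# ms of noise to individual samples in both phases.
high = [72.0, 74.5, 70.3]   # ~50 ms base + ~20 ms queueing + jitter
low = [50.0, 53.1, 55.9]    # ~50 ms base + jitter only
est = congestive_delay_swing(high, low)   # well above MAX_JITTER
```

The key design choice is that the self-induced delay variation is made larger than the jitter bound on purpose, trading a steadier delay for a measurement that jitter cannot fake.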

Now, the researchers want to keep pushing to see if they can find or build an algorithm that eliminates starvation. They also want to apply this approach of mathematical modeling and computational proofs to other thorny, unsolved problems in networked systems.

“We are increasingly reliant on computer systems for very critical things, and we need to put their reliability on a firmer conceptual footing. We’ve shown the surprising things you can discover when you put in the time to come up with formal specifications of what the problem actually is,” says Alizadeh.

