Balanced Multipath Transport Protocol for Mitigating MPTCP Incast in Data Center Networks

Author(s):  
Mahendra Suryavanshi ◽  
Dr. Ajay Kumar ◽  
Dr. Jyoti Yadav

Recent data centers provide dense interconnectivity between each pair of servers through multiple paths. These data centers offer high aggregate bandwidth and robustness by using multiple paths simultaneously. The Multipath TCP (MPTCP) protocol was developed to improve throughput, share network link capacity fairly, and provide robustness during path failures by utilizing multiple paths over multi-homed data center networks. Running MPTCP for latency-sensitive, rack-local short flows with a many-to-one communication pattern at the access layer of multi-homed data center networks creates the MPTCP incast problem. In this paper, the Balanced Multipath TCP (BMPTCP) protocol is proposed to mitigate the MPTCP incast problem in multi-homed data center networks. BMPTCP is a window-based congestion control protocol that prevents constant growth of each worker’s subflow congestion window. BMPTCP computes an identical congestion window size for all concurrent subflows by considering the bottleneck Top-of-Rack (ToR) switch buffer size and the current number of concurrently transmitting workers. This helps BMPTCP avoid timeout events caused by full-window losses at the ToR switch. Based on the current congestion situation at the ToR switches, BMPTCP adjusts the transmission rate of each worker’s subflow so that the total amount of data transmitted by all concurrent subflows does not overflow the bottleneck ToR switch buffer. Simulation results show that BMPTCP effectively alleviates MPTCP incast: it improves goodput and reduces flow completion time compared to the existing MPTCP and EW-MPTCP protocols.
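The abstract does not give BMPTCP's exact window formula, so the following is only a minimal sketch of the idea it describes: every concurrent subflow is capped at an identical window derived from the bottleneck ToR buffer size and the number of active workers. The function name and parameters are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' implementation): cap each subflow's
# congestion window so that the sum of all concurrent subflows' windows fits
# in the bottleneck ToR switch buffer, as the BMPTCP abstract describes.

def balanced_subflow_cwnd(tor_buffer_bytes, num_workers, subflows_per_worker, mss=1460):
    """Return an identical per-subflow congestion window (in segments)."""
    total_subflows = num_workers * subflows_per_worker
    # Divide the bottleneck buffer evenly among all concurrent subflows and
    # keep at least one MSS so every subflow can still make progress.
    cwnd_bytes = max(mss, tor_buffer_bytes // total_subflows)
    return cwnd_bytes // mss

# Example: a 256 KB shared ToR buffer, 32 workers, 2 subflows each
# -> roughly 2 segments per subflow, instead of windows that keep growing.
print(balanced_subflow_cwnd(256 * 1024, num_workers=32, subflows_per_worker=2))
```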

2018 ◽  
Vol 7 (3.12) ◽  
pp. 19
Author(s):  
Amitkumar J. Nayak ◽  
Amit P. Ganatra

Today, internet use is practically universal, and devices reach it through multiple technologies that offer users several ways to communicate. By forming multiple paths in the data center network, the latest generation of data centers offers high aggregate bandwidth with robustness. To utilize this bandwidth, different data flows must take separate paths; in short, a single-path transport seems inappropriate for such networks. With Multipath TCP, we must reconsider data center networks with a different approach to the relationship between topology, transport protocols, and routing. Multipath TCP enables certain topologies that single-path TCP cannot use. In newer-generation data centers, Multipath TCP is already deployable using widely available technologies such as Equal-Cost Multipath (ECMP) routing, but the major benefits will come when data centers are specifically designed for multipath transports. Owing to technologies such as cloud computing, social networking, and information networks, a large number of large data centers need to be deployed. While the Transmission Control Protocol (TCP) is the dominant transport protocol in data center networks, operating conditions such as high bandwidth, small-buffered switches, and bursty traffic patterns cause TCP to perform poorly. The Data Center TCP (DCTCP) algorithm has recently been proposed as a TCP variant for data centers that addresses these limitations, which the traditional TCP protocol was not designed to handle.
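As background on DCTCP, which the abstract mentions only by name, the sketch below shows the well-known DCTCP window update: the sender keeps a running estimate of the fraction of ECN-marked packets and cuts its congestion window in proportion to it. The variable names and example numbers are mine, not from this paper.

```python
# Background sketch of the standard DCTCP window update (not from this paper):
# the sender maintains a moving average of the fraction of ECN-marked packets
# and reduces its window in proportion to that congestion estimate.

G = 1 / 16  # weight of the moving average (the DCTCP paper's default)

def dctcp_update(cwnd, alpha, marked_pkts, total_pkts):
    frac_marked = marked_pkts / total_pkts if total_pkts else 0.0
    alpha = (1 - G) * alpha + G * frac_marked        # smooth congestion estimate
    cwnd = max(1.0, cwnd * (1 - alpha / 2))          # proportional window cut
    return cwnd, alpha

# Example: 20% of the packets in the last window were ECN-marked.
cwnd, alpha = dctcp_update(cwnd=100.0, alpha=0.0, marked_pkts=20, total_pkts=100)
print(round(cwnd, 1), round(alpha, 3))  # ~99.4 segments, alpha ~0.013
```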


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 31782-31790 ◽  
Author(s):  
Jin Ye ◽  
Luting Feng ◽  
Ziqi Xie ◽  
Jiawei Huang ◽  
Xiaohuan Li

2013 ◽  
Vol 462-463 ◽  
pp. 1028-1035
Author(s):  
Hang Xing Wu ◽  
Xiao Long Yang ◽  
Min Zhang

Nowadays, data centers have become increasingly important for various web applications, and huge costs are invested in maintaining their performance. However, several studies have shown that the TCP incast phenomenon is widely observed in data centers, where it causes congestion and greatly damages performance. A number of congestion control mechanisms for data centers have therefore been proposed to address this problem. This paper categorizes and describes these mechanisms and analyzes their advantages and disadvantages. Finally, some new topics in data center congestion control that may be worthy of further study are presented.
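As a back-of-the-envelope illustration of the incast effect discussed above (the numbers are assumptions for illustration, not from the paper): when many synchronized senders each inject a full window toward a single receiver, the shared switch buffer cannot absorb the burst.

```python
# Toy illustration of TCP incast (numbers are assumptions, not from the paper):
# 40 synchronized senders each burst a 64 KB window toward one receiver behind
# a ToR port with only 128 KB of buffering.

senders, window_kb, buffer_kb = 40, 64, 128
offered_kb = senders * window_kb             # 2560 KB arrives almost at once
dropped_kb = max(0, offered_kb - buffer_kb)  # 2432 KB cannot be buffered
print(offered_kb, dropped_kb)  # the resulting losses trigger TCP timeouts
```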


2019 ◽  
Vol 29 (02) ◽  
pp. 1950007 ◽  
Author(s):  
Chen Hao ◽  
Weihua Yang

The generalized $k$-connectivity of a graph $G$ is a parameter that measures the reliability of a network $G$ in connecting any $k$ vertices of $G$; it generalizes the traditional connectivity. For $S \subseteq V(G)$, let $\kappa_G(S)$ denote the maximum number $\ell$ of edge-disjoint trees $T_1, T_2, \ldots, T_\ell$ in $G$ such that $V(T_i) \cap V(T_j) = S$ for any $i, j \in \{1, 2, \ldots, \ell\}$ with $i \neq j$. For an integer $k$ with $2 \le k \le |V(G)|$, the generalized $k$-connectivity of $G$ is defined as $\kappa_k(G) = \min\{\kappa_G(S) : S \subseteq V(G),\ |S| = k\}$; in particular, $\kappa_2(G) = \kappa(G)$. Data centers are essential to the business of companies such as Google, Amazon, Facebook and Microsoft. Based on data centers, the data center networks $D_{k,n}$, introduced by Guo et al. in 2008, have many desirable properties. In this paper, we study the generalized 3-connectivity of $D_{k,n}$ and show that [Formula: see text] for [Formula: see text] and [Formula: see text].
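A small worked example (mine, not from the paper) may make the definition above concrete: in the complete graph $K_4$, any three vertices are connected by exactly two internally disjoint trees.

```latex
% Example: kappa_3 of the complete graph K_4 on vertices {a, b, c, d}.
% Take S = {a, b, c}. Two internally disjoint trees connecting S are
%   T_1 : the path a-b-c              (edges ab, bc)
%   T_2 : the star d-a, d-b, d-c      (edges da, db, dc).
% They intersect only in the vertices of S and share no edges, and the single
% remaining edge ac cannot form a third such tree, so kappa_{K_4}(S) = 2.
% The same holds for every 3-subset S, hence
\[
  \kappa_3(K_4) = 2 .
\]
```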


Author(s):  
Ahmad Nahar Quttoum

Today’s data center networks employ expensive networking equipment in structures that were not designed to meet the increasing requirements of current large-scale data center services. Limitations ranging from reliability and resource utilization to high cost remain challenging. The era of cloud computing promises to enable large-scale data centers. The computing platforms of such cloud-service data centers consist of a large number of low-priced commodity servers that, with virtualization on top, can match the performance of expensive high-end servers at only a fraction of the price. Recently, research on data center networks has started to evolve rapidly, opening the path to addressing many of their design and management challenges, such as scalability, reliability, bandwidth capacity, virtual machine migration, and cost. Bandwidth resource fragmentation limits network agility and leads to low utilization rates, not only of the bandwidth resources but also of the servers that run the applications. With traffic engineering methods, the managers of such networks can adapt to rapid changes in the traffic among their servers, which helps provide better resource utilization and lower costs. The market is going through exciting changes, and the need to run services at demanding scale drives the work toward cloud networks, which are enabled by the notion of autonomic management and the availability of low-priced commodity network equipment. This work provides readers with a survey that presents the management challenges and the design and operational constraints of cloud-service data center networks.


Data center networks host a wide variety of cloud services. Network congestion and load imbalance may occur in data center networks because of elephant flows. In order to improve the throughput and overall utilization of the network, a dynamic load-balancing mechanism has to be in place. Software-Defined Networking (SDN) is used to balance the network load: an SDN controller can obtain a global view of the network and hence holds the status and topology of the entire data center network. Elephant flows can be split and sent over multiple paths based on the current state of the network. The described idea is implemented in an OpenFlow environment and tested for improvement. The results show an enhancement in throughput and network utilization.
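The abstract does not give the splitting algorithm, so here is only a minimal sketch of one plausible realization of the idea: a controller with a global view divides an elephant flow across the available paths in proportion to their spare capacity. All names and numbers are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's implementation): split an elephant flow
# across multiple paths in proportion to each path's spare capacity, as an SDN
# controller with a global network view could do.

def split_elephant_flow(flow_mbps, paths):
    """paths: dict of path_id -> (capacity_mbps, current_load_mbps)."""
    spare = {p: max(0.0, cap - load) for p, (cap, load) in paths.items()}
    total_spare = sum(spare.values())
    if total_spare == 0:
        raise RuntimeError("no spare capacity on any path")
    # Weighted split: each path carries a share proportional to its headroom.
    return {p: flow_mbps * s / total_spare for p, s in spare.items() if s > 0}

# Example: a 4 Gbps elephant flow over three 10 Gbps paths with uneven load.
paths = {"p1": (10_000, 2_000), "p2": (10_000, 6_000), "p3": (10_000, 9_000)}
print(split_elephant_flow(4_000, paths))
# -> roughly {'p1': 2461.5, 'p2': 1230.8, 'p3': 307.7} Mbps
```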


2020 ◽  
Vol 30 (02) ◽  
pp. 2050010
Author(s):  
Bo Zhu ◽  
Tianlong Ma ◽  
Shuangshuang Zhang ◽  
He Zhang

An edge subset $F$ of $G$ is a fractional matching preclusion set (FMP set for short) if $G - F$ has no fractional perfect matching. The fractional matching preclusion number (FMP number for short) of $G$, denoted by $fmp(G)$, is the minimum size of an FMP set of $G$. A set $F$ of edges and vertices of $G$ is a fractional strong matching preclusion set (FSMP set for short) if $G - F$ has no fractional perfect matching. The fractional strong matching preclusion number (FSMP number for short) of $G$, denoted by $fsmp(G)$, is the minimum size of an FSMP set of $G$. Data center networks have been proposed as a server-centric interconnection network structure for data centers that can support millions of servers with high network capacity by using only commodity switches. In this paper, we obtain the FMP number and the FSMP number for the data center networks $D_{k,n}$, and show that [Formula: see text] for [Formula: see text], [Formula: see text] and [Formula: see text] for [Formula: see text], [Formula: see text]. In addition, all the optimal fractional strong matching preclusion sets of these graphs are categorized.
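A small worked example (mine, not from the paper) of these notions: the triangle $C_3$ has a fractional perfect matching even though it has no perfect matching, and deleting any single edge destroys it.

```latex
% Example: the triangle C_3 with vertices a, b, c.
% Assigning f(ab) = f(bc) = f(ca) = 1/2 gives every vertex total weight
%   1/2 + 1/2 = 1,
% so C_3 has a fractional perfect matching (although no perfect matching,
% since |V| is odd). Removing any one edge, say ca, leaves the path a-b-c,
% where covering a and c forces f(ab) = f(bc) = 1 and vertex b receives 2,
% so no fractional perfect matching survives. Hence
\[
  fmp(C_3) = 1 .
\]
```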

