Fractional Matching Preclusion for Data Center Networks

2020 ◽  
Vol 30 (02) ◽  
pp. 2050010
Author(s):  
Bo Zhu ◽  
Tianlong Ma ◽  
Shuangshuang Zhang ◽  
He Zhang

An edge subset [Formula: see text] of [Formula: see text] is a fractional matching preclusion set (FMP set for short) if [Formula: see text] has no fractional perfect matching. The fractional matching preclusion number (FMP number for short) of [Formula: see text], denoted by [Formula: see text], is the minimum size of an FMP set of [Formula: see text]. A set [Formula: see text] of edges and vertices of [Formula: see text] is a fractional strong matching preclusion set (FSMP set for short) if [Formula: see text] has no fractional perfect matching. The fractional strong matching preclusion number (FSMP number for short) of [Formula: see text], denoted by [Formula: see text], is the minimum size of an FSMP set of [Formula: see text]. Data center networks have been proposed as a server-centric interconnection structure for data centers that can support millions of servers with high network capacity using only commodity switches. In this paper, we obtain the FMP number and the FSMP number for the data center networks [Formula: see text], showing that [Formula: see text] for [Formula: see text], [Formula: see text] and [Formula: see text] for [Formula: see text], [Formula: see text]. In addition, all optimal fractional strong matching preclusion sets of these graphs are categorized.
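The central object above, a fractional perfect matching, is an assignment of weights in [0, 1] to edges so that the weights on the edges incident to every vertex sum to exactly 1. For small graphs this can be checked by brute force via the standard fractional Tutte-type characterization (a graph has a fractional perfect matching iff, for every vertex subset S, the graph minus S has at most |S| isolated vertices). A minimal sketch, not tied to the paper's specific networks:

```python
from itertools import combinations

def has_fractional_perfect_matching(n, edges):
    """Brute-force check of the fractional Tutte-type condition:
    G has a fractional perfect matching iff for every vertex subset S,
    the number of isolated vertices of G - S is at most |S|."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            S = set(subset)
            # Vertices outside S with no neighbors outside S are isolated in G - S.
            isolated = sum(1 for v in range(n)
                           if v not in S and not (adj[v] - S))
            if isolated > len(S):
                return False
    return True
```

For example, a triangle has one (weight 1/2 on every edge), while a 3-vertex path does not; removing the middle vertex isolates both endpoints.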

2019 ◽  
Vol 29 (02) ◽  
pp. 1950007 ◽  
Author(s):  
Chen Hao ◽  
Weihua Yang

The generalized [Formula: see text]-connectivity of a graph [Formula: see text] is a parameter that measures the reliability of a network [Formula: see text] to connect any [Formula: see text] vertices in [Formula: see text], and it generalizes traditional connectivity. Let [Formula: see text] and [Formula: see text] denote the maximum number [Formula: see text] of edge-disjoint trees [Formula: see text] in [Formula: see text] such that [Formula: see text] for any [Formula: see text] and [Formula: see text]. For an integer [Formula: see text] with [Formula: see text], the generalized [Formula: see text]-connectivity of a graph [Formula: see text] is defined as [Formula: see text] and [Formula: see text]. Data centers are essential to the business of companies such as Google, Amazon, Facebook, and Microsoft. Based on data centers, the data center networks [Formula: see text], introduced by Guo et al. in 2008, have many desirable properties. In this paper, we study the generalized [Formula: see text]-connectivity of [Formula: see text] and show that [Formula: see text] for [Formula: see text] and [Formula: see text].
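For k = 2 the trees connecting the chosen vertex set are just paths, so the quantity reduces to counting disjoint paths between two vertices, which Menger's theorem equates to a max-flow value. A small Edmonds-Karp sketch (unit capacities, generic graphs; this illustrates the k = 2 special case only, not the paper's technique for the data center networks):

```python
from collections import deque

def edge_disjoint_paths(n, edges, s, t):
    """Count edge-disjoint s-t paths via unit-capacity max flow
    (Edmonds-Karp); by Menger's theorem the max flow equals the
    number of edge-disjoint s-t paths in the undirected graph."""
    cap = [[0] * n for _ in range(n)]
    for u, v in edges:
        cap[u][v] = cap[v][u] = 1
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # Push one unit of flow along the path found.
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1
```

In the complete graph K4, any two vertices are joined by three edge-disjoint paths (the direct edge plus two two-hop paths).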


Author(s):  
Mahendra Suryavanshi ◽  
Dr. Ajay Kumar ◽  
Dr. Jyoti Yadav

Recent data centers provide dense interconnectivity between each pair of servers through multiple paths, offering high aggregate bandwidth and robustness by using several paths simultaneously. The Multipath TCP (MPTCP) protocol was developed to improve throughput, share network link capacity fairly, and provide robustness during path failures by utilizing multiple paths over multi-homed data center networks. Running MPTCP for latency-sensitive, rack-local short flows with a many-to-one communication pattern at the access layer of multi-homed data center networks creates the MPTCP incast problem. In this paper, the Balanced Multipath TCP (BMPTCP) protocol is proposed to mitigate MPTCP incast in multi-homed data center networks. BMPTCP is a window-based congestion control protocol that prevents unchecked growth of each worker's subflow congestion window. BMPTCP computes an identical congestion window size for all concurrent subflows by considering the bottleneck Top of Rack (ToR) switch buffer size and the number of concurrently transmitting workers, which helps it avoid timeout events due to full window loss at the ToR switch. Based on the current congestion situation at the ToR switches, BMPTCP adjusts the transmission rate of each worker's subflow so that the total amount of data transmitted by all concurrent subflows does not overflow the bottleneck ToR switch buffer. Simulation results show that BMPTCP effectively alleviates MPTCP incast: it improves goodput and reduces flow completion time compared to the existing MPTCP and EW-MPTCP protocols.
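The window-sizing idea can be sketched as follows. This is an illustrative simplification of the abstract's description, not BMPTCP's actual formula: every concurrent subflow gets the same window, sized so that the sum of all windows cannot exceed the bottleneck ToR buffer.

```python
def balanced_subflow_cwnd(buffer_pkts, num_workers, subflows_per_worker):
    """Illustrative only (hypothetical formula, not from the paper):
    give every concurrent subflow an identical congestion window,
    sized so the aggregate of all windows fits in the bottleneck
    ToR switch buffer, avoiding full-window loss and timeouts."""
    total_subflows = num_workers * subflows_per_worker
    # At least one packet per subflow so no sender stalls entirely.
    return max(1, buffer_pkts // total_subflows)
```

With a 256-packet ToR buffer and 32 workers running 2 subflows each, every subflow would get a 4-packet window, so even fully synchronized senders cannot overflow the buffer.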


2013 ◽  
Vol 462-463 ◽  
pp. 1028-1035
Author(s):  
Hang Xing Wu ◽  
Xiao Long Yang ◽  
Min Zhang

Nowadays, data centers have become increasingly important for various web applications, and a huge amount of money is invested to maintain good data center performance. However, studies have indicated that the TCP Incast phenomenon is widely observed in most data centers; it causes congestion and greatly damages performance. Several congestion control mechanisms for data centers have therefore been proposed to address the problem. This paper categorizes and describes these mechanisms and analyzes their advantages and disadvantages. Finally, some new topics that may be worthy of further study in data center congestion control are presented.


Author(s):  
Ahmad Nahar Quttoum

Today’s data center networks employ expensive networking equipment in structures that were not designed to meet the increasing requirements of current large-scale data center services; limitations in reliability, resource utilization, and cost remain challenging. The era of cloud computing promises to enable large-scale data centers. The computing platforms of such cloud-service data centers consist of large numbers of commodity low-price servers that, with virtualization on top, can match the performance of expensive high-end servers at only a fraction of the price. Recently, research on data center networks has started to evolve rapidly, opening the path to addressing many design and management challenges such as scalability, reliability, bandwidth capacity, virtual machine migration, and cost. Bandwidth resource fragmentation limits network agility and leads to low utilization rates, not only of the bandwidth resources but also of the servers that run the applications. With traffic engineering methods, the managers of such networks can adapt to rapid changes in the traffic among their servers, which can help provide better resource utilization and lower costs. The market is going through exciting changes, and the need to run services at demanding scale drives the work toward cloud networks, enabled by the notion of autonomic management and the availability of commodity low-price network equipment. This work provides readers with a survey of the management challenges and the design and operational constraints of cloud-service data center networks.


2018 ◽  
Vol 7 (3.12) ◽  
pp. 19
Author(s):  
Amitkumar J. Nayak ◽  
Amit P. Ganatra

Today, Internet usage is a generalized standard, with devices connecting over multiple technologies. By forming multiple paths in the network, the latest generation of data centers offers high bandwidth with robustness. To utilize this bandwidth, different data flows must take separate paths; in brief, single-path transport is inappropriate for such networks. Multipath TCP forces us to reconsider data center networks, with a different approach to the relationship between topology, transport protocols, and routing. Multipath TCP can exploit topologies that single-path TCP cannot use. In newer generation data centers, Multipath TCP is already deployable using extensively deployed technologies such as Equal-Cost Multipath (ECMP) routing, but the major benefits will come when data centers are specifically designed for multipath transports. Because of technologies like cloud computing, social networking, and information networks, large numbers of big data centers need to be deployed. While TCP is the dominant transport protocol in data center networks, operating conditions such as high bandwidth, small-buffered switches, and bursty traffic patterns cause TCP to perform poorly. The Data Center TCP (DCTCP) algorithm has recently been proposed as a TCP variant for data centers that addresses these limitations.
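The ECMP mechanism mentioned above picks one of several equal-cost paths by hashing a flow's 5-tuple, which is why each MPTCP subflow (with its own port pair) tends to land on a different path. A toy sketch; real switches use vendor-specific hardware hash functions, and MD5 here is only a deterministic stand-in:

```python
import hashlib

def ecmp_path_index(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
    """Toy ECMP: hash the flow 5-tuple and pick one of num_paths
    equal-cost paths. All packets of one flow hash identically, so a
    flow stays on one path; MPTCP subflows differ in ports and thus
    usually spread across paths."""
    key = f"{src_ip},{dst_ip},{src_port},{dst_port},{proto}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_paths
```

Because the hash depends only on the 5-tuple, the mapping is stable per flow but (pseudo)random across flows, which is exactly the property ECMP load balancing relies on.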


Author(s):  
Jianfei Zhang ◽  
◽  
Yuchen Jiang ◽  
Yan Liu

Data centers are fundamental facilities that support high-performance computing and large-scale data processing. To guarantee that a data center provides excellent expansion and routing properties, its interconnection network should be designed elaborately. Herein, we propose a novel structure for the interconnection network of data centers that can be expanded with a variable coefficient, called a variable expanding structure (VES). A VES is designed hierarchically and built iteratively. A VES can include hundreds of thousands, or even millions, of servers with only a few layers. Meanwhile, a VES has an extremely short diameter, which implies better routing performance between every pair of servers. Furthermore, we design an address space for the servers and switches in a VES, and we propose a construction algorithm and a routing algorithm associated with that address space. The results and analysis of simulations verify that the expanding rate of a VES depends on three factors: n, the number of ports on a switch; m, the expanding speed; and k, the number of layers. Of these, the factor m has the strongest effect. Hence, a VES can be designed around the factor m to achieve the expected expanding rate and server scale based on the initial planning objectives.
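The claimed scale (millions of servers in a few layers) follows from any multiplicative layer-by-layer construction. The sketch below is a hypothetical geometric growth model built only from the three factors the abstract names; it is not the paper's actual recurrence:

```python
def ves_server_scale(n, m, k):
    """Hypothetical growth model (NOT the paper's exact formula):
    start with a basic cell of n servers on one switch, and let each
    of the k expansion layers multiply the number of cells by the
    expansion coefficient m, giving n * m**k servers overall."""
    servers = n
    for _ in range(k):
        servers *= m
    return servers
```

Under this toy model, 48-port switches with an expansion coefficient of 20 reach 48 * 20**4 = 7,680,000 servers in only four layers, which shows why the coefficient m dominates the other two factors.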


2019 ◽  
Vol 63 (10) ◽  
pp. 1449-1462
Author(s):  
Binjie He ◽  
Dong Zhang ◽  
Chang Zhao

Abstract Modern data centers provide multiple parallel paths for end-to-end communication. Recent studies have examined how to allocate rational paths to data flows to increase the throughput of data center networks. A centralized load balancing algorithm can improve the rationality of path selection by using path bandwidth information. However, to ensure the accuracy of that information, current centralized load balancing algorithms monitor all the link bandwidth information along a path to determine the path bandwidth. Because the controller monitors an excessive amount of link bandwidth information, much time is consumed, which is unacceptable for modern data centers. This paper proposes an algorithm called hidden Markov Model-based Load Balancing (HMMLB). HMMLB utilizes a hidden Markov model (HMM) to select paths for data flows with fewer monitored links and less time cost, while achieving approximately the same network throughput as a traditional centralized load balancing algorithm. To build HMMLB, this research first turns the path-selection problem into an HMM problem, then deploys traditional centralized load balancing algorithms in the data center topology to collect training data, and finally trains the HMM with the collected data. Through simulation experiments, this paper verifies HMMLB's effectiveness.
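The HMM machinery behind such an approach is standard: given a sequence of observations (e.g. coarse load readings from a few monitored links), decode the most likely sequence of hidden states (e.g. path choices). A minimal Viterbi decoder; the state and observation sets here are placeholders, since HMMLB's actual model structure is defined in the paper:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard Viterbi decoding: the most likely hidden-state
    sequence for an observation sequence, via dynamic programming
    with backpointers. Illustrates the HMM machinery only."""
    # V[t][s] = probability of the best path ending in state s at step t.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Trace the backpointers from the best final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

For a two-state model where state A emits 'lo' readings and state B emits 'hi' readings with high probability, the observation sequence ['lo', 'lo', 'hi'] decodes to ['A', 'A', 'B'].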


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Rajat Gupta ◽  
Mona Aggarwal ◽  
Swaran Ahuja

Abstract Nowadays, data centers have become a significant physical infrastructure for supporting various Internet applications such as cloud services, entertainment, web search, and social networking. Traffic among data centers is growing rapidly along with the services they provide. Data centers are connected via fiber channels to support long-haul networks and high-data-rate transmissions using various modulation techniques. However, this approach has drawbacks such as increased delay, computational complexity, high wavelength consumption, and link failures. Recently, researchers have focused on improving survivability and wavelength-usage efficiency in optical data center networks. In this work, a novel framework based on the concept of content connectivity is proposed for optical data center networks, in which a mixed integer linear programming (MILP) formulation is utilized for routing data through optical data centers. The main intention of this research is to improve performance and wavelength efficiency in optical data center networks. The performance of the MILP approach is evaluated and compared with an existing integer linear programming (ILP) technique, and the new approach is found to provide better performance with higher wavelength efficiency and reduced wavelength consumption.
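The wavelength-consumption objective comes from the constraint that two lightpaths sharing a fiber link cannot reuse the same wavelength. A greedy first-fit sketch of that constraint; it is only a toy heuristic, whereas the paper formulates the problem as a MILP and solves it optimally:

```python
def assign_wavelengths(paths):
    """Greedy first-fit wavelength assignment: each path is a list of
    fiber links, and two paths that share a link must receive
    different wavelengths. Returns {path_name: wavelength_index}.
    A heuristic illustration only, not the paper's MILP."""
    assignment = {}
    for name, links in paths.items():
        # Wavelengths already taken by link-sharing, assigned paths.
        used = {assignment[other] for other, other_links in paths.items()
                if other in assignment and set(links) & set(other_links)}
        w = 0
        while w in used:
            w += 1
        assignment[name] = w
    return assignment
```

Two paths sharing the link (b, c) are forced onto wavelengths 0 and 1, while a disjoint path can reuse wavelength 0; minimizing the total number of wavelengths used is exactly what an exact (M)ILP formulation optimizes.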


2019 ◽  
Vol 40 (3) ◽  
pp. 225-238 ◽  
Author(s):  
Abhilasha Sharma ◽  
Sangeetha R G

Abstract Internet traffic is increasing exponentially with cloud services, which demands high-performance, efficient data center networks (DCNs). Current DCNs are built from electronic components that consume high power to provide cloud services. Optical interconnection network (OIN) architectures provide high scalability, low latency, high throughput, and low power consumption. This paper presents a study of OIN architectures, since high scalability and low latency are key future requirements of DCNs, together with a comparative study of their average latency and scalability.


Cloud computing has led to the tremendous growth of IT organizations; it serves as a means of delivering services to large numbers of consumers globally by providing anywhere, anytime access to resources and services. The primary concern over the increasing energy consumption of cloud data centers is the massive emission of greenhouse gases, which contaminates the atmosphere and worsens environmental conditions. The major part of this energy consumption comes from the large servers, high-speed storage devices, and cooling equipment present in cloud data centers. These components form the basis for fulfilling the increasing need for computing resources, and they in turn add resource costs. The goal is to achieve energy savings through effective utilization of resources, which necessitates a green-aware, energy-efficient framework for cloud data center networks. Software Defined Networking (SDN) is chosen because it allows network behavior to be studied from the overall perspective of a software layer, rather than from the decisions of each individual device as in conventional networks. The central objective of this paper is a survey of existing SDN-based, energy-efficient cloud data center networks.

