Modeling Transport Layer Protocol Behaviour to Improve Data Center Network Performance

2018 ◽  
Vol 7 (3.12) ◽  
pp. 19
Author(s):  
Amitkumar J. Nayak ◽  
Amit P. Ganatra

Today, internet use is nearly universal, with devices connecting over multiple technologies and offering many ways to communicate. By forming multiple paths through the data center network, the latest generation of data centers offers high aggregate bandwidth with robustness. To exploit this bandwidth, different data flows must take separate paths; in short, a single-path transport is ill-suited to such networks. Multipath TCP prompts a rethink of data center networks, with a different view of the relationship between topology, transport protocols, and routing. Multipath TCP can exploit topologies that single-path TCP cannot. In newer data centers, Multipath TCP is already deployable using widely adopted mechanisms such as equal-cost multipath (ECMP) routing, but the greatest benefits will come when data centers are designed specifically for multipath transports. Technologies such as cloud computing, social networking, and information networks are driving the deployment of large numbers of big data centers. While the Transmission Control Protocol (TCP) is the dominant transport-layer protocol in data center networks, operating conditions such as high bandwidth, small-buffered switches, and bursty traffic patterns cause TCP to perform poorly. The Data Center TCP (DCTCP) algorithm has recently been proposed as a TCP variant for data centers that addresses these limitations, which traditional TCP was never designed to handle.
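The DCTCP variant mentioned above reacts to the extent of congestion rather than its mere presence: it keeps a running estimate of the fraction of ECN-marked packets and scales the congestion window back in proportion. Below is a minimal Python sketch of that window update; the class and parameter names are illustrative, and the gain g = 1/16 is the default suggested in the original DCTCP paper.

```python
# Minimal sketch of the DCTCP congestion-window adjustment.
# Names and defaults are illustrative; DCTCP as published
# (Alizadeh et al., SIGCOMM 2010) uses g = 1/16 by default.

class DctcpWindow:
    def __init__(self, cwnd=10.0, g=1.0 / 16):
        self.cwnd = cwnd      # congestion window in packets
        self.alpha = 0.0      # running estimate of congestion extent
        self.g = g            # EWMA gain for alpha

    def on_rtt_end(self, acked, ecn_marked):
        """Update once per RTT from the ACKs seen in that window."""
        frac_marked = ecn_marked / max(acked, 1)
        # Smooth the fraction of ECN-marked packets.
        self.alpha = (1 - self.g) * self.alpha + self.g * frac_marked
        if ecn_marked > 0:
            # Scale the window back in proportion to congestion extent,
            # instead of halving as standard TCP would.
            self.cwnd = max(1.0, self.cwnd * (1 - self.alpha / 2))
        else:
            self.cwnd += 1.0  # additive increase, one packet per RTT

w = DctcpWindow()
w.on_rtt_end(acked=10, ecn_marked=3)
print(round(w.cwnd, 2), round(w.alpha, 3))
```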

Author(s):  
Mahendra Suryavanshi ◽  
Dr. Ajay Kumar ◽  
Dr. Jyoti Yadav

Recent data centers provide dense inter-connectivity between each pair of servers through multiple paths. These data centers offer high aggregate bandwidth and robustness by using multiple paths simultaneously. The Multipath TCP (MPTCP) protocol was developed to improve throughput, share network link capacity fairly, and provide robustness during path failure by utilizing multiple paths over multi-homed data center networks. Running MPTCP for latency-sensitive, rack-local short flows with a many-to-one communication pattern at the access layer of multi-homed data center networks creates the MPTCP incast problem. In this paper, the Balanced Multipath TCP (BMPTCP) protocol is proposed to mitigate MPTCP incast in multi-homed data center networks. BMPTCP is a window-based congestion control protocol that prevents unbounded growth of each worker's subflow congestion window. BMPTCP computes an identical congestion window size for all concurrent subflows by considering the bottleneck Top of Rack (ToR) switch buffer size and the number of concurrently transmitting workers. This helps BMPTCP avoid timeout events due to full window loss at the ToR switch. Based on the current congestion at ToR switches, BMPTCP adjusts the transmission rate of each worker's subflow so that the total amount of data transmitted by all concurrent subflows does not overflow the bottleneck ToR switch buffer. Simulation results show that BMPTCP effectively alleviates MPTCP incast: it improves goodput and reduces flow completion time compared with the existing MPTCP and EW-MPTCP protocols.
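The abstract does not reproduce BMPTCP's exact window formula, but the core idea, an identical per-subflow window sized from the bottleneck ToR buffer and the worker count, can be sketched as follows. The even division of the buffer budget is an assumption of this sketch, not the paper's published rule.

```python
# Hedged sketch of the window-balancing idea behind BMPTCP as described
# above: give every concurrent subflow the same window, sized so the sum
# across subflows cannot overflow the bottleneck ToR switch buffer.
# The exact BMPTCP formula is not reproduced here; this divides the
# buffer budget evenly, which is one simple way to realize the idea.

def balanced_cwnd(buffer_pkts: int, workers: int, subflows_per_worker: int) -> int:
    """Identical per-subflow window (in packets) for all workers."""
    total_subflows = workers * subflows_per_worker
    # Keep the window at least 1 packet so every subflow makes progress.
    return max(1, buffer_pkts // total_subflows)

# Example: 256-packet ToR buffer, 32 workers, 2 subflows each
# -> each subflow gets a 4-packet window, 256 packets in flight total.
print(balanced_cwnd(buffer_pkts=256, workers=32, subflows_per_worker=2))
```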


2021 ◽  
Vol 11 (3) ◽  
pp. 72-91
Author(s):  
Priyanka H. ◽  
Mary Cherian

Cloud computing has become more prominent, and it is used in large data centers. Well-organized distribution of resources (bandwidth, CPU, and memory) is a major problem in these data centers. The genetically enhanced shuffling frog leaping algorithm (GESFLA) framework is proposed to select optimal virtual machines for scheduling tasks and to allocate them to physical machines (PMs). The proposed GESFLA-based resource allocation technique minimizes wasted resource usage as well as the power consumption of the data center. The proposed GESFL algorithm is compared with task-based particle swarm optimization (TBPSO) for efficiency. The experimental results show the superiority of GESFLA over TBPSO in terms of resource usage ratio, migration time, and total execution time. For PlanetLab workload traces, the proposed GESFLA framework reduces the energy consumption of the data center by up to 79%, reduces migration time by 67%, and improves CPU utilization by 9%. For the random workload, execution time is reduced by 71%, transfer time is reduced by up to 99%, and CPU utilization is improved by 17% compared to TBPSO.
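For readers unfamiliar with the underlying metaheuristic, the sketch below shows the basic shuffled-frog-leaping move that GESFLA builds on: within each memeplex, the worst solution leaps toward the best one. The genetic enhancements (crossover and mutation of frogs) and the actual VM-placement fitness function are not reproduced here; the cost function is a toy stand-in.

```python
import random

# Illustrative sketch of the core shuffled-frog-leaping step that GESFLA
# extends. A "frog" is a candidate solution (e.g., a VM-placement
# encoding); lower fitness is better.

def sfla_step(memeplex, fitness, lo, hi):
    """One leap: move the worst frog toward the best frog in the memeplex."""
    memeplex.sort(key=fitness)               # ascending cost: best first
    best, worst = memeplex[0], memeplex[-1]
    step = [random.random() * (b - w) for b, w in zip(best, worst)]
    candidate = [min(hi, max(lo, w + s)) for w, s in zip(worst, step)]
    if fitness(candidate) < fitness(worst):  # accept only improvements
        memeplex[-1] = candidate
    return memeplex

# Toy cost: distance of a 2-D frog from an arbitrary optimum at (3, 3).
cost = lambda f: (f[0] - 3) ** 2 + (f[1] - 3) ** 2
frogs = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(5)]
for _ in range(100):
    frogs = sfla_step(frogs, cost, lo=0.0, hi=10.0)
print([round(x, 2) for x in min(frogs, key=cost)])  # near (3, 3)
```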


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of the services on limited local resources. Cloud computing provides access to distant computing resources via Web services, while the end user is not aware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, the deployment of a large number of servers and data centers introduces the challenge of high energy consumption. Additionally, transporting IT services over the Internet backbone adds to the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center involving various aspects, such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches to energy-efficient data centers, which can be grouped mainly into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


Author(s):  
Muhammad Ishaq ◽  
Mohammad Kaleem ◽  
Numan Kifayat

This chapter briefly introduces the data center network and reviews the challenges for future intra-data-center networks in terms of scalability, cost effectiveness, power efficiency, upgrade cost, and bandwidth utilization. Current data center network architecture is discussed in detail, and its drawbacks are pointed out in terms of the above-mentioned parameters. A detailed background is provided on how the technology moved from opaque to transparent optical networks. Additionally, the chapter covers the different data center network architectures proposed so far by various researchers, teams, and companies to address current problems and meet the demands of future intra-data-center networks.


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 31782-31790 ◽  
Author(s):  
Jin Ye ◽  
Luting Feng ◽  
Ziqi Xie ◽  
Jiawei Huang ◽  
Xiaohuan Li

Author(s):  
Thomas J. Breen ◽  
Ed J. Walsh ◽  
Jeff Punch ◽  
Amip J. Shah ◽  
Niru Kumari ◽  
...  

As the energy footprint of data centers continues to increase, models that allow for "what-if" simulations of different data center design and management paradigms will be important. Prior work by the authors described a multi-scale energy efficiency model that allows for evaluating the coefficient of performance of the data center ensemble (COPGrand) and demonstrated the utility of such a model for choosing operational set-points and evaluating design trade-offs. However, experimental validation of these models poses a challenge because of the complexity of tailoring such a model to legacy data centers, with shared infrastructure and limited control over IT workload. Further, test facilities with dummy heat loads or artificial racks in lieu of IT equipment generally have limited utility in validating end-to-end models, owing to the inability of such loads to mimic phenomena such as fan scalability. In this work, we describe the experimental analysis conducted in a special test chamber and a data center facility. The chamber, focusing on system-level effects, is loaded with an actual IT rack, and a compressor delivers chilled air to the chamber at a preset temperature. By varying the load in the IT rack as well as the air delivery parameters (such as flow rate and supply temperature), a setup is created that simulates the system level of a data center. Experimental tests within a live data center facility are also conducted, where the operating conditions of the cooling infrastructure (such as fluid temperatures and flow rates) are monitored and analyzed to determine effects such as air flow recirculation and heat exchanger performance. Using the experimental data, a multi-scale model configuration emulating the data center can be defined. We compare the results from this experimental analysis to a multi-scale energy efficiency model of the data center and discuss the accuracies as well as the inaccuracies of such a model. Difficulties encountered in the experimental work are discussed. The paper concludes by discussing areas for improvement in such modeling and experimental evaluation. Further validation of the complete multi-scale data center energy model is planned.
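As a rough illustration of the ensemble metric the model evaluates, a COPGrand-style figure can be thought of as the heat removed from the IT equipment divided by the total power the cooling ensemble consumes to remove it. The sketch below is a back-of-envelope version under that assumption; it is not the authors' multi-scale model, which aggregates effects across chip, rack, room, and plant scales.

```python
# Hedged back-of-envelope sketch of an ensemble coefficient of
# performance in the spirit of COPGrand: heat extracted from the IT
# equipment divided by the total work input of the cooling stages.
# Stage names and figures below are illustrative assumptions.

def cop_grand(it_heat_kw: float, cooling_stage_power_kw: list[float]) -> float:
    """COP of the ensemble: heat extracted / total cooling work input."""
    total_cooling_kw = sum(cooling_stage_power_kw)
    return it_heat_kw / total_cooling_kw

# Example: 500 kW of IT heat, with CRAC fans, chiller, and cooling-tower
# pumps drawing 40, 110, and 25 kW respectively.
print(round(cop_grand(500.0, [40.0, 110.0, 25.0]), 2))  # -> 2.86
```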


Energies ◽  
2020 ◽  
Vol 13 (22) ◽  
pp. 6147
Author(s):  
Jinkyun Cho ◽  
Jesang Woo ◽  
Beungyong Park ◽  
Taesub Lim

Removing heat from high-density information technology (IT) equipment is essential for data centers, and maintaining the proper operating environment for IT equipment can be expensive. Rising energy costs and consumption have prompted data centers to consider hot aisle and cold aisle containment strategies, which can improve energy efficiency and maintain the recommended inlet air temperature to IT equipment. Containment can also resolve, to some degree, the hot spots found in traditional uncontained data centers. This study analyzes the IT environment of the hot aisle containment (HAC) system, which has been considered an essential solution for high-density data centers. The thermal performance was analyzed for an IT server room with HAC in a reference data center. Computational fluid dynamics analysis was conducted to compare the operating performance of the cooling air distribution systems applied to raised and hard floors and to examine the difference in the IT environment between the server rooms. Regarding operating conditions, the thermal performance with the cooling system operating normally was compared against that with one cooling unit failed. The thermal performance of each alternative was evaluated by comparing the temperature distribution, airflow distribution, inlet air temperatures of the server racks, and the recirculation ratio from outlet to inlet. In conclusion, the HAC system with a raised floor has higher cooling efficiency than that with a hard floor: HAC with a raised floor improves air distribution efficiency by 28% over a hard floor, corresponding to a 40% reduction in the recirculation ratio, which exceeds 20% under normal cooling conditions. The main contribution of this paper is that it realistically demonstrates the effectiveness, previously compared only theoretically, of the HAC system by developing an accurate numerical model of a data center with a high-density fifth-generation (5G) environment and applying real operating conditions.
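The recirculation ratio reported in the study can be estimated at each rack inlet from an energy balance on mixing air streams: if the inlet air is a mix of supply air and hot exhaust, the recirculated fraction follows from the three temperatures. The sketch below uses that standard form, which may differ from the paper's exact metric.

```python
# Hedged sketch of a rack-level recirculation ratio: the fraction of a
# rack's inlet air that is recirculated hot exhaust rather than supplied
# cold air, inferred from temperatures via a simple mixing balance.
# This is a common energy-balance form, not necessarily the paper's
# exact definition.

def recirculation_ratio(t_inlet: float, t_supply: float, t_exhaust: float) -> float:
    """Fraction of inlet air drawn from hot exhaust (0 = no recirculation)."""
    if t_exhaust <= t_supply:
        raise ValueError("exhaust must be warmer than supply air")
    return (t_inlet - t_supply) / (t_exhaust - t_supply)

# Example: 18 C supply, 24 C rack inlet, 38 C exhaust -> 30% recirculation.
print(round(recirculation_ratio(t_inlet=24.0, t_supply=18.0, t_exhaust=38.0), 2))
```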


Author(s):  
Jimil M. Shah ◽  
Roshan Anand ◽  
Satyam Saini ◽  
Rawhan Cyriac ◽  
Dereje Agonafer ◽  
...  

Abstract A remarkable amount of data center energy is consumed in eliminating the heat generated by IT equipment to ensure safe operating conditions and optimum performance. The installation of airside economizers, while very energy efficient, carries the risk of particulate contamination in data centers, thereby deteriorating the reliability of IT equipment. When RH in data centers exceeds the deliquescent relative humidity (DRH) of salts in the accumulated particulate matter, the matter absorbs moisture, becomes wet, and subsequently leads to electrical short circuiting because of degraded surface insulation resistance between closely spaced features on printed circuit boards (PCBs). Another concern with this type of failure is the absence of physical evidence, which hinders evaluation and rectification. It is therefore imperative to develop a practical test method to determine the DRH value of the accumulated particulate matter found on PCBs. This research is a first attempt to develop an experimental technique to measure the DRH of dust particles by logging leakage current versus relative humidity percentage (RH%) for particulate matter dispensed on an interdigitated comb coupon. To validate the methodology, the DRH of pure salts such as MgCl2, NH4NO3, and NaCl is determined, and the results are compared with published values. The methodology was then applied to help establish the limiting value, an effective relative humidity envelope, to be maintained at a real-world data center facility in the Dallas industrial area for its continuous and reliable operation.
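The measurement technique described, logging leakage current against RH% for dust dispensed on an interdigitated comb coupon, yields the DRH as the humidity at which the leakage current rises sharply. A minimal sketch of that read-out step follows; the jump-threshold factor is an assumption, not a value from the paper.

```python
# Hedged sketch of reading the DRH off a leakage-current vs. RH log:
# report the relative humidity at which the current first rises sharply
# above its dry baseline. The threshold factor is an assumption.

def estimate_drh(rh_percent, leakage_amps, jump_factor=10.0):
    """Return the RH% at which leakage current first exceeds
    jump_factor times the dry-baseline (lowest-RH) current."""
    baseline = leakage_amps[0]  # assumes samples are sorted by rising RH
    for rh, current in zip(rh_percent, leakage_amps):
        if current > jump_factor * baseline:
            return rh
    return None  # no deliquescence observed in the sweep

# Toy sweep: current stays near 1 nA, then jumps several decades near
# 76% RH, roughly where NaCl is known to deliquesce.
rh = [40, 50, 60, 70, 74, 76, 78, 80]
i_leak = [1e-9, 1.1e-9, 1.2e-9, 1.5e-9, 3e-9, 8e-7, 2e-6, 5e-6]
print(estimate_drh(rh, i_leak))  # -> 76
```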

