Optical Switching in Next-Generation Data Centers

Author(s):  
Vaibhav Shukla
Rajiv Srivastava
Dilip Kumar Choubey

Leading content providers such as Google, Yahoo, and Amazon have installed mega data centers containing hundreds of thousands of servers. Current data centers are organized in a hierarchical tree structure built on bandwidth-limited electronic switches. Modern data center systems face a number of issues, such as high power consumption, limited bandwidth availability, server connectivity, energy and cost efficiency, and traffic complexity. One of the most feasible solutions to these issues is the use of optical switching technologies in the core of the data center. This chapter presents a brief description of the modern data center system, along with some prominent optical packet switch architectures and their pros and cons.

2021
Vol 12 (1)
pp. 74-83
Author(s):  
Manjunatha S.
Suresh L.

A data center is a cost-effective infrastructure for storing large volumes of data and hosting large-scale service applications. Cloud computing service providers are rapidly deploying data centers across the world, each with a huge number of servers and switches. These data centers consume significant amounts of energy, contributing to high operational costs; optimizing the energy consumption of servers and networks can therefore reduce those costs. In a data center, power consumption is mainly due to servers, networking devices, and cooling systems. An effective energy-saving strategy is to consolidate computation and communication onto a smaller number of servers and network devices and then power off as many unneeded servers and network devices as possible.
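The consolidation strategy described in this abstract is essentially a bin-packing problem. A minimal sketch (not the paper's actual algorithm) of first-fit-decreasing consolidation, using hypothetical normalized server capacities:

```python
def consolidate(loads, capacity=1.0):
    """First-fit-decreasing consolidation: pack workloads (normalized server
    loads) onto as few active servers as possible so the rest can be powered off."""
    free = []  # remaining capacity of each powered-on server
    for load in sorted(loads, reverse=True):
        for i, f in enumerate(free):
            if load <= f:
                free[i] = f - load  # fits on an already-active server
                break
        else:
            free.append(capacity - load)  # power on a new server
    return len(free)  # number of servers that must stay on

# Example: five workloads totaling 1.5x one server's capacity fit on 2 servers,
# so the remaining machines could be powered off.
print(consolidate([0.5, 0.4, 0.3, 0.2, 0.1]))  # → 2
```

First-fit-decreasing is a classic heuristic here because it uses at most roughly 11/9 of the optimal number of bins while running in near-linear time.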


Author(s):  
Burak Kantarci
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of the services on limited local resources. It provides access to distant computing resources via Web services while the end user is not aware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, the deployment of a large number of servers and data centers introduces the challenge of high energy consumption. Additionally, transporting IT services over the Internet backbone compounds the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center involving various aspects, such as reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches for cool data centers, which can be grouped mainly into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


2017
Vol 27 (3)
pp. 605-622
Author(s):  
Marcin Markowski

Abstract In recent years, elastic optical networks have been perceived as a prospective choice for future optical networks due to better adjustment and utilization of optical resources than in traditional wavelength division multiplexing networks. In this paper we investigate the elastic architecture as the communication network for distributed data centers. We address the problem of optimizing routing and spectrum assignment for large-scale computing systems based on an elastic optical architecture; in particular, we concentrate on optimizing anycast user-to-data-center traffic. We assume that the computational resources of the data centers are limited. For this offline problem we formulate an integer linear programming model and propose a few heuristics, including a meta-heuristic algorithm based on a tabu search method. We report computational results, presenting the quality of the approximate solutions and the efficiency of the proposed heuristics, and we also analyze and compare some data center allocation scenarios.
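A tabu search for this kind of anycast assignment can be sketched as follows. This is a generic illustration with hypothetical demands and a max-load objective, not the paper's actual routing-and-spectrum-assignment formulation:

```python
import random

def tabu_search(demands, n_dcs, iters=200, tenure=5, seed=0):
    """Assign each demand to one of n_dcs data centers, minimizing the maximum
    aggregate load (a stand-in for spectrum use on the bottleneck link)."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_dcs) for _ in demands]

    def cost(a):
        loads = [0.0] * n_dcs
        for d, dc in zip(demands, a):
            loads[dc] += d
        return max(loads)

    best, best_cost = assign[:], cost(assign)
    tabu = {}  # (demand index, target dc) -> iteration until which the move is tabu
    for it in range(iters):
        move, move_cost = None, float("inf")
        for i in range(len(demands)):
            for dc in range(n_dcs):
                if dc == assign[i]:
                    continue
                old, assign[i] = assign[i], dc
                c = cost(assign)
                assign[i] = old
                # tabu moves are skipped unless they beat the best found (aspiration)
                if tabu.get((i, dc), -1) >= it and c >= best_cost:
                    continue
                if c < move_cost:
                    move, move_cost = (i, dc), c
        if move is None:
            break
        i, dc = move
        tabu[(i, assign[i])] = it + tenure  # forbid moving straight back
        assign[i] = dc
        if move_cost < best_cost:
            best, best_cost = assign[:], move_cost
    return best, best_cost
```

The tabu list lets the search accept temporarily worsening moves and escape local optima, which plain greedy descent cannot do on this objective.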


2021
Vol 850 (1)
pp. 012018
Author(s):  
T Renugadevi
D Hari Prasanth
Appili Yaswanth
K Muthukumar
M Venkatesan

Abstract Data centers are large-scale data storage and processing systems, made up of a number of servers that must be capable of handling large amounts of data. As a result, data centers generate a significant quantity of heat, and the servers must be cooled and kept at an optimal temperature to avoid overheating. To address this problem, a thermal analysis of the data center is carried out using numerical methods. The CFD model consists of a micro data center in which conjugate heat transfer effects are studied. A micro data center consists of servers alternating with air gaps, and cooling air is passed through the air gaps to remove heat. In the present work, the data center rack is designed so that the cold air is in close proximity to the servers. The temperature and airflow in the data center are estimated using the model, and the air gap is optimally designed for the cooling unit. The temperature distribution of various load configurations is studied. The objective of the study is to find a favorable loading configuration of the micro data center for various loads and an effective distribution of load among the servers.


2018
Vol 0 (0)
Author(s):  
Arunendra Singh
Amod Kumar Tiwari

Abstract Due to the explosive growth in internet traffic, servers face bottlenecks in speed and bandwidth. To meet these ever-increasing demands, fiber optic technology can be used. This paper discusses a hybrid-buffer-based optical packet switch design with its pros and cons. The hybrid buffer offers both electronic and optical buffering. This paper investigates the packet loss rate (PLR) performance of the hybrid buffer under a variety of conditions. Buffer usage under different loading conditions is discussed. The sharing of electronic and optical buffers with respect to traffic arrivals is simulated, and the delay ratio of electronic to optical buffering is computed. Average energy consumption at various loading conditions is evaluated. The performance of the switch is also investigated in a hypothetical five-node network, and the PLR is obtained under four different combinations of buffering and deflection of contending packets.
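Packet loss rate under a finite buffer can be estimated with a simple slotted simulation. The sketch below is a generic single-output-port model with hypothetical parameters, not the paper's hybrid electronic/optical buffer:

```python
import random

def packet_loss_rate(load, n_inputs, buffer_size, n_slots=100_000, seed=1):
    """Slotted model of one output port fed by n_inputs input ports: each input
    carries a packet with probability `load` per slot, arrivals queue in a finite
    buffer, and one packet is transmitted per slot. Overflow packets are dropped."""
    rng = random.Random(seed)
    queue = arrivals = dropped = 0
    for _ in range(n_slots):
        batch = sum(rng.random() < load for _ in range(n_inputs))
        arrivals += batch
        accepted = min(batch, buffer_size - queue)
        dropped += batch - accepted
        queue += accepted
        if queue:
            queue -= 1  # serve one packet per slot
    return dropped / arrivals if arrivals else 0.0

# Heavy overload (offered load 3.2 packets/slot vs. 1 served) loses most packets;
# a lightly loaded port with a 20-packet buffer loses essentially none.
print(packet_loss_rate(0.8, 4, 2), packet_loss_rate(0.1, 2, 20))
```

The same loop structure extends naturally to two buffer pools (electronic and optical) with a sharing policy, which is the kind of scenario the paper simulates.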


2013
Vol 5 (6)
pp. 565
Author(s):  
Nicola Calabretta
Roger Pueyo Centelles
Stefano Di Lucente
Harmen J. S. Dorren

2019
Vol 5
pp. e211
Author(s):  
Hadi Khani
Hamed Khanmirza

Cloud computing technology has been a game changer in recent years. Cloud computing providers promise cost-effective, on-demand computing resources to their users, running users' workloads as virtual machines (VMs) in large-scale data centers consisting of a few thousand physical servers. Cloud data centers face highly dynamic workloads that vary over time and many short tasks that demand quick resource management decisions; these data centers are large in scale, and the behavior of their workload is unpredictable. Each incoming VM must be assigned to a proper physical machine (PM) in order to keep a balance between power consumption and quality of service. The scale and agility of cloud computing data centers are unprecedented, so previous approaches fall short. We suggest an analytical model for cloud computing data centers in which the number of PMs is large. In particular, we focus on the assignment of VMs onto PMs regardless of their current load. For Poisson VM arrivals with generally distributed sojourn times, the mean power consumption is calculated. We then show that the minimum power consumption under a quality-of-service constraint is achieved with randomized assignment of incoming VMs onto PMs. Extensive simulations support the validity of our analytical model.
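The power calculation under random assignment can be illustrated with a standard M/G/∞ argument: with uniform random placement each PM sees a thinned Poisson stream, so its VM count is Poisson-distributed and its busy probability has a closed form that depends only on the mean sojourn time (insensitivity). A sketch with a hypothetical two-level (idle/busy) power model, not the paper's exact formulation:

```python
import math

def mean_power(n_pms, arrival_rate, mean_sojourn, p_idle=100.0, p_peak=200.0):
    """Mean data-center power under uniformly random VM placement, assuming all
    PMs stay powered on. Each PM sees a Poisson VM stream of rate
    arrival_rate / n_pms; by M/G/infinity insensitivity its VM count is Poisson
    with mean arrival_rate * mean_sojourn / n_pms, so it hosts at least one VM
    with probability 1 - exp(-that mean). Power: p_idle watts when empty,
    p_peak watts when busy (hypothetical figures)."""
    per_pm_vms = arrival_rate * mean_sojourn / n_pms
    p_busy = 1.0 - math.exp(-per_pm_vms)
    return n_pms * (p_idle + (p_peak - p_idle) * p_busy)
```

Note that spreading the same offered load over more always-on PMs raises total power (more idle draw), which is why balancing power against quality of service is nontrivial.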


Author(s):  
Long Phan
Cheng-Xian Lin
Mackenson Telusma

Energy consumption and thermal management have become key challenges in the design of large-scale data centers, where perforated tiles are used together with cold- and hot-aisle configurations to improve thermal management. Although full-field simulations using computational fluid dynamics and heat transfer (CFD/HT) tools can be applied to predict the flow and temperature fields inside data centers, their running time remains the biggest challenge for most modelers. In this paper, response surface methodology based on radial basis functions is used to drastically reduce the running time while preserving the accuracy of the model. The response surface method with data interpolation makes studying many design parameters of the data center model more feasible and economical in terms of modeling time. Three scenarios of response surface construction are investigated (5%, 10%, and 20%). The method shows very good agreement with the simulation results obtained from the CFD/HT model when 20% of the original CFD data points are used for response surface training. Error analysis is carried out to quantify the error associated with each scenario. The 20% case shows superior accuracy compared to the others: with a mean relative error of only 2.12 × 10⁻⁴ and R² = 0.970, it captures most aspects of the original CFD model.
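A radial-basis-function response surface of the kind described can be sketched as follows; this is a generic Gaussian-RBF interpolant with hypothetical toy training data, not the authors' trained model:

```python
import numpy as np

def rbf_fit(centers, values, eps=1.0):
    """Solve for Gaussian-RBF weights so the surface passes exactly through the
    training points (analogous to training on a subset of CFD samples)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(np.exp(-(eps * d) ** 2), values)

def rbf_predict(centers, weights, x, eps=1.0):
    """Evaluate the fitted response surface at query points x."""
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ weights

# Toy 2-D "design space": four sampled operating points and their responses.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([1.0, 2.0, 3.0, 4.0])
w = rbf_fit(centers, values)
print(rbf_predict(centers, w, centers))  # reproduces the training values
```

Once fitted, evaluating the surface costs one small matrix-vector product per query, which is what makes sweeping many design parameters cheap compared to rerunning the CFD/HT model.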

