A Dynamic Thermal-Allocation Solution to the Complex Economic Benefit for a Data Center

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Hui Liu ◽  
Wenyu Song ◽  
Tianqi Jin ◽  
Zhiyong Wu ◽  
Fusheng Yan ◽  
...  

Data centers, which provide computing services for profit, are indispensable to every city in the information era. They offer computation and storage while consuming energy and generating thermal discharge. To maximize economic benefit, existing research on data center workload management mostly leverages a dynamic power model, i.e., power-aware workload allocation. Nevertheless, we argue that among the many attributes with a complex relationship to economic benefit, such as computation, energy consumption, thermal distribution, cooling, and equipment life, the thermal distribution dominates the others; thermal-aware workload allocation is therefore more efficient. From the perspective of economic benefit, we propose a mathematical model for the thermal distribution of a data center and study which workload distribution can deterministically change the thermal distribution at runtime, so as to reduce cost and improve economic benefit while guaranteeing service provisioning. By solving for the thermal-environment evaluation indexes RHI (Return Heat Index) and RTI (Return Temperature Index), together with heat-dissipation models, we define quantitative models for economic analysis: an energy-consumption model for busy servers and cooling, an energy-price model, and a profit model for data centers. Numerical simulation results validate our propositions, showing that the average temperature of the data center reaches its best values and that local hot spots are effectively avoided in various situations. In conclusion, our study contributes to the thermal management of dynamic data center runtimes for better economic benefit.
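The RHI and RTI indexes named above have standard closed forms; the sketch below follows one common formulation of the supply/return heat indexes and the Return Temperature Index. The function names and the balanced-airflow interpretation are illustrative assumptions, not the paper's code.

```python
def return_temperature_index(t_return_c, t_supply_c, delta_t_equip_c):
    """Return Temperature Index (RTI), in percent:
    RTI = (T_return - T_supply) / dT_equipment * 100.
    100% suggests balanced airflow; >100% recirculation;
    <100% bypass air (under the usual interpretation)."""
    return (t_return_c - t_supply_c) / delta_t_equip_c * 100.0


def return_heat_index(q_extracted_kw, q_recirculated_kw):
    """Return Heat Index (RHI): fraction of rack heat returned
    to the CRAC rather than recirculated into the cold aisles.
    RHI = Q / (Q + dQ); its complement is the Supply Heat Index."""
    return q_extracted_kw / (q_extracted_kw + q_recirculated_kw)
```

For example, a return of 35 °C, a supply of 20 °C, and a 15 °C equipment temperature rise give an RTI of exactly 100%, the balanced case.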

Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of those services on limited local resources. It provides access to distant computing resources via Web services, while the end user remains unaware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, the deployment of large numbers of servers and data centers introduces the challenge of high energy consumption. Additionally, transporting IT services over the Internet backbone compounds the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center, involving aspects such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches to cooling data centers, which can be grouped mainly into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


2020 ◽  
Vol 12 (8) ◽  
pp. 3140 ◽  
Author(s):  
Pei Pei ◽  
Zongjie Huo ◽  
Oscar Sanjuán Martínez ◽  
Rubén González Crespo

Presently, energy is considered a significant resource, one that grows scarce under high demand from a growing global population; surveys suggest that renewable energy sources are required to avoid scarcity. Hence, in this paper, a smart, sustainable probability-distribution hybridized genetic approach (SSPD-HG) is proposed to decrease energy consumption and minimize the total completion time for a single machine on smart-city machine-interface platforms. The set of non-dominated alternatives estimated by a multi-objective genetic algorithm is hybridized to address the problem, which is computed mathematically in this research. The paper discusses the need to promote the integration of green energy to reduce energy costs by balancing regional loads, and analyzes the timely production of delay-tolerant workloads and the management of thermal storage at data centers. In addition, differences in bandwidth rates between users and data centers are taken into account and analyzed at lab scale using SSPD-HG for energy cost savings and balanced workload management.
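The non-dominated set that a multi-objective genetic algorithm estimates can be filtered as in the minimal sketch below. It assumes both objectives (e.g., energy and completion time) are minimized; it illustrates Pareto filtering only, not the SSPD-HG hybridization itself.

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors,
    assuming minimization: keep p unless some q is <= p in
    every objective and strictly < p in at least one."""
    return [
        p for p in points
        if not any(
            all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p))
            for q in points
        )
    ]
```

With candidate (energy, completion-time) pairs [(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)], the filter keeps the first three: (2, 6) is dominated by (2, 4) and (4, 4) by (3, 3).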


Author(s):  
N. Fumo ◽  
V. Bortone ◽  
J. C. Zambrano

Data centers are facilities that primarily contain electronic equipment used for data processing, data storage, and communications networking. Regardless of their use and configuration, most data centers are more energy intensive than other buildings. The continuous operation of information technology equipment and power delivery systems generates a significant amount of heat that must be removed from the data center for the electronic equipment to operate properly. Since data centers spend up to half their energy on cooling, cooling systems become a key factor in energy consumption reduction strategies and alternatives for data centers. This paper presents a theoretical analysis of an absorption chiller driven by solar thermal energy as a cooling plant alternative for data centers. Source primary energy consumption is used to compare the performance of different solar cooling plants with a standard cooling plant. The solar cooling plants correspond to different combinations of solar collector arrays and a thermal storage tank, with a boiler as a source of energy to ensure continuous operation of the absorption chiller. The standard cooling plant uses an electric chiller. Results suggest that the solar cooling plant with flat-plate solar collectors is a better option than the solar cooling plant with evacuated-tube solar collectors. However, although the solar cooling plants can decrease primary energy consumption compared with the standard cooling plant, the net present value of the cost to install and operate them is higher than that of the standard cooling plant.
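The cost comparison above rests on the net present value of installation plus operating costs; a minimal sketch of that calculation follows, with all figures hypothetical (the paper's actual capital costs, rates, and horizons are not reproduced here).

```python
def npv_cost(capex, annual_opex, discount_rate, years):
    """Net present value of owning a cooling plant: upfront
    capital plus each year's operating cost discounted back
    to the present. Lower is better when comparing plants."""
    return capex + sum(
        annual_opex / (1.0 + discount_rate) ** t
        for t in range(1, years + 1)
    )
```

A solar plant with high capex and low opex can still lose to a standard plant on NPV, which is the paper's conclusion; e.g., at a zero discount rate, a 1000-unit capex and 100/year opex over 5 years totals 1500.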


Information ◽  
2019 ◽  
Vol 10 (3) ◽  
pp. 113 ◽  
Author(s):  
Joao Ferreira ◽  
Gustavo Callou ◽  
Albert Josua ◽  
Dietmar Tutsch ◽  
Paulo Maciel

Due to the high demands of new technologies such as social networks, e-commerce and cloud computing, more energy is being consumed in order to store all the data produced and provide the high availability required. Over the years, this increase in energy consumption has brought about a rise in both environmental impacts and operational costs. Some companies have adopted the concept of a green data center, which relates electricity consumption and CO2 emissions to the utility power source adopted. In Brazil, almost 70% of electrical power comes from clean electricity generation, whereas in China 65% of generated electricity comes from coal. In addition, the price per kWh in the US is much lower than in the other countries surveyed. In the present work, we conducted an integrated evaluation of the costs and CO2 emissions of the electrical infrastructure in data centers, considering the different energy sources adopted by each country. We used a multi-layered artificial neural network that forecasts consumption over the following months based on the data center's energy consumption history. All these features were supported by a tool, whose applicability was demonstrated through a case study that computed the CO2 emissions and operational costs of a data center using the energy mix adopted in Brazil, China, Germany and the US. China presented the highest CO2 emissions, with 41,445 tons per year in 2014, followed by the US and Germany, with 37,177 and 35,883 tons, respectively. Brazil, with 8,459 tons, proved to be the cleanest. Additionally, this study estimated the operational costs assuming that the same data center consumes energy as if it were located in China, Germany and Brazil. China presented the highest cost per kWh per year; therefore, the best choice according to operational costs, considering the price of energy per kWh, is the US, and the worst is China. Considering both operational costs and CO2 emissions, Brazil would be the best option.
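The per-country comparison above reduces to multiplying annual energy by a grid emission factor and a price; a sketch with hypothetical factors follows (the study's real factors, forecasting network, and tool are not reproduced here).

```python
def annual_footprint(energy_kwh, emission_factor_kg_per_kwh, price_per_kwh):
    """CO2 (metric tons/year) and operating cost for a data
    center's annual energy draw under a given national energy
    mix. Factors vary by country: coal-heavy grids have high
    emission factors, hydro-heavy grids low ones."""
    co2_tons = energy_kwh * emission_factor_kg_per_kwh / 1000.0
    cost = energy_kwh * price_per_kwh
    return co2_tons, cost
```

With 1 GWh/year, a hypothetical factor of 0.5 kg CO2/kWh and a price of 0.10/kWh yield 500 tons and a 100,000 bill; swapping only the factor and price reproduces the study's style of cross-country comparison.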


Author(s):  
Hui Chen ◽  
Mukil Kesavan ◽  
Karsten Schwan ◽  
Ada Gavrilovska ◽  
Pramod Kumar ◽  
...  

Energy efficiency in data center operation depends on many factors, including power distribution, thermal load and consequent cooling costs, and IT management in terms of how and where IT load is placed and moved under changing request loads. Current methods provided by vendors consolidate IT loads onto the smallest number of machines needed to meet application requirements. This paper’s goal is to gain further improvements in energy efficiency by also making such methods ‘spatially aware’, so that load is placed onto machines in ways that respect the efficiency of both cooling and power usage, across and within racks. To help implement spatially aware load placement, we propose a model-based reinforcement learning method to learn and then predict the thermal distribution of different placements for incoming workloads. The method is trained with actual data captured in a fully instrumented data center facility. Experimental results showing notable differences in total power consumption for representative application loads indicate the utility of a two-level spatially-aware workload management (SpAWM) technique in which (i) load is distributed across racks in ways that recognize differences in cooling efficiencies and (ii) within racks, load is distributed so as to take into account cooling effectiveness due to local air flow. The technique is being implemented using online methods that continuously monitor current power and resource usage within and across racks, sense BladeCenter-level inlet temperatures, understand and manage IT load according to an environment’s thermal map. Specifically, at data center level, monitoring informs SpAWM about power usage and thermal distribution across racks. At rack-level, SpAWM workload distribution is based on power caps provided by maximum inlet temperatures determined by CRAC speeds and supply air temperature. SpAWM can be realized as a set of management methods running in VMWare’s ESXServer virtualization infrastructure. 
Its use has the potential of attaining up to a 32% improvement in the CRAC supply temperature requirement compared with non-spatially-aware techniques: by lowering inlet temperatures by 2–3 °C, it allows the CRAC supply temperature to be raised by 2–3 °C, saving roughly 13–18% of the cooling energy.
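The reported figures imply a roughly linear cooling-energy saving per degree of supply-temperature raise; the toy model below makes that assumption explicit. The 6%/°C default is hypothetical, chosen only to sit in the 2–3 °C → 13–18% range quoted above, and is not the paper's model.

```python
def cooling_energy_savings(delta_t_supply_c, saving_per_deg=0.06):
    """Fractional cooling-energy saving from raising the CRAC
    supply temperature by delta_t_supply_c degrees, assuming a
    constant (hypothetical) fractional saving per degree."""
    return saving_per_deg * delta_t_supply_c
```

Under this assumption, a 2 °C raise saves about 12% and a 3 °C raise about 18%, bracketing the paper's 13–18% figure.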


2020 ◽  
Vol 16 (6) ◽  
pp. 155014772093577
Author(s):  
Zan Yao ◽  
Ying Wang ◽  
Xuesong Qiu

With the rapid development of data centers in smart cities, reducing energy consumption while raising economic benefits and network performance has become an important research subject. In particular, data center networks do not always run at full load, which leads to significant wasted energy. In this article, we focus on the energy-efficient routing problem in software-defined network-based data center networks. For the in-band control mode of software-defined data centers, we formulate a dual optimization objective: energy saving and load balancing between controllers. To cope with the large solution space, we design a deep Q-network-based energy-efficient routing algorithm that finds energy-efficient data paths for traffic flows and control paths for switches. The simulation results reveal that the algorithm trains on only part of the state space yet achieves good energy savings and load balancing in the control plane. Compared with the solver and the CERA heuristic algorithm, its energy-saving effect is almost the same as the heuristic's, while its calculation time is greatly reduced, especially in scenarios with many flows, and it is more flexible for designing and solving the multi-objective optimization problem.
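At its core, a DQN approximates the tabular temporal-difference update shown below with a neural network over a state space too large to tabulate. The states, actions, and reward signal here (e.g., negative per-path energy cost) are illustrative assumptions, not the paper's formulation.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    In the DQN setting, the table Q is replaced by a network
    and updates become gradient steps on the same target."""
    nxt = Q.get(next_state)
    best_next = max(nxt.values()) if nxt else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
```

For an energy-efficient routing reward, one could pay a negative reward proportional to the links a chosen path keeps powered on, so that the learned policy prefers paths that concentrate traffic and let idle links sleep.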


2020 ◽  
Vol 143 (2) ◽  
Author(s):  
Sebastian Araya ◽  
Aaron P. Wemhoff ◽  
Gerard F. Jones ◽  
Amy S. Fleischer

Abstract The ongoing growth in data center rack power density leads to an increased capability for waste heat recovery. Recent studies revealed the organic Rankine cycle (ORC) as a viable means for data center waste heat recovery since the ORC uses waste heat to generate on-site, low-cost electricity, which can produce economic benefits by reducing the overall data center power consumption. This paper describes the first experimental and theoretical study of a lab-scale ORC designed for ultralow grade (40–85 °C) waste heat conditions typical of a data center server rack, and it outlines the implementation of a similar ORC system for a data center. The experimental results show thermal efficiencies ranging from 1.9% at 43 °C to 4.6% at 81 °C. The largest contributors to ORC exergy destruction are the evaporator and condenser due to large fluid temperature differences in the heat exchangers. The average isentropic efficiency of the expander is 70%. A second-law analysis estimates a reduction of 4–8% in data center power requirements when ORC power is fed back into the servers at a waste heat temperature of 90 °C. The data from the lab-scale experiment, when complemented by the thermodynamic model, provide the necessary first step toward advancing this type of waste heat recovery for data centers (DCs).
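Between the two reported operating points (1.9% at 43 °C and 4.6% at 81 °C), a first-order estimate of thermal efficiency can be linearly interpolated; this is only an illustration of the reported trend, not the paper's thermodynamic model.

```python
def orc_efficiency(t_source_c, t_lo=43.0, eta_lo=0.019,
                   t_hi=81.0, eta_hi=0.046):
    """Linear interpolation of ORC thermal efficiency
    (eta = W_net / Q_in) between the two reported lab-scale
    operating points; a first-order approximation only."""
    frac = (t_source_c - t_lo) / (t_hi - t_lo)
    return eta_lo + frac * (eta_hi - eta_lo)
```

At the midpoint source temperature of 62 °C this estimate gives about 3.25%, consistent with the monotonic rise the experiments show.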


Author(s):  
Dan Comperchio ◽  
Sameer Behere

Data center cooling systems have long been burdened by high redundancy requirements, resulting in inefficient system designs that satisfy a risk-averse operating environment. As attitudes, technologies, and sustainability awareness change within the industry, data centers are beginning to realize higher levels of energy efficiency without sacrificing operational security. By exploiting the increased temperature and humidity tolerances of information technology equipment (ITE), data center mechanical systems can leverage ambient conditions to operate in economization mode for longer periods of the year. Economization is one of the most effective methods for data centers to reduce their energy consumption and carbon footprint: as outside air temperatures and conditions become more favorable for cooling the data center, mechanical cooling through vapor-compression cycles is reduced or entirely eliminated. One favorable method for utilizing low outside air temperatures without sacrificing indoor air quality is to deploy rotary heat wheels, which transfer heat between the data center return air and outside air without introducing outside air into the white space. A corrugated metal wheel rotates through two opposing airstreams with differing thermal gradients to provide a net cooling effect at significantly lower electrical energy than traditional mechanical cooling topologies. To further extend the impact of economization, data centers can also raise operating temperatures significantly beyond what is traditionally found in comfort cooling applications. Increasing the dry-bulb temperature supplied to the inlet of the ITE, together with an elevated temperature rise across the equipment, significantly reduces the energy use within a data center.
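With balanced airstream flows, the wheel's cooling effect reduces to a sensible-effectiveness relation; a minimal sketch follows (the 0.8 effectiveness in the example is a hypothetical value, not a figure from this chapter).

```python
def wheel_delivered_temp(t_return_c, t_outside_c, effectiveness):
    """Temperature of data-center air leaving a rotary heat
    wheel, assuming balanced flows and sensible-only transfer:
    T_delivered = T_return - eps * (T_return - T_outside).
    The wheel rejects heat to the outside airstream without
    mixing outside air into the white space."""
    return t_return_c - effectiveness * (t_return_c - t_outside_c)
```

For example, 35 °C return air against 10 °C outside air through a wheel of effectiveness 0.8 is delivered at 15 °C, with no vapor-compression cooling required.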


Author(s):  
Adrienne B. Little ◽  
Srinivas Garimella

Of the total electricity consumption in the United States in 2006, more than 1% was used on data centers alone, a value that continues to rise rapidly. Of the total electricity a data center consumes, at least 30% is used to cool server equipment. The present study conceptualizes and analyzes a novel paradigm consisting of integrated power, cooling, and waste heat recovery and upgrade systems that considerably lowers the energy footprint of data centers. On-site power generation equipment supplies the primary electricity needs of the data center. The microturbine's waste heat is recovered to run an absorption chiller that supplies the entire cooling load of the data center, essentially providing the requisite cooling without any additional expenditure of primary energy. Furthermore, the waste heat rejected by the data center itself is boosted to a higher temperature with a heat transformer, with the upgraded thermal stream serving as an additional output of the data center at no additional electrical power input. Such upgraded heat can be used for district heating in neighboring residential buildings, or as process heat for commercial end uses such as laundries, hospitals, and restaurants. With such a system, the primary energy usage of the data center as a whole can be reduced by about 23 percent while still addressing the high-flux cooling loads, in addition to providing a new income stream through sales of upgraded thermal energy. Given the large and fast-escalating energy consumption of data centers, this novel, integrated approach to electricity and cooling supply and to waste heat recovery and upgrade will substantially reduce primary energy consumption for this important end use worldwide.
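The cooling-from-waste-heat claim follows from a simple energy balance on the microturbine and the absorption chiller; the sketch below uses hypothetical efficiencies and COP values, not the study's numbers.

```python
def absorption_cooling_kw(turbine_elec_kw, turbine_eff, chiller_cop):
    """Cooling capacity available from a microturbine's exhaust
    heat driving an absorption chiller (toy energy balance):
    fuel in = electricity / efficiency; waste heat = fuel - electricity;
    cooling = waste heat * chiller COP."""
    fuel_kw = turbine_elec_kw / turbine_eff
    waste_heat_kw = fuel_kw - turbine_elec_kw
    return waste_heat_kw * chiller_cop
```

With a hypothetical 30%-efficient 300 kW microturbine and a single-effect chiller COP of 0.7, the 700 kW of exhaust heat yields roughly 490 kW of cooling at no extra primary energy, which is the mechanism the study exploits.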


Author(s):  
SIVARANJANI BALAKRISHNAN ◽  
SURENDRAN DORAISWAMY

Data centers are becoming the main backbone of and centralized repository for all cloud-accessible services in on-demand cloud computing environments. In particular, virtual data centers (VDCs) facilitate the virtualization of all data center resources such as computing, memory, storage, and networking equipment as a single unit. It is necessary to use the data center efficiently to improve its profitability. The essential factor that significantly influences efficiency is the average number of VDC requests serviced by the infrastructure provider, and the optimal allocation of requests improves the acceptance rate. In existing VDC request embedding algorithms, data center performance factors such as resource utilization rate and energy consumption are not taken into consideration. This motivated us to design a strategy for improving the resource utilization rate without increasing the energy consumption. We propose novel VDC embedding methods based on row-epitaxial and batched greedy algorithms inspired by bioinformatics. These algorithms embed new requests into the VDC while reembedding previously allocated requests. Reembedding is done to consolidate the available resources in the VDC resource pool. The experimental testbed results show that our algorithms boost the data center objectives of high resource utilization (by improving the request acceptance rate), low energy consumption, and short VDC request scheduling delay, leading to an appreciable return on investment.
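For context on the acceptance-rate objective, a plain first-fit-decreasing embedding baseline can be sketched as below; this illustrates the metric only and is not the row-epitaxial or batched greedy algorithm of the paper (which additionally reembeds previously allocated requests).

```python
def greedy_embed(requests, hosts):
    """First-fit-decreasing embedding of VDC resource demands
    (e.g., CPU units) onto host capacities; returns the
    fraction of requests accepted. Demands are placed largest
    first into the first host with enough free capacity."""
    accepted = 0
    free = list(hosts)  # remaining capacity per host
    for demand in sorted(requests, reverse=True):
        for i, cap in enumerate(free):
            if cap >= demand:
                free[i] -= demand
                accepted += 1
                break
    return accepted / len(requests)
```

For demands [4, 3, 2, 2] on two hosts of capacity 5, three requests fit and one is rejected, an acceptance rate of 0.75; consolidating fragmented capacity via reembedding, as the paper proposes, is what raises this rate.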

