Exergy-Based Optimization Strategies for Multi-Component Data Center Thermal Management: Part I — Analysis

Author(s):  
Amip J. Shah ◽  
Van P. Carey ◽  
Cullen E. Bash ◽  
Chandrakant D. Patel

As heat dissipation in data centers rises by orders of magnitude, inefficiencies such as recirculation will have an increasingly significant impact on the thermal manageability and energy efficiency of the cooling infrastructure. For example, prior work has shown that for simple data centers with a single Computer Room Air-Conditioning (CRAC) unit, an operating strategy that fails to account for inefficiencies in the air space can result in suboptimal performance. To enable system-wide optimality, an exergy-based approach to CRAC control has previously been proposed. However, application of such a strategy in a real data center environment is limited by the assumptions inherent to the single-CRAC derivation. This paper addresses these assumptions by modifying the exergy-based approach to account for the additional interactions encountered in a multi-component environment. It is shown that the modified formulation provides the framework necessary to evaluate performance of multi-component data center thermal management systems under widely different operating circumstances.
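To make the notion of air-space inefficiency concrete, the sketch below estimates the exergy destroyed when recirculated hot exhaust mixes adiabatically with cold CRAC supply air, using the standard ideal-gas result X_dest = T0 * S_gen. The flow rates and temperatures are illustrative assumptions, not values from the paper.

# Hedged sketch: exergy destroyed by recirculation, modeled as adiabatic,
# constant-pressure mixing of a cold CRAC supply stream with recirculated
# hot exhaust air. All numbers are illustrative, not data from the paper.
import math

cp = 1005.0        # J/(kg K), specific heat of air (ideal gas, constant cp)
T0 = 298.15        # K, dead-state (ambient) temperature

def mixing_exergy_destruction(m_cold, T_cold, m_hot, T_hot):
    """Exergy destroyed (W) when two air streams mix adiabatically at
    constant pressure: X_dest = T0 * S_gen, with
    S_gen = sum_i m_i * cp * ln(T_mix / T_i)."""
    T_mix = (m_cold * T_cold + m_hot * T_hot) / (m_cold + m_hot)
    s_gen = cp * (m_cold * math.log(T_mix / T_cold)
                  + m_hot * math.log(T_mix / T_hot))
    return T0 * s_gen, T_mix

# 4 kg/s of 15 C supply air mixing with 1 kg/s of 35 C recirculated exhaust
x_dest, t_mix = mixing_exergy_destruction(4.0, 288.15, 1.0, 308.15)
print(f"mixed inlet temperature: {t_mix - 273.15:.1f} C")   # ~19.0 C
print(f"exergy destruction: {x_dest:.0f} W")                # ~548 W

Driving such destruction terms toward a minimum across every CRAC unit and rack is the sense in which the exergy-based formulation targets system-wide optimality.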

Author(s):  
Amip J. Shah ◽  
Van P. Carey ◽  
Cullen E. Bash ◽  
Chandrakant D. Patel

Data centers today contain more computing and networking equipment than ever before, and as a result more cooling is required to maintain facilities within operable temperature ranges. Increasing resources are spent on thermal control, and tremendous potential benefit lies in optimizing the cooling process. This paper describes a study of data center thermal management systems using the thermodynamic concept of exergy. Specifically, an exergy analysis has been performed on sample data centers in an attempt to identify local and overall inefficiencies within thermal management systems. The development of a model using finite volume analysis is described, and potential applications to real-world systems are illustrated. Preliminary results suggest that such an exergy-based analysis can be a useful tool in the design and enhancement of thermal management systems.
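For reference, the flow exergy used in such an analysis is conventionally measured relative to a dead state (T_0, p_0); for air modeled as an ideal gas with constant c_p it takes the standard textbook form below, consistent with, though not quoted from, this paper:

\psi = (h - h_0) - T_0 (s - s_0)
     = c_p (T - T_0) - T_0 \left[ c_p \ln\frac{T}{T_0} - R \ln\frac{p}{p_0} \right]

In a finite volume model, summing the net inflow of \dot{m}\psi over each adiabatic, work-free cell yields that cell's exergy destruction rate, so cells with large destruction localize inefficiencies such as hot/cold stream mixing.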


Author(s):  
Amip J. Shah ◽  
Van P. Carey ◽  
Cullen E. Bash ◽  
Chandrakant D. Patel

Recent compaction and miniaturization of electronic equipment have caused a dramatic increase in the amount of heat dissipated within data centers housing compute, network, and storage systems. The efficient thermal management of these systems is complicated by the intricate interdependence among the various components of the thermal architecture, including the heat-dissipating computer racks, the Computer Room Air-Conditioning (CRAC) units, and the physical airspace within the room. To account for this interdependence, an approach based on the thermodynamic metric of exergy has been proposed, which allows prediction of an optimal CRAC operating point corresponding to the point of minimal irreversibility for the overall system. To validate the formulated theory, predictions from the model have been compared with actual data center power consumption measurements. Initial comparisons indicate good agreement, suggesting that the proposed theory has great applicability for efficient data center thermal management.


Author(s):  
Uschas Chowdhury ◽  
Walter Hendrix ◽  
Thomas Craft ◽  
Willis James ◽  
Ankit Sutaria ◽  
...  

In a data center, electronic equipment such as servers and switches dissipates heat, and the corresponding cooling systems typically account for 25–35% of total energy consumption. The heat load continues to increase as the need for miniaturization and convergence grows. In 2014, data centers in the U.S. consumed an estimated 70 billion kWh, representing about 1.8% of total U.S. electricity consumption, and based on current trend estimates they are projected to consume approximately 73 billion kWh in 2020 [1]. Much research has been devoted, and many strategies adopted, to minimizing energy cost. For air cooling, the dry-bulb temperature recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) for long-term operation and reliability is 18–27°C, and the widest allowable inlet temperature range is 5°C to 45°C, with ASHRAE enabling much broader allowable zones [2]. A proper understanding of the cooling system is therefore very important, especially for the thermal management of IT equipment with high heat loads, such as 1U or 2U multi-core high-end servers and blade servers, which provide more computing per watt. Otherwise, problems such as high inlet temperatures due to mixing of hot air with cold air, local hot spots, lower system reliability, and increased failures and downtime may occur. Among the many approaches to managing high-density racks, in-row coolers placed between racks provide cold air and minimize local hot spots. This paper describes a computational study of in-row coolers for different rack power configurations, with and without aisle containment. The rack power and the number of racks are varied to study the effect of raised inlet temperature on the IT equipment in a Computational Fluid Dynamics (CFD) model developed in 6SigmaRoom using built-in library items. A comparative analysis is also performed for a typical small non-raised-floor facility to investigate the efficacy and limitations of in-row coolers in the thermal management of IT equipment under variations in rack heat load and containment. Several other aspects, including a parametric study of variable opening areas of the duct between racks and in-row coolers, variation of the operating flow rate, and failure scenarios, are also studied to find proper flow distribution and uniformity of outlet temperature, and to predict better performance, energy savings, and reliability. The results are presented as general guidance for flexible and quick installation and safe operation of in-row coolers to improve thermal efficiency.
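As a minimal illustration of the ASHRAE envelopes quoted above, the sketch below classifies rack inlet temperatures against the recommended (18–27°C) and widest allowable (5–45°C) dry-bulb ranges; the rack names and inlet values are hypothetical placeholders, not results from this study's CFD model.

# Hedged sketch: classify rack inlet temperatures against the ASHRAE
# dry-bulb envelopes quoted in the abstract (recommended 18-27 C,
# widest allowable 5-45 C). Inlet values are illustrative only.
RECOMMENDED = (18.0, 27.0)
ALLOWABLE = (5.0, 45.0)

def classify(t_inlet_c):
    if RECOMMENDED[0] <= t_inlet_c <= RECOMMENDED[1]:
        return "recommended"
    if ALLOWABLE[0] <= t_inlet_c <= ALLOWABLE[1]:
        return "allowable only"   # acceptable, but flags a potential hot spot
    return "out of envelope"      # risk of throttling, failure, or downtime

inlets = {"rack-01": 21.5, "rack-02": 26.8, "rack-03": 31.2, "rack-04": 47.0}
for rack, t in inlets.items():
    print(f"{rack}: {t:.1f} C -> {classify(t)}")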


Author(s):  
Veerendra Mulay ◽  
Saket Karajgikar ◽  
Dereje Agonafer ◽  
Roger Schmidt ◽  
Madhusudan Iyengar ◽  
...  

The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, a raised-floor plenum with Computer Room Air Conditioners (CRACs) providing cold air is a popular operating strategy. Prior work presented numerous data center layouts employing a raised-floor plenum and the impact of design parameters such as plenum depth, ceiling height, cold aisle location, tile openings, and others on the thermal performance of the data center. Air cooling of the data center, however, may not address the situation where more energy is expended by the cooling infrastructure than the thermal load of the data center. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute servers' equipment footprint by the year 2010. These trend charts also indicate that heat load per product footprint doubled for storage servers during 2000–2004; over the same period, heat load per product footprint for compute servers tripled. Among the systems currently available and being shipped, many racks exceed 20 kW. Such high heat loads have raised concerns that data centers are approaching air-cooling limits, similar to those of microprocessors. A hybrid cooling strategy that incorporates liquid cooling along with air cooling can be very efficient in such situations. The impact of such an operating strategy on the thermal management of a data center is discussed in this paper. A representative data center is modeled using a commercially available CFD code. The changes in rack temperature gradients, recirculation cells, and CRAC demand due to the use of hybrid cooling are presented in a detailed parametric study. It is shown that the hybrid cooling strategy improves the cooling of the data center, which may enable full population of racks and better management of the system infrastructure.


Author(s):  
Tianyi Gao ◽  
James Geer ◽  
Bahgat G. Sammakia ◽  
Russell Tipton ◽  
Mark Seymour

Cooling power constitutes a large portion of the total electrical power consumption in data centers: approximately 25–40% of the electricity used within a production data center is consumed by the cooling system. Improving cooling energy efficiency has therefore attracted a great deal of research attention, and many strategies have been proposed for cutting data center energy costs. One effective strategy for increasing cooling efficiency is dynamic thermal management; another is placing cooling devices (heat exchangers) closer to the source of heat, which is the basic design principle of many hybrid and liquid cooling systems for data centers. Dynamic thermal management of data centers is a huge challenge, because data centers operate under complex dynamic conditions even during normal operation. In addition, hybrid cooling systems introduce additional localized cooling devices, such as in-row cooling units and overhead coolers, which significantly increase the complexity of dynamic thermal management. It is therefore of paramount importance to characterize the dynamic responses of data centers to variations in the different cooling units, such as cooling air flow rate variations. In this study, a detailed computational analysis of an in-row-cooler-based hybrid cooled data center is conducted using a commercially available computational fluid dynamics (CFD) code. A representative CFD model is developed for a raised-floor data center arranged in a cold aisle/hot aisle fashion. The hybrid cooling system is designed using perimeter CRAH units and localized in-row cooling units. The CRAH unit supplies centralized cooling air to the underfloor plenum, and the cooling air enters the cold aisle through perforated tiles. The in-row cooling unit is located on the raised floor between the server racks; it supplies cooling air directly to the cold aisle and takes in hot air from the back of the racks (the hot aisle). Two different cooling air sources thus supply the cold aisle, but they are delivered in different ways. Several modeling cases are designed to study the transient effects of variations in the flow rates of the two cooling air sources, and combined scenarios of server power and cooling air flow variation are also modeled. The detailed impact of each modeling case on rack inlet air temperature and cold aisle air flow distribution is studied. The results presented in this work provide an understanding of the effects of air flow variations on the thermal performance of data centers, and the corresponding analysis is used to improve the running efficiency of this type of raised-floor hybrid data center using CRAH and in-row cooling (IRC) units.
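The study itself relies on full transient CFD; purely as an intuition aid, the toy model below treats a single rack's exhaust temperature as a first-order lumped system, C dT/dt = Q_IT - m_dot * c_p * (T - T_supply), and integrates it through a step drop in cooling air flow. All parameter values are assumptions for illustration, not inputs to the paper's model.

# Hedged sketch: first-order lumped response of a rack's exhaust air
# temperature to a step change in cooling air flow. A toy model with
# assumed parameters -- not the CFD methodology used in the paper.

cp = 1005.0          # J/(kg K), air specific heat
C = 5.0e5            # J/K, assumed lumped thermal capacitance of the rack
Q_it = 20_000.0      # W, assumed rack IT load
T_supply = 18.0      # C, cooling air supply temperature

def simulate(m_dot_of_t, t_end=1800.0, dt=1.0):
    """Explicit Euler integration of C dT/dt = Q_it - m_dot cp (T - T_supply),
    starting from the steady state for the initial flow rate."""
    T = T_supply + Q_it / (m_dot_of_t(0.0) * cp)
    out, t = [], 0.0
    while t <= t_end:
        out.append((t, T))
        T += dt * (Q_it - m_dot_of_t(t) * cp * (T - T_supply)) / C
        t += dt
    return out

# Flow rate drops from 1.6 kg/s to 1.0 kg/s at t = 300 s (e.g., an IRC fan event)
profile = lambda t: 1.6 if t < 300.0 else 1.0
for t, T in simulate(profile)[::300]:
    print(f"t = {t:5.0f} s   T_exhaust = {T:5.1f} C")

The exhaust temperature relaxes from about 30.4 C toward a new steady state near 37.9 C with a time constant of roughly C / (m_dot * cp), which is the kind of transient the paper's CFD cases resolve in full spatial detail.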


Author(s):  
Milton Meckler

A growing concern for many data center users is continuing availability following the explosive growth of Internet services in recent years. Recent efforts to maximize data center IT virtualization investments have improved the consolidation of previously underutilized server and cabling resources, resulting in higher overall facility utilization and IT capacity. They have also resulted in excessive levels of equipment heat release, e.g., from high-energy (blade-type) servers and telecommunications equipment, that challenge central and distributed air-conditioning systems delivering air via raised floor or overhead to rack-mounted servers arranged in alternating cold and hot aisles (in some cases reaching 30 kW/rack or 300 W/ft2), with return via end-of-aisle or separate-room CRAC units, which are often found to fight each other, contributing to excessive energy use. Under those circumstances, hybrid indirect liquid cooling facilities are often required to augment the air-conditioning systems referenced above in order to prevent overheating and degradation of mission-critical IT equipment, and to keep rack-mounted server equipment operating within ASHRAE TC 9.9 prescribed psychrometric limits and IT manufacturers' specifications, beyond which operational reliability cannot be assured. Recent interest in new web-based software and secure cloud computing is expected to further accelerate the growth of data centers, which, according to a recent study, consumed approximately 61 billion kWh of electricity in the U.S. in 2006. Computer servers and the supporting power infrastructure for the Internet are estimated to represent 1.5% of all electricity generated; together with aggregate IT and communications equipment, including PCs in current use, they have been estimated to account for 2% of global carbon emissions. The projected eco-footprint of data centers has therefore become a matter of growing concern. Accordingly, this paper focuses on how best to improve the utilization of the fossil fuels used to power data centers and the energy efficiency of the related auxiliary cooling and power infrastructures, so as to reduce their eco-footprint and GHG emissions to sustainable levels as soon as possible. To this end, we demonstrate significant comparative savings in annual energy use, and reductions in associated annual GHG emissions, by employing an on-site cogeneration system (in lieu of current reliance on remote electric power generation), and by introducing energy-efficient outside air (OSA) desiccant-assisted pre-conditioners to maintain Class 1, Class 2, or NEBS indoor air dew points, as needed, operated with modified existing sensible-only cooling, distributed air-conditioning, and chiller systems, thereby eliminating the need for integral CRAC-unit humidity controls while achieving an estimated 60 to 80% (virtualized) reduction in the number of servers within an existing (hypothetical, post-consolidation) 3.5 MW demand data center located in the southeastern (and/or southern) U.S., coastal Puerto Rico, or Brazil, characterized by three representative microclimates ranging from moderate to high seasonal OSA coincident design humidity and temperature.


Author(s):  
Ratnesh Sharma ◽  
Rocky Shih ◽  
Chandrakant Patel ◽  
John Sontag

Data centers are the computational hub of the next generation. Rising demand for computing has driven the emergence of high-density data centers, and with the advent of high-density, mission-critical data centers, demand for electrical power for compute and cooling has grown. The deployment of large numbers of high-powered computer systems in very dense rack configurations within data centers results in very high power densities at the room level. Hosting business and mission-critical applications also demands a high degree of reliability and flexibility. Managing such high power levels in the data center with cost-effective, reliable cooling solutions is essential to the feasibility of pervasive compute infrastructure. Energy consumption of data centers can also be severely increased by over-designed air handling systems and rack layouts that allow the hot and cold air streams to mix. The absence of rack-level temperature monitoring has contributed to a lack of knowledge of air flow patterns and thermal management issues in conventional data centers. In this paper, we present results from exploratory data analysis (EDA) of rack-level temperature data collected over a period of several months from a conventional production data center. Typical data centers experience surges in power consumption due to the rise and fall of compute demand. These surges can be long-term, short-term, or periodic, leading to associated thermal management challenges. Some variations may also be machine-dependent and vary across the data center; yet other thermal perturbations may be localized and momentary. Random variations due to sensor response and calibration, if not identified, may lead to erroneous conclusions and expensive faults. Among other indicators, EDA techniques also reveal relationships among sensors and deployed hardware in space and time. Identification of such patterns can provide significant insight into data center dynamics for future forecasting purposes. Knowledge of such metrics enables energy-efficient thermal management by helping to create strategies for normal operation and disaster recovery for use with techniques like dynamic smart cooling.
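In the spirit of the EDA described above, the sketch below separates slow surges from momentary spikes with rolling statistics and exposes space/time relationships through an inter-sensor correlation matrix; the file name and CSV layout (a timestamp column plus one column per rack sensor) are hypothetical stand-ins for the production data.

# Hedged sketch of the kind of exploratory data analysis described above.
# The CSV layout (timestamp + one column per rack sensor) is hypothetical.
import pandas as pd

df = pd.read_csv("rack_temps.csv", parse_dates=["timestamp"],
                 index_col="timestamp")

# Long-term and periodic surges: range of hourly-mean variation per sensor
hourly = df.resample("1h").mean()
print("range of slow (hourly-mean) variation per sensor:")
print((hourly.max() - hourly.min()).round(1))

# Momentary, localized perturbations: deviation from a 15-minute rolling mean
rolling = df.rolling("15min").mean()
spikes = (df - rolling).abs() > 2.0        # flag deviations larger than 2 C
print("sensors with the most momentary spikes:")
print(spikes.sum().sort_values(ascending=False).head())

# Space/time relationships among sensors and deployed hardware
print("inter-sensor correlation matrix:")
print(df.corr().round(2))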


Climate ◽  
2020 ◽  
Vol 8 (10) ◽  
pp. 110
Author(s):  
Alexandre F. Santos ◽  
Pedro D. Gaspar ◽  
Heraldo J. L. de Souza

Data Centers (DCs) are specialized buildings that require large infrastructures to store all the information needed by companies; all data transmitted over the network is stored in DCs. Data Centers are projected to grow 53% worldwide by the end of 2020. Methodologies exist to measure the efficiency of energy consumption; the most used metric is the Power Usage Effectiveness (PUE) index, but it does not fully reflect efficiency. Three DCs located in the cities of Curitiba, Londrina, and Iguaçu Falls (Brazil), with close PUE values, are evaluated in this article using the Energy Usage Effectiveness Design (EUED) index as an alternative to the current method. EUED uses energy as the comparative element in the design phase. Infrastructure consumption is the sum of the energy used by Heating, Ventilating and Air Conditioning (HVAC) equipment, IT equipment, lighting, and other loads. The EUED values obtained were 1.245 (kWh/yr)/(kWh/yr), 1.313 (kWh/yr)/(kWh/yr), and 1.316 (kWh/yr)/(kWh/yr) for Curitiba, Londrina, and Iguaçu Falls, respectively. The difference between the EUED and the PUE at Constant External Air Temperature (COA) is 16.87% for Curitiba, 13.33% for Londrina, and 13.30% for Iguaçu Falls. The new Perfect Design Data center (PDD) index, which ranks efficiency in increasing order, is an easy index to interpret. It is a redefinition of EUED, given by a linear equation, which provides an approximate result and uses a classification table. It is a decision support index for the location of a Data Center in the project phase.
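For context, PUE is simply the ratio of total facility energy to IT equipment energy, and (per this abstract) EUED applies the same ratio to design-phase annual energy estimates. The sketch below reproduces the Curitiba EUED of 1.245 from an assumed energy breakdown and shows how the quoted 16.87% gap to PUE follows; both the breakdown and the PUE value of 1.455 are illustrative assumptions chosen to match the quoted figures, not data from the article.

# Hedged sketch: PUE is total facility energy over IT energy; EUED (per the
# abstract) applies the same ratio to design-phase annual energy estimates.
# The energy breakdown below is an illustrative assumption, not measured data.

def pue(it_kwh, hvac_kwh, lighting_kwh, other_kwh):
    """Power Usage Effectiveness: total facility energy / IT energy."""
    total = it_kwh + hvac_kwh + lighting_kwh + other_kwh
    return total / it_kwh

def percent_gap(index_a, index_b):
    """Relative difference between two efficiency indices, in percent."""
    return 100.0 * abs(index_a - index_b) / index_b

# Assumed annual design-phase energies (kWh/yr) for a hypothetical facility
eued = pue(it_kwh=4_000_000, hvac_kwh=820_000,
           lighting_kwh=90_000, other_kwh=70_000)
print(f"EUED = {eued:.3f} (kWh/yr)/(kWh/yr)")     # -> 1.245, as for Curitiba

# Comparison against an assumed measured PUE for the same site
print(f"gap vs PUE = {percent_gap(1.455, eued):.2f} %")   # -> 16.87 %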


Energies ◽  
2019 ◽  
Vol 12 (7) ◽  
pp. 1265 ◽  
Author(s):  
Gequn Shu ◽  
Chen Hu ◽  
Hua Tian ◽  
Xiaoya Li ◽  
Zhigang Yu ◽  
...  

About two-thirds of the combustion energy of an internal combustion engine (ICE) is lost through the exhaust and cooling systems during operation. In addition, automobile accessories such as the air-conditioning system and the radiator fan consume additional power. To improve ICE efficiency, this paper designs coupled thermal management systems with different structures, comprising an air-conditioning subsystem, a waste heat recovery subsystem, and the engine and coolant subsystem. CO2 is chosen as the working fluid for both the air-conditioning subsystem and the waste heat recovery subsystem. After experimental studies and a performance analysis of the subsystems, the coupled thermal management system is evaluated at different environmental temperatures and engine working conditions to choose the best structure. The optimal pump speed increases with increasing environmental temperature and decreasing engine load. The optimal coolant utilization rate decreases with increasing engine load and environmental temperature, and its value lies between 38% and 52%. When the effects of environmental temperature and real-driving road conditions, as well as the energy consumption of all accessories of the thermal management system, are considered, the optimal thermal management system provides a net power of 4.2 kW, improving ICE fuel economy by 1.2%.


Author(s):  
Chandrakant Patel ◽  
Ratnesh Sharma ◽  
Cullen Bash ◽  
Sven Graupner

Computing will be pervasive, and the enablers of pervasive computing will be data centers housing computing, networking, and storage hardware. The data center of tomorrow is envisaged as one containing thousands of single-board computing systems deployed in racks. A data center with 1000 racks, over 30,000 square feet, would require 10 MW of power for the computing infrastructure. At this power dissipation, an additional 5 MW would be needed by the cooling resources to remove the dissipated heat; at $100/MWh, the cooling alone would cost $4 million per annum for such a data center. The concept of the Computing Grid, based on coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations, is emerging as the new paradigm in distributed and pervasive computing for scientific as well as commercial applications. We envision a global network of data centers housing an aggregation of computing, networking, and storage hardware. The increased compaction of such devices in data centers has created thermal and energy management issues that inhibit the sustainability of such a global infrastructure. In this paper, we propose the framework of an Energy Aware Grid that will provide a global utility infrastructure explicitly incorporating energy efficiency and thermal management among data centers. Designed around an energy-aware co-allocator, workload placement decisions will be made across the Grid based on data center energy efficiency coefficients. The coefficient, evaluated by the data center's resource allocation manager, is a complex function of the data center thermal management infrastructure and of seasonal and diurnal variations. A detailed procedure for implementation of a test case is provided, with an estimate of energy savings to justify the economics. An example workload deployment shown in the paper seeks the most energy-efficient data center in the global network of data centers. The locality-based energy efficiency of a data center is shown to arise from the use of ground-coupled loops in cold climates to lower the ambient temperature for heat rejection, e.g., computing and rejecting heat from a data center at a nighttime ambient of 20°C in New Delhi, India, while Phoenix, USA is at 45°C. The efficiency of the cooling system in the New Delhi data center derives from the lower lift from evaporator to condenser. Besides this obvious advantage due to the external ambient, the paper also incorporates techniques that rate the efficiency arising from the internal thermo-fluids behavior of a data center in workload placement decisions.
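The cooling economics quoted above follow from simple arithmetic, and the co-allocator's decision can be caricatured as an argmin over per-site energy efficiency coefficients; the sketch below shows both, with coefficient values invented for illustration rather than taken from the paper.

# Hedged sketch: (1) the abstract's cooling-cost arithmetic, and (2) a toy
# version of the energy-aware co-allocator choosing the most efficient site.
# Site coefficients are invented placeholders, not values from the paper.

HOURS_PER_YEAR = 8760

cooling_mw = 5.0          # MW of cooling power for the 1000-rack example
price_per_mwh = 100.0     # $/MWh
annual_cost = cooling_mw * HOURS_PER_YEAR * price_per_mwh
print(f"annual cooling cost: ${annual_cost / 1e6:.1f} M")   # ~ $4.4 M/yr

# Energy efficiency coefficient per data center (lower = less energy per
# unit of work); in the paper it varies with season, time of day, and the
# internal thermo-fluids behavior of the room. Values here are made up.
coefficients = {
    "new_delhi_night": 0.68,   # cool nighttime ambient, ground-coupled loops
    "phoenix_day": 1.15,       # 45 C ambient -> high lift, costly cooling
    "oregon": 0.80,
}

best_site = min(coefficients, key=coefficients.get)
print(f"place workload at: {best_site}")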

