Server Rack Rear Door Heat Exchanger and the New ASHRAE Recommended Environmental Guidelines

Author(s):  
Roger Schmidt ◽  
Madhusudan Iyengar

The patented [1] rear door heat exchanger mounted to the rear of IT equipment racks was announced by IBM in April 2005 and has improved data center energy efficiency and reduced hot spots. It also allows data center operators to more easily implement some features of the newly approved ASHRAE recommended data center environmental guidelines [2]. This paper describes several case studies of implementing the rear door heat exchanger in various data center layouts. The implementation of the water-cooled rear door in these data centers shows the effects of various failure modes and how the new ASHRAE environmental temperature guidelines are still met under the failure modes examined.

Author(s):  
Tianyi Gao ◽  
Bahgat Sammakia ◽  
James Geer ◽  
Milnes David ◽  
Roger Schmidt

Heat exchangers are key components that are commonly used in data center cooling systems. Rear door heat exchangers, in-row coolers, overhead coolers, and fully contained cabinets are some examples of liquid and hybrid cooling systems used in data centers. A liquid-to-liquid heat exchanger is one of the main components of the Coolant Distribution Unit (CDU), which supplies chilled water to the heat exchangers mentioned above. Computer Room Air Conditioner (CRAC) units also contain liquid-to-air cross-flow heat exchangers. Optimizing the energy use and the reliability of IT equipment in data centers requires Computational Fluid Dynamics (CFD) tools that can accurately model data centers in both steady-state and dynamic operation. Typically, data centers operate in dynamic conditions due to workload allocations that change both spatially and temporally. Additional dynamic situations may also arise from failures in the thermal management and electrical distribution systems. Computational simulation therefore needs individual component models, such as transient heat exchanger models. It is also important to develop simple, yet accurate, compact models for components such as heat exchangers to reduce computational time without decreasing simulation accuracy. In this study, a method for modeling compact transient heat exchangers in CFD code is presented. The method describes an approach for installing thermally dynamic heat exchanger models in CFD codes. The transient effectiveness concept and model are used in developing the methodology. Heat exchanger CFD compact models are developed and tested by comparing them with full thermal dynamic models and with experimental measurements. The transient responses of the CFD model are presented for step and ramp changes in the flow rates of the hot and cold fluids, as well as step, ramp, and exponential variations in the inlet temperature.
Finally, some practical dynamic scenarios involving an IBM buffer liquid-to-liquid heat exchanger, a rear door heat exchanger, and a CRAC unit are parametrically modeled to test the developed methodology. It is shown that the compact heat exchanger model can successfully predict dynamic scenarios in typical data centers.
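The transient effectiveness concept mentioned above can be illustrated with a minimal sketch: a steady-state ε-NTU effectiveness modulated by a first-order lag that approximates the exchanger's thermal inertia. The counterflow ε-NTU relation, the time constant `tau`, and all numbers below are illustrative assumptions, not the paper's actual compact model.

```python
import math

def steady_effectiveness(ntu: float, c_ratio: float) -> float:
    """Counterflow epsilon-NTU relation (illustrative choice of geometry)."""
    if abs(c_ratio - 1.0) < 1e-9:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - c_ratio))
    return (1.0 - e) / (1.0 - c_ratio * e)

def transient_outlet_temp(t: float, t_hot_in: float, t_cold_in: float,
                          ntu: float, c_ratio: float, tau: float) -> float:
    """Hot-side outlet temperature after a step change at t = 0, using a
    first-order lag on the steady-state effectiveness (assumed model form)."""
    eps_ss = steady_effectiveness(ntu, c_ratio)
    eps_t = eps_ss * (1.0 - math.exp(-t / tau))  # transient effectiveness
    return t_hot_in - eps_t * (t_hot_in - t_cold_in)
```

At t = 0 the outlet equals the hot inlet (no heat transferred yet); as t grows, the outlet temperature relaxes toward the steady-state ε-NTU prediction.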


Author(s):  
Chris Muller ◽  
Chuck Arent ◽  
Henry Yu

Lead-free manufacturing regulations, reductions in circuit board feature sizes, and the miniaturization of components to improve hardware performance have combined to make data center IT equipment more prone to attack by corrosive contaminants. Manufacturers are under pressure to control contamination in the data center environment, and maintaining acceptable limits is now critical to the continued reliable operation of datacom and IT equipment. This paper discusses ongoing reliability issues with electronic equipment in data centers and presents updates on contamination concerns, standards activities, and case studies from several different locations illustrating the successful application of contamination assessment, control, and monitoring programs to eliminate electronic equipment failures.


Author(s):  
Abdlmonem H. Beitelmal ◽  
Drazen Fabris

New server and data center metrics are introduced to facilitate proper evaluation of data center power and cooling efficiency. These metrics will be used to help reduce the cost of operation and to provision data center cooling resources. The most relevant variables for these metrics are identified: the total facility power, the servers' idle power, the average server utilization, the cooling resources power, and the total IT equipment power. These metrics can be used to characterize and classify server and data center performance and energy efficiency regardless of size and location.
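The variables listed above lend themselves to simple ratio metrics. The sketch below computes the standard PUE together with two illustrative ratios built from the same inputs; the names and definitions of the latter two are assumptions for illustration, not the authors' metrics.

```python
def data_center_metrics(total_facility_kw: float, it_kw: float,
                        cooling_kw: float, idle_kw: float,
                        avg_utilization: float) -> dict:
    """Simple efficiency ratios from the variables named in the abstract.
    PUE is the standard definition; the other two are assumed for illustration."""
    pue = total_facility_kw / it_kw              # Power Usage Effectiveness
    cooling_fraction = cooling_kw / total_facility_kw
    # Share of IT power that tracks actual compute load (assumed definition):
    dynamic_fraction = (it_kw - idle_kw) * avg_utilization / it_kw
    return {"PUE": pue,
            "cooling_fraction": cooling_fraction,
            "dynamic_power_fraction": dynamic_fraction}
```

A facility drawing 1500 kW total with 1000 kW of IT load has a PUE of 1.5; lower is better, with 1.0 the theoretical floor.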


Author(s):  
Thomas J. Breen ◽  
Ed J. Walsh ◽  
Jeff Punch ◽  
Amip J. Shah ◽  
Niru Kumari ◽  
...  

As the energy footprint of data centers continues to increase, models that allow for “what-if” simulations of different data center design and management paradigms will be important. Prior work by the authors has described a multi-scale energy efficiency model that allows for evaluating the coefficient of performance of the data center ensemble (COPGrand), and demonstrated the utility of such a model for choosing operational set-points and evaluating design trade-offs. However, experimental validation of these models poses a challenge because of the complexity involved in tailoring such a model for implementation in legacy data centers, with shared infrastructure and limited control over IT workload. Further, test facilities with dummy heat loads or artificial racks in lieu of IT equipment generally have limited utility in validating end-to-end models, owing to the inability of such loads to mimic phenomena such as fan scalability. In this work, we describe the experimental analysis conducted in a special test chamber and a data center facility. The chamber, focusing on system-level effects, is loaded with an actual IT rack, and a compressor delivers chilled air to the chamber at a preset temperature. By varying the load in the IT rack as well as the air delivery parameters — such as flow rate and supply temperature — a setup that simulates the system level of a data center is created. Experimental tests within a live data center facility are also conducted, where the operating conditions of the cooling infrastructure — such as fluid temperatures and flow rates — are monitored and analyzed to determine effects such as air flow recirculation and heat exchanger performance. Using the experimental data, a multi-scale model configuration emulating the data center can be defined.
We compare the results from this experimental analysis to a multi-scale energy efficiency model of the data center, and discuss the accuracies as well as inaccuracies of such a model. Difficulties encountered in the experimental work are discussed. The paper concludes by discussing areas for improvement in such modeling and experimental evaluation. Further validation of the complete multi-scale data center energy model is planned.
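At its simplest, an ensemble coefficient of performance is the IT heat load removed per unit of cooling power. A minimal sketch of that top-level ratio follows (the cited COPGrand formulation aggregates effects across multiple scales; this captures only the headline quantity, as an assumption for illustration):

```python
def cop_ensemble(it_heat_load_kw: float, cooling_power_kw: float) -> float:
    """Top-level coefficient of performance: heat removed per unit of
    cooling power consumed. Higher is better."""
    if cooling_power_kw <= 0:
        raise ValueError("cooling power must be positive")
    return it_heat_load_kw / cooling_power_kw
```

For example, removing 10 MW of IT heat with 5 MW of cooling power gives a COP of 2.0; raising the chilled-air supply temperature typically raises this ratio.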


Author(s):  
Chandrakant Patel ◽  
Ratnesh Sharma ◽  
Cullen Bash ◽  
Sven Graupner

Computing will be pervasive, and the enablers of pervasive computing will be data centers housing computing, networking, and storage hardware. The data center of tomorrow is envisaged as one containing thousands of single-board computing systems deployed in racks. A data center with 1,000 racks over 30,000 square feet would require 10 MW to power the computing infrastructure. At this power dissipation, an additional 5 MW would be needed by the cooling resources to remove the dissipated heat. At $100/MWh, cooling alone would cost $4 million per annum for such a data center. The concept of the Computing Grid, based on coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations, is emerging as the new paradigm in distributed and pervasive computing for scientific as well as commercial applications. We envision a global network of data centers housing an aggregation of computing, networking, and storage hardware. The increased compaction of such devices in data centers has created thermal and energy management issues that inhibit the sustainability of such a global infrastructure. In this paper, we propose the framework of the Energy Aware Grid, which will provide a global utility infrastructure explicitly incorporating energy efficiency and thermal management among data centers. Designed around an energy-aware co-allocator, workload placement decisions will be made across the Grid based on data center energy efficiency coefficients. The coefficient, evaluated by the data center's resource allocation manager, is a complex function of the data center thermal management infrastructure and of seasonal and diurnal variations. A detailed procedure for implementation of a test case is provided, with an estimate of energy savings to justify the economics. An example workload deployment shown in the paper seeks the most energy-efficient data center in the global network of data centers.
The locality-based energy efficiency of a data center is shown to arise from the use of ground-coupled loops in cold climates to lower the ambient temperature for heat rejection, e.g., computing in, and rejecting heat from, a data center at a nighttime ambient of 20 °C in New Delhi, India, while Phoenix, USA is at 45 °C. The efficiency of the cooling system in the New Delhi data center derives from the lower lift from evaporator to condenser. Besides the obvious advantage due to external ambient, the paper also incorporates techniques that rate the efficiency arising from the internal thermo-fluid behavior of a data center in workload placement decisions.
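The cost figure quoted above can be reproduced with straightforward arithmetic: 5 MW of cooling running continuously for a year at $100/MWh.

```python
def annual_cooling_cost_usd(cooling_mw: float, price_per_mwh: float,
                            hours_per_year: float = 8760.0) -> float:
    """Annual cooling energy cost assuming continuous, year-round operation."""
    return cooling_mw * hours_per_year * price_per_mwh

cost = annual_cooling_cost_usd(5.0, 100.0)
# 5 MW * 8760 h * $100/MWh = $4,380,000 — roughly the $4 million cited
```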


Author(s):  
Michael K. Patterson ◽  
Michael Meakins ◽  
Dennis Nasont ◽  
Prasad Pusuluri ◽  
William Tschudi ◽  
...  

Increasing energy-efficient performance built into today's servers has created significant opportunities for expanded Information and Communications Technology (ICT) capabilities. Unfortunately, the power densities of these systems now challenge data center cooling systems and have outpaced the ability of many data centers to support them. One of the persistent problems yet to be overcome in the data center space has been the separate worlds of ICT and facilities design and operations. This paper covers the implementation of a demonstration project in which the integration of these two management systems is used to gain significant energy savings while improving the operations staff's visibility into the full data center, both ICT and facilities. The majority of servers have a host of platform information available to the ICT management network. This demonstration project takes the front panel temperature sensor data from the servers and provides that information to the facilities management system to control the cooling system in the data center. The majority of data centers still use the cooling system return air temperature as the primary control variable to adjust supply air temperature, significantly limiting energy efficiency. Current best practices use a cold aisle temperature sensor to drive the cooling system, but even then the sensor is only a proxy for what really matters: the inlet temperature to the servers. The paper presents a novel control scheme in which the control of the cooling system is split into two loops to maximize efficiency. The first loop controls the cooling fluid and is driven by the temperature at the lowest server in the rack to ensure the correct supply air temperature. The second loop controls the airflow in the cooling system: a variable-speed drive is driven by the temperature difference between the lowest server and the server at the top of the rack.
Controlling to this differential temperature minimizes the amount of air moved (and the energy to move it) while ensuring no recirculation from the hot aisle. Controlling both of these facilities parameters from the servers' data allows optimization of the energy used in the cooling system. Challenges with integrating the ICT management data with the facilities control system are discussed. This is expected to be the most fruitful area for improving data center efficiency over the next several years.
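The two-loop scheme described above can be sketched as a pair of proportional controllers. The gains, setpoints, and clamping limits below are illustrative assumptions, not the demonstration project's actual tuning.

```python
def supply_temp_setpoint(bottom_server_inlet_c: float,
                         target_inlet_c: float = 24.0,
                         gain: float = 0.5,
                         current_setpoint_c: float = 18.0) -> float:
    """Loop 1: trim the cooling-fluid supply temperature so the lowest
    server in the rack sees the target inlet temperature."""
    error = target_inlet_c - bottom_server_inlet_c
    return current_setpoint_c + gain * error

def fan_speed_fraction(bottom_inlet_c: float, top_inlet_c: float,
                       target_delta_c: float = 2.0,
                       gain: float = 0.1,
                       current_fraction: float = 0.6) -> float:
    """Loop 2: drive airflow from the bottom-to-top inlet temperature
    difference; a growing delta signals hot-aisle recirculation at the
    top of the rack, so the variable-speed drive increases airflow."""
    error = (top_inlet_c - bottom_inlet_c) - target_delta_c
    return min(1.0, max(0.1, current_fraction + gain * error))
```

With the assumed numbers, a bottom-server inlet 2 °C above target lowers the supply setpoint by 1 °C, and a bottom-to-top delta above 2 °C ramps the fan up, clamped to its operating range.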


Energies ◽  
2020 ◽  
Vol 13 (2) ◽  
pp. 393 ◽  
Author(s):  
Heran Jing ◽  
Zhenhua Quan ◽  
Yaohua Zhao ◽  
Lincheng Wang ◽  
Ruyang Ren ◽  
...  

Given the temperature regulations and the high energy consumption of air-conditioning (AC) systems in data centers (DCs), natural cold energy has become the focus of energy saving in data centers in winter and the transition seasons. A new type of air–water heat exchanger (AWHE) for the indoor side of DCs was designed to use natural cold energy in order to reduce the power consumption of AC. The AWHE applies micro-heat pipe arrays (MHPAs) with serrated fins on its surface to enhance heat transfer. The performance of the MHPA-AWHE was investigated for different inlet water temperatures and different water and air flow rates. The results showed that the maximum efficiency of the heat exchanger was 81.4%, as determined by the effectiveness–number of transfer units (ε-NTU) method. At the maximum air flow rate of 3000 m³/h and a water inlet temperature of 5 °C, the maximum heat transfer rate was 9.29 kW. The maximum pressure drops on the air side and water side were 339.8 Pa and 8.86 kPa, respectively. The comprehensive evaluation index j/f^(1/2) of the MHPA-AWHE increased by 10.8% compared to a plate–fin heat exchanger with louvered fins. The energy-saving characteristics of an example DC in Beijing were analyzed; when the air flow rate was 2500 m³/h and five MHPA-AWHE modules were used, the payback period of the MHPA-AWHE system reached its minimum of 2.3 years, the shortest and most economical configuration examined. After the retrofit, the maximum comprehensive energy efficiency ratio (EER) of the system was 21.8, electric power was reduced by 28.3% compared to the system before the retrofit, and a control strategy was developed. The comprehensive performance provides a reference for MHPA-AWHE applications in data centers.
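The ε-NTU method cited above defines effectiveness as the actual heat transfer rate divided by the maximum possible rate between the two inlet streams. A minimal sketch of that definition follows; the capacity rate and temperatures in the example are assumed values for illustration, not the paper's measurements.

```python
def effectiveness(q_actual_kw: float, c_min_kw_per_k: float,
                  t_hot_in_c: float, t_cold_in_c: float) -> float:
    """epsilon = Q_actual / Q_max, where Q_max = C_min * (T_hot,in - T_cold,in)
    and C_min is the smaller of the two streams' capacity rates (m_dot * cp)."""
    q_max = c_min_kw_per_k * (t_hot_in_c - t_cold_in_c)
    return q_actual_kw / q_max
```

For instance, transferring 10 kW when the minimum capacity rate is 1 kW/K and the inlets are 20 °C apart gives ε = 0.5, i.e. half the thermodynamic maximum.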


Author(s):  
Kourosh Nemati ◽  
Husam A. Alissa ◽  
Mohammad I. Tradat ◽  
Bahgat Sammakia

The constant increase in data center computational and processing requirements has led to increased IT equipment power demand and to the cooling challenges of high-density (HD) data centers. As a solution, hybrid and liquid cooling systems are widely used as part of HD data center thermal management. This study presents an experimentally based investigation and analysis of the transient thermal performance of a stand-alone server cabinet. The total heat load of the cabinet is remotely controllable, and a rear door heat exchanger with a controllable water flow rate is attached. The cooling performance under two different failure scenarios is investigated: one a failure of the water chiller, the other of the water pump for the Rear Door Heat eXchanger (RDHX). In addition, the study reports the impact of each scenario on the IT equipment thermal response and on the cabinet outlet temperature using a mobile temperature and velocity mesh (MTVM) experimental tool. Furthermore, the study characterizes the heat exchanger cooling performance during both scenarios.


Author(s):  
Cullen Bash ◽  
George Forman

Data center costs for computer power and cooling have been steadily increasing over the past decade. Much work has been done in recent years on understanding how to improve the delivery of cooling resources to IT equipment in data centers, but little attention has been paid to optimizing heat production by considering the placement of application workload. Because certain physical locations inside the data center are more efficient to cool than others, allocating heavy computational workloads onto servers in more efficient locations might bring substantial savings. This paper explores this issue by introducing a workload placement metric that considers the cooling efficiency of the environment. Additionally, results are described from a set of experiments that utilize this metric in a thermally isolated portion of a real data center. The results show that the potential savings are substantial and that further work in this area is needed to exploit the savings opportunity.
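A placement policy of the kind described can be sketched as a greedy assignment that fills the most efficiently cooled locations first. The location scores, data shapes, and the greedy strategy below are illustrative assumptions, not the paper's actual metric.

```python
def place_workloads(workloads_kw: list, locations: dict) -> dict:
    """Greedily assign the heaviest workloads to the locations that are
    cheapest to cool. `locations` maps name -> (capacity_kw,
    cooling_cost_per_kw); returns {workload_index: location_name}."""
    # Most efficient locations (lowest cooling cost per kW) first.
    order = sorted(locations, key=lambda name: locations[name][1])
    remaining = {name: locations[name][0] for name in locations}
    placement = {}
    # Heaviest workloads first, so they land in the cheapest spots.
    for i, load in sorted(enumerate(workloads_kw), key=lambda p: -p[1]):
        for name in order:
            if remaining[name] >= load:
                placement[i] = name
                remaining[name] -= load
                break
    return placement
```

With two locations where the cool aisle costs half as much per kW to cool, both workloads below fit in, and are steered to, the cheaper location until its capacity runs out.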

