Consideration for Running Data Center at High Temperatures and Using Free Air Cooling

Author(s):  
Yongzhan He ◽  
Guofeng Chen ◽  
Jiajun Zhang ◽  
Tianyu Zhou ◽  
Tao Liu ◽  
...  

The advent of the big data era, the rapid development of the mobile internet, and the rising demand for cloud computing services require ever more compute capability from data centers. This increase will most likely come from higher rack and room power densities or from the construction of new Internet data centers. But growth in a data center's business-critical IT equipment (servers, hubs, routers, wiring patch panels, and other network appliances), not to mention the infrastructure needed to keep these devices alive and protected, encroaches on another IT goal: reducing long-term energy usage. Large Internet data centers are looking at every possible way to reduce cooling cost and improve efficiency. One emerging trend in the industry is to move to higher-ambient data center operation and to use air-side economizers. However, these two trends can have significant implications for corrosion risk in data centers. The prevailing practice surrounding data centers has often been "the colder, the better." However, some leading server manufacturers and data center efficiency experts share the opinion that data centers can run far hotter than they do today without sacrificing uptime, with large savings in both cooling-related costs and CO2 emissions. Why do we need to increase temperatures? Cooling a data center requires a large refrigeration system that is an energy hog, and the capital, maintenance, and operating costs of the cooling infrastructure are a heavy burden. Ahuja et al. [1] studied cooling path management in data centers at typical as well as higher ambient operating temperatures. High-temperature ambient (HTA) operation combined with corrosion-resistance technology can reduce the required refrigeration output, and this innovation opens a new direction for data centers. Note that HTA does not mean the higher the better; before embracing HTA, two key points need to be addressed and understood.
Firstly, server stability and the optimal temperature from the data center perspective; secondly, corrosion-resistant technology. With fresh air cooling, the server must tolerate seasonal and diurnal temperature variation, with temperatures that can exceed 35 °C; to that extent, HTA design is the premise of corrosion-resistant design. In this paper, we present methods to realize precise HTA operation along with corrosion-resistant technology, achieved through an orchestrated collaboration between the IT and cooling infrastructures.
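The economics behind HTA can be illustrated with a simple count of free-cooling hours: every degree added to the supply-air setpoint increases the fraction of the year in which outside air alone is cold enough. The sketch below is illustrative only; the temperature series and setpoints are made-up values, not data from this paper.

```python
# Sketch: how raising the supply-air setpoint expands free-cooling hours.
# Hourly outdoor temperatures would come from real weather data; the toy
# series below ramps daily from 5 to about 34 degrees C.

def free_cooling_hours(outdoor_temps_c, setpoint_c):
    """Count hours in which outside air alone could meet the setpoint."""
    return sum(1 for t in outdoor_temps_c if t <= setpoint_c)

# Toy year: 8760 hourly readings with a repeating daily ramp.
temps = [20 + 15 * ((h % 24) / 24 - 0.5) * 2 for h in range(8760)]

hours_at_22 = free_cooling_hours(temps, 22.0)   # conventional setpoint
hours_at_30 = free_cooling_hours(temps, 30.0)   # HTA setpoint
print(hours_at_22, hours_at_30)
```

Even in this toy climate, the higher setpoint converts thousands of additional hours per year to chiller-free operation, which is the energy argument for HTA.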

2020 ◽  
Vol 142 (2) ◽  
Author(s):  
Oluwaseun Awe ◽  
Jimil M. Shah ◽  
Dereje Agonafer ◽  
Prabjit Singh ◽  
Naveen Kannan ◽  
...  

Abstract Airside economizers lower the operating cost of data centers by reducing or eliminating mechanical cooling. They, however, increase the risk of reliability degradation of information technology (IT) equipment due to contaminants. IT equipment manufacturers have tested equipment performance and guarantee the reliability of their equipment in environments within ISA 71.04-2013 severity level G1 and the ASHRAE recommended temperature-relative humidity (RH) envelope. IT equipment manufacturers require data center operators to meet all the specified conditions consistently before honoring the warranty on equipment failure. To determine the reliability of electronic hardware in higher-severity conditions, field data obtained from real data centers are required. In this study, a corrosion classification coupon experiment per ISA 71.04-2013 was performed to determine the severity level of a research data center (RDC) located in an industrial area of hot and humid Dallas. The temperature-RH excursions were analyzed based on time series and weather data bin analysis using trend data for the duration of operation. After some period, a failure was recorded on two power distribution units (PDUs) located in the hot aisle. The damaged hardware and other hardware were evaluated, and a cumulative corrosion damage study was carried out. A hypothetical estimate of the end of life of components is provided to determine free air-cooling hours for the site. The fact that not a single server operated with fresh air cooling failed shows that using evaporative/free air cooling is not detrimental to IT equipment reliability. This study, however, must be repeated in other geographical locations to determine whether the contamination effect is location dependent.
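The severity levels referenced above come from the copper reactivity rate measured on corrosion classification coupons. A minimal sketch of that classification, using the commonly quoted ISA-71.04-2013 copper thresholds (angstroms of film growth per 30 days):

```python
# Sketch of the ISA-71.04-2013 severity classification used in coupon
# studies: copper corrosion film thickness (angstroms per 30 days) maps
# to a severity level. Thresholds are as commonly quoted from the standard.

def isa_severity(copper_angstroms_per_30_days):
    a = copper_angstroms_per_30_days
    if a < 300:
        return "G1 (mild)"
    elif a < 1000:
        return "G2 (moderate)"
    elif a < 2000:
        return "G3 (harsh)"
    return "GX (severe)"

print(isa_severity(150))   # within the G1 warranty envelope
print(isa_severity(450))   # a G2 reading, above the warranty envelope
```

Equipment warranties in the abstract above hinge on the first branch: anything at or beyond 300 angstroms per 30 days falls outside the guaranteed G1 environment.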



Author(s):  
Veerendra Mulay ◽  
Saket Karajgikar ◽  
Dereje Agonafer ◽  
Roger Schmidt ◽  
Madhusudan Iyengar

The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, a raised-floor plenum with Computer Room Air Conditioners (CRACs) providing cold air is a popular operating strategy. Air cooling of a data center, however, may not address the situation where more energy is expended in the cooling infrastructure than in the thermal load of the data center. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute servers' equipment footprint by the year 2010. These trend charts also indicate that heat load per product footprint doubled for storage servers during 2000–2004; over the same period, heat load per product footprint for compute servers tripled. Among the systems currently available and shipping, many racks exceed 20 kW. Such high heat loads have raised concerns over the limits of air cooling of data centers, similar to the limits of air cooling of microprocessors. A hybrid cooling strategy that incorporates liquid cooling along with air cooling can be very efficient in these situations. A parametric study of such a solution is presented in this paper. A representative data center with 40 racks is modeled using a commercially available CFD code. The variation in rack inlet temperature with tile openings and underfloor plenum depths is reported.
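The concern about the limits of air cooling can be made concrete with the standard sensible-heat balance Q = ρ·V·cp·ΔT. Below is a back-of-envelope sketch, using nominal air properties, of the airflow a 20 kW rack demands; the property values are assumptions, not figures from the paper.

```python
# Back-of-envelope check on air-cooling limits: the airflow a rack needs
# follows from Q = rho * V * cp * dT. Property values are nominal
# assumptions for air at room conditions.

RHO_AIR = 1.2       # air density, kg/m^3
CP_AIR = 1005.0     # specific heat of air, J/(kg K)

def required_airflow_m3s(heat_load_w, delta_t_k):
    """Volumetric airflow needed to carry heat_load_w at a given air-side dT."""
    return heat_load_w / (RHO_AIR * CP_AIR * delta_t_k)

flow = required_airflow_m3s(20_000, 12.0)   # a 20 kW rack, 12 K air temperature rise
print(round(flow, 2), "m^3/s")              # roughly 1.4 m^3/s (~2900 CFM)
```

Nearly 3000 CFM through a single rack is why heat loads beyond 20 kW push raised-floor air delivery toward its practical limits and motivate the hybrid liquid/air strategy studied here.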


Author(s):  
Magnus K. Herrlin ◽  
Michael K. Patterson

Increased Information and Communications Technology (ICT) capability and improved energy efficiency of today's server platforms have created opportunities for the data center operator. However, these platforms also strain the capabilities of many data center cooling systems. New design considerations are necessary to effectively cool high-density data centers. Challenges exist in both capital and operational costs in the thermal management of ICT equipment. This paper details how air cooling can be used to address both challenges, providing a low Total Cost of Ownership (TCO) and a highly energy-efficient design at high heat densities. We consider trends in heat generation from servers and how the resulting densities can be effectively cooled. A number of key factors are reviewed, and appropriate design considerations are developed to air cool 2000 W/ft2 (21,500 W/m2). Although such data centers demand greater engineering, they can be built with current technology, hardware, and best practices. The density limitations are shown to arise primarily from airflow management and cooling system controls. Computational Fluid Dynamics (CFD) modeling is discussed as a key part of the analysis, allowing high-density designs to be successfully implemented. Well-engineered airflow management and control systems designed to minimize airflow by preventing mixing of cold and hot airstreams allow high heat densities. Energy efficiency is gained by treating the whole equipment room as part of the airflow management strategy, making use of the extended environmental ranges now recommended, and implementing air-side economizers.


Author(s):  
Levente J. Klein ◽  
Sergio A. Bermudez ◽  
Fernando J. Marianno ◽  
Hendrik F. Hamann ◽  
Prabjit Singh

Many data center operators are considering converting from mechanical cooling to free air cooling to improve energy efficiency. The main advantage of free air cooling is the elimination of chiller and air conditioning unit operation when the outdoor temperature falls below the data center temperature setpoint. The accidental introduction of gaseous pollutants into the data center along with the fresh air, and potential latency in the control infrastructure's response to extreme events, are among the main concerns for adopting outside air cooling in data centers. Recent developments in ultra-high-sensitivity corrosion sensors enable real-time monitoring of air quality and thus allow a better understanding of how airflow, relative humidity, and temperature fluctuations affect corrosion rates. Both the sensitivity of the sensors and the wireless network's ability to detect and react rapidly to any contamination event make them reliable tools for preventing corrosion-related failures. A feasibility study is presented for eight legacy data centers evaluated for implementing free air cooling.
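A monitoring loop of the kind described might compare the measured corrosion rate against the G1 limit in real time. The following is a hypothetical sketch, not the authors' control scheme; the threshold is the commonly quoted ISA-71.04 copper limit.

```python
# Illustrative monitoring sketch (not from the paper): flag when measured
# copper film growth, extrapolated to a 30-day equivalent, would exceed
# the ISA-71.04 G1 limit of 300 angstroms per 30 days.

G1_LIMIT_A_PER_30D = 300.0

def exceeds_g1(film_growth_angstroms, elapsed_hours):
    """Extrapolate measured film growth to 30 days and compare to G1."""
    rate_30d = film_growth_angstroms * (30 * 24) / elapsed_hours
    return rate_30d > G1_LIMIT_A_PER_30D

# 12 angstroms of film growth in 24 hours -> 360 A/30 days: raise an alarm
# (e.g., close the economizer dampers and revert to mechanical cooling).
print(exceeds_g1(12.0, 24.0))
# 5 angstroms in 24 hours -> 150 A/30 days: within G1, keep free cooling.
print(exceeds_g1(5.0, 24.0))
```

The latency concern in the abstract is exactly about how quickly such a loop can detect an excursion and actuate the dampers before damage accumulates.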


2008 ◽  
Vol 130 (4) ◽  
Author(s):  
Emad Samadiani ◽  
Yogendra Joshi ◽  
Farrokh Mistree

In the near future, electronic cabinets of data centers will house high performance chips with heat fluxes approaching 100 W/cm2 and associated high volumetric heat generation rates. With the power trends in the electronic cabinets indicating 60 kW cabinets in the near future, the current cooling systems of data centers will be insufficient and new solutions will need to be explored. Accordingly, the key issue that merits investigation is identifying and satisfying the needed specifications of the new thermal solutions, considering the design environment of the next generation data centers. Anchoring our work in the open engineering system paradigm, we identify the requirements of the future thermal solutions and explore various design specifications of an ideally open thermal solution for a next generation data center. To approach an open cooling system for the future data centers, the concept of a thermal solution centered on the multiscale (multilevel) nature of the data centers is discussed. The potential of this solution to be open, along with its theoretical advantages compared with the typical air-cooling solutions, is demonstrated through some scenarios. The realization problems and the future research needs are highlighted to achieve a practical open multiscale thermal solution in data centers. Such solution is believed to be both effective and efficient for the next generation data centers.


2016 ◽  
Vol 138 (1) ◽  
Author(s):  
Dustin W. Demetriou ◽  
Vinod Kamath ◽  
Howard Mahaney

The generation-to-generation information technology (IT) performance and density demands continue to drive innovation in data center cooling technologies. For many applications, the ability to efficiently deliver cooling via traditional chilled air cooling approaches has become inadequate. Water cooling has been used in data centers for more than 50 years to improve heat dissipation, boost performance, and increase efficiency. While water cooling can undoubtedly have a higher initial capital cost, water cooling can be very cost effective when looking at the true life cycle cost of a water-cooled data center. This study aims at addressing how one should evaluate the true total cost of ownership (TCO) for water-cooled data centers by considering the combined capital and operational cost for both the IT systems and the data center facility. It compares several metrics, including return-on-investment for three cooling technologies: traditional air cooling, rack-level cooling using rear door heat exchangers, and direct water cooling (DWC) via cold plates. The results highlight several important variables, namely, IT power, data center location, site electric utility cost, and construction costs and how each of these influences the TCO of water cooling. The study further looks at implementing water cooling as part of a new data center construction project versus a retrofit or upgrade into an existing data center facility.
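The life-cycle argument above reduces to comparing capital cost plus discounted operating cost. Below is a toy sketch of such a TCO comparison, with placeholder numbers rather than figures from the study.

```python
# Minimal TCO sketch in the spirit of the comparison above: capital cost
# plus the present value of annual energy cost over the facility life.
# All numbers are placeholders, not figures from the study.

def tco(capex, annual_energy_kwh, utility_rate, years, discount=0.05):
    """Capital plus present value of energy cost over the planning horizon."""
    opex = sum(annual_energy_kwh * utility_rate / (1 + discount) ** y
               for y in range(1, years + 1))
    return capex + opex

# Air cooling: cheaper to build, more energy; water cooling: the reverse.
air = tco(capex=1_000_000, annual_energy_kwh=4_000_000, utility_rate=0.10, years=10)
water = tco(capex=1_400_000, annual_energy_kwh=2_500_000, utility_rate=0.10, years=10)
print(air > water)   # water cooling wins on life cycle cost in this toy case
```

The study's point is that the outcome of this comparison flips with IT power, location, utility rate, and construction cost, which is why each of those variables matters to the TCO.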


Author(s):  
Veerendra Mulay ◽  
Dereje Agonafer ◽  
Roger Schmidt

The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, a raised-floor plenum with Computer Room Air Conditioners (CRACs) providing cold air is a popular operating strategy. Air cooling of a data center, however, may not address the situation where more energy is expended in the cooling infrastructure than in the thermal load of the data center. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute servers' equipment footprint by the year 2010. These trend charts also indicate that heat load per product footprint doubled for storage servers during 2000–2004; over the same period, heat load per product footprint for compute servers tripled. Among the systems currently available and shipping, many racks exceed 20 kW. Such high heat loads have raised concerns over the limits of air cooling of data centers, similar to the limits of air cooling of microprocessors. Thermal management of such dense data center clusters using liquid cooling is presented.


Author(s):  
Jimil M. Shah ◽  
Oluwaseun Awe ◽  
Pavan Agarwal ◽  
Iziren Akhigbe ◽  
Dereje Agonafer ◽  
...  

Deployment of air-side economizers in data centers is rapidly gaining acceptance as a way to reduce energy cost by reducing the hours of operation of CRAC units. Use of air-side economizers carries the associated risk of introducing gaseous and particulate contamination into data centers, thus degrading the reliability of information technology (IT) equipment. Sulfur-bearing gaseous contamination is of concern because it attacks the copper and silver metallization of electronic components, causing electrical opens and/or shorts. Particulate contamination with low deliquescence relative humidity is of concern because it becomes wet, and therefore electrically conductive, under normal data center relative humidity conditions. IT equipment manufacturers guarantee the reliability of their equipment operating in environments within ISA 71.04-2013 severity level G1 and within the ASHRAE recommended temperature-relative humidity envelope. The challenge is to determine the reliability-degrading effect of contamination severity levels higher than G1 and of the allowable temperature and humidity ranges A1–A3 well outside the recommended range. This paper is a first attempt at addressing this challenge by studying the cumulative corrosion damage to IT equipment operated in an experimental data center located in Dallas, known to have contaminated air with ISA 71.04-2013 severity level G2. The data center is cooled using an air-side economizer. This study serves several purposes, including the correlation of equipment reliability to levels of airborne corrosive contaminants and the study of the degree of reliability degradation when the equipment is operated, outside the recommended envelope, in the allowable temperature-relative humidity range in geographies with high levels of gaseous and particulate contamination. The operating and external conditions of a modular data center, located in a Dallas industrial area and using an air-side economizer, are described.
The reliability degradation of servers exposed to outside air via an air-side economizer was determined qualitatively by examining the corrosion of components in the servers and comparing the results to the corrosion of components in a non-operating server stored in a protective environment. The corrosion-related reliability of the servers over almost the life of the product was related to the continuously recorded temperature and relative humidity for the duration of the experiment. This work provides guidance for data center administrators in similar environments. From an industry perspective, it should be noted that in four years of operation in the hot and humid Dallas climate, using only evaporative or fresh air cooling, we have not seen a single server failure in our research pod. That performance should highlight an opportunity for significant energy savings for data center operators in a much broader geographic area than currently envisioned for evaporative cooling.
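The temperature-RH trend analysis described here is essentially a bin count against an envelope. Below is a sketch using a simplified rectangular envelope; the true ASHRAE recommended envelope is dew-point limited, so these bounds are an approximation, and the trend data are made up.

```python
# Sketch of a temperature-RH bin analysis: count hours inside vs outside
# a simplified rectangular envelope (approximately 18-27 C, 8-60% RH).
# The real ASHRAE recommended envelope is dew-point limited; this is a
# deliberately simplified stand-in.

def bin_hours(trend, t_lo=18.0, t_hi=27.0, rh_lo=8.0, rh_hi=60.0):
    """Return (hours inside envelope, excursion hours) for (temp, RH) pairs."""
    inside = sum(1 for t, rh in trend
                 if t_lo <= t <= t_hi and rh_lo <= rh <= rh_hi)
    return inside, len(trend) - inside

# Toy hourly trend data: (temperature in C, relative humidity in %).
trend = [(22, 45), (26, 58), (29, 40), (21, 70), (25, 50)]
inside, outside = bin_hours(trend)
print(inside, outside)   # 3 hours inside the envelope, 2 excursions
```

Run over a multi-year trend log, counts like these are what relate cumulative corrosion damage to the time spent in excursions outside the recommended envelope.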


Author(s):  
Seungho Mok ◽  
Yogendra K. Joshi ◽  
Satish Kumar ◽  
Ronald R. Hutchins

This study focuses on developing computational models for hybrid or liquid-cooled data centers that may reutilize waste heat. A data center with 17 fully populated racks of IBM LS20 blade servers, which consumes 408 kW at maximum load, is considered. The hybrid cooling system uses liquid to remove the heat produced by high-power components, while the remaining low-power components are cooled by air. The paper presents three hybrid cooling scenarios. For the first two cases, air is cooled by a direct expansion (DX) cooling system with an air-side economizer. For the cooling water, by contrast, two different approaches are investigated: an air-cooled chiller and ground water through a liquid-to-liquid heat exchanger. Waste heat reuse for pre-heating building water in co-located facilities is also investigated for the second scenario. In addition to the hybrid cooling models, a fully liquid-cooled system is modeled as the third scenario for comparison with the hybrid cooling systems. By linking the computational models, power usage effectiveness (PUE) for all scenarios can be calculated for selected geographical locations and data center parameters. The paper also presents detailed analyses of the cooling components considered and comparisons of the PUE results.
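PUE, the metric used to compare the scenarios, is total facility power divided by IT power. A sketch using the study's 408 kW IT load with hypothetical cooling and ancillary overheads (the overhead figures are made up for illustration):

```python
# PUE as used in comparisons like the one above: total facility power
# divided by IT power. The component breakdown here is illustrative,
# not taken from the paper.

def pue(it_kw, cooling_kw, other_kw):
    """Power usage effectiveness: total facility power over IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# The 408 kW IT load from the study, with hypothetical overheads for two
# of the scenario types sketched in the abstract.
dx_with_economizer = pue(408, cooling_kw=90, other_kw=30)
fully_liquid = pue(408, cooling_kw=40, other_kw=30)
print(round(dx_with_economizer, 2), round(fully_liquid, 2))
```

A lower cooling overhead moves PUE toward its ideal value of 1.0, which is how the linked models let the scenarios be ranked across locations and parameters.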

