Research on Using the Naturally Cold Air and the Snow for Data Center Air-conditioning, and Humidity Control

2012, Vol. 132 (6), pp. 1050-1059
Author(s): Kunikazu Tsuda, Shunichi Tano, Junko Ichino

2014, Vol. 190 (1), pp. 45-58
Author(s): Kunikazu Tsuda, Shunichi Tano, Junko Ichino

Author(s): Uschas Chowdhury, Walter Hendrix, Thomas Craft, Willis James, Ankit Sutaria, ...

Abstract In a data center, electronic equipment such as servers and switches dissipates heat, and the corresponding cooling systems typically account for 25–35% of total energy consumption. The heat load continues to increase with the growing demand for miniaturization and convergence. In 2014, data centers in the U.S. consumed an estimated 70 billion kWh, representing about 1.8% of total U.S. electricity consumption; based on current trends, U.S. data centers are projected to consume approximately 73 billion kWh in 2020 [1]. Many research efforts and strategies have been adopted to minimize energy cost. The dry-bulb temperature recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) for long-term operation and reliability of air-cooled equipment is 18–27°C, and the largest allowable inlet temperature range is 5–45°C, with ASHRAE enabling much broader allowable zones [2]. Understanding and selecting a proper cooling system is especially important for thermal management of IT equipment with high heat loads, such as 1U or 2U multi-core high-end servers and blade servers, which provide more computing per watt. Otherwise, problems such as elevated inlet temperatures due to mixing of hot and cold air, local hot spots, reduced system reliability, increased failures, and downtime may occur. Among many approaches to managing high-density racks, in-row coolers are placed between racks to provide cold air and minimize local hot spots. This paper describes a computational study of in-row coolers for different rack power configurations with and without aisle containment. Both the power and the number of racks are varied to study the rise in IT equipment inlet temperature in a Computational Fluid Dynamics (CFD) model developed in 6SigmaRoom using built-in library items. A comparative analysis is also performed for a typical small-sized non-raised-floor facility to investigate the efficacy and limitations of in-row coolers for thermal management of IT equipment under varying rack heat load and containment. Several other aspects, including a parametric study of variable duct opening areas between racks and in-row coolers, variation of operating flow rate, and failure scenarios, are also studied to find proper flow distribution and uniform outlet temperature, and to predict better performance, energy savings, and reliability. The results are presented as general guidance for flexible, quick installation and safe operation of in-row coolers to improve thermal efficiency.
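The sensible-heat balance underlying such airflow studies is Q = ṁ·cp·ΔT. As a minimal sketch of how rack heat load translates into required cooling airflow, the following Python snippet applies that relation; the air properties, temperature rise, and function names are illustrative assumptions, not values or code from the paper's 6SigmaRoom model.

```python
# Minimal sketch: required airflow for a given rack heat load, using the
# sensible-heat relation Q = m_dot * cp * dT. All constants and names are
# illustrative assumptions, not taken from the paper's 6SigmaRoom model.

RHO_AIR = 1.2         # kg/m^3, air density at ~20 C (assumed)
CP_AIR = 1005.0       # J/(kg*K), specific heat of air (assumed)
M3S_TO_CFM = 2118.88  # 1 m^3/s expressed in cubic feet per minute

def required_airflow_cfm(rack_power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to absorb rack_power_w with a delta_t_k rise."""
    m_dot = rack_power_w / (CP_AIR * delta_t_k)   # mass flow, kg/s
    vol_flow = m_dot / RHO_AIR                    # volumetric flow, m^3/s
    return vol_flow * M3S_TO_CFM

if __name__ == "__main__":
    for power_kw in (10, 20, 30):
        cfm = required_airflow_cfm(power_kw * 1000, delta_t_k=12.0)
        print(f"{power_kw} kW rack -> ~{cfm:.0f} CFM")
```

For example, at a 12 K air-side temperature rise, a 30 kW rack (the high-density figure cited elsewhere in this collection) needs roughly 4,400 CFM of cold air delivered to its inlets.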


2022, Vol. 8, pp. 1365-1371
Author(s): Lele Fang, Qingshan Xu, Tang Yin, Jicheng Fang, Yusong Shi

2020, Vol. 160, pp. 99-111
Author(s): Zongwei Han, Haotian Wei, Xiaoqing Sun, Chenguang Bai, Da Xue, ...

Author(s): Milton Meckler

What remains a growing concern for many users of data centers is their continuing availability following the explosive growth of internet services in recent years. The recent maximizing of data center IT virtualization investments has improved the consolidation of previously underutilized server and cabling resources, resulting in higher overall facility utilization and IT capacity. It has also resulted in excessive levels of equipment heat release, e.g. from high-energy (blade-type) servers and telecommunication equipment, that challenge central and distributed air-conditioning systems delivering air via raised floor or overhead to rack-mounted servers arranged in alternately facing cold and hot aisles (in some cases reaching 30 kW/rack or 300 W/ft2) and returning via end-of-aisle or separate-room CRAC units, which are often found to fight each other, contributing to excessive energy use. Under those circumstances, hybrid indirect liquid-cooling facilities are often required to augment the above-referenced air-conditioning systems in order to prevent overheating and degradation of mission-critical IT equipment, keeping rack-mounted server equipment operating within ASHRAE TC 9.9 prescribed psychrometric limits and IT manufacturers' specifications, beyond which operational reliability cannot be assured. Recent interest in new web-based software and secure cloud computing is expected to further accelerate the growth of data centers, which, according to a recent study, consumed approximately 61 billion kWh of electricity in the U.S. in 2006. Computer servers and supporting power infrastructure for the Internet are estimated to represent 1.5% of all electricity generated, and together with aggregated IT and communications equipment, including PCs currently in use, have been estimated to produce 2% of global carbon emissions. The projected eco-footprint of data centers has therefore become a matter of growing concern. Accordingly, our paper focuses on how best to improve the utilization of the fossil fuels that power data centers and the energy efficiency of related auxiliary cooling and power infrastructures, so as to reduce their eco-footprint and GHG emissions to sustainable levels as soon as possible. To this end, we demonstrate significant comparative savings in annual energy use, and reductions in associated annual GHG emissions, by employing an on-site cogeneration system (in lieu of current reliance on remote electric power generation), and by introducing energy-efficient outside-air (OSA) desiccant-assisted pre-conditioners to maintain Class 1, Class 2, or NEBS indoor air dew points as needed. Operated with modified existing (sensible-only) cooling, distributed air-conditioning, and chiller systems, these pre-conditioners eliminate the need for integral CRAC-unit humidity controls, while an estimated 60 to 80% (virtualized) reduction in the number of servers is achieved within an existing (hypothetical post-consolidation) 3.5 MW demand data center located in the southeastern (and/or southern) U.S., coastal Puerto Rico, or Brazil, characterized by three representative microclimates ranging from moderate to high seasonal OSA coincident design humidity and temperature.
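The comparative-savings claim can be illustrated with a back-of-the-envelope sketch in Python. The 3.5 MW demand figure comes from the abstract, but the emission factors and the effective benefit attributed to cogeneration below are illustrative assumptions, not results from the paper.

```python
# Back-of-the-envelope comparison of annual CO2 emissions for a 3.5 MW
# data center served by remote grid power vs. on-site cogeneration (CHP).
# All factors below are illustrative assumptions, not values from the paper.

DEMAND_MW = 3.5
HOURS_PER_YEAR = 8760

GRID_KG_CO2_PER_KWH = 0.60  # assumed emission factor for a fossil-heavy grid mix
CHP_KG_CO2_PER_KWH = 0.40   # assumed effective factor when recovered waste heat
                            # displaces electric cooling (e.g. absorption chillers)

def annual_emissions_tonnes(kg_per_kwh: float) -> float:
    """Annual CO2 in tonnes for the assumed constant facility demand."""
    kwh = DEMAND_MW * 1000 * HOURS_PER_YEAR
    return kwh * kg_per_kwh / 1000.0

grid = annual_emissions_tonnes(GRID_KG_CO2_PER_KWH)
chp = annual_emissions_tonnes(CHP_KG_CO2_PER_KWH)
print(f"grid: {grid:,.0f} t CO2/yr, CHP: {chp:,.0f} t CO2/yr "
      f"({100 * (grid - chp) / grid:.0f}% reduction)")
```

Under these assumed factors the on-site CHP case cuts annual emissions by about a third; the paper's actual savings depend on the microclimate, the desiccant pre-conditioning, and the degree of server virtualization it models.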


Author(s): Chandrakant D. Patel, Ratnesh K. Sharma, Cullen E. Bash, Monem H. Beitelmal

The information technology industry is in the midst of a transformation to lower the cost of operation through consolidation and better utilization of critical data center resources. Successful consolidation necessitates increasing the utilization of capital-intensive "always-on" data center infrastructure and reducing the recurring cost of power. A need exists, therefore, for an end-to-end physical model that can be used to design and manage dense data centers and determine the cost of operating a data center. The chip-core-to-cooling-tower model must capture the power levels and thermo-fluid behavior of chips, systems, aggregations of systems in racks, rows of racks, room flow distribution, air-conditioning equipment, hydronics, vapor-compression systems, pumps, and heat exchangers. Earlier work has outlined the foundation for creation of a "smart" data center through use of flexible cooling resources and a distributed sensing and control system that can provision the cooling resources based on need. This paper presents a common thermodynamic platform that serves as an evaluation tool and as the basis for a policy-based control engine for such a "smart" data center with much broader reach, from chip core to cooling tower. Computational Fluid Dynamics modeling is performed to determine the computer room air conditioning utilization for a given distribution of heat load and cooling resources in a production data center. The coefficient of performance (COP) of the computer room air-conditioning units, based on their level of utilization, is combined with the COPs of the other cooling resources in the stack to determine the COP of the ensemble. The ensemble COP represents an overall measure of the performance of the heat-removal stack in a data center.
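The ensemble COP has a simple closed form under the simplifying assumption that each stage of the heat-removal stack handles roughly the same heat load Q: stage i draws work W_i = Q / COP_i, so COP_ensemble = Q / Σ W_i = 1 / Σ (1/COP_i). A minimal Python sketch follows; the stage COP values are illustrative assumptions, not the paper's measured data.

```python
# Minimal sketch of the ensemble COP: each stage of the heat-removal stack
# (CRAC units, hydronics/pumps, chiller, cooling tower) is assumed to move
# the same heat load Q while consuming work W_i = Q / COP_i. The ensemble
# COP is total heat removed divided by total work input.
# Stage COP values below are illustrative, not measured data from the paper.

def ensemble_cop(heat_load_kw: float, stage_cops: dict[str, float]) -> float:
    """COP of the whole stack: Q / sum_i (Q / COP_i) = 1 / sum_i (1 / COP_i)."""
    total_work_kw = sum(heat_load_kw / cop for cop in stage_cops.values())
    return heat_load_kw / total_work_kw

stages = {
    "CRAC units": 4.0,       # varies with utilization, per the paper's CFD study
    "chiller": 5.0,
    "pumps/hydronics": 30.0,
    "cooling tower": 20.0,
}
print(f"ensemble COP ~ {ensemble_cop(500.0, stages):.2f}")
```

Note how the stack COP (about 1.9 here) is dominated by its least efficient stages, which is why the paper's utilization-aware CRAC COP matters for the ensemble figure.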

