Energy-Efficiency Through the Integration of Information and Communications Technology Management and Facilities Controls

Author(s):  
Michael K. Patterson ◽  
Michael Meakins ◽  
Dennis Nasont ◽  
Prasad Pusuluri ◽  
William Tschudi ◽  
...  

Increasing energy-efficient performance built into today’s servers has created significant opportunities for expanded Information and Communications Technology (ICT) capabilities. Unfortunately, the power densities of these systems now challenge data center cooling systems and have outpaced the ability of many data centers to support them. One of the persistent problems yet to be overcome in the data center space has been the separation between the ICT and Facilities worlds in design and operations. This paper covers the implementation of a demonstration project in which the integration of these two management systems is used to gain significant energy savings while improving the operations staff’s visibility into the full data center, both ICT and facilities. The majority of servers expose a host of platform information to the ICT management network. This demonstration project takes the front-panel temperature sensor data from the servers and provides that information to the facilities management system to control the cooling system in the data center. The majority of data centers still use the cooling system return air temperature as the primary control variable for adjusting supply air temperature, significantly limiting energy efficiency. Current best practice uses a cold aisle temperature sensor to drive the cooling system, but even then the sensor is only a proxy for what really matters: the inlet temperature to the servers. The paper presents a novel control scheme in which the control of the cooling system is split into two control loops to maximize efficiency. The first control loop governs the cooling fluid and is driven by the inlet temperature of the physically lowest server, ensuring the correct supply air temperature. The second control loop governs the airflow in the cooling system: a variable speed drive is controlled by the differential temperature between the lowest server and the server at the top of the rack. Controlling to this differential temperature minimizes the amount of air moved (and the energy to do so) while ensuring no recirculation from the hot aisle. Controlling both of these facilities parameters from the servers’ data allows optimization of the energy used in the cooling system. Challenges with integrating the ICT management data with the facilities control system are discussed. This is expected to be the most fruitful area for improving data center efficiency over the next several years.
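A minimal sketch of the two-loop idea described above, written as a simple proportional adjustment per control step. The function name, set-points, and gains are illustrative assumptions, not the demonstration project's implementation.

```python
# Illustrative two-loop control step (not the authors' implementation).
def control_step(t_inlet_bottom, t_inlet_top,
                 t_supply_setpoint=24.0,   # assumed target server inlet temperature, degC
                 dt_target=2.0,            # assumed allowable bottom-to-top temperature rise, degC
                 gain_fluid=0.5, gain_fan=0.1):
    """Return adjustments for the two control loops.

    Loop 1: cooling-fluid (supply) temperature is driven by the inlet
            temperature of the lowest server in the rack.
    Loop 2: cooling airflow (variable speed drive) is driven by the
            temperature difference between the bottom and top servers,
            moving just enough air to prevent hot-aisle recirculation.
    """
    # Loop 1: if the bottom server runs warm, lower the supply temperature.
    supply_adjust = -gain_fluid * (t_inlet_bottom - t_supply_setpoint)

    # Loop 2: if the top server is much warmer than the bottom one,
    # recirculation is likely; increase fan speed, otherwise slow down.
    fan_adjust = gain_fan * ((t_inlet_top - t_inlet_bottom) - dt_target)

    return supply_adjust, fan_adjust


# Example: bottom server at 25 degC, top at 29 degC -> cool supply slightly, speed up fans.
print(control_step(25.0, 29.0))
```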

Energies ◽  
2020 ◽  
Vol 13 (17) ◽  
pp. 4378
Author(s):  
Anastasiia Grishina ◽  
Marta Chinnici ◽  
Ah-Lian Kor ◽  
Eric Rondeau ◽  
Jean-Philippe Georges

The energy efficiency of Data Center (DC) operations heavily relies on the DC ambient temperature as well as the performance of its IT and cooling systems. A reliable and efficient cooling system is necessary to produce a persistent flow of cold air to cool servers that are subjected to constantly increasing computational load due to the advent of smart cloud-based applications. Consequently, the increased demand for computing power will inevitably increase the waste heat generated by servers in data centers. To improve a DC thermal profile, which directly influences the energy efficiency and reliability of IT equipment, it is imperative to analyze the thermal characteristics of the IT room. This work employs an unsupervised machine learning technique to uncover weaknesses of a DC cooling system based on real DC thermal monitoring data. The findings of the analysis identify areas for thermal management and cooling improvement that feed into DC recommendations. To identify overheated zones in the DC IT room and the corresponding servers, we analyzed the thermal characteristics of the IT room. The experimental dataset includes measurements of ambient air temperature in the hot aisle of the IT room at the ENEA Portici research center, which hosts the CRESCO6 computing cluster. We use machine learning clustering techniques to identify overheated locations and to categorize computing nodes based on surrounding air temperature ranges abstracted from the data. The principles and approaches employed in this work are replicable for the analysis of the thermal characteristics of any DC, thereby fostering transferability. This paper demonstrates how best practices and guidelines can be applied to the thermal analysis and profiling of a commercial DC based on real thermal monitoring data.
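A minimal sketch of the clustering step described above, assuming per-node hot-aisle temperature summaries as features. The synthetic data, feature choice, and cluster count are assumptions, not the study's configuration.

```python
# Sketch: group nodes by surrounding air temperature and flag the warmest group.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for monitored data: rows = nodes, columns = mean and
# max ambient temperature observed near each node (degC).
temps = np.column_stack([
    rng.normal(32, 2, 100),   # mean hot-aisle temperature per node
    rng.normal(38, 3, 100),   # max hot-aisle temperature per node
])

# Group nodes into temperature ranges; the warmest cluster flags candidate
# overheated zones for further thermal-management attention.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(temps)
hot_cluster = int(np.argmax(kmeans.cluster_centers_[:, 0]))
hot_nodes = np.where(kmeans.labels_ == hot_cluster)[0]
print(f"nodes in warmest cluster: {len(hot_nodes)}")
```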


Author(s):  
Marta Chinnici ◽  
Anastasiia Grishina ◽  
Ah-Lian Kor ◽  
Eric Rondeau ◽  
Jean-Philippe Georges

The energy efficiency of Data Center (DC) operations heavily relies on the performance of its IT and cooling systems. A reliable and efficient cooling system is necessary to produce a persistent flow of cold air to cool servers that are subjected to constantly increasing computational load due to the advent of IoT-enabled smart systems. Consequently, the increased demand for computing power will bring about increased waste heat dissipation in data centers. To improve DC energy efficiency, it is imperative to analyze the thermal characteristics of the IT room, where this waste heat accumulates. This work employs an unsupervised machine learning modelling technique to uncover weaknesses of the DC cooling system based on real DC thermal monitoring data. The findings of the analysis identify areas for energy efficiency improvement that feed into DC recommendations. The methodology employed for this research includes a statistical analysis of the IT room's thermal characteristics and the identification of individual servers that frequently occur in hotspot zones. A critical analysis has been conducted on a large dataset of ambient air temperature measurements from the hot aisle of the ENEA Portici CRESCO6 computing cluster. Clustering techniques have been used for hotspot localization as well as for categorizing nodes based on surrounding air temperature ranges. The principles and approaches covered in this work are replicable for the energy efficiency evaluation of any DC and thus foster transferability. This work showcases the applicability of best practices and guidelines in the context of a real commercial DC, going beyond the existing set of metrics for DC energy efficiency assessment.
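A sketch of the complementary statistical step mentioned above: counting how often each server appears in a hotspot zone across monitoring samples. The hotspot threshold, data layout, and synthetic values are assumptions for illustration.

```python
# Sketch: frequency with which each node exceeds an assumed hotspot threshold.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in: temperature samples (rows = time steps, cols = nodes), degC.
samples = rng.normal(33, 3, size=(500, 48))

HOTSPOT_THRESHOLD = 38.0  # assumed hotspot threshold, degC

# Fraction of time each node sits above the threshold.
hotspot_frequency = (samples > HOTSPOT_THRESHOLD).mean(axis=0)

# Nodes that most frequently occur in hotspot zones (five most frequent here).
frequent = np.argsort(hotspot_frequency)[-5:]
print("nodes most often in hotspot zones:", frequent.tolist())
```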


Author(s):  
Tianyi Gao ◽  
James Geer ◽  
Bahgat G. Sammakia ◽  
Russell Tipton ◽  
Mark Seymour

Cooling power constitutes a large portion of the total electrical power consumption in data centers: approximately 25%–40% of the electricity used within a production data center is consumed by the cooling system. Improving cooling energy efficiency has therefore attracted a great deal of research attention, and many strategies have been proposed for cutting data center energy costs. One effective strategy for increasing cooling efficiency is dynamic thermal management. Another is placing cooling devices (heat exchangers) closer to the source of heat, which is the basic design principle of many hybrid and liquid cooling systems for data centers. Dynamic thermal management of data centers is a huge challenge, because data centers operate under complex dynamic conditions even during normal operation. In addition, hybrid cooling systems introduce additional localized cooling devices, such as in-row cooling units and overhead coolers, which significantly increase the complexity of dynamic thermal management. It is therefore of paramount importance to characterize the dynamic responses of data centers to variations from the different cooling units, such as cooling air flow rate variations. In this study, a detailed computational analysis of an in-row-cooler-based hybrid cooled data center is conducted using a commercially available computational fluid dynamics (CFD) code. A representative CFD model of a raised-floor data center with a hot aisle/cold aisle arrangement is developed. The hybrid cooling system is designed using perimeter CRAH units and localized in-row cooling units. The CRAH unit supplies centralized cooling air to the underfloor plenum, and the cooling air enters the cold aisle through perforated tiles. The in-row cooling unit is located on the raised floor between the server racks; it supplies cooling air directly to the cold aisle and draws hot air from the back of the racks (hot aisle). Two different cooling air sources therefore supply the cold aisle, but through different delivery paths. Several modeling cases are designed to study the transient effects of variations in the flow rates of the two cooling air sources. Combined scenarios of server power and cooling air flow variations are also modeled and studied. The detailed impact of each modeling case on the rack inlet air temperature and cold aisle air flow distribution is studied. The results presented in this work provide an understanding of the effects of air flow variations on the thermal performance of data centers. The results and the corresponding analysis are used to improve the running efficiency of this type of raised-floor hybrid data center using CRAH and in-row cooling (IRC) units.
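The CFD study above tracks rack inlet temperatures as cooling airflow varies. Below is a far simpler mixing estimate, not the paper's CFD model: if the cooling units supply less air than the racks draw, the deficit is made up by recirculated hot-aisle air, raising the rack inlet temperature. All flow rates and temperatures are illustrative assumptions.

```python
# Simple air-mixing balance for rack inlet temperature (illustrative only).
def rack_inlet_temp(supply_flow, rack_flow, t_supply=18.0, t_hot_aisle=35.0):
    """Estimate rack inlet temperature (degC) when the shortfall between
    cooling supply and rack airflow is replaced by recirculated hot air."""
    recirculated = max(rack_flow - supply_flow, 0.0)
    cold = min(supply_flow, rack_flow)
    return (cold * t_supply + recirculated * t_hot_aisle) / rack_flow


# Example: racks draw 10 m^3/s; cooling supply drops from 10 to 8 m^3/s.
for q in (10.0, 9.0, 8.0):
    print(q, round(rack_inlet_temp(q, 10.0), 1))
```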


Author(s):  
Kamran Nazir ◽  
Naveed Durrani ◽  
Imran Akhtar ◽  
M. Saif Ullah Khalid

Due to the high energy demands of data centers and the energy crisis throughout the world, efficient heat transfer in a data center is an active research area. To date, the major emphasis has been on the study of air flow rates and temperature profiles for different rack configurations and tile layouts. In the current work, we consider different hot aisle (HA) and cold aisle (CA) configurations to study the heat transfer inside a data center. In raised-floor data centers where rows of racks are parallel to each other, a conventional cooling system has an equal number of hot and cold aisles when the number of rows of racks is odd. For an even number of rows of racks, whatever configuration of hot/cold aisles is adopted, the number of cold aisles is either one greater or one less than the number of hot aisles, i.e., two cases are possible: case A, n(CA) = n(HA) + 1, and case B, n(CA) = n(HA) − 1, where n(CA) and n(HA) denote the number of cold and hot aisles, respectively. We perform numerical simulations for a two-rack (case 1) and a four-rack (case 2) data center. The assumption of constant pressure below the plenum reduces the problem domain to the above-plenum region only. To determine which configuration provides higher heat transfer across the servers, we quantify the heat transfer on the basis of temperature differences across the racks, and validate it against mass flow rates at the rack outlets. On the basis of the results obtained, we conclude that for a data center with an even number of rows of racks, using more cold aisles than hot aisles provides higher heat transfer across the servers. These results provide guidance on the design and layout of a data center.
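A sketch of the comparison metric used above: the heat removed across a rack is inferred from the air mass flow rate and the temperature rise across it, Q = ṁ·cp·ΔT. The flow rates and temperatures below are illustrative assumptions, not the paper's simulation results.

```python
# Air-side energy balance across a rack (illustrative values only).
CP_AIR = 1005.0  # specific heat of air, J/(kg K)

def rack_heat_transfer(mass_flow_kg_s, t_outlet_c, t_inlet_c):
    """Heat transferred across a rack (W): Q = m_dot * cp * dT."""
    return mass_flow_kg_s * CP_AIR * (t_outlet_c - t_inlet_c)


# Example comparison for an even number of rows (assumed numbers):
case_a = rack_heat_transfer(1.2, 35.0, 22.0)   # n(CA) = n(HA) + 1
case_b = rack_heat_transfer(1.0, 35.0, 24.0)   # n(CA) = n(HA) - 1
print(f"case A: {case_a/1000:.1f} kW, case B: {case_b/1000:.1f} kW")
```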


Author(s):  
Abdlmonem H. Beitelmal ◽  
Drazen Fabris

New server and data center metrics are introduced to facilitate proper evaluation of data center power and cooling efficiency. These metrics will be used to help reduce the cost of operation and to provision data center cooling resources. The most relevant variables for these metrics are identified: the total facility power, the servers’ idle power, the average server utilization, the cooling resources’ power, and the total IT equipment power. These metrics can be used to characterize and classify server and data center performance and energy efficiency regardless of size and location.
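The abstract lists the variables behind the proposed metrics without giving their formulas. The sketch below only computes the widely used PUE and an assumed utilization-weighted variant from those same variables; it is not the authors' metric definition.

```python
# Illustrative metrics built from the listed variables (assumed formulas).
def pue(total_facility_power_kw, it_power_kw):
    """Power Usage Effectiveness: total facility power per unit of IT power."""
    return total_facility_power_kw / it_power_kw


def useful_work_ratio(it_power_kw, idle_power_kw, avg_utilization):
    """Assumed illustrative metric: share of IT power spent above idle,
    weighted by average server utilization (0..1)."""
    return avg_utilization * (it_power_kw - idle_power_kw) / it_power_kw


print(pue(1500.0, 1000.0))                    # e.g. 1.5
print(useful_work_ratio(1000.0, 400.0, 0.6))  # e.g. 0.36
```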


Author(s):  
Thomas J. Breen ◽  
Ed J. Walsh ◽  
Jeff Punch ◽  
Amip J. Shah ◽  
Niru Kumari ◽  
...  

As the energy footprint of data centers continues to increase, models that allow for “what-if” simulations of different data center design and management paradigms will be important. Prior work by the authors described a multi-scale energy efficiency model that allows for evaluating the coefficient of performance of the data center ensemble (COPGrand) and demonstrated the utility of such a model for choosing operational set-points and evaluating design trade-offs. However, experimental validation of these models poses a challenge because of the complexity involved in tailoring such a model to legacy data centers, with shared infrastructure and limited control over IT workload. Further, test facilities with dummy heat loads or artificial racks in lieu of IT equipment generally have limited utility in validating end-to-end models, owing to the inability of such loads to mimic phenomena such as fan scalability. In this work, we describe the experimental analysis conducted in a special test chamber and in a live data center facility. The chamber, focusing on system-level effects, is loaded with an actual IT rack, and a compressor delivers chilled air to the chamber at a preset temperature. By varying the load in the IT rack as well as the air delivery parameters, such as flow rate and supply temperature, a setup is created that simulates the system level of a data center. Experimental tests within a live data center facility are also conducted, where the operating conditions of the cooling infrastructure, such as fluid temperatures and flow rates, are monitored and analyzed to determine effects such as air flow recirculation and heat exchanger performance. Using the experimental data, a multi-scale model configuration emulating the data center can be defined. We compare the results from this experimental analysis to a multi-scale energy efficiency model of the data center and discuss the accuracies as well as the inaccuracies of such a model. Difficulties encountered in the experimental work are discussed. The paper concludes by discussing areas for improvement in such modeling and experimental evaluation. Further validation of the complete multi-scale data center energy model is planned.
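A hedged sketch of the ensemble coefficient of performance referred to above, taken here as total IT heat load divided by the total power drawn by the cooling infrastructure across scales. The breakdown and values below are assumptions for illustration, not the authors' validated model.

```python
# Illustrative ensemble COP: heat removed per unit of total cooling power.
def cop_grand(it_heat_kw, chiller_kw, crah_fan_kw, pump_kw, server_fan_kw):
    """Assumed form: IT heat load over the sum of cooling power at all scales."""
    cooling_power = chiller_kw + crah_fan_kw + pump_kw + server_fan_kw
    return it_heat_kw / cooling_power


print(cop_grand(it_heat_kw=500.0, chiller_kw=150.0,
                crah_fan_kw=40.0, pump_kw=15.0, server_fan_kw=25.0))
```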


2019 ◽  
Vol 38 (S1) ◽  
Author(s):  
Brett McDowall ◽  
Samuel Mills

This paper examines the hosting options for electronic civil registration and vital statistics (CRVS) systems, particularly the use of data centers versus cloud-based solutions. A data center is a facility that houses computer systems and associated hardware and software components, such as network and storage systems, power supplies, environmental controls, and security devices. An alternative to using a data center is cloud-based hosting, a virtual data center hosted by a public cloud provider. The cloud is used on a pay-as-you-go basis and does not require purchasing and maintaining data center hardware. It also provides more flexibility for continuous innovation in line with evolving information and communications technology.


Energies ◽  
2019 ◽  
Vol 12 (15) ◽  
pp. 2996 ◽  
Author(s):  
Jinkyun Cho ◽  
Beungyong Park ◽  
Yongdae Jeong

If a data center experiences a system outage or fault conditions, it becomes difficult to provide a stable and continuous information technology (IT) service. Therefore, it is critical to design and implement a backup system so that stability can be maintained even in emergency (unforeseen) situations. In this study, an actual 20 MW data center project was analyzed to evaluate the thermal performance of an IT server room during a cooling system outage under six fault conditions. In addition, a method of organizing and systematically managing operational stability and energy efficiency verification was identified for data center construction in accordance with the commissioning process. Up to a chilled water supply temperature of 17 °C and a computer room air handling unit supply air temperature of 24 °C, the temperature of the air flowing into the IT server room stayed within the allowable range specified by the American Society of Heating, Refrigerating and Air-Conditioning Engineers standard (18–27 °C), and allowable operation could be maintained for approximately 320 s after the cooling system outage. Starting at a chilled water supply temperature of 18 °C and a supply air temperature of 25 °C, a rapid temperature increase occurred, which is a serious cause of IT equipment failure. Because of the use of cold aisle containment and designs with relatively high chilled water and supply air temperatures, a rapid temperature increase inside an IT server room is likely during a cooling system outage; thus, the backup system must be activated within 300 s. It is essential to understand the operational characteristics of data centers and to design optimal cooling systems to ensure the reliability of high-density data centers. In particular, these physical results must be considered together with an integrated review of the time required for emergency cooling equipment to start and the availability time of the backup system.
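A rough lumped-capacitance estimate, not the paper's measurement, of how long a contained cold aisle stays within the ASHRAE limit after a cooling outage. The thermal mass, heat load, and starting temperature below are illustrative assumptions.

```python
# Lumped estimate of time until the allowable temperature limit is reached.
def time_to_limit(it_load_kw, air_mass_kg, t_start_c, t_limit_c=27.0,
                  cp_air=1.005):
    """Seconds until the trapped air mass reaches the allowable limit,
    assuming all IT heat goes into that air with no residual cooling.
    cp_air is in kJ/(kg K), so kW / (kg * kJ/(kg K)) gives degC per second."""
    heating_rate = it_load_kw / (air_mass_kg * cp_air)
    return (t_limit_c - t_start_c) / heating_rate


# Example: 200 kW of IT load heating 5,000 kg of room air from 24 degC.
print(round(time_to_limit(200.0, 5000.0, 24.0)), "s")
```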


Author(s):  
Chandrakant Patel ◽  
Ratnesh Sharma ◽  
Cullen Bash ◽  
Sven Graupner

Computing will be pervasive, and the enablers of pervasive computing will be data centers housing computing, networking and storage hardware. The data center of tomorrow is envisaged as one containing thousands of single-board computing systems deployed in racks. A data center with 1000 racks occupying over 30,000 square feet would require 10 MW to power the computing infrastructure. At this power dissipation, an additional 5 MW would be needed by the cooling resources to remove the dissipated heat. At $100/MWh, the cooling alone would cost $4 million per annum for such a data center. The concept of the Computing Grid, based on coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations, is emerging as the new paradigm in distributed and pervasive computing for scientific as well as commercial applications. We envision a global network of data centers housing an aggregation of computing, networking and storage hardware. The increased compaction of such devices in data centers has created thermal and energy management issues that inhibit the sustainability of such a global infrastructure. In this paper, we propose the framework of the Energy Aware Grid, a global utility infrastructure that explicitly incorporates energy efficiency and thermal management among data centers. Designed around an energy-aware co-allocator, the framework makes workload placement decisions across the Grid based on data center energy efficiency coefficients. The coefficient, evaluated by the data center’s resource allocation manager, is a complex function of the data center thermal management infrastructure and of seasonal and diurnal variations. A detailed procedure for implementing a test case is provided, with an estimate of energy savings to justify the economics. An example workload deployment shown in the paper seeks the most energy efficient data center in the global network of data centers. The locality-based energy efficiency of a data center is shown to arise from the use of ground-coupled loops in cold climates to lower the ambient temperature for heat rejection, e.g., computing in, and rejecting heat from, a data center at a nighttime ambient of 20°C in New Delhi, India, while Phoenix, USA is at 45°C. The efficiency of the cooling system in the New Delhi data center derives from the lower lift from evaporator to condenser. Besides the obvious advantage of a favorable external ambient, the paper also incorporates techniques that rate the efficiency arising from the internal thermo-fluid behavior of a data center into the workload placement decision.
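A sketch of the workload-placement idea above: an energy-aware co-allocator picks the data center whose efficiency coefficient implies the lowest cooling energy for a given heat load. The coefficient here is a simple COP-style number chosen for illustration; the paper's coefficient also folds in internal thermo-fluid behavior and seasonal and diurnal variation.

```python
# Illustrative energy-aware placement across a global network of data centers.
def place_workload(heat_load_kw, site_coefficients):
    """Return the site with the lowest estimated cooling power for the load,
    where each coefficient is an assumed COP-style efficiency number."""
    cooling_power = {site: heat_load_kw / coeff
                     for site, coeff in site_coefficients.items()}
    return min(cooling_power, key=cooling_power.get), cooling_power


# Illustrative coefficients: a cooler nighttime ambient (e.g. New Delhi at
# 20 degC) supports a higher COP than a 45 degC daytime ambient (e.g. Phoenix).
sites = {"new_delhi_night": 4.0, "phoenix_day": 2.5}
print(place_workload(100.0, sites))
```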

