Reduced Order Thermal Modeling of Data Centers via Distributed Sensor Data

2012 ◽  
Vol 134 (4) ◽  
Author(s):  
Emad Samadiani ◽  
Yogendra Joshi ◽  
Hendrik Hamann ◽  
Madhusudan K. Iyengar ◽  
Steven Kamalsy ◽  
...  

In this paper, an effective and computationally efficient proper orthogonal decomposition (POD) based reduced order modeling approach is presented, which utilizes selected sets of observed thermal sensor data inside the data center to help predict the data center temperature field as a function of the air flow rates of computer room air conditioning (CRAC) units. The approach is demonstrated through application to an operational data center of 102.2 m2 (1100 square feet) with a hot and cold aisle arrangement of racks cooled by one CRAC unit. While the thermal data throughout the facility can be collected in about 30 min using a 3D temperature mapping tool, the POD method is able to generate the temperature field throughout the data center in less than 2 s on a high-end desktop personal computer (PC). Comparing the obtained POD temperature fields with the experimentally measured data for two different values of CRAC flow rates shows that the method can predict the temperature field with an average error of 0.68 °C or 3.2%. The maximum local error is around 8 °C, but the total number of points where the local error is larger than 1 °C is only ∼6% of the total domain points.
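The POD workflow the abstract describes can be sketched in a few lines: stack measured temperature fields as snapshot columns, extract a truncated basis via the SVD, and interpolate the modal coefficients to a new CRAC flow rate. This is a minimal illustration with synthetic data, not the authors' exact algorithm; the snapshot counts, flow-rate range, and interpolation scheme are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each column is one measured temperature field ("snapshot") taken at a
# different CRAC flow rate; rows are spatial locations in the room.
n_points, n_snapshots = 500, 6
snapshots = 20.0 + rng.random((n_points, n_snapshots)) * 10.0
flow_rates = np.linspace(3.0, 8.0, n_snapshots)  # m^3/s, assumed values

mean_field = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_field

# POD modes are the left singular vectors of the fluctuation matrix.
U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
n_modes = 3
modes = U[:, :n_modes]            # truncated spatial basis
coeffs = modes.T @ fluctuations   # modal coefficients per snapshot

def predict(flow):
    """Interpolate the POD coefficients to a new flow rate, then
    reconstruct the full temperature field from the basis."""
    b = np.array([np.interp(flow, flow_rates, coeffs[k])
                  for k in range(n_modes)])
    return mean_field[:, 0] + modes @ b

field = predict(5.5)  # one temperature value per domain point
```

Evaluating `predict` is just a small matrix-vector product, which is why a reduced order model of this kind can run in seconds where a full measurement sweep takes half an hour.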


Author(s):  
Xuanhang (Simon) Zhang ◽  
Christopher M. Healey ◽  
Zachary R. Sheffer ◽  
James W. VanGilder

The growing demand for data center facilities has made intelligently managed data center operations necessary. For temperature measurement and thermal management, a common practice is to install a limited number of temperature sensors evenly distributed throughout the room. However, data center operators rarely fully equip facilities with temperature sensors due to their cost, complexity, and maintenance requirements, leaving gaps in the data center temperature and cooling picture. The local nature of sensor data can also be misinterpreted and misused. Without novel methods to interpret and visualize temperatures obtained by prediction or measurement, data center operators cannot easily identify urgent local cooling issues or quickly examine the temperature at other locations. This paper presents methods to predict a full three-dimensional temperature field in data centers from a limited number of measurement points. Several different statistical interpolating schemes are discussed. We also validate the interpolated temperature fields against benchmark data from Computational Fluid Dynamics (CFD) and show good agreement.
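As a stand-in for the statistical interpolating schemes the paper compares, the sketch below fills a 3D grid from a handful of sensors with simple inverse-distance weighting. The room dimensions, sensor count, and weighting exponent are invented for illustration; the paper's own schemes may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Eight sensors scattered in a 10 m x 8 m x 3 m room; readings rise
# toward the ceiling to mimic thermal stratification.
sensor_xyz = rng.random((8, 3)) * [10.0, 8.0, 3.0]
sensor_T = 18.0 + 10.0 * sensor_xyz[:, 2] / 3.0

def idw(query_xyz, power=2.0, eps=1e-9):
    """Estimate temperature at query points as a distance-weighted
    average of the sparse sensor readings."""
    d = np.linalg.norm(query_xyz[:, None, :] - sensor_xyz[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w @ sensor_T) / w.sum(axis=1)

# Evaluate on a coarse grid covering the room volume.
gx, gy, gz = np.meshgrid(np.linspace(0, 10, 5),
                         np.linspace(0, 8, 4),
                         np.linspace(0, 3, 3), indexing="ij")
grid = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
T_grid = idw(grid)  # one estimate per grid point
```

Because the weights form a convex combination, the interpolated field never over- or under-shoots the sensor readings, which is one reason IDW is a common baseline against fancier schemes.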


2016 ◽  
Vol 56 (4) ◽  
pp. 301-305
Author(s):  
Jan Novotný ◽  
Jiří Nožička

The aim of this paper is to present the design and development of a heat simulator to be used for flow research in data centers. The designed heat simulator is based on the concept of a four-processor 1U Supermicro server and enables control of the flow and heat output within the range of 10–100 %. The paper also covers the results of testing measurements of mass flow rates and heat flow rates in the simulator. The flow field at the outlet of the server was measured by the stereo PIV method. The heat flow rate was determined based on measuring the temperature field at the inlet and outlet of the simulator and the known mass flow rate.
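Determining the heat flow rate from the measured mass flow rate and the inlet/outlet temperature difference is the standard sensible-heat balance; the specific heat below is a textbook value for air and the example numbers are illustrative, not the paper's data.

```python
# Sensible heat balance: Q = m_dot * cp * (T_out - T_in), in watts.
CP_AIR = 1006.0  # J/(kg K), dry air near room temperature

def heat_flow_rate(m_dot, t_in, t_out):
    """Heat carried by the air stream, from mass flow and delta-T."""
    return m_dot * CP_AIR * (t_out - t_in)

# Example: 0.05 kg/s of air heated from 22 C to 35 C across the server.
q = heat_flow_rate(0.05, 22.0, 35.0)
print(round(q, 1))  # 653.9 W
```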


Author(s):  
K. Fouladi ◽  
A. P. Wemhoff ◽  
L. Silva-Llanca ◽  
A. Ortega

Much of the energy use by data centers is attributed to the energy needed to cool the data centers. Thus, improving the cooling efficiency and thermal management of data centers can translate to direct and significant economic benefits. However, data centers are complex systems containing a significant number of components or sub-systems (e.g., servers, fans, pumps, and heat exchangers) that must be considered in any synergistic data center thermal efficiency optimization effort. The Villanova Thermodynamic Analysis of Systems (VTAS) is a flow network tool for performance prediction and design optimization of data centers. VTAS models the thermodynamics, fluid mechanics, and heat transfer inherent to an entire data center system, including contributions by individual servers, the data center airspace, and the HVAC components. VTAS can be employed to identify the optimal cooling strategy among various alternatives by computing the exergy destruction of the overall data center system and of the various components in the system for each alternative. Exergy, or "available energy," has been used to identify components and wasteful practices that contribute significantly to the cooling inefficiency of data centers, including room air recirculation, the premature mixing of hot and cold air streams in a data center. Flow network models are inadequate for accurately predicting the magnitude of airflow exergy destruction due to simplifying assumptions and the three-dimensional nature of the flow pattern in the room. On the other hand, CFD simulations are time consuming, making them impractical for iterative design optimization approaches. In this paper, we demonstrate a hybrid strategy in which a proper orthogonal decomposition (POD) based airflow modeling approach, developed from CFD simulation data, is implemented in VTAS for predicting the room airflow exergy destruction. The reduced order POD tool in VTAS provides higher accuracy than 1-D flow network models and is computationally more efficient than 3-D CFD simulations. The present VTAS–POD tool has been applied to a data center cell to illustrate the use of exergy destruction minimization as an objective function for data center thermal efficiency optimization.
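The exergy destroyed by premature mixing of hot and cold air streams can be illustrated with a back-of-the-envelope calculation: for adiabatic mixing of two ideal-gas streams at equal pressure, the destruction is the dead-state temperature times the entropy generated. The constant-pressure ideal-gas treatment and all numbers below are illustrative assumptions, not VTAS results.

```python
import math

CP = 1006.0   # J/(kg K), specific heat of air
T0 = 298.15   # K, assumed dead-state (ambient) temperature

def mixing_exergy_destruction(m1, t1, m2, t2):
    """X_dest = T0 * S_gen for adiabatic mixing of two air streams.
    Mass flows in kg/s, temperatures in kelvin."""
    tm = (m1 * t1 + m2 * t2) / (m1 + m2)  # mixed-stream temperature
    s_gen = CP * (m1 * math.log(tm / t1) + m2 * math.log(tm / t2))
    return T0 * s_gen

# 1 kg/s of 40 C rack exhaust mixing with 1 kg/s of 15 C supply air:
# roughly half a kilowatt of work potential destroyed.
x = mixing_exergy_destruction(1.0, 313.15, 1.0, 288.15)
print(round(x), "W")
```

The second-law guarantee that `s_gen` is positive for any real mixing event is what makes exergy destruction usable as a minimization objective.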


Author(s):  
Sami A. Alkharabsheh ◽  
Bharathkrishnan Muralidharan ◽  
Mahmoud Ibrahim ◽  
Saurabh K. Shrivastava ◽  
Bahgat G. Sammakia

This paper presents the results of an experimentally validated Computational Fluid Dynamics (CFD) model for a data center with fully implemented fan curves on both the servers and the Computer Room Air Conditioner (CRAC). Open and contained cold aisle systems are considered experimentally and numerically. This work is divided into open (uncontained) cold aisle system calibration and validation, and fully contained cold aisle system calibration and leakage characterization. In the open system, the CRAC unit is calibrated using the manufacturer fan curve. Tile flow measurements are used to calibrate the floor leakage. The fan curves of the load banks are generated experimentally. A full physics-based model of the system is validated with two different CRAC fan speeds. The results showed very good agreement with the tile flow measurements, with an approximate average error of 5%, indicating that the average model prediction of the tile flow is five percent lower than the measured values. In the fully contained cold aisle system, a detailed containment CFD model based on experimental measurements is developed. The model is validated by comparing the flow rate through the perforated floor tiles with the experimental measurements. The CFD results are in good agreement with the experimental data, with an average error of about 6.7%. Temperature measurements are used to calibrate other sources of containment and rack leaks, including mounting rails and clearance between racks. The temperature measurements and the CFD results agree well, with an average error of less than 2%. Detailed and equivalent modeling methods for the floor and containment system are investigated. It is found that the simple equivalent models are able to predict the flow rates; however, they do not provide detailed fluid flow information. The detailed models, in contrast, succeed in explaining the physical phenomena and predicting the flow rates, with noticeable tradeoffs in computational time. Important conclusions can be drawn from this study. In order to accurately model the containment system, both the CRAC and the load bank fan curves should be simulated in the numerical model. Unavoidable rack and containment leaks can cause an inlet temperature increase even if the cold aisle is overprovisioned with cold air. It is also noted that heat conduction through the floor tiles causes a slight increase in the inlet temperature of the cold aisles. Finally, it is noteworthy that detailed modeling is necessary to understand the details of the thermal systems; however, simpler and faster-to-compute equivalent models can be used in extended optimization studies that show relative rankings of different designs.
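One reason simulating the fan curves matters, as the conclusions above stress, is that the delivered airflow is not a fixed input: it settles where the fan's pressure rise balances the system's resistance. The quadratic curve coefficients below are invented for illustration, not taken from the CRAC or load banks in the paper.

```python
def fan_dp(q):
    """Assumed fan curve: pressure rise (Pa) falling with flow (m^3/s)."""
    return 120.0 - 15.0 * q * q

def system_dp(q):
    """Assumed system resistance curve: dp = k * q^2."""
    return 45.0 * q * q

def operating_point(lo=0.0, hi=3.0, iters=60):
    """Bisection on fan_dp(q) - system_dp(q) = 0: the flow at which
    the fan and system curves intersect."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fan_dp(mid) - system_dp(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q_op = operating_point()
print(round(q_op, 3))  # 1.414 m^3/s for these assumed curves
```

Adding containment or leakage paths changes `system_dp`, which moves the intersection; a model with a fixed prescribed flow cannot capture that feedback.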


Author(s):  
Kailash C. Karki ◽  
Suhas V. Patankar ◽  
Amir Radmehr

In raised-floor data centers, the airflow rates through the perforated tiles must meet the cooling requirements of the computer servers placed next to the tiles. The data centers house a wide range of equipment, and the heat load pattern on the floor can be quite arbitrary and changes as the data center evolves. To achieve optimum utilization of the floor space and the flexibility for rearrangement and retrofitting, the designers and managers of data centers must be able to modify the airflow rates through the perforated tiles. The airflow rates through the perforated tiles are governed primarily by the pressure distribution under the raised floor. Thus, the key to modifying the flow rates is to influence the flow field in the plenum. This paper discusses a number of techniques that can be used for controlling airflow distribution. These techniques involve changing the plenum height and open area of perforated tiles, and installing thin (solid and perforated) partitions in the plenum. A number of case studies, using a mathematical model, are presented to demonstrate the effectiveness of these techniques.
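The coupling between plenum pressure and tile airflow that these techniques exploit can be sketched with a simple orifice-style relation; the discharge coefficient and tile dimensions below are assumed round numbers, not the paper's mathematical model.

```python
import math

RHO = 1.2  # kg/m^3, air density at typical room conditions

def tile_flow(dp, open_area, cd=0.65):
    """Volumetric flow (m^3/s) through a perforated tile with the given
    open area (m^2) at plenum gauge pressure dp (Pa), treating the tile
    as an orifice with discharge coefficient cd."""
    return cd * open_area * math.sqrt(2.0 * dp / RHO)

# A 25%-open 2 ft x 2 ft tile (~0.09 m^2 open area) at 20 Pa.
q = tile_flow(20.0, 0.09)
print(round(q, 3))  # 0.338 m^3/s
```

Since flow scales with the square root of the local plenum pressure, raising the pressure under one tile (e.g., with partitions) or swapping its open area directly redistributes airflow, which is the lever the case studies exercise.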


Author(s):  
Long Phan ◽  
Cheng-Xian Lin ◽  
Mackenson Telusma

Energy consumption and thermal management have become key challenges in the design of large-scale data centers, where perforated tiles are used together with cold and hot aisle configurations to improve thermal management. Although full-field simulations using computational fluid dynamics and heat transfer (CFD/HT) tools can be applied to predict the flow and temperature fields inside data centers, their running time remains the biggest challenge to most modelers. In this paper, response surface methodology based on radial basis functions is used to drastically reduce the running time while preserving the accuracy of the model. The response surface method with data interpolation makes the study of many design parameters of the data center model more feasible and economical in terms of modeling time. Three scenarios of response surface construction are investigated (5%, 10%, and 20%). The method shows very good agreement with the simulation results obtained from the CFD/HT model when 20% of the original CFD data points are used for response surface training. Error analysis is carried out to quantify the error associated with each scenario. The 20% case shows superb accuracy compared to the others: with a mean relative error of only 2.12 × 10−4 and R2 = 0.970, it captures most aspects of the original CFD model.
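A minimal radial-basis-function response surface of the kind described above looks like this: fit weights to a small set of sampled points, then evaluate the surrogate cheaply anywhere in the design space. The 1-D toy function stands in for CFD/HT outputs, and the Gaussian kernel and shape parameter are assumptions, not the paper's choices.

```python
import numpy as np

x_train = np.linspace(0.0, 1.0, 9)           # sampled design points
y_train = np.sin(2.0 * np.pi * x_train)      # "CFD" results at them

def phi(r, eps=5.0):
    """Gaussian radial basis function of distance r."""
    return np.exp(-(eps * r) ** 2)

# Solve Phi w = y for the RBF weights (tiny ridge term for stability).
Phi = phi(np.abs(x_train[:, None] - x_train[None, :]))
w = np.linalg.solve(Phi + 1e-10 * np.eye(x_train.size), y_train)

def surrogate(x):
    """Response-surface prediction at new points x."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return phi(np.abs(x[:, None] - x_train[None, :])) @ w

# The surrogate reproduces the training data almost exactly and
# interpolates smoothly between the sampled points.
err = float(np.max(np.abs(surrogate(x_train) - y_train)))
```

Each surrogate evaluation is a tiny matrix-vector product, which is what turns an hours-long CFD sweep into an interactive parameter study.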


Author(s):  
Mullaivendhan Varadharasan ◽  
Dereje Agonafer ◽  
Ahmed Al Khazraji ◽  
Jimil Shah ◽  
Ashwin Siddarth ◽  
...  

Direct evaporative cooling (DEC) is widely used in data center cooling units to maintain the air conditions inside the data center. The flow rate of the water over the wet cooling media in the DEC process is frequently varied to maintain the air conditions based on changing weather conditions. Though this method helps to control the air temperature and relative humidity, scale forms on the surface of the wet cooling media due to the frequent variation of the flow rate and the deposition of minerals present in the water at low flow rates; this increases the total weight of the wet cooling media and can lead to its collapse. In this paper, an alternative and simplified method to control the air conditions is presented. A vertically split wet cooling media is designed and tested in a commercial CFD tool to analyze the temperature and relative humidity of the inlet and outlet air. In this approach, the sections of the media can be either completely wet or completely dry, which can potentially avoid scale formation on the surface of the wet cooling media. In addition to the temperature and relative humidity parameters against the air flow rates, the pressure drop and cooling efficiency values for varied air flow rates are studied. The vertically split wet cooling media configurations are achieved by sectioning the media into equal and unequal sections. In the equal configuration, the media has been tested for 0%, 50%, and 100% wetting conditions; in the unequal configuration, for 0%, 33%, 66%, and 100% wetting conditions. The test results emphasize the advantage of this staged wetting method and give a possible solution to the scale formation problem on the wet cooling media during direct evaporative cooling in the data center.
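The cooling (saturation) efficiency studied above is conventionally defined as the fraction of the maximum possible wet-bulb depression that the media achieves. The temperatures in the example are made-up values for illustration, not the paper's test data.

```python
def cooling_efficiency(t_db_in, t_db_out, t_wb_in):
    """Saturation efficiency of an evaporative media section:
    eps = (T_db,in - T_db,out) / (T_db,in - T_wb,in).
    All temperatures in consistent units (e.g., deg C)."""
    return (t_db_in - t_db_out) / (t_db_in - t_wb_in)

# 35 C dry-bulb inlet air cooled to 24 C, with a 20 C inlet wet-bulb.
eff = cooling_efficiency(35.0, 24.0, 20.0)
print(round(eff, 3))  # 0.733
```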


Author(s):  
Huijing Jiang ◽  
Xinwei Deng ◽  
Vanessa Lopez ◽  
Hendrik Hamann

Energy consumption of data centers has increased dramatically due to the massive computing demands driven from every sector of the economy. Hence, data center energy management has become very important for operating data centers within environmental standards while achieving low energy cost. In order to advance the understanding of thermal management in data centers, relevant environmental information such as temperature, humidity, and air quality is gathered through a network of real-time sensors or simulated via sophisticated physical models (e.g., computational fluid dynamics models). However, sensor readings of environmental parameters are collected only at sparse locations and thus cannot provide a detailed map of the temperature distribution for the entire data center. While the physical models yield high resolution temperature maps, it is often not feasible, due to the computational complexity of these models, to run them in real time, which is ideally required for optimum data center operation and management. In this work, we propose a novel statistical modeling approach to updating physical model outputs in real time and providing automatic scheduling for re-computing physical model outputs. The proposed method dynamically corrects the discrepancy between a steady-state output of the physical model and real-time thermal sensor data. We show that the proposed method can provide valuable information for data center energy management, such as real-time high-resolution thermal maps. Moreover, it can efficiently detect systematic changes in a data center thermal environment and automatically schedule physical models to be re-executed whenever significant changes are detected.
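The two ingredients described above can be sketched under assumed simple forms: an exponentially smoothed bias that nudges a steady-state model output toward live sensor readings, plus a CUSUM-style accumulator that flags when the discrepancy drifts enough to warrant re-running the physical model. The smoothing factor, drift allowance, and threshold are invented parameters, not the paper's statistical model.

```python
import numpy as np

class ThermalUpdater:
    def __init__(self, model_at_sensors, alpha=0.2, threshold=5.0):
        self.model = np.asarray(model_at_sensors, dtype=float)
        self.bias = np.zeros_like(self.model)  # learned correction
        self.alpha = alpha                     # bias smoothing factor
        self.cusum = 0.0                       # accumulated drift evidence
        self.threshold = threshold             # re-run trigger level

    def update(self, sensor_readings):
        """Absorb one round of sensor data; return the corrected field
        at the sensor locations and whether a model re-run is due."""
        residual = np.asarray(sensor_readings) - (self.model + self.bias)
        self.bias += self.alpha * residual
        # CUSUM: accumulate discrepancy above a 0.5 C allowance.
        self.cusum = max(0.0, self.cusum + np.abs(residual).mean() - 0.5)
        rerun = self.cusum > self.threshold
        if rerun:
            self.cusum = 0.0
        return self.model + self.bias, rerun

upd = ThermalUpdater(model_at_sensors=[22.0, 24.0, 27.0])
corrected, rerun = upd.update([22.5, 24.5, 27.5])
```

Small, transient residuals are absorbed by the bias term; only a sustained systematic shift accumulates in `cusum` and triggers the expensive re-computation.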


2013 ◽  
Vol 135 (3) ◽  
Author(s):  
Dustin W. Demetriou ◽  
H. Ezzat Khalifa

This paper introduces a methodology for developing a reduced order model, using proper orthogonal decomposition (POD), to predict the IT racks' inlet temperature distribution within a raised-floor air-cooled data center. The method uses a limited set of computational fluid dynamics data at different IT utilization levels and tile airflow fractions. The model was able to reconstruct these datasets to within 0.16 °C rms error and interpolate successfully for alternative configurations that were not included in the original dataset. The reduced order model can produce the temperature distribution in the data center in a fraction of a second on a standard personal computer. Several practical IT load placement options in open-aisle, air-cooled data centers are considered, based on either geometrical traits of the data center, prior physics-based knowledge of the airflow and temperature patterns, or measurements that are easily obtainable during operation. The outcome of this work is a robust set of guidelines that facilitate the energy-efficient placement of the IT load among the operating servers in the data center. This work found that a robust approach is to use real-time temperature measurements at the inlet of the racks to remove unnecessary IT load from the servers with the warmest inlet temperatures. This strategy shows superior performance to the other strategies studied. The study considered the holistic optimization of the data center and cooling infrastructure for a range of data center IT utilization levels. The results showed that allowing significant reductions in the supply air flow rate proved superior to providing a higher supply air temperature to meet the IT equipment's inlet air temperature constraint.
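The winning placement strategy above (shed IT load from the racks with the warmest inlets) reduces to a ranking step once inlet temperatures are available. The rack names and readings below are invented for illustration.

```python
def shed_load(inlet_temps, n_to_idle):
    """Return the racks to idle: the n with the warmest measured
    inlet temperatures, warmest first."""
    ranked = sorted(inlet_temps, key=inlet_temps.get, reverse=True)
    return ranked[:n_to_idle]

# Hypothetical real-time inlet readings (deg C) for four racks.
inlets = {"R1": 21.5, "R2": 26.0, "R3": 23.2, "R4": 27.4}
print(shed_load(inlets, 2))  # ['R4', 'R2']
```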

