Physical Models in Data Center Airflow Simulations

Author(s):  
Jeffrey D. Rambo ◽  
Yogendra K. Joshi

The trend of increasing functionality of electronics with a reduction in size has caused a rapid increase in the volumetric heat generation of today’s equipment. This problem is compounded by the vertical stacking of such components in tall enclosures, called racks. The organization of these racks into large infrastructure facilities, or data centers, generates enough heat to require a room-level cooling strategy. The total power dissipated in current data centers can be as large as several MW. Since all the heat generated must be removed, a systematic thermal management methodology is required to ensure efficient, reliable, and safe operating conditions. The mathematical description of the airflow and heat transfer characteristics involves a wide range of length scales, which are infeasible to resolve simultaneously. In this study, a modeling framework for data centers is investigated, with emphasis on the physical models employed in numerical simulations. Computational fluid dynamics (CFD) models are presented to develop a unit cell, or minimum-sized model, that is representative of these facilities. A unit cell architecture is a useful design tool for the evaluation of tomorrow’s cooling strategies, and because a premium on floor space may result in oddly shaped facilities, a common basis of comparison is needed. The flow patterns inside a data center typically fall into the regime of turbulent mixed convection. The choice of turbulence model employed in a Reynolds-averaged Navier-Stokes (RANS) simulation is examined using a commercial code. Comparisons are made with indoor airflow simulations of office spaces and auditoria because of the similarity in velocities and length scales. Results show that up to 20% variation in temperature predictions can occur between various commercially implemented turbulence models.
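
The turbulent mixed-convection regime mentioned above can be illustrated with a quick dimensionless-number check. The sketch below, in Python, computes Reynolds, Grashof, and Richardson numbers for assumed aisle-scale conditions (the velocity, rack height, and temperature difference are illustrative values, not data from the paper); a Richardson number of order one indicates mixed convection.

```python
# Minimal sketch: classify the convection regime in a data center aisle,
# assuming representative (hypothetical) values for velocity, length scale,
# and temperature difference. Ri = Gr / Re^2; Ri >> 1 is buoyancy-dominated,
# Ri << 1 is forced-convection-dominated, Ri ~ 1 is mixed convection.

g = 9.81          # m/s^2, gravitational acceleration
beta = 1.0 / 300  # 1/K, thermal expansion coefficient of air (ideal gas near 300 K)
nu = 1.6e-5       # m^2/s, kinematic viscosity of air

U = 1.0           # m/s, assumed bulk air velocity in the aisle
L = 2.0           # m, assumed rack height as characteristic length
dT = 10.0         # K, assumed exhaust-to-supply temperature difference

Re = U * L / nu
Gr = g * beta * dT * L**3 / nu**2
Ri = Gr / Re**2

print(f"Re = {Re:.2e}, Gr = {Gr:.2e}, Ri = {Ri:.2f}")
# Ri of order one indicates mixed convection, consistent with the regime
# described in the abstract; the flow is also clearly turbulent (Re ~ 1e5).
```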

Author(s):  
Zhihang Song ◽  
Bruce T. Murray ◽  
Bahgat Sammakia

The integration of a simulation-based Artificial Neural Network (ANN) with a Genetic Algorithm (GA) has been explored as a real-time design tool for data center thermal management. The computation time of the ANN-GA approach is significantly smaller than that of a fully CFD-based optimization methodology for predicting data center operating conditions. However, difficulties remain when applying the ANN model to predict operating conditions for configurations outside the geometry used for the training set. One potential remedy is to partition the room layout into a finite number of characteristic zones, for which the ANN-GA model readily applies. Here, a multiple hot aisle/cold aisle data center configuration was analyzed using the commercial software FloTHERM. The CFD results are used to characterize the flow rates at the inter-zonal partitions. Based on reduced subsets of quantities from the CFD results, such as CRAC and server rack air flow rates, the approach was applied for two different CRAC configurations and various levels of CRAC and server rack flow rates. Utilizing the compact inter-zonal boundary conditions, good agreement for the airflow and temperature distributions is achieved between predictions from the CFD computations for the entire room configuration and the reduced-order zone-level model for different operating conditions and room layouts.
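
As a rough illustration of the ANN-GA idea described above (not the authors' implementation), the sketch below trains a small neural-network surrogate on a hypothetical set of flow-rate/temperature samples standing in for CFD data, then runs a simple genetic algorithm over the surrogate. The variable names, bounds, and the placeholder response function are all assumptions.

```python
# Hedged sketch of an ANN surrogate plus genetic-algorithm search.
# Inputs (CRAC flow, rack flow) and the response surface are hypothetical
# stand-ins for CFD-derived training data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: inputs = [CRAC flow, rack flow] in m^3/s,
# output = maximum rack inlet temperature in degC (placeholder response).
X_train = rng.uniform([2.0, 0.5], [8.0, 2.0], size=(200, 2))
y_train = 18.0 + 40.0 / X_train[:, 0] + 5.0 * X_train[:, 1]

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
ann.fit(X_train, y_train)

def fitness(pop):
    # Lower predicted maximum inlet temperature is better.
    return -ann.predict(pop)

# Simple generational GA over the same two design variables.
lo, hi = np.array([2.0, 0.5]), np.array([8.0, 2.0])
pop = rng.uniform(lo, hi, size=(50, 2))
for _ in range(100):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-25:]]                       # keep the best half
    children = parents[rng.integers(0, 25, 25)] + rng.normal(0, 0.1, (25, 2))
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = pop[np.argmax(fitness(pop))]
print("Best design (CRAC flow, rack flow):", best,
      "-> predicted T:", ann.predict([best])[0])
```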


Author(s):  
Thomas J. Breen ◽  
Ed J. Walsh ◽  
Jeff Punch ◽  
Amip J. Shah ◽  
Niru Kumari ◽  
...  

As the energy footprint of data centers continues to increase, models that allow for “what-if” simulations of different data center design and management paradigms will be important. Prior work by the authors has described a multi-scale energy efficiency model that allows for evaluating the coefficient of performance of the data center ensemble (COPGrand), and demonstrated the utility of such a model for choosing operational set-points and evaluating design trade-offs. However, experimental validation of these models poses a challenge because of the complexity involved in tailoring such a model to legacy data centers, with shared infrastructure and limited control over IT workload. Further, test facilities with dummy heat loads or artificial racks in lieu of IT equipment generally have limited utility in validating end-to-end models, owing to the inability of such loads to mimic phenomena such as fan scalability. In this work, we describe the experimental analysis conducted in a special test chamber and in a data center facility. The chamber, focusing on system-level effects, is loaded with an actual IT rack, and a compressor delivers chilled air to the chamber at a preset temperature. By varying the load in the IT rack as well as the air delivery parameters, such as flow rate and supply temperature, a setup that simulates the system level of a data center is created. Experimental tests within a live data center facility are also conducted, where the operating conditions of the cooling infrastructure, such as fluid temperatures and flow rates, are monitored and can be analyzed to determine effects such as air flow recirculation and heat exchanger performance. Using the experimental data, a multi-scale model configuration emulating the data center can be defined. We compare the results from this experimental analysis to a multi-scale energy efficiency model of the data center, and discuss the accuracies as well as inaccuracies within such a model. Difficulties encountered in the experimental work are discussed. The paper concludes by discussing areas for improvement in such modeling and experimental evaluation. Further validation of the complete multi-scale data center energy model is planned.
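
For context on the ensemble coefficient of performance referenced above, the sketch below shows one plausible way such a metric can be computed: delivered IT power divided by the total power drawn by the cooling infrastructure across scales. The stage breakdown and the numbers are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of an ensemble coefficient of performance in the spirit of the
# COPGrand metric referenced above. The cooling-stage breakdown (server fans,
# CRAC fans, chiller, cooling tower) and the magnitudes are assumed values.
def cop_grand(it_power_w, cooling_powers_w):
    """it_power_w: heat dissipated by IT equipment (W);
    cooling_powers_w: dict of cooling-stage power draws (W)."""
    return it_power_w / sum(cooling_powers_w.values())

example = cop_grand(
    1_000_000,                                    # 1 MW of IT load
    {"server fans": 50_000, "CRAC fans": 120_000,
     "chiller": 250_000, "cooling tower": 60_000},
)
print(f"COP_Grand = {example:.2f}")   # about 2.08 for these assumed numbers
```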


Energies ◽  
2020 ◽  
Vol 13 (22) ◽  
pp. 6147
Author(s):  
Jinkyun Cho ◽  
Jesang Woo ◽  
Beungyong Park ◽  
Taesub Lim

Removing heat from high-density information technology (IT) equipment is essential for data centers, and maintaining the proper operating environment for IT equipment can be expensive. Rising energy costs and consumption have prompted data centers to consider hot aisle and cold aisle containment strategies, which can improve energy efficiency and maintain the recommended inlet air temperature to IT equipment. Containment can also resolve, to some degree, the hot spots found in traditional uncontained data centers. This study analyzes the IT environment of the hot aisle containment (HAC) system, which has been considered an essential solution for high-density data centers. The thermal performance was analyzed for an IT server room with HAC in a reference data center. Computational fluid dynamics analysis was conducted to compare the operating performance of the cooling air distribution systems applied to raised and hard floors and to examine the difference in the IT environment between the server rooms. Regarding operating conditions, thermal performance was compared between a state in which the cooling system operated normally and one in which a single cooling unit had failed. The thermal performance of each alternative was evaluated by comparing the temperature distribution, airflow distribution, inlet air temperatures of the server racks, and the recirculation ratio from outlet to inlet. In conclusion, the HAC system with a raised floor has higher cooling efficiency than that with a hard floor. The HAC with a raised floor can improve air distribution efficiency by 28% over a hard floor; this corresponds to a 40% reduction in the recirculation ratio, which exceeds 20% under normal cooling conditions. The main contribution of this paper is that it realistically demonstrates the effectiveness of the HAC system, previously assessed only through theoretical comparison, by developing an accurate numerical model of a data center with a high-density fifth-generation (5G) environment and applying its operating conditions.
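
The recirculation ratio from outlet to inlet can be estimated from temperature data alone if the rack inlet air is assumed to be a mix of CRAC supply air and rack exhaust air. The sketch below uses that assumed, temperature-based definition; the paper's exact metric may be defined differently, for example directly from flow rates in the CFD model.

```python
# Hedged sketch of a recirculation ratio computed from temperatures, assuming
# rack inlet air is a mixture of CRAC supply air and rack exhaust air.
def recirculation_ratio(t_supply, t_inlet, t_exhaust):
    """Fraction of rack inlet air attributable to recirculated exhaust."""
    return (t_inlet - t_supply) / (t_exhaust - t_supply)

# Example with assumed temperatures: 18 degC supply, 24 degC measured inlet,
# 38 degC exhaust -> recirculation ratio of 0.30 (30%).
print(recirculation_ratio(18.0, 24.0, 38.0))
```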


Author(s):  
Michael M. Toulouse ◽  
David Lettieri ◽  
Van P. Carey ◽  
Cullen E. Bash

This paper summarizes the comparison of predictions by a compact model of air flow and transport in data centers against temperature measurements in an operational data center. The simplified model and code package, referred to as COMPACT (Compact Model of Potential Flow and Convective Transport), is intended as an alternative to time-intensive full CFD thermofluid models for use as a first-order design tool, as well as a potential improvement to plant-based controllers. COMPACT is based on potential flow combined with convective energy equations, using sparse matrix solvers to obtain flow and temperature solutions. Full-room solutions can be generated in 15 seconds on a commercially available laptop, and an accompanying graphical user interface has been developed to allow quick configuration of data center designs and analysis of flow and temperature results. Experiments for validation of the model were conducted at the HP Labs data center in Palo Alto, CA, which has a traditional configuration consisting of inlet floor tiles feeding cold air between two rows of multiple server racks; air then exits either through ceiling tiles or via direct room return to CRAC units located at the side of the room. Temperatures were recorded at multiple points along entering and exiting flow faces within the room, as well as at various points in the cold and hot aisles, and are presented and compared with model predictions to assess their accuracy. Areas of greater and lesser accuracy are analyzed, in addition to conclusions on the strengths and weaknesses of the model. In some cases, the average predicted temperature along in-flowing rack faces was within one degree of the average measured temperature. However, the differences in temperature are not evenly distributed. The most pronounced discrepancies between the model and room measurements were located in areas above the server racks where recirculation was shown to be most likely to occur. In these areas, the predicted temperature was higher than experimental values; this can likely be attributed to the absence of buoyancy effects in the simplified potential flow model. Adaptations of the model and its configuration standards for more accurate temperature distributions are proposed, as well as investigations into how unaccounted-for heat sources and flow phenomena affect comparisons between measured temperatures and the idealized model output.
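
To make the potential-flow idea behind a compact model like COMPACT concrete, the sketch below solves Laplace's equation for a velocity potential on a coarse 2D grid with a sparse solver and recovers velocities from its gradient. The grid size, boundary potentials, and wall treatment are illustrative assumptions, not the tool's actual formulation or configuration.

```python
# Minimal sketch of a potential-flow solve on a coarse grid: Laplace's equation
# for the velocity potential phi, fixed potential at an inlet and an outlet,
# zero-flux (mirror) walls elsewhere, solved with a sparse direct solver.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nx, ny, h = 40, 20, 0.1          # coarse grid for a 4 m x 2 m room slice (assumed)
N = nx * ny
idx = lambda i, j: i * ny + j    # flatten (i, j) -> row index

A = sp.lil_matrix((N, N))
b = np.zeros(N)
for i in range(nx):
    for j in range(ny):
        k = idx(i, j)
        if i == 0:               # inlet face: fixed high potential (e.g., perforated tiles)
            A[k, k], b[k] = 1.0, 1.0
        elif i == nx - 1:        # outlet face: fixed low potential (e.g., CRAC return)
            A[k, k], b[k] = 1.0, 0.0
        else:                    # interior: 5-point Laplacian, walls treated as mirrors
            A[k, k] = -4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, min(max(j + dj, 0), ny - 1)
                A[k, idx(ii, jj)] += 1.0

phi = spla.spsolve(A.tocsr(), b).reshape(nx, ny)
u, v = np.gradient(-phi, h)      # velocity components from the potential gradient
print("max speed on the coarse grid:", float(np.hypot(u, v).max()))
```

A convective energy equation would then be advected on the same grid using these velocities, which is where the temperature field in a tool of this kind comes from.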


Author(s):  
Jimil M. Shah ◽  
Roshan Anand ◽  
Satyam Saini ◽  
Rawhan Cyriac ◽  
Dereje Agonafer ◽  
...  

Abstract A remarkable amount of data center energy is consumed in removing the heat generated by IT equipment in order to maintain safe operating conditions and optimum performance. The installation of airside economizers, while very energy efficient, bears the risk of particulate contamination in data centers, thereby deteriorating the reliability of IT equipment. When the relative humidity (RH) in a data center exceeds the deliquescent relative humidity (DRH) of salts in the accumulated particulate matter, the dust absorbs moisture, becomes wet, and can lead to electrical short circuiting because of degraded surface insulation resistance between the closely spaced features on printed circuit boards (PCBs). Another concern with this type of failure is that it leaves little evidence, which hinders evaluation and rectification. It is therefore imperative to develop a practical test method to determine the DRH of the particulate matter that accumulates on PCBs. This research is a first attempt to develop an experimental technique to measure the DRH of dust particles by logging leakage current versus RH for particulate matter dispensed on an interdigitated comb coupon. To validate the methodology, the DRH values of pure salts such as MgCl2, NH4NO3, and NaCl are determined and compared with their published values. The methodology was then applied to establish a limiting RH value, or effective relative humidity envelope, to be maintained at a real-world data center facility in the Dallas industrial area for continuous and reliable operation.
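
A minimal sketch of how a DRH value might be extracted from the leakage-current-versus-RH log described above is shown below. The synthetic current trace and the threshold criterion are assumptions for illustration; the paper's exact detection criterion may differ.

```python
# Hedged sketch: estimate the deliquescent relative humidity (DRH) as the RH at
# which leakage current across the comb coupon rises sharply. The RH ramp,
# current levels, and threshold are synthetic, assumed values.
import numpy as np

rh = np.linspace(40, 90, 101)                 # % relative humidity ramp
current = 1e-9 * np.ones_like(rh)             # dry-state leakage current (A)
current[rh >= 75.3] = 1e-5                    # sharp rise once NaCl-like dust deliquesces

def estimate_drh(rh, leakage, threshold=1e-7):
    """Return the lowest RH at which leakage current exceeds the threshold."""
    above = np.flatnonzero(leakage > threshold)
    return rh[above[0]] if above.size else None

print("Estimated DRH:", estimate_drh(rh, current), "% RH")  # ~75.5 for this synthetic sweep
```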


Author(s):  
Jeffrey D. Rambo ◽  
Yogendra K. Joshi

Data center facilities, which house thousands of servers, storage devices, and other computing hardware arranged in 2 m tall racks, pose many thermal challenges. Each rack can dissipate 10–15 kW, and with facilities as large as tens of thousands of square feet, the net power dissipated is typically on the order of several MW. The cost to power these facilities alone can be millions of dollars a year, with the cost of providing adequate cooling not far behind. Significant savings can be realized by the end user through improved design methodologies for these high-power-density data centers. The fundamental need for improved characterization is motivated by the inadequacy of simple energy balances to identify local ‘hot spots’ and, ultimately, to provide a reliable modeling framework by which the data centers of the future can be designed. Recent attempts at computational fluid dynamics (CFD) modeling of data centers have been based on a simple rack model, treated either as a uniform heat generator or as a specified temperature rise across the rack. This desensitizes the solution to variations in heat load, and the corresponding flow rate needed to cool the servers, throughout the rack. Heat generated at the smaller scales (the chip level) produces changes at the larger length scales of the data center, so accurate simulations of these facilities should attempt to resolve the range of length scales present. In this paper, a multi-scale model is proposed in which each rack is subdivided into a series of sub-models to better mimic the behavior of individual servers inside the data center. A Reynolds-averaged Navier-Stokes CFD model of a 110 m2 (1,200 ft2) representative data center with a raised-floor cooling scheme was constructed around this multi-scale rack model. Each of the 28 racks dissipated 4.23 kW, giving the data center a power density of 1,076 W/m2 (100 W/ft2) based on total floor space. Parametric studies of varying heat loads within the rack and throughout the data center were performed to better characterize the interactions between sub-rack-scale heat generation and the data center. Major results include 1) a nonlinear thermal response in the upper portion of each rack due to recirculation effects, and 2) significant changes in the surrounding racks (up to a 10% increase in maximum temperature) in response to a change (50% decrease) in rack flow rate.
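
The motivation for subdividing each rack can be seen from a simple per-compartment energy balance: each slice's exhaust temperature depends on its own heat load and flow rate rather than on a rack-average value. The sketch below illustrates this with assumed numbers; the three-slice split and the flow rate are hypothetical, and only the 4.23 kW rack total matches the abstract.

```python
# Minimal sketch of the per-compartment energy balance behind a multi-scale rack
# model: exhaust temperature of each sub-model from T_out = T_in + Q / (m_dot * cp).
RHO_AIR = 1.16    # kg/m^3, air density near 30 degC
CP_AIR = 1006.0   # J/(kg K), specific heat of air

def exhaust_temperature(t_inlet_c, heat_load_w, flow_m3s):
    """Exhaust temperature of one rack sub-model from a simple energy balance."""
    m_dot = RHO_AIR * flow_m3s
    return t_inlet_c + heat_load_w / (m_dot * CP_AIR)

# A 4.23 kW rack split into three slices with uneven (assumed) loads but equal flow:
for name, q in [("bottom", 1000.0), ("middle", 1500.0), ("top", 1730.0)]:
    print(name, round(exhaust_temperature(22.0, q, 0.15), 1), "degC")
```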


2018 ◽  
Vol 7 (3.12) ◽  
pp. 19
Author(s):  
Amitkumar J. Nayak ◽  
Amit P. Ganatra

Today, internet usage is effectively universal, and devices connect through multiple technologies that provide users with a variety of communication methods. By forming multiple paths through the data center network, the latest generation of data centers offers high bandwidth with robustness. To utilize this bandwidth, different data flows must take separate paths; in brief, single-path transport seems inappropriate for such networks. Multipath TCP prompts a reconsideration of data center networks, with a different approach to the relationship between topology, transport protocols, and routing. Multipath TCP can exploit certain topologies that single-path TCP cannot use. In newer-generation data centers, Multipath TCP is already deployable using widely deployed technologies such as equal-cost multipath (ECMP) routing, but the major benefits will come when data centers are specifically designed for multipath transport. The growth of technologies such as cloud computing, social networking, and information networks drives the deployment of large numbers of data centers. While the Transmission Control Protocol (TCP) is the dominant transport protocol in data center networks, operating conditions such as high bandwidth, small-buffered switches, and data center traffic patterns cause TCP to perform poorly. The Data Center TCP (DCTCP) algorithm has recently been proposed as a TCP variant for data centers that addresses these limitations of the traditional TCP protocol.
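
For readers unfamiliar with Data Center TCP, the sketch below illustrates its core congestion-window rule: the window is cut in proportion to an exponentially weighted fraction of ECN-marked packets rather than halved on every congestion signal. The parameter values are the commonly cited defaults, and the loop is a toy trace for illustration, not a protocol implementation.

```python
# Hedged sketch of the DCTCP congestion-window adjustment: alpha is an EWMA of
# the fraction of ECN-marked packets per round trip, and the window is reduced
# by cwnd * alpha / 2 when marks are observed.
def dctcp_update(cwnd, alpha, marked, acked, g=1.0 / 16):
    """One round-trip update: 'marked' of 'acked' packets carried ECN marks."""
    frac = marked / acked if acked else 0.0
    alpha = (1 - g) * alpha + g * frac          # EWMA of the marking fraction
    if marked:                                  # cut proportionally to congestion extent
        cwnd = max(1.0, cwnd * (1 - alpha / 2))
    return cwnd, alpha

cwnd, alpha = 100.0, 0.0
for marked in (0, 2, 10, 0):                    # a few illustrative round trips
    cwnd, alpha = dctcp_update(cwnd, alpha, marked, acked=100)
    print(f"cwnd={cwnd:6.1f}  alpha={alpha:.3f}")
```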


Author(s):  
Vaibhav Shukla ◽  
Rajiv Srivastava ◽  
Dilip Kumar Choubey

Leading content provider companies such as Google, Yahoo, and Amazon have installed mega data centers containing hundreds of thousands of servers at very large scale. Current data center systems are organized in a hierarchical tree structure based on bandwidth-limited electronic switches. Modern data center systems face a number of issues, such as high power consumption, limited bandwidth availability, server connectivity, energy and cost efficiency, and traffic complexity. One of the most feasible solutions to these issues is the use of optical switching technologies in the core of data center systems. This chapter briefly describes modern data center systems and presents some prominent optical packet switch architectures, along with their pros and cons.


2020 ◽  
Vol 142 (2) ◽  
Author(s):  
Yogesh Fulpagare ◽  
Atul Bhargav ◽  
Yogendra Joshi

Abstract With the explosion in digital traffic, the number of data centers, as well as the demands on each data center, continues to increase. Concomitantly, the cost (and environmental impact) of the energy expended in the thermal management of these data centers is of concern to operators in particular and society in general. In the absence of physics-based control algorithms, computer room air conditioning (CRAC) units are typically operated at conservatively predetermined set points, resulting in suboptimal energy consumption. For a more optimal control algorithm, predictive capabilities are needed. In this paper, we develop a data-informed, experimentally validated, and computationally inexpensive system-level predictive tool that can forecast data center behavior for a broad range of operating conditions. We have tested this model on experiments as well as on experimentally validated transient computational fluid dynamics (CFD) simulations for two different data center design configurations. The validated model can accurately forecast temperatures and air flows in a data center (including the rack air temperatures) 10–15 min into the future. Once integrated with control aspects, we expect this model to form an important building block in future intelligent, increasingly automated data center environment management systems.
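
As a minimal illustration of the kind of computationally inexpensive, data-informed forecaster described above (not the authors' model), the sketch below fits a linear autoregressive model to a synthetic rack-inlet temperature series and rolls it forward 15 minutes. The series, lag order, and horizon are assumptions for illustration only.

```python
# Hedged sketch: fit a linear autoregressive (AR) model to a temperature series
# and produce a recursive multi-step forecast. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(600)                                        # 600 one-minute samples (assumed)
temps = 24 + 2 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 0.1, t.size)

lags, horizon = 10, 15                                    # use 10 past minutes, predict 15 ahead
X = np.column_stack([temps[i:i - lags] for i in range(lags)])
y = temps[lags:]
coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)

window = list(temps[-lags:])
for _ in range(horizon):                                  # recursive multi-step forecast
    nxt = float(np.dot(coef[:-1], window[-lags:]) + coef[-1])
    window.append(nxt)
print("forecast for the next 15 minutes:", np.round(window[lags:], 2))
```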

